I have long dreamed of what purity could bring to the versioning of software.
Note: the details below are handwaving. Here are some refinements that are slightly less wrong: https://claude.ai/public/artifacts/0f516820-88f3-4ade-a7c4-6c8b34e41bdd
______________________________
📘 A Runtime Where CPS, Erlang, Haskell, and Shared Memory Had a Baby
Design Memo: Motivation → Problem Space → Ideal Model → Reality →
Lessons → Architecture → Game Plan → Example
______________________________
1. Motivation
Modern systems must evolve continuously:
deploy new code without restarting the world
run old and new versions side‑by‑side
shadow‑test new logic
replay tasks to debug hard‑to‑reproduce bugs
inspect state and control flow
isolate failures
maintain performance
Existing languages give you fragments of this vision:
Haskell gives purity, replayability, and typed effects
Erlang gives supervision, isolation, and hot‑swap
CPS gives explicit, inspectable control flow
Shared memory gives performance
But no single runtime gives you all of these together.
We want a system where:
writing version 1 feels like normal programming
version 2 can be shadow‑tested without code changes
debugging is “replay + swap,” not “sprinkle logs + redeploy”
the engine stays out of your way until you need it
the engine becomes powerful and interactive when you do
______________________________
2. Problem Space
2.1 Hidden state
Most languages hide state in:
closures
stacks
mutable objects
runtime internals
Hidden state breaks:
replay
hot‑swap
debugging
determinism
2.2 Unstructured concurrency
Async/await, threads, callbacks → tasks that cannot be:
paused
serialized
inspected
replayed
2.3 Tight coupling between code and runtime
Most languages assume:
one version of code
one global heap
no routing
no versioning
2.4 Library ecosystems
Real apps need:
HTTP
DB
crypto
cloud SDKs
These are impure and not replayable.
2.5 Serialization boundaries
Crossing process/FFI/microservice boundaries means:
schemas
serialization
versioning
impedance mismatch
This is the tax you pay for isolation.
______________________________
3. Abstract Goals
3.1 Explicit, serializable state
All meaningful state is:
pure
versioned
serializable
replayable
3.2 Reified tasks
All work is:
first‑class
inspectable
resumable
replayable
routable
3.3 Typed effects
Effects are:
explicit
typed
isolated
mockable
replay‑aware
3.4 Hot‑swap as a primitive
The runtime supports:
multiple versions of code
routing policies
draining
shadow execution
replay into new versions
3.5 Supervision and isolation
Processes are:
isolated
supervised
restartable
3.6 Shared memory where safe
Shared memory is:
explicit
capability‑based
optionally linear
STM‑backed
3.7 A principled core SDK
The core SDK provides:
strings
collections
time
serialization
concurrency primitives
effect types
______________________________
4. High‑Level Design in an Ideal World
4.1 Pure state + event sourcing
State is:
State = fold(Event)
Events are:
append‑only
versioned
typed
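A minimal sketch of that fold in Haskell, with a made-up account domain (Event, AccountState, and the handlers are illustrative, not part of the design):

import Data.List (foldl')

-- Illustrative event and state types.
data Event
  = Deposited Int
  | Withdrew Int

newtype AccountState = AccountState { balance :: Int }

-- Pure transition: apply one event to the current state.
apply :: AccountState -> Event -> AccountState
apply (AccountState b) (Deposited n) = AccountState (b + n)
apply (AccountState b) (Withdrew n)  = AccountState (b - n)

-- "State = fold(Event)": the current state is a left fold over the log.
rebuild :: [Event] -> AccountState
rebuild = foldl' apply (AccountState 0)

Replaying to any point in time is just folding a prefix of the log.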
4.2 Reified CPS‑style tasks
A task is:
data Task = Task
  { taskId       :: TaskId        -- stable identity for routing and replay
  , continuation :: Continuation  -- the reified "rest of the computation"
  , mailbox      :: [Message]     -- pending messages
  , history      :: [Event]       -- append-only record of what happened
  }
The runtime can:
pause
serialize
resume
replay
shadow
migrate tasks
4.3 Erlang‑style processes
Each task runs in a supervised process with:
mailbox
restart strategy
versioned code
routing rules
4.4 STM‑style shared memory
Global or sharded state lives in:
TVar WorldState
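For concreteness, a small sketch using the stm package (the WorldState fields here are placeholders):

import Control.Concurrent.STM

data WorldState = WorldState
  { requestCount :: Int   -- placeholder fields
  , activeTasks  :: Int
  }

newWorld :: IO (TVar WorldState)
newWorld = newTVarIO (WorldState 0 0)

-- Every mutation is an atomic, composable transaction.
recordRequest :: TVar WorldState -> STM ()
recordRequest w = modifyTVar' w (\s -> s { requestCount = requestCount s + 1 })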
4.5 Typed effects
Effects are declared like:
effect HttpGet :: Url -> HttpResponse
effect Now :: Time
effect Random :: Int -> Int
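That effect syntax is aspirational. In today's Haskell, one plausible encoding is a GADT of operations with swappable interpreters (Url, HttpResponse, and Time are placeholder types):

{-# LANGUAGE GADTs, RankNTypes #-}

type Url = String
type HttpResponse = String
type Time = Integer

-- One constructor per effect; the result type is carried in the index.
data Effect a where
  HttpGet :: Url -> Effect HttpResponse
  Now     :: Effect Time
  Random  :: Int -> Effect Int

-- A live handler performs IO; a replay handler would read recorded
-- results from the event log; a test handler returns mocks.
type Handler = forall a. Effect a -> IO a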
4.6 Multi‑process hot‑swap
When new code arrives:
1. Spin up a new process tree
2. Replay state/events into it
3. Route new tasks to it
4. Drain old tasks
5. Kill old version
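As a protocol sketch in Haskell (every function below is a hypothetical stub; only the ordering of the five steps is taken from the list above):

newtype Version = Version String

-- Hypothetical handle to a freshly spawned tree of processes.
data ProcessTree = ProcessTree Version

spawnTree :: Version -> IO ProcessTree
spawnTree v = pure (ProcessTree v)   -- stub

replayEventsInto :: ProcessTree -> IO ()
replayEventsInto _ = pure ()         -- stub

routeNewTasksTo :: ProcessTree -> IO ()
routeNewTasksTo _ = pure ()          -- stub

drainOldTasks :: IO ()
drainOldTasks = pure ()              -- stub

killOldVersion :: IO ()
killOldVersion = pure ()             -- stub

hotSwap :: Version -> IO ()
hotSwap next = do
  tree <- spawnTree next    -- 1. spin up a new process tree
  replayEventsInto tree     -- 2. replay state/events into it
  routeNewTasksTo tree      -- 3. route new tasks to it
  drainOldTasks             -- 4. drain old tasks
  killOldVersion            -- 5. kill the old version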
______________________________
5. Reality: Nothing Is Ideal
5.1 Library ecosystems
You can’t rewrite DB drivers or cloud SDKs.
5.2 Serialization tax
Crossing process boundaries is unavoidable.
5.3 Hot‑swap complexity
You must handle:
in‑flight tasks
schema evolution
draining
replay failures
5.4 Debugging distributed processes
You need:
tracing
unified logs
task histories
5.5 Performance tradeoffs
Reified tasks and event logs cost memory and CPU.
______________________________
6. Which Approach Is Least Wrong?
Option A — Build on a dynamic language
❌ Hidden state, no replay, no typed effects.
Option B — Build a new language + runtime
❌ You would have to rebuild the entire library ecosystem from scratch.
Option C — Build an EDSL inside Haskell
✅ Purity
✅ STM
✅ Typed effects
✅ Replayability
✅ Strong compiler
❌ Hot‑swap requires multi‑process orchestration
❌ FFI still needed
This is the least wrong option.
______________________________
7. Lessons From Other Systems
Erlang/Elixir
supervision
hot code upgrades
message passing
Haskell
purity
STM
typed effects
CPS languages
explicit continuations
Unison
code as data
versioned definitions
Cloud Haskell
distributed processes
Akka
actor supervision
______________________________
8. Putting It All Together: The Layered Model
Layer 1 — App Code (developer writes this)
Feels like normal Haskell:
handleRequest :: Request -> AppM Response
lookupUser :: UserId -> AppM User
listItems :: UserId -> AppM [Item]
Layer 2 — Router (developer registers functions)
routes :: Router AppM
routes =
     register "handleRequest" handleRequest
  <> register "lookupUser" lookupUser
  <> register "listItems" listItems
Layer 3 — Engine Runtime (framework provides this)
task manager
process registry
supervision
version routing
event log
replay
hot‑swap
Layer 4 — Multi‑Process Orchestrator
runs version N and version N+1
routes traffic
drains old version
replays tasks
kills old version
______________________________
9. Concrete Haskell Types
9.1 App Monad
newtype AppM a = AppM
  { unAppM :: ReaderT Env TaskM a
  } deriving (Functor, Applicative, Monad)  -- via GeneralizedNewtypeDeriving
9.2 Task Monad (reified CPS)
data TaskF next
  = Log Text next                 -- emit a log line, then continue
  | Send ProcessId Message next   -- fire-and-forget send to a process
  | Receive (Message -> next)     -- block until a message arrives
  | GetTime (Time -> next)        -- ask the runtime for the current time
  | ReadState (State -> next)     -- read this task's state
  | WriteState State next         -- replace this task's state
  deriving Functor
type TaskM = Free TaskF
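To show how the free monad is driven, here is a minimal live interpreter. It repeats the 9.2 types with placeholder aliases (the memo leaves Text, Time, State, Message, and ProcessId abstract), stubs out Send/Receive/GetTime rather than wiring real mailboxes, and omits the event-log append a real interpreter would do on every step:

{-# LANGUAGE DeriveFunctor #-}
import Control.Monad.Free (Free (..))   -- from the free package
import Data.IORef

type Text = String      -- placeholder aliases so the sketch stands alone
type Time = Integer
type State = Int
type Message = String
type ProcessId = Int

data TaskF next                          -- exactly as in 9.2
  = Log Text next
  | Send ProcessId Message next
  | Receive (Message -> next)
  | GetTime (Time -> next)
  | ReadState (State -> next)
  | WriteState State next
  deriving Functor

type TaskM = Free TaskF

-- A live interpreter; a replay interpreter would consult the event log.
runTask :: IORef State -> TaskM a -> IO a
runTask _   (Pure a)    = pure a
runTask ref (Free step) = case step of
  Log msg next      -> putStrLn msg >> runTask ref next
  Send _ _ next     -> runTask ref next       -- stub: no mailboxes here
  Receive k         -> runTask ref (k "msg")  -- stub: canned message
  GetTime k         -> runTask ref (k 0)      -- stub: fixed clock
  ReadState k       -> readIORef ref >>= runTask ref . k
  WriteState s next -> writeIORef ref s >> runTask ref next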
9.3 Router
data Version = Version Text   -- e.g. "v1", "v2"

data RoutingPolicy
  = Primary Version               -- all traffic to one version
  | Shadow Version Version        -- primary answers; shadow runs on the side
  | Percent Version Version Int   -- percentage of traffic for the candidate

data Router m = Router
  { routes   :: Map Text (Map Version (DynamicFunction m))
  , policies :: Map Text RoutingPolicy
  }
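A sketch of how the engine might resolve a policy for one call, using the Version and RoutingPolicy types above (TaskId's representation and hashTask are hypothetical; a stable per-task hash makes Percent routing deterministic):

import Data.Char (ord)

newtype TaskId = TaskId String   -- hypothetical representation

-- Hypothetical stable hash; a real engine would use something stronger.
hashTask :: TaskId -> Int
hashTask (TaskId t) = sum (map ord t)

resolveVersion :: RoutingPolicy -> TaskId -> Version
resolveVersion (Primary v) _        = v
resolveVersion (Shadow primary _) _ = primary   -- the shadow runs on the side
resolveVersion (Percent stable candidate pct) tid
  | hashTask tid `mod` 100 < pct = candidate
  | otherwise                    = stable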
9.4 Engine Entry Point
runEngine :: EngineConfig -> Router AppM -> IO ()
______________________________
10. End‑to‑End Example
Step 1 — Developer writes v1 code
handleRequest :: Request -> AppM Response
handleRequest req = do
  user  <- lookupUser (reqUserId req)
  items <- listItems (userId user)
  pure (renderPage user items)
Step 2 — Register v1 routes
routesV1 :: Router AppM
routesV1 =
     register "handleRequest" handleRequest
  <> register "lookupUser" lookupUser
  <> register "listItems" listItems
Step 3 — Deploy v1
main = runEngine defaultConfig routesV1
Everything feels like normal Haskell.
______________________________
Step 4 — Developer writes v2 of one function
lookupUserV2 :: UserId -> AppM User
lookupUserV2 uid = do
  logInfo ("lookupUserV2 called with " <> show uid)
  lookupUser uid   -- call v1 internally
Step 5 — Register v2
routesV2 :: Router AppM
routesV2 =
  register "lookupUser" lookupUserV2
Step 6 — Shadow test v2
Config:
functions:
  lookupUser:
    routing:
      mode: shadow
      primary: v1
      shadow: v2
Engine behavior:
v1 handles the real request
v2 runs in parallel
engine compares outputs
engine logs divergences
Developer writes no new code.
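A minimal sketch of what shadow mode could do internally (names are illustrative; a real engine would run the candidate asynchronously and record divergences in the event log rather than printing them):

import Control.Monad (when)

shadowCall :: (Eq b, Show b) => (a -> IO b) -> (a -> IO b) -> a -> IO b
shadowCall primary candidate input = do
  expected <- primary input     -- v1 answers the real request
  actual   <- candidate input   -- v2 runs on the same input
  when (expected /= actual) $
    putStrLn ("divergence: " <> show expected <> " vs " <> show actual)
  pure expected                 -- callers only ever see v1's answer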
______________________________
Step 7 — Debug a hard‑to‑reproduce bug
Capture the failing task abc123
Replay it:
engine task replay abc123
Switch only lookupUser to a debug version:
engine route set lookupUser --task abc123 --version debug
Replay again and inspect logs.
No redeploy.
No code changes.
No global logging.
______________________________
11. The Game Plan
Phase 1 — Core EDSL
TaskF, TaskM, typed effects
CPS‑friendly continuation model
Phase 2 — Runtime
process registry
mailboxes
STM‑backed world state
supervision
Phase 3 — Router + Versioning
versioned functions
routing policies
shadow execution
Phase 4 — Event Log + Replay
append‑only log
replay engine
divergence detection
Phase 5 — Multi‑Process Hot‑Swap
orchestrator
drain protocol
replay protocol
version migration
Phase 6 — Core SDK
strings, collections, time
serialization
effect wrappers
Phase 7 — Tooling
task inspector UI
event log viewer
routing dashboard
replay debugger