Introduction
Every language designed before 2023 was optimized for a single tradeoff: minimize friction between human cognitive capacity and machine execution. Assembly to C to managed runtimes to DSLs were different points on the same line. In an LLM-driven workflow, those languages don’t get cheaper to use — they get more expensive. The cost just hides in the LLM’s token count, its retry rate, and the latency it eats per turn. Pre-LLM languages are a hidden tax in the LLM era.
Most of an LLM’s per-turn effort isn’t recalling syntax. It’s translating between the user’s mental model of a system and the language’s structural shape. A language whose primitives don’t match how the system is thought about forces this translation every turn, paying full cost each time.
Aperio is built on a different premise: there exists a substrate-invariant structural model — a recursive hypergraph of typed, lifecycled units called loci — that both human reasoning and LLM reasoning operationalize when working with systems.1 A language whose primitives are that model collapses the translation layer. The mental model and the code share a substrate.
What that looks like in practice
Pick a system you already have a mental model for: the matchmaker behind a multiplayer game. In your head, the thing is a service that holds a queue of waiting players, spawns a match when enough are queued, and goes back to waiting.
Here’s that, in Aperio:
type Player { id: String; name: String; }
type MatchInfo { match_id: String; players: [Player]; }
topic JoinQueue { payload: Player; }
topic MatchReady { payload: MatchInfo; }
@form(vec)
locus Matchmaker {
params { target_size: Int = 4; }
capacity { heap waiting of Player; }
bus {
subscribe JoinQueue as on_join;
publish MatchReady;
}
fn on_join(p: Player) {
self.waiting.push(p);
if self.waiting.len() >= self.target_size {
MatchReady <- assemble_match(self.waiting, self.target_size);
}
}
}
Every clause of the mental-model description has a syntactic home in the code, in roughly the order you thought about them:
- “a service” → locus Matchmaker
- “holds a queue of waiting players” → capacity { heap waiting of Player; } (the @form(vec) annotation gives it queue-like methods)
- “receives players wanting matches” → subscribe JoinQueue as on_join
- “announces matches” → publish MatchReady
- “when enough are queued” → the inline if
The structural correspondence is the point. The same description in Go, Rust, or TypeScript expands into more concerns: mutex selection, channel types, async/await machinery, explicit lifecycle wiring, error-handling at every channel boundary. Each of those is a translation an LLM has to perform every turn. Aperio elides them because the language commits to them at the structural layer.
The choice of @form(vec) here is itself a real design
decision, not an arbitrary one. @form(ring_buffer) gives the
same shape with a hard capacity ceiling and explicit
drop-on-full semantics; @form(hashmap) keyed by player id
gets you natural ID-based cancellation. Forms are how Aperio
exposes those choices — we cover them in Concepts.
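As a sketch of what those alternatives might look like on the same locus (hypothetical: the exact methods each form synthesizes are covered in Concepts, and the pool-vs-heap slot choice here follows the capacity descriptions later in this book):

```
// Hypothetical variant: hard capacity ceiling, explicit drop-on-full.
@form(ring_buffer)
locus Matchmaker {
  params { target_size: Int = 4; }
  capacity { heap waiting of Player; }  // same slot, different lowering
  // ...
}

// Hypothetical variant: keyed storage, natural ID-based cancellation.
@form(hashmap)
locus Matchmaker {
  params { target_size: Int = 4; }
  capacity { pool waiting of Player; }  // keyed-store methods synthesized
  // ...
}
```

The locus body and its bus interface are unchanged across all three; only the storage commitment moves.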
See it on your own code
The matchmaker above is a constructed example. The claim is
testable on code you already have. In whatever LLM-coding tool
you use (Claude Code, Cursor,
whatever), drop this project’s
AGENTS.md
into the agent’s context, then ask it to re-read a module or
service from your existing codebase in terms of loci,
contracts, and bus topics.
What usually comes back is a structural decomposition that matches your mental model of the system with surprising accuracy — because the agent is using the same recursive locus vocabulary you already use when reasoning about the code. The friction you normally feel between how you think about this system and what’s literally on the page largely disappears.
If the decomposition looks wrong or unhelpful, the thesis fails for your codebase and that’s useful feedback — open an issue. If it looks right, you’ve felt the structural correspondence from the other direction: not by writing new Aperio code, but by reading your existing code through the same lens.
More than a programming language
The structural model Aperio operationalizes isn’t software-specific. The same recursive hypergraph organizes coordination at every substrate the underlying research program addresses: institutions, biological regulatory networks, physical systems, cognitive architecture. Aperio’s frontend is, in principle, a design language that can target machinery in any of those substrates. The programming-language form is the first instantiation, not the only one. (Held lightly — the immediate work is the language itself.)
Status and shape
This is an experimental language. The compiler ships native codegen via LLVM 18 and a tree-walking interpreter for fast feedback. The semantics are still moving; breaking changes are expected and welcomed.
Continue to Getting Started to install the compiler and
write your first locus. After you’ve felt the shape, the
Concepts chapters walk through the structural model in
depth. For the canonical contract — exactly what the compiler
accepts and what it does — see the Reference section
(which points at the spec/ corpus).
1. The structural model is the subject of an ongoing research program. The first formalization is Rook (2026, forthcoming), Capacity Allocation Model; preprint available on request.
Install
Aperio currently builds from source. You’ll need:
- A Rust toolchain (stable or newer; tested on 1.95+).
- LLVM 18 development libraries, with llvm-config-18 on PATH (or LLVM_SYS_180_PREFIX pointing at an LLVM 18 install). The compiler links against LLVM via inkwell with the llvm18-0 feature; LLVM 17 / 19 / 20 will not work.
- clang on PATH. The compiler invokes it as the linker when producing native binaries (aperio build).
- git on PATH. Used by aperio fetch to clone declared dependencies.
Installing the host dependencies
Debian / Ubuntu
sudo apt install llvm-18-dev libclang-18-dev clang-18 git
# Some apt layouts don't add `llvm-config-18` to PATH by default:
sudo ln -sf /usr/bin/llvm-config-18 /usr/local/bin/llvm-config
If apt doesn’t have an llvm-18-dev package for your release,
add the official LLVM apt source (https://apt.llvm.org/)
following the instructions there for your distro.
macOS (Homebrew)
brew install llvm@18 git
# Tell the build where LLVM 18 lives — Homebrew doesn't link
# llvm@18 into PATH by default to avoid colliding with system clang.
export LLVM_SYS_180_PREFIX="$(brew --prefix llvm@18)"
export PATH="$(brew --prefix llvm@18)/bin:$PATH"
Add the export lines to your shell rc file if you want them
to persist.
Fedora / RHEL
sudo dnf install llvm18-devel clang18 git
Verifying
llvm-config --version # should print 18.x.x
clang --version # should be present
Build the compiler
git clone https://github.com/aperio-lang/aperio
cd aperio
cargo build --release
The aperio binary lands at target/release/aperio. You can
either symlink it onto your PATH or always invoke it via cargo:
cargo run -p aperio-cli --bin aperio -- run hello.ap
Run the test suite
cargo test --release --workspace
The test suite is the source of truth for what the compiler supports today. If a test fails on a clean checkout, that’s a bug — please file an issue.
Project layout (when you start your own)
A project is a directory with one or more .ap files. Optional
companions:
- aperio.toml — manifest listing git dependencies. Run aperio fetch to clone them into vendor/<name>/.
- aperio.lock — auto-generated by aperio fetch, pinning each dep to a resolved commit SHA. Commit this.
- vendor/ — toolchain-managed clones of declared deps, one subdirectory per dep. import "vendor/<name>" as alias; picks them up.
- lib/ (optional) — hand-vendored libraries the user maintains directly. Distinct from vendor/; aperio fetch never writes here. import "lib/<name>" as alias; for these.
There’s no src/, no build directory, no package metadata
beyond aperio.toml. The directory is the project.
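Put together, a small project might look like this (a sketch; the file names are hypothetical, and only the aperio.toml / aperio.lock / vendor/ / lib/ conventions come from the toolchain):

```
myproject/
├── main.ap           // the main locus lives in any .ap file
├── matchmaker.ap     // sibling .ap files, no required layout
├── aperio.toml       // optional: declared git dependencies
├── aperio.lock       // written by `aperio fetch`; commit it
├── vendor/
│   └── somelib/      // cloned by `aperio fetch`
└── lib/
    └── handrolled/   // hand-vendored; the toolchain never writes here
```

From inside the .ap files, the two dependency roots are picked up the same way: import "vendor/somelib" as somelib; and import "lib/handrolled" as handrolled;.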
Your first locus
Save the following as hello.ap:
locus Greeter {
params { name: String = "world"; }
birth() { println("hello, ", self.name); }
}
fn main() {
Greeter { };
Greeter { name: "Aperio" };
}
Run it interpreted:
aperio run hello.ap
You should see:
hello, world
hello, Aperio
What just happened
Greeter is a locus: a typed unit with a lifecycle.
params declares its configurable state with defaults;
birth() is the lifecycle method that runs when an instance is
constructed.
Greeter { } constructs an instance using the default name;
Greeter { name: "Aperio" } overrides it. Both instances run
their birth() body to completion, then dissolve at the end of
the surrounding statement.
That’s the smallest possible Aperio program: a locus with one
field and one lifecycle method, instantiated twice at statement
position. Every program is built out of compositions of this
same primitive — locus declarations with params, lifecycle
methods, and (as you’ll see next) bus interfaces and methods.
Next
Continue to A small program with shape to see two loci communicating across the typed bus. After that, the Concepts chapters walk through the structural model in depth.
A small program with shape
Greeter shows what one locus looks like in isolation. Real
programs are more than one. Loci coordinate over a typed bus
— a publish/subscribe channel where subjects are first-class
declarations, not strings.
Here’s a small program with three loci communicating over one topic:
type Tick { n: Int; }
topic Beats { payload: Tick; }
locus Counter {
params { sum: Int = 0; }
bus { subscribe Beats as on_beat; }
fn on_beat(t: Tick) { self.sum = self.sum + t.n; }
}
locus Echoer {
bus { subscribe Beats as on_beat; }
fn on_beat(t: Tick) { println("tick: ", t.n); }
}
locus Pulse {
params { iters: Int = 4; }
bus { publish Beats; }
run() {
let mut i = 1;
while i <= self.iters {
Beats <- Tick { n: i };
i = i + 1;
}
}
}
fn main() {
let c = Counter { };
Echoer { };
Pulse { iters: 4 };
print("sum=");
println(c.sum);
}
Save it as beats.ap and run:
aperio run beats.ap
Output:
tick: 1
tick: 2
tick: 3
tick: 4
sum=10
What’s happening
Three loci, one topic, two subscribers.
- type Tick is a value-shape record. No lifecycle, no flow — pure data that crosses the bus.
- topic Beats names a typed channel carrying Tick values. The payload type travels with the declaration, not with each subscriber.
- Counter subscribes to Beats; its on_beat handler accumulates t.n into self.sum.
- Echoer subscribes to the same topic and prints each tick. Two subscribers, one topic, no coordination needed between them — the bus does fan-out invisibly.
- Pulse publishes four ticks, then exits its run() body.
Notice what’s not in the program:
- No channel-creation boilerplate. The topic IS the channel.
- No subscriber-registration calls. The bus { subscribe ... } block IS the registration.
- No event loop. The runtime drains pending bus events at cooperative yield points; run() and the handlers compose naturally.
- No coordination between Counter and Echoer. The fact that two loci listen to the same topic is not their concern; it's the bus's.
Locus lifetimes here
Three different locus shapes get instantiated in main:
- let c = Counter { }; — a let-bound locus. c is a handle to the locus; the binding stays valid for the rest of the function. Counter dissolves at the end of main.
- Echoer { }; (no binding, but has bus subscriptions) — a long-lived anonymous child. Because Echoer has a bus subscription, the runtime keeps it alive past the statement boundary so it can still receive events. It dissolves at the end of main alongside Counter.
- Pulse { iters: 4 }; (no binding, has run() but no subscriptions) — a statement-position literal with work to do. Its run() body fires synchronously, all four ticks flow through the bus, and Pulse dissolves at the statement boundary.
The pending bus events fire before Pulse dissolves, so by
the time println(c.sum) runs, both subscribers have
processed all four ticks.
Where to next
This program already raises questions the Concepts chapters answer:
- What’s the rule about who subscribes vs. who publishes? — See The bus.
- Why does an anonymous Echoer stay alive but an anonymous Pulse doesn't? — See Lifecycle & time.
- What's the right way to organize this program if there were ten subscribers, or if Counter's state had to survive a restart? — See The locus and Modeling — how to think in Aperio.
The next section is Concepts, which walks through the structural model one primitive at a time.
The locus
α — What is a locus, and why is everything one?
The locus is the single structural primitive Aperio gives you.
Apps are loci. Services are loci. Handlers, caches, pools,
queues, namespaces, schedulers, libraries — all loci. There is
no class, no module, no actor, no package. There’s one
shape, and you compose it.
Anatomy
A locus is a typed unit with up to seven kinds of members. None are required; you opt in to the ones you need.
@form(vec) // optional: form lowering
locus Matchmaker : projection chunked, // optional: annotations
schedule cooperative {
params { // declared state
target_size: Int = 4;
}
contract { // typed surface across the boundary
expose pending_count: Int;
}
bus { // typed pub/sub interface
subscribe JoinQueue as on_join;
publish MatchReady;
}
capacity { // bounded storage discipline
heap waiting of Player;
}
birth() { /* setup */ } // lifecycle: 5 methods
accept(c: T) { /* on child arrival */ }
run() { /* steady state */ }
drain() { /* prepare to dissolve */ }
dissolve() { /* teardown */ }
on_failure(c: T, err: Error) { ... } // recovery policy
mode bulk(...) -> ... { ... } // optional: kernel projections
mode harmonic(...) -> ... { ... }
mode resolution(...) -> ... { ... }
closure books_balance { // structural invariants
sum(intent.pnl) ~~ sum(book.pnl) within 0.05d;
}
fn on_join(p: Player) { ... } // member functions
}
You’ll never use all of these in one locus. Most loci use three
or four. The point of the surface isn’t completeness — it’s that
every distinct kind of structural commitment a unit can make
has a syntactic home. State goes in params. What crosses the
parent ↔ child boundary goes in contract. What goes over the
bus goes in bus. Bounded storage goes in capacity. Failure
policy goes in on_failure. Invariants that must hold across
the locus’s lifetime go in closure. Each commitment is
declared, not inferred from code.
Walking through the surface
params is the locus’s state. It’s both initialized at
construction (Matchmaker { target_size: 8 }) and mutated at
runtime (self.target_size = 6; inside a method). Aperio
collapses the parameter/state distinction the way Ruby
collapses parameter/@foo-instance-variable. There is no
separate state block.
contract declares what crosses the boundary between this
locus and its parent. expose is what the parent can read;
consume is what the parent must provide (when this locus is
itself the parent of children that expose the named field). The
contract is the only surface the parent sees — internal state
not exposed is invisible.
bus declares typed pub/sub. subscribe Topic as handler
binds an incoming message stream to a handler function on the
locus body. publish Topic authorizes outbound sends on that
topic via Topic <- payload;. Subjects are first-class typed
declarations (topic JoinQueue { payload: Player; }), not
strings.
capacity declares bounded storage other than the locus’s
implicit arena. pool X of T; is fixed-shape cell recycling.
heap Y of T; is growable storage individually freed during
the locus’s lifetime. The @form(...) annotation on the locus
picks a high-level lowering — @form(vec) over a heap slot
synthesizes push / pop / len methods; @form(hashmap)
over a pool slot synthesizes keyed-store methods. You’ll
choose between forms based on access pattern; you don’t
write the storage code yourself.
Lifecycle methods are not regular fns. They’re
state-machine transitions the runtime invokes:
- birth() runs once at construction.
- accept(c) runs when a child locus is attached (per parent policy; see the next chapter).
- run() is the steady-state loop, if any.
- drain() halts new work but lets in-flight work finish.
- dissolve() tears down the locus's region.
Every locus has all five available; the compiler supplies
defaults for any you omit (birth no-ops, dissolve frees the
region, etc.).
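For instance, a locus can override just the teardown half of the lifecycle and let the compiler supply the rest. (A sketch: flush_pending and close_log are hypothetical member functions, and we assume a user-supplied dissolve() body runs before the region itself is freed.)

```
locus LogSink {
  params { path: String; }

  // birth, accept, and run fall back to compiler-supplied defaults.

  drain()    { flush_pending(self); } // stop taking new writes, flush in-flight
  dissolve() { close_log(self); }     // last user code before the region is freed

  fn flush_pending(s: LogSink) { /* ... */ }
  fn close_log(s: LogSink)     { /* ... */ }
}
```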
on_failure(c, err) is the parent’s recovery policy when a
child fails. The handler chooses among restart, quarantine,
bubble, and dissolve, or absorbs the failure by returning
normally. (Failure
itself is covered in detail in
The two failure channels.)
mode bulk / mode harmonic / mode resolution are three
named projections of the same kernel computation — vectorized
bulk processing, per-class projection, single-decision
resolution. A locus declares whichever subset it operates in;
they share state through the same arena. You’ll rarely declare
all three.
closure is a structural invariant that must hold across
some declared epoch (e.g., every dissolve, every tick, every
duration window). The ~~ operator means “approximately
equal within tolerance.” A closure that fails routes through
on_failure like any other structural failure.
Closures also serve as named structural-failure types that
member functions can fire inline. The epoch inline variant
declares a closure whose only firing mode is explicit
violate NAME from a method body; an optional captures: f1, f2
clause names locus state to snapshot into the violation payload.
This shape is the bridge between the value channel and the
structural channel — covered in detail in
The two failure channels. (Spec reference:
F.27 in spec/design-rationale.md.)
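As a sketch of that inline shape (hypothetical locus and fields; the exact placement of the epoch inline and captures clauses is a guess from the prose above, so treat F.27 in spec/design-rationale.md as canonical):

```
locus Book {
  params { pnl: Decimal = 0.0d; }

  // Inline-epoch closure: no checked expression. Its only firing
  // mode is an explicit `violate` from a method body; `captures`
  // names state to snapshot into the violation payload.
  closure pnl_out_of_band {
    epoch inline;
    captures: pnl;
  }

  fn apply_fill(f: Fill) {
    self.pnl = self.pnl + f.pnl;
    if self.pnl < -100.0d {
      violate pnl_out_of_band;  // routes through on_failure like any
                                // other structural failure
    }
  }
}
```

The value channel (the if test on ordinary state) decides; the structural channel (the closure violation) carries the consequence.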
locus vs type
If you’ve gotten this far you may be wondering when to use a
locus vs Aperio’s other declarative primitive, type.
type Player { id: String; name: String; }
type is pure shape. A record. No lifecycle, no flow, no
state machine, no bus participation. Construct, pass around by
value, compare. The bus carries types as payloads. Your locus’s
params are typed by types.
type and locus are not parallel categories — they’re
points on a gradient. A type is a locus in proto-form: shape
declared, but no flow attached yet. If the thing you’re
modeling starts as data and grows lifecycle (a Cache that’s
loaded / probed / evicted; an Order that’s submitted /
filled / cancelled), you don’t bolt methods onto the type —
you promote it to a locus. There is no third primitive.
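As a sketch of that promotion (a hypothetical Order: the names and lifecycle bodies are illustrative, not from a real library):

```
// Before: pure shape. Constructed, passed by value, compared.
type Order { id: String; qty: Int; }

// After: the same shape promoted to a locus once it grows lifecycle.
// The fields move into params; the submitted/filled/cancelled flow
// becomes lifecycle methods instead of methods bolted onto the type.
locus Order {
  params { id: String; qty: Int; filled: Int = 0; }
  birth()    { /* submit to the venue */ }
  drain()    { /* cancel anything still in flight */ }
  dissolve() { /* settle and report */ }
}
```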
The one-tower rule
The deepest commitment Aperio makes about modeling is this:
Every named quantity in your model must be assignable to exactly one locus in one locus tower.
State that “lives between” loci — a global variable, a shared mutable buffer, a side-channel cache nobody owns — is a signal of modeling error, not a framework gap. When the language seems to resist where you want to put a piece of state, the productive move is to find the locus that should own it, not to invent a workaround.
This rule exists because every other guarantee Aperio makes depends on it. Wholesale region freeing at dissolve, vertical-only flow, the closure-violation channel, the deterministic cleanup cascade — all of them assume each piece of state has exactly one owning locus. When state floats, those guarantees unravel at the floating point.
The rule is also what enables the structural correspondence you saw in the intro. When the mental model says “the matchmaker holds the queue,” it’s because the queue belongs to exactly one tower. The locus surface lets you write that down directly.
Modeling — how to think in Aperio develops this rule into concrete patterns and points at a forthcoming companion library that helps you make ownership decisions explicit.
Next
The next chapter, Recursive composition, shows how loci nest inside loci, what crosses the boundary, and why flow is vertical-only — siblings never see each other directly.
Recursive composition
α — How do loci nest inside loci, and why is flow vertical-only?
A program built from loci is a tree. The runtime root is at
the top; main’s implicit locus is one level down; the loci
that main instantiates are below that; their children are
below them. Every running Aperio program — your cli-demo,
your matchmaker, your trading system — is a tower of loci,
arbitrarily deep.
This chapter covers how the nesting works, what crosses the boundary between parent and child, and the single rule that makes the whole structure tractable: flow is vertical-only.
Parent and child
A parent locus declares interest in a child type by
implementing accept:
locus Matchmaker {
params { target_size: Int = 4; }
// ... bus / capacity / etc.
accept(g: GameSession) {
// runs BEFORE g's region is allocated; can reject
// by returning early or routing through on_failure
}
}
The child is brought into being by an instantiation literal:
locus Matchmaker {
// ...
fn on_join(p: Player) {
self.waiting.push(p);
if self.waiting.len() >= self.target_size {
GameSession { players: drain_players(self) };
}
}
}
When GameSession { ... } is evaluated inside a parent’s
method body, the runtime:
- Runs accept(g) on the parent. If it returns normally, the child proceeds.
- Allocates the child's region as a sub-region of the parent's. (Region details in Capacity & storage.)
- Runs birth() on the child synchronously.
- Schedules run() to begin.
When the parent eventually drains, every child drains first (depth-first), then the parent does. Region cleanup is wholesale and deterministic.
What crosses the boundary
The contract block is the typed surface that bridges parent and child:
locus GameSession {
params { players: [Player]; tick_count: Int = 0; }
contract {
expose tick_count: Int; // parent can read
expose state: SessionState;
consume time_source: Time; // parent must provide
}
// ...
}
locus Matchmaker {
contract {
expose pending_count: Int;
consume time_source: Time; // routes through to GameSession
}
accept(g: GameSession) {
// g.tick_count and g.state are visible here
// — they're contract-exposed by g.
if g.tick_count > 1000 {
// ...
}
}
}
The rule is strict: the parent sees only what the child
exposes. Internal state not named in the contract is
invisible from outside the child. Conversely, the child reads
into its parent only via consume entries that the parent
agrees to provide.
This is not a convention enforced by reviewers. The
typechecker rejects an attempt to read child.private_field
when private_field isn’t in the contract. You don’t have to
think about hiding; the structural boundary does the hiding
for you.
Vertical-only flow
Here’s the single rule the whole compositional model rests on:
Within a locus tower, flow is vertical only. Parents read into children through the contract; children write upward through the contract. Siblings do not see each other directly. Cousins do not see each other directly. There is no lateral flow within a tower.
If two siblings need to coordinate, they don’t reference each other. They route through their shared parent:
locus Matchmaker {
accept(g: GameSession) { /* ... */ }
fn handle_game_end(g_id: String, winner: Player) {
// siblings — the game-sessions — do not call each other.
// The matchmaker (parent) mediates: it has both games
// visible via self.children, and it can publish to
// whichever subjects each needs.
}
}
If sibling coordination is common enough that routing through
the parent feels like ceremony, the language is telling you the
parent is missing logic. The Matchmaker should be the place
that knows how games coordinate with each other — that’s
exactly the role it’s in.
The rule exists because the substrate’s other guarantees require it:
- Memory safety without a garbage collector or borrow checker. Wholesale region cleanup at dissolve works because no pointer crosses sideways. Two siblings can dissolve in either order without worrying about one’s pointer dangling into the other.
- Failure traversal. When a child fails, the failure flows up to the parent's on_failure, never sideways. The whole tree's recovery policy is local; no failure can reach a sibling without first being absorbed (or escalated) by the shared parent.
- Reasoning at scale. When you look at a locus, you know every coordination path: down to its children, up to its parent. You never have to guess whether some sibling somewhere has a back-channel.
The exception that proves the rule: the bus
You’ll notice there’s one mechanism in Aperio that does appear to let loci communicate without a direct parent-child relationship: the bus. A subscriber on one branch of the tree and a publisher on a completely unrelated branch can both reference the same topic.
This is not a violation of vertical-only flow — it’s the mediation of lateral coordination through a substrate that’s structurally above both parties. The bus router runs at the runtime root; topics are declared globally; every send and every dispatch passes through a substrate locus higher than any subscriber. The two loci don’t see each other; they see the topic, which the substrate sees.
This is how Aperio reconciles “everything is a tower of vertical relationships” with “real systems need many-to-many event flow.” The bus is covered in detail in The bus.
Region nesting
A side effect of strict vertical flow is that memory nests the same way the loci do. Each locus owns a region; each child’s region is a sub-region of its parent’s:
runtime root region
├── main's implicit-locus region
│ ├── Matchmaker region
│ │ ├── GameSession A region
│ │ ├── GameSession B region
│ │ └── GameSession C region
│ └── (other top-level loci)
When a locus dissolves, its entire sub-tree of regions is freed wholesale. No traversal, no per-object cleanup, no “did I forget something?” — the cleanup is structural.
This is one of the load-bearing reasons Aperio doesn’t need a garbage collector or a borrow checker. The hierarchy is the ownership graph; vertical-only flow guarantees no foreign pointer crosses the boundaries; wholesale free-on-dissolve is sound.
Next
The next chapter, The bus, covers how typed pub/sub flows through the substrate and connects loci that have no direct parent-child relationship — without violating the vertical-flow rule that makes the whole structure tractable.
The bus
α — How do loci communicate without referring to each other by name?
The bus is Aperio’s typed pub/sub channel: the way two loci
that don’t sit in a parent ↔ child relationship still
coordinate. It’s not a library, not a std::* namespace — it’s
a first-class language primitive with grammar and typecheck
support.
This chapter covers what a topic is, how subscribe / publish fit into a locus body, how a topic that’s purely in-process by default can be wired to a network transport at deployment time without changing any code, and the optimization the compiler runs when a topic happens to be used only within one locus.
Topics are first-class
Where most actor / pub-sub systems use string subjects, Aperio uses typed topic declarations:
type Player { id: String; name: String; }
type MatchInfo { match_id: String; players: [Player]; }
topic JoinQueue { payload: Player; }
topic MatchReady { payload: MatchInfo; }
A topic names a channel. The payload: T field declares the
type that flows on it. Topics are top-level declarations, the
same shape as type or locus. They live in the program’s
namespace and are referenced by name, not by string.
This buys you four things:
- Type-checking at the publish site. JoinQueue <- value typechecks value against Player before any code runs. No "I forgot to update the subject when the payload changed" bugs.
- Type-checking at the handler. A subscribe JoinQueue as on_join requires fn on_join(p: Player) somewhere on the locus body. Wrong type → diagnostic at the locus, not at runtime.
- Refactoring works. Rename JoinQueue → PlayerJoin and every reference moves with it. Subject names aren't strings sprinkled across the codebase.
- No protocol drift. Publisher and subscriber compile from the same source; the type is the contract.
Subscribing and publishing
A locus declares its bus interface in a bus block:
locus Matchmaker {
capacity { heap waiting of Player; }
bus {
subscribe JoinQueue as on_join; // inbound
publish MatchReady; // outbound authorization
}
fn on_join(p: Player) {
self.waiting.push(p);
if self.waiting.len() >= 4 {
MatchReady <- assemble_match(self.waiting); // <- is the send
}
}
}
Three constructs:
- subscribe TOPIC as HANDLER; — wires inbound messages on TOPIC to the handler function HANDLER on this locus. The handler is a regular fn somewhere on the body with signature fn HANDLER(payload: T), where T is the topic's declared payload type.
- publish TOPIC; — authorizes this locus to emit on TOPIC. Without the declaration, a <- send to the topic is a typecheck error.
- TOPIC <- value; — the send. Statement-shape only; produces no value. The Erlang-shape (Pid ! Msg) one-directional send.
Subscribing is declarative. There’s no subscribe() function
to call at runtime; the registration happens when the locus is
constructed, before birth() runs. Unsubscribing happens
automatically when the locus dissolves.
Why this preserves vertical-only flow
You may notice the bus connects two loci that aren’t parent and child. Doesn’t that break the vertical-only flow rule from the previous chapter?
It doesn’t, because publishers and subscribers don’t actually see each other. They see the topic. The topic is a declaration at the runtime root — structurally above every locus that participates. Every send goes up through the bus router (which lives in the substrate); every dispatch comes down into the subscriber. From any participant’s view, the bus is vertical flow through a shared root, not lateral flow to a sibling.
This is the productive shape because it gives you many-to-many event flow without back-channels. Two loci on opposite branches of a deeply nested tree can coordinate by both referencing the same topic — no shared pointer, no global registry, no name lookup at runtime.
Bindings — same topic, different transport
Here’s where the bus pays for itself. The publisher and
subscriber in the matchmaker example look identical regardless
of whether the topic is delivered in-process, over a Unix
socket, over TCP, or over NATS. The choice of transport is a
deployment-time decision made in one place — the program’s
main locus:
main locus App {
bindings {
JoinQueue: in_memory; // default
MatchReady: unix("/tmp/matches.sock") : listen; // AF_UNIX
}
run() {
Matchmaker { target_size: 4 };
}
}
The bindings block is only legal in a main-modified locus.
Each entry pairs a topic with a transport spec. Four shapes
ship in v1:
- in_memory — same-binary cooperative queue. The default when a topic has no binding; the publisher's send enqueues on a queue that the subscriber drains at its next yield point.
- unix("/path") : listen | connect — AF_UNIX framed-byte transport. listen spawns a reader thread; connect opens a write side. Same topic name, two binaries, one on each side of the socket.
- tcp("host", port) : listen | connect — TCP variant (parses but unimplemented in v1; coming).
- nats("nats://...", ...) — NATS subject mapping (also parses but unimplemented).
The point isn’t the transport list — it’s that the publisher
code and subscriber code don’t change when you flip the
binding. A locus that subscribes to JoinQueue doesn’t know
whether the publisher is in the same process or on the other
side of a Unix socket. The deployment seam is the only place
that knows.
This is what makes the same locus code reusable across test (in-memory), single-binary (in-memory), and multi-binary (unix / tcp / nats) deployments. The library writer doesn’t choose; the application writer does.
Hierarchical topics + wildcards
Topics can declare a parent and inherit a dotted wire-subject hierarchy:
topic Events { payload: Event; subject: "events"; }
topic Login : Events { payload: Login; subject: "login"; }
topic Logout : Events { payload: Logout; subject: "logout"; }
Login’s wire subject is "events.login". Logout is
"events.logout". The hierarchy is purely a subject naming
convention — each topic is still its own typed declaration.
Subscribers can use ** wildcards to catch a whole subtree:
locus AuditLog {
bus { subscribe "events.**" as on_event; }
fn on_event(payload: Bytes) { /* log every event */ }
}
Where the literal-subject form ("events.**") accepts any
matching topic by wire subject, the typed-topic form
(subscribe Events as ...) keeps the strict-type discipline.
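For contrast with the literal-subject AuditLog above, a hedged sketch of the typed form (locus name hypothetical): subscribing to Login by topic name keeps the handler payload fully typed, rather than Bytes.

```aperio
locus LoginCounter {
    params { logins: Int = 0; }
    bus { subscribe Login as on_login; }   // typed-topic form: strict payload type
    fn on_login(l: Login) {
        self.logins = self.logins + 1;     // wire subject is still "events.login"
    }
}
```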
The closed-world optimization
If a topic is only used inside one locus type — same locus
publishes and subscribes, no binding to an external transport —
the compiler can prove that every send necessarily routes back
to a handler on the same locus instance. In that case, the
desugar pass rewrites the <- send into a direct method call.
The bus is elided.
This means you can use topics freely for internal event flow inside a complex locus without paying the bus dispatch cost. When a topic later gains a second subscriber or a deployment binding, the optimization stops applying automatically and the bus path comes back. The user-visible code doesn’t change.
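A minimal sketch of a topic that qualifies for the optimization (names hypothetical): Recalc is published and subscribed only by Cache itself and has no bindings entry, so every send provably routes back to on_recalc and desugars to a direct method call.

```aperio
topic Recalc { payload: Int; }

locus Cache {
    params { total: Int = 0; }
    bus {
        subscribe Recalc as on_recalc;
        publish Recalc;
    }
    fn add(n: Int) {
        Recalc <- n;   // closed world: lowered to self.on_recalc(n); bus elided
    }
    fn on_recalc(n: Int) {
        self.total = self.total + n;
    }
}
```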
Cross-thread bus semantics
Most loci default to : schedule cooperative and share a
single scheduler thread. Bus dispatch between cooperative
subscribers is a fast in-process enqueue.
A locus annotated : schedule pinned owns its own OS thread.
Bus traffic to or from a pinned locus crosses a thread
boundary via a lock-protected mailbox. The semantics are
identical from the user’s view — Topic <- payload; still
works the same way — but the substrate adapts. Schedule
classes are covered in Lifecycle & time.
Next
The next chapter, Capacity & storage,
covers what else a locus can hold besides its params —
bounded storage slots, projection classes, and the form
library that gives you growable buffers, hashmaps, and ring
buffers without parametric collection types.
Capacity & storage
α — How does a locus declare what it holds, and how does that commitment shape its lowering?
A locus’s params declare its baseline state — typed fields,
mutable from any of its methods, alive for the locus’s
lifetime. That’s enough for many loci. But once a locus needs
to hold a collection — a queue of pending work, a hashmap of
sessions, a recent-events ring buffer — params runs out. You
need bounded storage with a discipline.
This chapter covers three layered concepts: capacity slots
(the substrate-level storage primitives), projection
classes (how a locus declares the resolution at which it
serves observations of its children), and forms (the
application-layer storage discipline annotations: @form(vec),
@form(hashmap), @form(ring_buffer)).
The implicit Arena
Every locus has an implicit slot 0: its Arena. The Arena is a bump allocator for everything the locus’s body allocates transiently — string concatenations, struct literals constructed inside a method, short-lived intermediate values. Allocations into the Arena are freed wholesale when the locus dissolves; nothing else needs to track them.
You never write the Arena down. It’s there because it’s universal. When this chapter talks about capacity slots it means slots 1..N, the storage commitments above the implicit floor.
Slot kinds: pool and heap
A capacity { ... } block declares 1..N storage slots:
locus Matchmaker {
capacity {
heap waiting of Player; // slot 1: growable, locus-bounded
pool sessions of Cell; // slot 2: fixed-shape, recyclable
}
}
Two slot kinds, two commitments:
- heap X of T; — growable storage bounded by my own lifetime. Individual cells alloc and free during the locus’s life; the whole region frees wholesale at dissolve. This is the right shape for things whose retained size isn’t known at construction.
- pool Y of T; — bounded recyclable cells of a fixed shape. The population is bounded; individual values come and go, but the slot doesn’t grow indefinitely. Right for map-style buckets, fixed-shape registries, per-handler scratch frames.
The slot name is yours; idiomatic names are waiting,
entries, bindings, routes, bytes. The cell type can be
any value-shape: a primitive, a type struct, a generic
parameter. Slots cannot hold locus references. Locus
membership goes through accept(c: Child), not slots — slots
are for values.
At this layer the user-facing API is method-shaped. A heap
slot exposes alloc() and free(); a pool slot exposes
acquire() and release():
let cell = self.entries.acquire();
// ... mutate cell ...
self.entries.release(cell);
This is fine for some uses, but verbose for most. The forms layer replaces it with method sets that match how you’d normally think about the storage.
Forms — the high-level annotation
A @form(...) annotation on a locus picks a high-level
lowering for one of its capacity slots and synthesizes a
matching method set. The user writes the locus once; the
compiler emits a tight, hand-rolled-C-class implementation.
Three forms ship in v1:
@form(vec) — growable contiguous buffer
@form(vec)
locus PlayerQueue {
capacity { heap items of Player; }
// synthesized: push, get, pop, len, is_empty
}
fn main() {
let q = PlayerQueue { };
q.push(Player { id: "p1", name: "Anna" });
q.push(Player { id: "p2", name: "Bo" });
let first = q.get(0) or raise;
}
The Aperio analogue of Vec<T> / std::vector<T> / Go slices.
Backed by a doubling-realloc buffer. push is amortized O(1).
get and pop are fallible(IndexError) — see the next
chapter on the failure channels for what or raise means.
@form(vec) requires exactly one heap slot. The slot’s cell
type becomes the vec’s element type.
@form(hashmap) — intrusive open-addressing table
type CmdEntry { name: String; handler: Int; }
@form(hashmap)
locus CmdRegistry {
capacity { pool entries of CmdEntry indexed_by name; }
// synthesized: set, get, has, remove, len, is_empty
}
fn main() {
let r = CmdRegistry { };
r.set(CmdEntry { name: "spawn", handler: 1 });
let entry = r.get("spawn") or raise;
}
The Aperio analogue of Map<K, V> / std::unordered_map. The
key is intrusive — the cell type carries its own key as a
named field declared via indexed_by. set(value) takes the
whole value and extracts the key. This shape is structurally
different from HashMap<K, V> (no separate K and V slots) and
reflects how real keyed stores almost always look in practice:
the key is one of the fields.
@form(hashmap) requires exactly one pool slot with an
indexed_by FIELD clause. The slot’s cell type must be a
user-declared struct; the indexed-by field must be Int or
String.
@form(ring_buffer, cap = N) — fixed-capacity FIFO
@form(ring_buffer, cap = 64)
locus RecentCmds {
capacity { pool history of CmdEntry; }
// synthesized: push -> Bool, pop -> fallible(EmptyError),
// len, is_full
}
A bounded circular buffer. push returns Bool — true on
success, false when the buffer is at capacity (so callers
choose drop-vs-backpressure). pop is fallible-on-empty.
@form(ring_buffer) requires a pool slot and the
annotation arg cap = N (positive integer literal).
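A hedged usage sketch for RecentCmds as declared above, showing both halves of the contract: the Bool return on push (the caller chooses drop vs. backpressure) and the fallible pop addressed with an or substitution.

```aperio
fn main() {
    let recent = RecentCmds { };
    let ok = recent.push(CmdEntry { name: "spawn", handler: 1 });
    if !ok {
        // at capacity: this caller drops; another might yield and retry
    }
    // pop is fallible(EmptyError); substitute a sentinel entry on empty
    let last = recent.pop() or CmdEntry { name: "none", handler: 0 };
}
```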
Why forms instead of Vec<T>?
Two reasons.
The structural reason. A growable buffer is a storage
discipline, not just a parameterized type. Vec<T> in Rust
glues “contiguous memory, dynamic length, owning the cells”
into one type. But in Aperio’s substrate, every one of those
commitments is a separate decision: who owns the memory (the
locus does), where it lives (in the locus’s slot), how it
grows (doubling realloc), what happens on dissolve (region
freed). The @form(vec) annotation makes those decisions
explicit at the declaration site.
The pragmatic reason. Each form has a single canonical
lowering tuned for the substrate. @form(vec)’s lowering is
within a few percent of hand-written C for push-heavy
workloads (verified by a microbench in bench/micro/). You
don’t get a slow generic implementation that “works for any
type”; you get a tight implementation specialized for your
cell type via monomorphization.
The downside, in fairness: you can’t pass a @form(vec) of
Player as an argument of type Vec<Player> to some library
function expecting a generic collection. The forms are
locus-shaped: each form is a locus type. If you want shared
APIs across forms, you write an interface (see
The locus on interface I { ... }).
Projection classes
Forms are about how a locus stores cells of a type. Projection classes are about something different: how a parent locus serves observations of its accepted children to the observer above it.
locus Pool : projection chunked {
accept(w: Worker) { /* ... */ }
}
Three projection classes:
- rich — fine-grained. The parent serves observations of named individual children. Typical N ≈ 4-10. Each child carries its own state worth observing in detail. Storage consequence: per-child arenas, low churn.
- chunked — mid-grained. The parent serves observations over chunks or ranges of its children. Typical N ≈ 10-30. Storage consequence: per-coordinatee sub-regions inside the parent’s arena, freed on each child dissolution.
- recognition — aggregate. The parent serves population-level views (“represent as a histogram”, “as a curve”, “as a count”). Typical N ≈ 100-500. Individual children are not addressed by name. Storage consequence: pre-allocated fixed pool sized at parent birth; cell stride derived from the accept-method type union.
The projection class affects allocator strategy, sub-region
nesting, and the cost of iterating self.children. It does
not affect the surface methods on the parent or the
children — same code reads from a rich pool or a chunked
pool. The annotation is a commitment about resolution; the
compiler picks the allocator that makes that resolution cheap.
You rarely need to think about projection classes when writing ordinary application code. They become load-bearing when you’re designing a parent that genuinely has many children (workers, sessions, agents) and you want to commit to the observation resolution upfront.
Forms and projection classes are orthogonal
Both annotations can appear on the same locus:
@form(hashmap)
locus SessionPool : projection chunked {
capacity { pool sessions of Session indexed_by id; }
accept(w: Worker) { /* ... */ }
}
@form(hashmap) controls how the sessions slot’s storage is laid out and what methods get synthesized. projection chunked controls how the parent serves observations of its accepted Worker children. The two operate on different slots of different shape and don’t interfere.
When to use what
| You need | Reach for |
|---|---|
| One value per field | params |
| Growable list of T | @form(vec) |
| Keyed store, key is a field of T | @form(hashmap) |
| Bounded FIFO, drop-on-full | @form(ring_buffer) |
| Parent holds many children, named | accept + rich projection |
| Parent holds many children, chunked | accept + chunked projection |
| Parent holds many children, aggregate | accept + recognition projection |
| Raw cell recycling with custom logic | pool X of T directly |
Next
The next chapter, The two failure channels,
covers the two orthogonal failure mechanisms — closures /
on_failure for structural failure, and fallible(E) /
or-disposition for value-level errors — and the rule for
which one to use where.
The two failure channels
α — Why does Aperio have two separate failure mechanisms, and how do you choose between them?
Failure-handling is where most languages quietly accumulate
the largest amount of accidental complexity. Exceptions vs.
sentinels vs. error returns vs. Result<T, E> vs. panics —
many languages have several of these layered, with different
disciplines for when to use which, often in the same codebase.
Aperio carves the space cleanly into two orthogonal channels, with strict rules about which is allowed where:
- The structural channel (↑): a locus’s declared invariant breaks. The runtime constructs a typed event and routes it upward to the parent’s on_failure handler. Recovery primitives (restart, quarantine, bubble, dissolve) decide what to do.
- The value channel (fallible(E)): an individual call can fail with a payload. The caller MUST address the error inline via an or clause before consuming the value.
There is no panic, no assert, no try/catch, no
implicitly-propagating exception system. The two channels
above cover every legitimate failure case; anything else
indicates a category error in the modeling.
The structural channel
A locus has commitments it must hold across its lifetime.
Those commitments are declared in closure blocks:
locus PnLAttribution {
params { intent_pnl: Decimal = 0.00d; book_pnl: Decimal = 0.00d; }
closure books_balance {
self.intent_pnl ~~ self.book_pnl within 0.05d;
epoch tick;
}
}
The ~~ operator is approximate equality within tolerance.
The closure says: at each tick, my intent PnL and book PnL
must agree within five cents. The runtime evaluates the
expression at each declared epoch; if it holds, nothing
happens (closures are silent on success). If it doesn’t, the
runtime constructs a typed ClosureViolation event and routes
it to the parent’s on_failure:
locus TradingDesk {
accept(p: PnLAttribution) { /* ... */ }
on_failure(p: PnLAttribution, err: Error) {
match err {
Error::ClosureViolation(v) -> {
// err.closure is "books_balance"
// err.left, err.right are the two values
// err.tolerance is 0.05d
// err.diff is left - right
quarantine(p) for 60s;
}
_ -> bubble(err);
}
}
}
The parent’s recovery options:
- Absorb — return from on_failure without calling any recovery primitive. The child’s failure is treated as “noted, not propagating.”
- restart(child) — dissolve the child and instantiate a fresh one with the same declared params.
- restart_in_place(child) — reset the child to post-birth state while preserving its arena.
- quarantine(child) for d — pause the child but preserve its state, optionally auto-restart after d.
- bubble(err) — pass the failure up to this locus’s parent. Recursive propagation.
- dissolve(child) — force-dissolve the child.
If a failure bubbles all the way past the runtime root with no handler absorbing, the process exits non-zero with a structured violation report on stderr. That’s the only way the program “crashes” — and it’s a deliberate, structured event, not an unexpected exception.
This is Erlang’s let-it-crash philosophy with one important addition: the parent’s policy is typed and declared. You write the recovery rule next to the locus it applies to, and it can be different for different child types. The runtime enforces the state machine — a child can’t be running and quarantined at the same time, can’t accept while draining, etc.
The value channel
Sometimes a function can fail in a way that’s not a structural event — just “this call didn’t produce a value, here’s why”:
fn parse_player_id(s: String) -> PlayerId fallible(ParseError) {
if !std::str::can_parse_int(s) {
fail ParseError { kind: "not_int", input: s };
}
return PlayerId { value: std::str::parse_int(s) };
}
A function declared fallible(E) returns either a value of
the success type or a FallibleErr(E) payload. The caller
must address the error — the typechecker rejects a bare
call result:
let id = parse_player_id(input); // ERROR: "error not addressed"
You address it with an or clause, in one of three motions:
let id = parse_player_id(input) or raise; // propagate up
let id = parse_player_id(input) or default_id(); // substitute
let id = parse_player_id(input) or handle(err); // hand off
- or raise — propagate the error one frame up the static call stack. The enclosing function must itself be fallible(E) (with the same payload type or a compatible one) so the error has somewhere to go. This is the value channel’s version of “let it propagate.”
- or <expression> — substitute a fallback value of the success type. err is implicitly bound to the payload inside the fallback expression. The fallback can be a literal (or 0), an expression (or default_id()), or a call (or handle(err)).
- The error’s payload type is fully typed. You don’t need to downcast or pattern-match a generic Error; the fallible(E) declaration says exactly what shape the payload has.
Chains work right-associatively:
let id = parse_player_id(input) or lookup_default() or raise;
Reads as: try parse; on failure, try lookup_default(); on
that failure, propagate up. Each or disposes one fallible
in turn, reducing the chain toward a non-fallible value.
The value channel is value-level. It propagates through the
static call stack, not the locus tower. Two functions that
both fallible(ParseError) and call each other share the
same payload type and pass it up the stack until something
addresses it.
Where each channel lives
This is the rule that often surprises people coming from other languages:
fallible(E) may be declared on free functions and on stdlib-synthesized @form(...) methods. It may NOT be declared on user-declared locus methods.
Why the restriction? Because locus methods are
substrate-facing. They participate in the locus’s lifecycle
— bus subscription handlers, mode projections, contract reads.
Failures at this layer are structural events, not
value-level errors. They belong on the closure-violation
channel, where the parent’s on_failure is the policy
handler.
If a locus method needs to expose application-layer failure semantics, it wraps a fallible free function:
fn parse_message(b: Bytes) -> Message fallible(ParseError) { ... }
locus Reader {
bus { subscribe Input as on_input; }
fn on_input(b: Bytes) {
let m = parse_message(b) or default_message();
// ... handle m
}
}
The typechecker enforces this. Trying to declare fn ... -> T fallible(E) on a user locus method produces a focused
diagnostic naming the rule.
The reverse direction has a complementary rule: only stdlib-
synthesized form methods (@form(vec).get, @form(vec).pop,
@form(hashmap).get, @form(hashmap).remove,
@form(ring_buffer).pop) declare fallible(E). These are
application-layer storage substrate, not lifecycle-bearing
loci, so the value channel fits.
Bridging the channels: structural failure from value-error context
The two-channel rule keeps locus methods off the value channel —
but real systems regularly need to cross from one to the other.
A locus method catches a value error in an or clause, decides
the error is unrecoverable, and wants to immediately escalate
into the structural channel so the parent’s on_failure policy
takes over.
Aperio’s primitive for this is inline closure violation: a
locus declares a named structural-failure type as an
assertion-less closure with epoch inline, then any member
function can fire it with the violate statement.
type Query { sql: String; }
type Row { data: String; }
type DbError { kind: String; detail: String; }
topic ExecuteQuery { payload: Query; }
topic QueryResult { payload: Row; }
fn send_query(fd: Int, q: Query) -> Row fallible(DbError) {
let sent = std::io::tcp::send_bytes(fd, std::bytes::from_string(q.sql));
if sent < 0 { fail DbError { kind: "send_failed", detail: "connection lost" }; }
let resp = std::io::tcp::recv_bytes(fd, 4096);
if len(resp) == 0 { fail DbError { kind: "recv_empty", detail: "peer closed" }; }
return Row { data: std::str::from_bytes(resp) };
}
locus DbConnection {
params {
host: String = "127.0.0.1";
port: Int = 5432;
conn_fd: Int = -1;
last_error: String = "";
}
bus { subscribe ExecuteQuery as on_query; publish QueryResult; }
// Named structural-failure type. No assertion body; the fire
// IS the violation. The captures clause snapshots state into
// the ClosureViolation payload at the violate site.
closure fatal_io {
captures: last_error;
epoch inline;
}
birth() { self.conn_fd = std::io::tcp::connect(self.host, self.port); }
dissolve() { if self.conn_fd >= 0 { std::io::tcp::close_fd(self.conn_fd); } }
// The "error-check function": takes the error type, returns
// the success type expected at the call site, and chooses
// recovery (return a value) or escalation (violate).
fn handle_io(e: DbError) -> Row {
self.last_error = e.detail;
if e.kind == "send_failed" || e.kind == "recv_empty" {
violate fatal_io; // diverges — no return needed
}
return Row { data: "" }; // transient; substitute
}
fn on_query(q: Query) {
let r = send_query(self.conn_fd, q) or self.handle_io(err);
if !self.draining { QueryResult <- r; }
}
}
Three primitives are doing the work:
- closure fatal_io { ... epoch inline; } — the vocabulary. A named structural-failure type local to this locus. The captures: clause names locus state to snapshot when fired.
- fn handle_io(e: DbError) -> Row — the policy. A member fn shaped exactly for the or clause: takes the error type, returns the success type. Inside, the body decides between recovery (return a value) and escalation (violate). One function can be reused across every fallible call site on this locus that produces Row from DbError.
- violate fatal_io — the trigger. Statement-level, divergent (the typechecker treats it as Never, same as fail in fallible fns and bubble in on_failure). At the next cooperative yield, the runtime transitions this locus to drain. At dissolve, the parent receives the typed ClosureViolation with the captured last_error.
The flow when a value error propagates up:
1. send_query(self.conn_fd, q) fails — returns FallibleErr(DbError {...}).
2. The or self.handle_io(err) clause fires — err binds to the DbError; handle_io runs.
3. handle_io writes e.detail to self.last_error, sees the fatal kind, and executes violate fatal_io.
4. The runtime constructs ClosureViolation { locus: "DbConnection", closure: "fatal_io", captures: { last_error: "connection lost" } } and sets the locus’s internal __drain_requested flag. Control diverges — handle_io never returns to its caller.
5. At the next cooperative yield, the runtime begins drain. dissolve() runs, closing the fd.
6. The parent’s on_failure(c, ClosureViolation { ... }) fires with the snapshot and decides policy (restart / quarantine / bubble / absorb).
Why this composes well
Three roles, three slots, no double duty:
| Slot | Role | Reusable across |
|---|---|---|
| Closure declaration | Vocabulary — named failure type with optional payload schema | The locus type |
| Member fn (error-check) | Policy — decide recovery vs escalation per error kind | Every call site on the locus with same (ErrType, SuccessType) |
| or self.handler(err) at call site | Binding — typechecker-enforced disposition | Every fallible call returning the matching success type |
Compare to the older workaround pattern (a should_exit: Bool
flag, a fatal_error: Bool flag, a while !should_exit { yield; }
loop in run(), a separate diagnostic field, plus a closure to
audit at dissolve): five pieces of state doing what one
closure + one violate + one member fn now do.
A note on Never
violate NAME; is divergent. The typechecker treats it
as the Never type: code after a violate is unreachable
within the current function. This is the same shape fail E;
takes inside a fallible function and bubble(err); takes
inside an on_failure handler — three statement forms whose
“return type” is “control doesn’t return through here.”
That’s what makes the error-check function work cleanly:
fn handle_io(e: DbError) -> Row {
if e.kind == "fatal" {
violate fatal_io; // Never; no return required
}
return Row { data: "" }; // Row; required on the other branch
}
The branches that violate don’t need a return; the branches
that return must provide a value of the declared type. The
typechecker enforces total coverage exactly as it would for a
function that mixes fail and return.
Why two channels and not one?
Languages that have only structural failure (Erlang) make
value-level errors awkward — you end up modeling “couldn’t
parse this int” as a process crash, which is too heavy.
Languages that have only value failure (Rust, Go) make
structural errors awkward — invariant violations end up
sprinkled across every call site as Result<T, Error>
returns, which is too granular and loses the parent-policy-
oriented recovery model.
Aperio splits the concern: structural failure routes up the locus tower with typed policy, and value failure routes up the static call stack with required inline disposition. The two never mix at intermediate frames; the only place they meet is the implicit root boundary (where any unhandled error of either kind ends the process).
In practice the rule of thumb is:
| Failure shape | Channel |
|---|---|
| “This invariant I declared broke” | structural (closure → on_failure) |
| “This individual call can fail and the caller should choose” | value (fallible(E)) |
| “Couldn’t parse” / “key not found” / “out of bounds” | value |
| “Books don’t balance” / “k_max exceeded” / “child wedged” | structural |
No panic / assert
Aperio has no panic(msg), no assert(cond), no throw.
“Impossible state” becomes “a closure asserting the state is
possible” — and when it isn’t, the runtime constructs the
typed violation and routes it up. “Bail from this function”
becomes either or raise (value channel) or “make this a
closure on the locus” (structural channel).
This isn’t asceticism. It’s that every legitimate use of
panic falls cleanly into one of the two channels above,
with better typing and better recovery shape than panic
itself provides.
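As an illustration of that translation (locus and field names hypothetical), a check another language would write as assert(conn_fd >= 0) becomes a named closure fired with violate, following the inline-violation pattern from earlier in this chapter:

```aperio
locus Writer {
    params { conn_fd: Int = -1; }
    // the would-be assert: a named structural-failure type
    closure fd_valid {
        captures: conn_fd;
        epoch inline;
    }
    fn write_frame(b: Bytes) {
        if self.conn_fd < 0 {
            violate fd_valid;   // typed ClosureViolation routes to the parent's on_failure
        }
        std::io::tcp::send_bytes(self.conn_fd, b);
    }
}
```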
Next
The next chapter, Lifecycle & time, covers how loci come into being, run, and dissolve — the state machine the failure channels operate over.
Lifecycle & time
α — How does a locus come into being, run, and dissolve? And what does “concurrent” mean here?
A locus isn’t a static record. It moves through five named
states from construction to teardown, and the runtime
guarantees the ordering. Concurrency in Aperio is not
async/await; it’s the cooperative scheduling of many
loci through their lifecycles, coordinated by bus events and
yield points.
This chapter covers the five lifecycle methods, the two
schedule classes, the cooperative yield model, drain cascade,
the rules for when an unbound locus dissolves vs. stays
alive, and why there’s no async keyword.
The five lifecycle methods
Every locus type has five available lifecycle methods. None are required; the compiler supplies defaults for any you omit.
locus GameSession {
birth() { /* once at construction */ }
accept(c: Player) { /* per child arrival */ }
run() { /* steady-state work */ }
drain() { /* prepare to dissolve */ }
dissolve() { /* teardown */ }
}
birth() runs once, synchronously, at the very start of
the locus’s life. By the time it returns, the locus’s region
is allocated, its params are initialized, and its bus
subscriptions are wired. birth is where you acquire
resources: open files, listen on sockets, allocate large
buffers. State you mutate in birth is visible to every
subsequent method via self.
If birth() fails (routes a structural error upward), the region is freed, no dissolve runs, and the parent’s on_failure receives the structural-failure event.
accept(c) runs before child c’s region is
allocated. It’s the parent’s gatekeeper: the parent can read
c’s declared params and contract surface, and either accept
(return normally) or reject (route through on_failure). If
accept rejects, the child instantiation expression fails and
no resources are committed.
run() is the steady-state body. It may loop, wait for
bus events, time-sleep, publish, do work. It’s a cooperative
function: it runs to completion or yields at a cooperative
yield point and lets the scheduler hand control to another
locus. If run returns naturally, the locus proceeds to
drain.
If run is omitted, the locus has no steady-state loop; it
still receives bus events (its handlers run whenever messages
arrive) and stays alive until the enclosing scope dissolves
it.
drain() runs when the locus is asked to shut down. It
cascades depth-first: every child of this locus drains first
(synchronously), then this locus drains. During drain, new
child accepts are refused, new bus messages aren’t accepted,
but in-flight handler invocations complete. The default
drain is a no-op — the runtime’s draining-state guard is
already enough for many loci.
dissolve() runs after drain completes. User-supplied
cleanup runs here. After dissolve returns, the locus’s
region is freed wholesale. The default dissolve is also a
no-op (the region cleanup happens regardless).
Together these five form a state machine the runtime enforces. You can’t accept after drain has begun. You can’t run before birth completed. The compiler and runtime jointly guarantee the ordering — you don’t have to defensively code against impossible transitions.
Default lifecycle methods
A locus that omits a lifecycle method gets a compiler-supplied default:
| Method | Default behavior |
|---|---|
| birth() | no-op |
| accept(c) | register c in self.children; no policy |
| run() | empty steady-state; locus waits for events or signals |
| drain() | refuse new work, wait for in-flight |
| dissolve() | free the region wholesale |
| on_failure(c, err) | bubble(err) |
A locus with only params and birth is fully valid — that
was the Greeter from
Your first locus. The
compiler fills in the rest.
Schedule classes
The lifecycle methods of multiple loci execute under a scheduler. Aperio commits to a bimodal scheduling model:
locus Matchmaker : schedule cooperative { ... } // default
locus DataIngest : schedule pinned { ... }
locus Bursty : schedule pinned(core = 3) { ... }
Two classes, with no third option:
- cooperative (default) — shares a scheduler thread with other cooperative loci. Yields at substrate-cell boundaries: between handler invocations, between lifecycle transitions, on bus dispatch, on time::sleep, on explicit yield. Handler bodies are atomic — no preemption inside one.
- pinned — owns its own OS thread. No yielding to siblings inside the same scheduler; the locus runs as long as it has work and the OS thread runs it. Cross-thread bus traffic crosses through a per-locus lock-protected mailbox. Optionally CPU-affinitized via pinned(core = N).
There is no greedy class and no third option. A locus that “shares
the scheduler thread but doesn’t yield between handlers” would
be a structural compromise — cooperative already guarantees
handler-atomicity, so the only additional thing it could do
is refuse to yield between cells, which means “I don’t
share.” That’s what pinned is.
The rule of thumb: cooperative is the default for almost everything; pinned is for latency-critical work that genuinely shouldn’t share the scheduler thread (real-time data ingest, high-frequency tick handling).
Cooperative yield points
Inside a cooperative locus, the substrate yields between “substrate cells” — atomic units of locus work. The yield points:
- Handler exit. After a bus handler returns, the scheduler may pick up another locus.
- Lifecycle transitions. Between birth → run → drain → dissolve.
- Bus dispatch. A <- send enqueues for the subscriber; the subscriber’s handler runs at its scheduler’s next yield point.
- time::sleep(d). Yields for at least d real time.
- Explicit yield; — a statement-level construct that lets you insert a cooperative yield inside a long internal loop.
Between yield points, the cooperative locus has the scheduler thread exclusively. No other locus’s code runs on that thread until the current one yields. This makes most data races at the application layer structurally impossible: within a single cooperative locus, there is no parallelism to race against.
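A sketch of the explicit form (the locus shape, helper function, and batch size are hypothetical; a modulo operator is assumed): a long internal loop inserts yield; every hundred iterations so sibling cooperative loci get the scheduler thread between batches.

```aperio
locus Crawler {
    params { pages: Int = 10000; }
    run() {
        let i = 0;
        while i < self.pages {
            fetch_and_index(i);   // hypothetical free function doing the work
            i = i + 1;
            if i % 100 == 0 {
                yield;            // hand the scheduler thread to sibling loci
            }
        }
    }   // run returns naturally; the locus proceeds to drain
}
```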
Drain cascade
Drain has one rule and one rule only:
drain() always cascades depth-first.
When drain() is called on a locus L:
- The runtime walks L’s children depth-first, calling drain() on each (which recursively walks their children).
- After every child has drained, L’s own drain() body runs.
- After L’s drain completes, L’s dissolve() runs.
There is no separate drain_cascade() syntax — drain is
always cascading. This rule is what makes SIGINT handling
trivial: the signal handler calls drain() on the runtime
root locus, the whole tree cascades, every locus shuts down
in dependency order, and the process exits cleanly. From the
user’s perspective, “Ctrl-C and the program exits cleanly” is
the default.
In flight during drain:
- New child accepts are refused.
- In-flight bus messages on subscriptions are delivered; no new messages accepted.
- Closure tests at tick epoch may fire (if not already).
- Closure tests at the dissolve epoch will fire as part of the dissolve sequence.
Dissolve timing rules
When does a locus actually dissolve? Three shapes:
fn main() {
Greeter { name: "Aperio" }; // statement position
let s = Stream { fd: connect(...) }; // let-bound
Counter { }; // anonymous w/ subscriptions
}
- Statement-position literal (no binding, no bus subscriptions, no `run()` body to outlive birth): runs birth → drain → dissolve immediately at the statement boundary. Fire-and-forget. The handle is discarded.
- Let-bound literal (`let s = ...`): birth + run + drain fire at construction, but dissolve defers to the enclosing function's scope-exit flush. The binding stays valid for method calls between construction and dissolve.
- Long-lived (the locus has `bus subscribe` declarations, or a `run()` body that hasn't returned): always defers to scope exit, regardless of binding. The locus must stay alive to receive published events between birth and the enclosing scope's exit.
The user-visible rule: let-binding keeps the locus alive
for the scope. Statement-position is fire-and-forget unless
the locus has post-birth work (subscriptions, run-loop). Two
quick examples:
fn main() {
let c = Counter { }; // Counter alive for main's scope
Echoer { }; // Echoer alive for main's scope
// (has bus subscriptions → long-lived)
Pulse { iters: 4 }; // Pulse: run() to completion, dissolve immediately
println(c.sum); // c still valid here
} // Counter + Echoer drain + dissolve at scope exit
Multiple deferred dissolves in the same scope fire in reverse instantiation order at scope exit (LIFO), matching the depth-first cascade rule.
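The LIFO rule in miniature — `Conn` is an illustrative locus with bus subscriptions, so both instances defer dissolve to scope exit:

```
fn main() {
  let a = Conn { id: 1 };  // instantiated first
  let b = Conn { id: 2 };  // instantiated second
  // ... both bindings remain usable here ...
}
// scope exit: b drains + dissolves first, then a (reverse instantiation order)
```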
Why no async / await
Other languages put concurrency in async/await: a function
declares it might block; a caller awaits it; the runtime
suspends and resumes via state-machine compilation.
Aperio doesn’t have async/await (the keywords are
reserved). Why?
Because the substrate already gives you what async is for —
without the function-coloring problem.
- Cooperative yield points play the role of `await`. A bus handler running on the cooperative scheduler is exactly an async-style task. It runs, yields between handlers, and is resumed when its next message arrives. The scheduler handles the dispatch.
- Lifecycle methods play the role of structured concurrency. `birth`/`run`/`drain`/`dissolve` are the spawning and joining of a "task" — but with typed state and a parent supervisor.
- The bus plays the role of channels. Typed pub/sub between loci, with the runtime handling dispatch ordering.
- Pinned scheduling plays the role of "spawn on a thread pool." A pinned locus owns an OS thread; bus traffic crosses thread boundaries through a mailbox.
The function-coloring problem in async languages — the fact that calling an async function from a sync function requires special machinery — disappears because there are no async functions. There are loci, which are structurally aware of when they should yield. The yield is at the locus boundary, not inside a sync-vs-async function call site.
The cost is that you can’t write code that looks like synchronous-with-occasional-blocking. You write loci that communicate, which is a different shape. For most systems, the locus shape is more honest — your code already had loci in it implicitly; Aperio just makes them syntactic.
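A sketch of the resulting shape — what an async language writes as a task that awaits a timer and a channel becomes one locus. The topic, payload, and locus names are illustrative, and the `1s` duration literal assumes the duration literal form listed in `spec/tokens.md`:

```
topic Tick { payload: Int; }

locus Poller {
  params { interval: Duration = 1s; }
  bus { subscribe Tick as on_tick; }
  fn on_tick(n: Int) {
    // runs as an async-style task: resumed per message, and the
    // scheduler may run other loci after this handler returns
  }
  run() {
    time::sleep(self.interval);  // yields for at least `interval`; no await keyword
  }
}
```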
Next
The next chapter, Perspective & observation, covers Aperio’s mechanism for serializable observation — how a locus exposes a versioned, schema-shared view of itself that can travel across process boundaries.
Perspective & observation
α — How does a locus expose a serializable view of itself across process boundaries?
A locus’s state is private to its region. Its contract-exposed surface is visible to its parent. But what about an observer that isn’t the parent? A separate analytics binary that wants to read the locus’s state? A parameter-fitting service that produces values for a parameter-applying service?
The perspective primitive is Aperio’s answer: a typed,
serializable view of a locus that can travel across
process boundaries — with the compile-time guarantee that
the producer and consumer share a schema, because they
compile from the same source.
This is the smallest chapter in Concepts because the
underlying machinery is small. Perspective is a sharp tool
used by a few specific designs (parameter-fitting,
hot-loading kernels, cross-process state propagation); most
locus authors will never declare a perspective.
What a perspective declares
perspective Kernel {
params {
scale_row: [Decimal; 8];
sigma_factor: Decimal;
regime_id: Int;
}
stable_when {
return self.num_validated >= 3 && self.closure_status == "ok";
}
serialize_as KernelV1;
}
Three pieces:
- `params` — a parameter bundle. Same shape as a locus's `params` block: typed fields with defaults or `: inferred`. This is the serialized payload — the schema is the type.
- `stable_when` — a boolean predicate the runtime evaluates to decide whether the perspective is "ready to ship." This is where multi-perspective stability lives in the source: the perspective tells the runtime, in its own voice, what conditions it must meet before being published.
- `serialize_as TypeName` — optional annotation declaring a stable name for the wire format (lets the perspective's identifier be renamed without breaking serialization).
A perspective is not a locus. It has no lifecycle, no
contract block, no bus interface, no methods beyond
stable_when. It’s a typed parameter bundle the substrate
knows how to validate and ship.
The fitter / applier pattern
The canonical use case is the fitter/applier split. Two binaries:
// fitter.ap — observes inputs, fits Kernel parameters
perspective Kernel { /* ... as above ... */ }
topic KernelUpdates { payload: Kernel; }
locus Fitter {
bus { publish KernelUpdates; }
run() {
let mut k = compute_kernel(observations);
while !k.is_stable() {
k = refine_kernel(k, more_observations());
}
KernelUpdates <- k;
}
}
// applier.ap — applies the latest Kernel at high frequency
perspective Kernel { /* same declaration, same source */ }
topic KernelUpdates { payload: Kernel; }
locus Applier {
params { current_kernel: Kernel = default_kernel(); }
bus { subscribe KernelUpdates as on_update; }
fn on_update(k: Kernel) {
self.current_kernel = k; // atomic swap; readers see consistent state
}
run() {
// high-frequency loop using self.current_kernel
}
}
Both binaries compile from the same Kernel perspective
declaration. The type is the protocol — there’s no
schema-versioning handshake, no protocol-buffer regen step,
no risk of fitter and applier disagreeing about the shape.
If you change the perspective, both rebuild from the same
source.
The runtime guarantees the swap on the consumer side is atomic: readers in the consumer locus see the pre-swap perspective or the post-swap perspective, never a torn read.
stable_when — multi-perspective stability
The stable_when predicate lets a perspective decline to
ship until it’s earned the right. In the Kernel example, it
requires at least three independent validations and a passing
closure check before it’ll be considered stable. The
publishing locus can check k.is_stable() (an implicit
method on every perspective) before deciding to publish.
The predicate is just a Bool-returning block. It can reference `self` (the perspective's params) and free functions in scope. The runtime evaluates it on demand — typically before each potential publish, and once on the consumer side after a candidate perspective is decoded but before it's atomically installed.
This makes “perspective is ready to ship” a property of the data declared in the data’s own type, not a flag in the publisher’s code or an off-by-default config. It’s stable when it says it’s stable.
Cross-depth observation
There’s a deeper structural point hiding in the perspective primitive. An observer of a locus is itself a locus somewhere in the tower — possibly far above. The depth gap between observer and observed determines what shape the observation takes:
- Small depth gap — the observer is the locus’s direct parent. Observation goes through the contract block, in the same process, with the parent reading exposed fields directly.
- Medium gap — the observer is several layers above, possibly across the bus. A `perspective` declaration is the right shape: typed, serializable, validated.
- Large gap — the observer is in a completely separate process or binary, possibly across the network. Same perspective primitive; the transport binding (Unix socket, TCP, NATS) carries it across.
What looks at a casual glance like “different mechanisms for local-vs-remote observation” is one mechanism — the perspective — applied at different depths in the locus tower. The content changes (which fields are useful to ship across a process boundary vs. within one) but the form doesn’t.
This is also why “cross-depth observation” reads as a projection axis in Aperio: the depth of the observer relative to the observed determines the resolution at which observation happens, just as projection class determines the resolution at which a parent serves observations of its children (see Capacity & storage).
When you’ll use this
In practice, a perspective is the right tool when:
- You have a parameter-fitting pipeline and a separate parameter-applying binary.
- You want to hot-reload configuration into a long-running service without restarting it, with strong type guarantees about the new config matching the schema.
- You need cross-process state propagation between cooperating binaries that share source.
For most application code — single-binary services, in-
process bus communication, local handlers — you won’t reach
for perspective. Your locus’s params and bus
subscriptions cover the surface. Perspective is the tool you
pull out when state needs to cross a process boundary with
schema discipline intact.
Next
The final Concepts chapter, Modeling — how to think in Aperio, is the synthesis: how to take everything from the previous chapters and use it to model a real system, what the idiomatic patterns look like, what to do when the language seems to resist your design.
Modeling — how to think in Aperio
α — Given the primitives, how do you actually use them to model a real system?
This is the synthesis chapter. The previous seven cover what the primitives are. This one is about how to compose them into idiomatic programs — and, just as importantly, what to do when the language seems to resist your design.
The one-tower rule
The deepest commitment Aperio makes about modeling is this:
Every named quantity in your model must be assignable to exactly one locus in one locus tower.
This isn’t a style guideline. It’s structural: every other guarantee Aperio makes (wholesale-free at dissolve, vertical-only flow, the closure-violation channel, deterministic cleanup cascade) depends on each piece of state having exactly one owning locus. When state floats — when some buffer is “shared” between two loci, or there’s a global registry nobody owns, or a configuration value “lives in the environment” — the guarantees unravel at the floating point.
When the language seems to resist where you want to put a piece of state, the productive move is not to invent a workaround. It's to ask: which locus should own this? That question almost always has a structural answer, and finding it is the fix.
(A forthcoming pond
library, memory-owner-architecture, develops this rule into
concrete patterns and helpers for declaring ownership and
verifying the assignment. This chapter will link to it when
it ships.)
The seven idiomatic patterns
Every well-shaped Aperio program is composed of seven recurring patterns. If your code doesn’t fit one of these, reconsider before inventing — the catalog is small on purpose, and most “I need an eighth pattern” instincts turn out to be one of the seven in a foreign shape.
1. App locus — outer encapsulation
Every app’s main.ap defines a top-level locus that owns the
whole run. fn main() reads argv, instantiates the locus,
exits.
locus Onboard {
params {
dir: String = "fixture";
flavor: String = "go";
}
run() {
drive(self.dir, self.flavor);
}
}
fn main() {
let mut dir = "fixture";
let mut flavor = "go";
if std::env::args_count() > 1 { dir = std::env::arg(1); }
if std::env::args_count() > 2 { flavor = std::env::arg(2); }
Onboard { dir: dir, flavor: flavor };
}
Conventions:
- Locus name is the file stem in PascalCase.
- `params` holds argv-derived config with reasonable defaults (so the app self-demos with no flags).
- `run()` is the only lifecycle method needed for most apps.
- `main()` does argv parsing, then a single statement-position locus literal kicks off the run.
2. Namespace lotus — empty params, methods only
When a coherent vocabulary of pure helpers forms, wrap them
in a locus with empty (or config-only) params { } and
methods only. Instantiate once, dispatch through it. The
language’s substitute for “module of functions” / “static
class” / “stateless service object.”
locus Morpheme {
params {
flavor: String = "go";
overrides: String = "";
}
fn lookup_morpheme(m: String) -> String { ... }
fn name_to_motion(name: String) -> String {
let hit = self.lookup_morpheme(name);
// ...
}
}
fn main() {
let r = std::lang::Morpheme { flavor: "go" };
let motion = r.name_to_motion("OrderProcessor");
}
The point isn’t that params is literally empty — no
lifecycle state mutated by birth/run/dissolve. Config params
are fine. Self-method calls compose within the namespace.
One alloc per instantiation; negligible.
3. Service locus — long-lived with lifecycle + bus
When the thing genuinely runs over time and participates in the bus, write the full lifecycle.
locus Listener {
params {
host: String = "127.0.0.1";
port: Int = 0;
listen_fd: Int = -1;
max_accepts: Int = 1;
on_connection: fn(std::io::tcp::Stream) = default_on_connection;
}
birth() {
self.listen_fd = std::io::tcp::listen_socket(self.host, self.port);
}
run() {
let mut accepted = 0;
while self.max_accepts < 0 || accepted < self.max_accepts {
let conn = std::io::tcp::accept_one(self.listen_fd);
handle_one_connection(conn, self.on_connection);
accepted = accepted + 1;
}
}
dissolve() {
std::io::tcp::close_fd(self.listen_fd);
}
}
Conventions:
- `birth()` acquires resources; mutates `self.field`.
- `run()` does the long-lived work; often a loop bounded by config.
- `dissolve()` releases what `birth()` acquired.
- Sentinel values (`-1` for "not yet bound") let `dissolve()` safely no-op on partially-constructed loci.
4. Spawned child — let-bound, scope-dissolves
When a parent’s work produces children that need their own lifecycles, let-bind. The let-bound locus’s dissolve fires at the enclosing function’s scope exit; the binding stays valid for method calls in between.
fn handle_one_connection(conn_fd: Int, on_conn: fn(std::io::tcp::Stream)) {
let s = std::io::tcp::Stream { conn_fd: conn_fd };
on_conn(s);
}
The let s = ... binds the Stream locus to the fn’s scope;
when handle_one_connection returns, s.dissolve() fires
(which closes conn_fd). No explicit cleanup call needed.
Conventions:
- Use let-binding when the locus needs to live for a fn body’s full duration. Statement-position literals dissolve at end of expression — rarely what’s wanted for a usable handle.
- Per-iteration cleanup uses a free helper fn whose return is the per-iteration boundary (the example above is exactly this pattern).
5. Shape type — pure data, no flow
When a thing IS data, not flow, declare it as type.
type Request {
method: String;
path: String;
version: String;
body: String;
}
Construct via struct literal:
let req = std::http::Request {
method: "GET", path: "/", version: "HTTP/1.1", body: ""
};
Conventions: PascalCase, snake_case fields, returnable by
value, no lifecycle implications. Types may hold fn(...)
fields — dispatch via record.field(args). If methods
accumulate, the thing has flow — promote type to locus.
6. Free fn — first-class seed member
Free fns are first-class seed members. Every top-level decl in a seed is visible to every file in the seed. Use a free fn when the operation has no flow and isn’t naturally a method on an existing locus.
Common shapes:
- Return-bearing helpers called from lifecycle method bodies (which reject `return` at v0).
- Extension hooks passed via fn-pointer params (e.g., `on_connection: fn(Stream)`). The hook is named at the top level so a caller can pass it by name.
- Standalone helpers that compose with the rest of the seed: format / parse / convert / classify utilities that don't carry state.
When a coherent vocabulary of three or more free fns forms, the namespace-lotus form (pattern 2) often reads better.
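A sketch of pattern 6 — stateless format/classify helpers as free fns. The function names are illustrative, and the f-string form assumes the literal listed in `spec/tokens.md`:

```
// Pure classification: no flow, no state, not naturally a method.
fn classify_status(code: Int) -> String {
  if code >= 500 { return "server_error"; }
  if code >= 400 { return "client_error"; }
  return "ok";
}

// Composes with other free fns in the same seed.
fn format_status_line(code: Int) -> String {
  return f"{code} {classify_status(code)}";
}
```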
7. Error-check function — bridging the channels
A locus member fn whose signature is fn(ErrType) -> SuccessType,
used as the fallback in an or self.handler(err) clause at a
fallible call site. Internally, it examines the error and
chooses: return a value (substitute, continue) or
violate NAME (escalate to the structural channel).
locus DbConnection {
params { conn_fd: Int = -1; last_error: String = ""; /* ... */ }
bus { subscribe ExecuteQuery as on_query; publish QueryResult; }
closure fatal_io { captures: last_error; epoch inline; }
fn handle_io(e: DbError) -> Row {
self.last_error = e.detail;
if e.kind == "send_failed" || e.kind == "recv_empty" {
violate fatal_io;
}
return Row { data: "" };
}
fn on_query(q: Query) {
let r = send_query(self.conn_fd, q) or self.handle_io(err);
if !self.draining { QueryResult <- r; }
}
}
This is the canonical bridge between the value channel and the structural channel — see The two failure channels §“Bridging the channels” for the full treatment.
Conventions:
- Naming. snake_case. Name for what is being handled, not what's being done: `handle_io`, `handle_parse`, `handle_timeout` — not `recover_or_die`.
- Signature. The return type is the success type of the call sites that use this handler. One handler per `(ErrType, SuccessType)` pair on a given locus.
- Body shape. `if`-chain or `match` on the error kind; each arm either `violate`s a named closure or `return`s a substitute value. The two motions are exhaustive — the typechecker ensures every path either returns the success type or diverges via `violate`.
- Closure references. `violate NAME` is locus-scoped — the named closure must be declared on the same locus. This is why the handler is a member fn, not a free fn.
Anti-patterns
The shapes below are almost always “an old habit from another language smuggled past the substrate.” When you catch yourself reaching for one, reconsider.
- Bare `fn main()` with helpers and no outer locus. The app's outer encapsulation must be a locus per pattern 1.
- Coherent helper vocabulary stranded as free fns when it forms a namespace. Lift into a namespace lotus once the coherence is visible (pattern 2).
- `type` for things that have flow. If the noun has a lifecycle implied (a Cache that's loaded/probed/evicted, a Server that starts/serves/stops), it is a locus, not a type.
- Methods on a `type` record. Not supported at v0 — the language is telling you "this wanted to be a locus."
- "Util" namespaces of unrelated helpers. Group by vocabulary, not by "everything that didn't fit elsewhere." A namespace lotus should answer one question ("noun-to-motion", "tagged-accumulator parsing"), not many.
- Floating quantities. Per the one-tower rule: every named quantity should be assignable to one locus. State that "lives between" loci is a modeling error.
- Tagged-locus dispatch. A single locus with a `kind: String` param branching on every method, instead of an interface and multiple loci. The structural-interface primitive (F.20) is the right tool.
- Fluent-builder chains that mutate self. If you're writing `obj.with(x).with(y).build()`, the thing wanted to be a locus with proper `params` and lifecycle.
A worked example: choosing the model
To make the modeling rules concrete, here’s a small system walked through pattern-by-pattern:
“I need a rate-limiter that bounds a downstream service’s request rate. Requests come in over the bus. When the downstream is overloaded, the limiter should emit a backpressure signal upstream.”
Step 1: identify the loci.
The rate-limiter is a service locus (pattern 3): it has state (the recent-request window), lifecycle (birth → run → dissolve), and bus participation. One locus.
What about the downstream service? Probably a separate locus, also pattern 3. The two coordinate through the bus, not through direct reference.
The backpressure signal: not a locus; it's an event. A topic (`Backpressure { payload: ... }`).
The “request”: same — a topic (Request { payload: ... }).
The “recent-request window”: held by the rate-limiter, in a
capacity slot. @form(ring_buffer) is the right shape — we
want a bounded window with drop-on-full.
Step 2: sketch the locus.
type Req { id: String; ts: Time; }
topic Request { payload: Req; }
topic Backpressure { payload: Req; }
@form(ring_buffer, cap = 100)
locus RateLimiter {
params { window_ms: Int = 1000; threshold: Int = 50; }
capacity { pool recent of Req; }
bus {
subscribe Request as on_request;
publish Backpressure;
}
fn on_request(r: Req) {
self.recent.push(r);
if self.over_threshold() {
Backpressure <- r;
}
}
fn over_threshold(self) -> Bool {
// ...
}
}
Step 3: check against the patterns.
- Pattern 3 (service locus): ✓
- Capacity slot for the window: ✓ (`pool recent of Req` with `@form(ring_buffer)`)
- Bus subscribe / publish: ✓
- One-tower: `recent`, `window_ms`, `threshold` all owned by `RateLimiter`. No floating quantities.
- Anti-patterns: none.
Step 4: where would friction surface?
- If the rate-limiter needs to track which client was rate-limited, we'd add per-client state — maybe a `@form(hashmap)` keyed by client ID. That's a second capacity slot, still one-tower.
- If multiple rate-limiters need to coordinate (one per service, sharing a global cap), they'd coordinate through a parent locus that holds the global budget. Bus topic `GlobalBudget` between them.
- If we wanted to deploy the limiter as a separate binary from the downstream, we'd add a `bindings` block in main to route the Request topic through a Unix socket.
Notice how each “what if” stays inside the pattern catalog. You don’t reach for a new primitive; you compose what you have.
A reading order, going forward
You’ve finished Concepts. The two natural next steps:
- Read the Reference section for the canonical formal definitions of every construct. The spec corpus is the source of truth.
- Read working examples. The `apps/` directory has 11 real programs exercising every pattern in this chapter. Pick one close to what you want to build and read it end-to-end.
If you’re building a multiplayer game, the matchmaker example from the introduction grows into a complete system — matchmaker locus, per-match game session loci, terminal client loci, all composed through the bus. The “Build a real app” tutorial walks through that build (forthcoming).
Language reference
This page is a reference index — high-level pointers to the
canonical formal definitions in the spec/ corpus. Where
Concepts is pedagogical (how to
think in Aperio), this page is for looking up what the
compiler actually accepts.
The spec is the source of truth. If something here disagrees with the spec, the spec wins.
Grammar and syntax
- `spec/grammar.ebnf` — formal grammar in EBNF. Every syntactic construct the parser accepts.
- `spec/tokens.md` — lexical structure: identifier rules, reserved words, literal forms (integer / float / decimal / string / bytes / time / duration / f-string), operators, contextual keywords.
- `spec/precedence.md` — expression precedence and associativity table.
Semantics
- `spec/semantics.md` — operational semantics. Program startup, locus instantiation, lifecycle method dispatch, bus dispatch, closure-test evaluation, recovery primitives, dissolve timing rules, fallible call semantics, topic declarations.
- `spec/runtime.md` — what the runtime ships with: region allocator, scheduler, bus router, time primitives, schedule classes, perspective hot-load machinery.
Types
- `spec/types.md` — the type system: primitive types, compound types, projection-class types, locus types, perspective types, structural interfaces, fallible typing.
- Numeric coercion: Int → Float widening at let-binding type ascriptions and fn-arg sites (one-way; Decimal never participates). See `types.md` § "Numeric coercion".
Storage and memory
- `spec/memory.md` — the memory model. Hierarchical regions, per-projection-class allocators, capacity slots (pool/heap), bookkeeping reclamation, drain cascade, region-escape rules. Includes the codegen ABI summary.
- `spec/forms.md` — the `@form(...)` annotation system: `@form(vec)`, `@form(hashmap)`, `@form(ring_buffer)`. Contract, lowering, performance bands, anti-patterns.
Projects and packaging
- `spec/projects.md` — project layout, per-directory seed model (F.19), cross-seed imports (F.25), workspace fallback, resolution order, mangling scheme.
- `spec/packages.md` — the v1 package surface. `aperio.toml` manifest, `aperio.lock`, `aperio fetch` git-based dependency fetcher.
Style and conventions
- `spec/styleguide.md` — idiomatic Aperio. The full version of the patterns introduced in Modeling — how to think in Aperio; full naming conventions; expanded anti-patterns.
Testing
- `spec/testing.md` — the testing pipeline. Three layers of correctness, the `std::test` assertion library, benchmark surface.
Design rationale
- `spec/design-rationale.md` — why the language is shaped the way it is. Numbered commitments F.0 through F.26 cover every design decision the compiler currently makes — from projection-class semantics to capacity slots to structural interfaces to the package model — with a "considered and rejected" section for each.
This is the longest single document in the corpus and the most useful for understanding the rationale behind a particular surface choice. Worth reading once, end-to-end, once you’ve internalized Concepts.
Standard library
- Standard library overview — companion reference page on this site.
- `spec/stdlib.md` — full surface, phase by phase. Authoritative list of what ships in the bundled stdlib.
Standard library
Aperio’s stdlib ships bundled with every binary — no separate
install, no manual import for stdlib namespaces (just inline
std::* paths in your code). This page indexes the shipped
surface. The authoritative phase-by-phase history lives at
spec/stdlib.md.
Two shapes
The stdlib comes in two structurally distinct shapes, with a clear rule for which is which:
Path-call dispatch
Inline calls through std::* paths that route directly to C
runtime primitives. No .ap source backing them — they’re
extern bridges into lotus_* C functions:
let pid = std::process::pid();
let content = std::io::fs::read_file("config.toml");
let n = std::str::parse_int("42");
Namespaces with path-call shape:
| Namespace | Surface |
|---|---|
| `std::process` | `pid()`, `exit(code)` |
| `std::env` | `args_count()`, `arg(i)`, `var(name)`, `var_exists(name)` |
| `std::time` | `monotonic()` → Duration, `sleep(d)` |
| `std::str` | `parse_int` / `can_parse_int` / `parse_float` / `can_parse_float`, `index_of`, `lower` / `upper`, `trim`, `replace`, `repeat`, `pad_left` / `pad_right`, `from_bytes`, `builder_new` / `builder_append` / `builder_len` / `builder_finish` |
| `std::bytes` | `at(b, i)`, `slice(b, lo, hi)`, `from_string(s)` |
| `std::io::fs` | `read_file`, `write_file`, `write_file_append`, `read_bytes`, `file_size`, `file_exists`, `mkdir`, `list_dir`, `list_dir_count`, `list_dir_at`, `read_file_status` |
| `std::math` | `sqrt`, `exp`, `log`, `floor`, `ceil`, `pow` |
| `std::ts` | tree-sitter bindings (Go grammar shipped) |
Path-call surfaces are appropriate for value-shaped operations that don’t need lifecycle. A file read returns bytes; a math op returns a number; argv access returns a string. No locus required.
Namespace lotus
When the operation has a lifetime — a stream that’s open
across multiple reads, a sink that has setup and teardown — the
stdlib provides a namespace lotus: an Aperio-sourced locus
under runtime/stdlib/. You instantiate it the same way you
instantiate any other locus:
let l = std::io::tcp::Listener {
host: "127.0.0.1",
port: 8080,
on_connection: my_handler,
};
Namespaces with namespace-lotus shape:
| Namespace | Loci / interfaces shipped |
|---|---|
| `std::io::tcp` | `Listener`, `Stream`, plus `send` / `send_bytes` / `recv_bytes` methods |
| `std::http` | `Request` and `Response` types, `parse_request`, `write_response`, case-insensitive header lookup |
| `std::text` | `md_to_html`, `base64::encode` / `decode`, `Sink` interface with `StdoutSink` / `StringSink` / `FileSink` implementations |
| `std::cli` | `Resolver` for argv parsing |
| `std::iter` | `Lines` iterator over text |
| `std::json` | `Builder` for JSON output |
| `std::lang` | `Morpheme`, `Vocabulary`, etc. for language utilities |
| `std::log` | `Logger`, `LogEvent`, `StdoutSink` (subscribes to `log.**`) |
| `std::yaml` | YAML parsing surface |
| `std::test` | `assert(cond, msg)`, `assert_eq_int`, `assert_eq_str` |
Source for namespace-lotus stdlib lives at
crates/aperio-codegen/runtime/stdlib/.
Read it directly — it’s idiomatic Aperio that exercises every
pattern Concepts covers.
Built-in identifiers (no path needed)
A handful of functions and types are always in scope without
any std::* qualification:
| Name | Purpose |
|---|---|
| `print`, `println`, `eprint`, `eprintln` | stdout / stderr output |
| `len(x)` | length of String / Bytes / array |
| `to_string(x)` | format any printable value to String |
| `min(a, b)`, `max(a, b)`, `abs(x)` | numeric helpers |
| `starts_with(s, prefix)`, `contains(s, needle)` | string predicates |
| `sum(expr)`, `prod(expr)` | reductions (also closure-test primitives) |
| `Int(x)` | explicit Float → Int narrowing (truncate toward zero) |
Primitive types (Int, Uint, Float, Decimal, String,
Bool, Time, Duration, Bytes) are valid only in type
position.
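A quick sketch exercising the always-in-scope surface; the behavior follows the table above (`Int(x)` truncates toward zero):

```
fn main() {
  let s = "aperio";
  println(to_string(len(s)));     // length of a String
  println(to_string(max(3, 7)));  // numeric helper
  println(to_string(Int(3.9)));   // explicit Float → Int narrowing (3)
  if starts_with(s, "ap") { println("prefix ok"); }
}
```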
Form-synthesized types
When any locus in your program uses @form(...), the
resolver injects companion error types into the top scope:
| Form | Synthesized type | Fields |
|---|---|---|
| `@form(vec)` | `IndexError` | `kind: String`, `index: Int`, `len: Int` |
| `@form(hashmap)` | `KeyError` | `kind: String` |
| `@form(ring_buffer)` | `EmptyError` | `kind: String` |
You can reference these as ordinary types — pattern-match
them in match, declare fn parameters typed by them,
construct them in fallback expressions.
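For example, the synthesized `IndexError` can drive an error-check function (pattern 7 from the Modeling chapter). The locus, field names, and the `"out_of_bounds"` kind value matched here are all illustrative:

```
@form(vec)
locus Reader {
  params { last_index: Int = -1; }
  capacity { heap items of String; }
  closure bad_index { captures: last_index; epoch inline; }

  // Fallback for a fallible indexing call site: substitute "" or escalate.
  fn handle_index(e: IndexError) -> String {
    self.last_index = e.index;
    if e.kind == "out_of_bounds" {
      violate bad_index;
    }
    return "";
  }
}
```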
What’s NOT in stdlib
Aperio’s stdlib follows Go’s batteries-included approach:
table-stakes functionality ships. Specifically not in
stdlib (and intended for the
aperio-lang/pond
contrib monorepo or third-party):
- ML / learning libraries
- Database drivers (Postgres, MySQL, …)
- Web frameworks beyond basic HTTP
- Image / audio / video processing
- Cloud SDKs (AWS, GCP, …)
- GUI / TUI frameworks beyond what `std::io::tcp` enables
- Cryptography beyond TLS basics
- Compression formats beyond gzip (used internally by HTTP)
Aperio also doesn’t have parametric collection types in
stdlib — no Vec<T> / Map<K, V> / Set<T> / Option<T> /
Result<T, E> as user-facing tagged enums. Storage is
locus-shaped via @form(...). See
Capacity & storage for
the rationale.
Reading order
If you’re writing application code and want to discover what’s available, the productive order is:
- Skim this page to know what namespaces exist.
- Read the spec section (`spec/stdlib.md`) for the namespace you need; it's the authoritative surface.
- Read the namespace-lotus source for any lotus you'll use — it's a few hundred lines per namespace, and it's the clearest documentation of how the surface composes.
Contributing
Aperio is in an experimental phase; breaking changes are common and expected.
Picking a role
The contributor flow is organized by role. Pick the one that matches what you’re trying to do, and read the corresponding brief at the repo root:
AGENTS.md— if you’re writing an Aperio program (also the load-bearing prompt for AI agents authoring.apcode).agents/library-dev.md— if you’re extending the stdlib or writing a reusable Aperio library.agents/compiler-dev.md— if you’re working on the compiler or runtime itself.
Each brief is self-contained. Read the one for your task; you shouldn’t need the others.
Running the test suite
Before opening a PR:
cargo build --release
cargo test --release --workspace
The test suite is the source of truth for what the compiler
supports. If you’re changing a language feature, add a test
under crates/aperio-codegen/tests/ that exercises the new
behavior. If you’re changing the parser, the
crates/aperio-syntax/tests/examples.rs test will exercise
your change against every example fixture.
Spec discipline
Surface-language and runtime behavior is documented in spec/.
If you change behavior, update the spec in the same commit.
The spec is not aspirational — it describes what’s shipped.