I have somewhere around twenty side projects in various states of half-finished. Most of them need the same unglamorous substrate: durable state, an API, background jobs, and a realtime path where two browsers can see the same update without a refresh. My default stack has usually been a Next.js frontend, a separate Node backend, BullMQ for jobs, and an AWS deployment wired together with SST.
Those are reasonable choices. The friction is that the application model tends to split across them. The CRUD path has one schema. The realtime path has another shape. Authorization gets reimplemented in handlers, middleware, and subscription code. You can make that work, but the glue becomes part of the product.
One Saturday I got tired of paying that tax and wrote some Rust.
The question I started with
What if the database, the API, and the realtime notification path were the same runtime? A write would go through one process, commit to storage, append to an ordered change log, and emit a notification as an optimization. If a client missed or dropped the live event, it could catch up from its cursor. No external queue. No separate pub/sub server. No version skew between what the API accepted and what the realtime layer exposed.
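That loop can be sketched in a few lines. This is an illustrative model, not Pylon's actual API: the class and field names here (ChangeLog, seq, since) are assumptions made up for the example.

```typescript
// A minimal sketch of the single-runtime write path: commit, append to an
// ordered log, and let clients catch up from a cursor if they miss the live
// notification. All names are illustrative, not Pylon's real API.

type Change = {
  seq: number; // position in the ordered change log
  table: string;
  rowId: string;
  op: "insert" | "update" | "delete";
  data?: unknown;
};

class ChangeLog {
  private entries: Change[] = [];
  private nextSeq = 1;

  // Commit and append happen in one process; the live notification that
  // follows is only an optimization, never the source of truth.
  append(change: Omit<Change, "seq">): Change {
    const entry = { ...change, seq: this.nextSeq++ };
    this.entries.push(entry);
    return entry;
  }

  // A client that dropped the live event replays everything past its cursor.
  since(cursor: number): Change[] {
    return this.entries.filter((c) => c.seq > cursor);
  }
}
```

The point of the shape is that `since` makes the notification path safe to lose: correctness lives in the log and the cursor, not in the WebSocket.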
That is not the architecture I would start with for a global, multi-tenant cloud service. But for a single product, an internal tool, a multiplayer prototype, or a side project with dozens of users instead of millions, splitting the stack early is often operational overhead disguised as maturity.
The first thing I got wrong
I treated transport as one problem. It is not. Durable application sync and tick-driven simulation traffic have different failure modes.
The sync path wants debuggable, policy-filtered change events with cursor and resync semantics. The shard path wants compact snapshot and input frames, and binary WebSocket frames make sense there. Trying to make one elegant protocol cover both blurred the boundary.
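The two shapes can be put side by side to make the split concrete. These message formats are hypothetical, invented for illustration; they are not Pylon's wire protocol.

```typescript
// Two transports, two shapes. The sync path stays JSON-friendly and resumable;
// the shard path packs tick-driven frames into compact binary. Both shapes
// here are illustrative, not the real protocol.

// Sync path: debuggable, policy-filtered, resumable from a cursor.
type SyncMessage =
  | { kind: "change"; cursor: number; table: string; rowId: string; op: "insert" | "update" | "delete" }
  | { kind: "resync"; fromCursor: number };

// Shard path: a fixed-layout input frame, sent as a binary WebSocket frame.
function encodeInputFrame(tick: number, playerId: number, buttons: number): ArrayBuffer {
  const buf = new ArrayBuffer(9);
  const view = new DataView(buf);
  view.setUint32(0, tick);      // bytes 0-3: simulation tick
  view.setUint32(4, playerId);  // bytes 4-7: player id
  view.setUint8(8, buttons);    // byte 8: button bitmask
  return buf;
}
```

Nine bytes per input frame versus a self-describing JSON event is the whole argument: neither format is wrong, they are just answering different questions.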
Once that split was clear, the hard problems moved back where they belonged: ordering, authorization, replay, and backpressure. Binary framing was not the mistake. Making transport the center of the design too early was.
What's actually hard
Three parts ended up mattering much more than the transport.
Ordering and resync. The server keeps an append-only change log and clients track a cursor. That sounds simple until you handle reconnects, server restarts, retained-log windows, local persistence, and deletes. If a client advances its cursor before the row is safely written to IndexedDB, it can skip data forever after a crash. If an old insert replays after a delete, the client can resurrect a row that should stay gone. Most of the real sync work is in those edge cases.
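The two failure modes above reduce to an ordering discipline in the client's apply loop: persist first, advance the cursor last, and keep delete tombstones. Here is a sketch under those assumptions; `Store` stands in for an IndexedDB wrapper, and every name is invented for the example.

```typescript
// Client-side apply loop for the edge cases described above. The invariants:
// 1) the cursor only advances after the row is durably stored, so a crash
//    re-applies the change instead of skipping it forever;
// 2) a delete leaves a tombstone with its sequence number, so a replayed
//    older insert cannot resurrect the row.
// All names are illustrative; `Store` stands in for IndexedDB.

type Change = { seq: number; rowId: string; op: "insert" | "update" | "delete"; data?: unknown };

interface Store {
  putRow(id: string, data: unknown): Promise<void>;
  deleteRow(id: string): Promise<void>;
  putTombstone(id: string, seq: number): Promise<void>;
  tombstoneSeq(id: string): Promise<number | undefined>;
  setCursor(seq: number): Promise<void>;
}

async function applyChange(store: Store, change: Change): Promise<void> {
  const tombstone = await store.tombstoneSeq(change.rowId);
  if (tombstone !== undefined && change.seq < tombstone) {
    // Stale insert/update replaying after a delete: drop it.
  } else if (change.op === "delete") {
    await store.deleteRow(change.rowId);
    await store.putTombstone(change.rowId, change.seq);
  } else {
    await store.putRow(change.rowId, change.data);
  }
  // Only now is it safe to advance the cursor. Advancing before the writes
  // above is exactly the "skip data forever after a crash" bug.
  await store.setCursor(change.seq);
}
```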
Policy checks. Sync is not allowed to become a side door around authorization. I did not want the database to be the authorization boundary; I wanted the runtime to own policy decisions near the API and sync boundaries, so reads, writes, and catch-up paths did not drift. That pushed the policy language toward a small expression grammar: auth.userId != null, auth.userId == data.authorId, role checks, boolean operators. It is not a general-purpose programming language, and that is the point. It should fail closed and be readable during an incident.
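The constrained-grammar idea can be shown with a tiny evaluator. In the real system policies are parsed from strings like auth.userId == data.authorId; this sketch skips parsing and evaluates a small pre-built AST, and everything here (the Expr shape, the `allows` function) is an assumption made for illustration.

```typescript
// A sketch of the fail-closed policy expression idea: a handful of operators
// over auth/data references, no user code. Real policies are parsed from
// strings; here the expression arrives as a small pre-parsed AST.

type Expr =
  | { kind: "eq"; left: Expr; right: Expr }
  | { kind: "ne"; left: Expr; right: Expr }
  | { kind: "and"; left: Expr; right: Expr }
  | { kind: "or"; left: Expr; right: Expr }
  | { kind: "ref"; path: string } // e.g. "auth.userId", "data.authorId"
  | { kind: "lit"; value: string | number | boolean | null };

type Ctx = { auth: Record<string, unknown>; data: Record<string, unknown> };

function evalExpr(e: Expr, ctx: Ctx): unknown {
  switch (e.kind) {
    case "lit": return e.value;
    case "ref": {
      const [root, field] = e.path.split(".");
      // Unknown roots resolve to undefined, which can never equal `true`.
      if (root !== "auth" && root !== "data") return undefined;
      return ctx[root]?.[field];
    }
    case "eq": return evalExpr(e.left, ctx) === evalExpr(e.right, ctx);
    case "ne": return evalExpr(e.left, ctx) !== evalExpr(e.right, ctx);
    case "and": return evalExpr(e.left, ctx) === true && evalExpr(e.right, ctx) === true;
    case "or": return evalExpr(e.left, ctx) === true || evalExpr(e.right, ctx) === true;
  }
}

// The boundary fails closed: anything other than literal `true` denies.
function allows(policy: Expr, ctx: Ctx): boolean {
  return evalExpr(policy, ctx) === true;
}
```

Because the grammar has no loops, no calls, and no side effects, the same policy can be evaluated identically on the write path, the subscription path, and the catch-up path, which is what keeps them from drifting.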
Idempotency and backpressure. Clients retry. Networks lie. Browsers suspend tabs. Any at-least-once write path needs an operation id so the server can avoid applying the same logical write twice. If a WebSocket subscriber is slow, the server cannot let an unbounded queue eat the process. These are not glamorous problems, but they determine whether the demo survives contact with real users.
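Both mechanisms fit in a few lines each. The class names and the drop-and-resync behavior here are illustrative choices, not a description of Pylon's internals.

```typescript
// Two unglamorous guards, sketched with illustrative names.

// At-least-once writes: the client attaches an operation id, and the server
// remembers applied ids so a retry is acknowledged without re-applying.
class IdempotentWriter {
  private applied = new Map<string, unknown>(); // opId -> cached result

  apply(opId: string, write: () => unknown): unknown {
    if (this.applied.has(opId)) return this.applied.get(opId); // duplicate retry
    const result = write();
    this.applied.set(opId, result);
    return result;
  }
}

// Backpressure: a bounded per-subscriber queue. When a slow consumer
// overflows it, drop the subscriber (it can resync from its cursor) rather
// than letting an unbounded buffer eat the process.
class SubscriberQueue<T> {
  private buf: T[] = [];
  public dropped = false;

  constructor(private limit: number) {}

  push(item: T): void {
    if (this.dropped) return;
    if (this.buf.length >= this.limit) {
      this.dropped = true; // the caller closes the socket; the client resyncs
      this.buf.length = 0;
    } else {
      this.buf.push(item);
    }
  }
}
```

Dropping a slow subscriber is only acceptable because the cursor protocol exists: disconnect-and-resync is a recovery path, not data loss.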
What I kept simple
SQLite is the default storage engine. There is an experimental Postgres adapter, but SQLite is the path I trust first because it keeps the deployment model honest: one binary, one database file, one thing to back up.
The runtime is Rust. The app definitions and client surface are TypeScript. That is the right split for this project: Rust owns the server, storage, and concurrency; TypeScript owns the ergonomics developers touch every day.
Policies are expressions, not TypeScript functions. I had a branch where policies ran as user-defined code in a worker. It was flexible, but it made static analysis worse, deployment larger, and failure modes less obvious. A constrained expression language is less exciting and easier to operate.
The realtime protocol sends per-row change events, while the cursor protocol remains the correctness layer. The client maintains a local replica and React hooks derive the view from that replica. That is less magical than server-side live-query invalidation, but it made the first version much easier to reason about.
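The replica-plus-derived-view model can be sketched without any framework. The names are made up for the example; the subscribe/notify shape is the part that matters, since it is what a React hook (for instance, one built on useSyncExternalStore) would plug into.

```typescript
// A sketch of the client model: per-row change events land in a local replica,
// and views derive from the replica rather than from the wire. Illustrative
// names throughout.

type RowChange = { table: string; rowId: string; op: "upsert" | "delete"; data?: unknown };

class Replica {
  private tables = new Map<string, Map<string, unknown>>();
  private listeners = new Set<() => void>();

  apply(change: RowChange): void {
    let table = this.tables.get(change.table);
    if (!table) {
      table = new Map();
      this.tables.set(change.table, table);
    }
    if (change.op === "delete") table.delete(change.rowId);
    else table.set(change.rowId, change.data);
    for (const fn of this.listeners) fn(); // hooks re-derive from the replica
  }

  rows(table: string): unknown[] {
    return [...(this.tables.get(table)?.values() ?? [])];
  }

  subscribe(fn: () => void): () => void {
    this.listeners.add(fn);
    return () => this.listeners.delete(fn);
  }
}
```

The trade is explicit: the server never tracks which queries each client holds open; the client holds the data and recomputes views locally, which is the "less magical" half of the bargain.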
Where I am now
Pylon has a working sync engine, an admin studio, auth with magic codes and OAuth, file uploads, jobs, background workflows, realtime shard primitives, and a set of example apps that all run against the same binary: CRM, ERP, chat, a trading dashboard, a 3D world, and a browser-based load-test harness.
The benchmark app can drive thousands of mutations per second from Web Workers and graph p50, p95, and p99 latency. I am deliberately not turning that into a universal throughput claim. Local benchmarks mostly tell you where your own bottlenecks are. They are useful for regression testing, not for winning arguments on the internet.
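For the record, the percentile numbers in those graphs are just rank statistics over recorded latencies. A minimal version using the nearest-rank method, which may or may not match what the harness actually computes:

```typescript
// Nearest-rank percentile over latency samples: sort, then index by rank.
// A sketch of the math behind p50/p95/p99, not the harness's actual code.
function percentile(samplesMs: number[], p: number): number {
  const sorted = [...samplesMs].sort((a, b) => a - b);
  const rank = Math.ceil((p / 100) * sorted.length) - 1;
  return sorted[Math.max(0, rank)];
}
```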
The codebase is still small enough that I can hold the design in my head, which is part of the appeal. When a feature breaks, I can usually follow the write from the TypeScript function, through the Rust transaction, into the change log, and out to the browser without crossing a service boundary.
The honest version of why I built this
I do not think Pylon is going to beat Convex at being Convex. They have a team, a cloud, and years of production scars. I wanted to understand the shape of the problem by building a smaller version of it.
That was worth it. I now have much better instincts for what belongs in a realtime backend, what should stay boring, and which parts of "just sync it" are doing the actual work. The project also gave me a reusable backend for the kind of apps I keep building: dashboards, internal tools, games, and collaborative surfaces that need shared state without a pile of infrastructure.
Docs are at docs.pylonsync.com. The chat example is the smallest thing that demonstrates the full stack end to end.