RUNTIME
- Runtime: Bun
- Containers: Podman (rootless)
- Deployment: Vercel / Netlify
born to be fast & light
Instantiate Through Emulation
Containerized Full-Stack Pattern Library
A combinable technology set — lightweight and fast, documented and tested
compose from the full stack or a slice of it
v10r is not cloned — it's instantiated. An AI agent reads the tested patterns, architecture, and documentation, then emulates only the pieces the new project needs. The reference stays live. The instance is purpose-built.
keyboard firmware documentation — static prerendered
v4.lynxware.org

Composition and organization of the living system
Specialized Claude Code agents with custom prompts, skills, and persistent memory.
Agents:
- archy: architecture & system design
- arty: aesthetic refinement & visual polish
- buny: Bun runtime & tooling
- daty: database schemas & data modeling
- docy: documentation & technical writing
- resy: technology research & evaluation
- scout: real-world usage research
- secy: security review & threat modeling
- svey: SvelteKit application patterns
- tray: debugging & error tracing
- uxy: UI/UX design & accessibility

Skills:
- core framework: svelte5-runes, sveltekit, unocss, biome
- data layer: drizzle, db-relational, db-graph, db-files
- auth & security: better-auth, security
- forms & design: valibot-superforms, design-system
- AI & visualization: ai-tools, 3d

Each directory has a README.md navigation hub: a brief intro, then a topic table linking to specific files.
- docs/foundation/: core concepts & conventions
- docs/stack/: technology documentation per layer
- docs/blueprint/: architecture decisions
- docs/implementation/: build details
- docs/patterns/: reusable patterns
- docs/guides/: how-to guides
- agents/: 11 specialized Claude Code agents
- skills/: 14 post-training knowledge modules
- memory/: persistent agent memory across sessions

Multi-client core: domain modules serve UI form actions, AI tool calls, REST API, and background jobs through a single business-logic layer.
Retrieval-Augmented Generation where the model chooses its own depth.
Two layers: a standard chunk index below, and above it an LLM-authored wiki with typed pointers into specific chunk IDs. The model reads TLDRs first and calls a drill-down tool only when it needs the raw source.
- raw-RAG: hybrid vector + BM25 retrieval over raw document chunks
- LLM-Wiki: synthesized pages with typed pointers back to chunk IDs
- drill-down: tool-call escalation when the TLDR is not enough (budget: 3 per turn)
- verify: post-hoc citation taxonomy (quote, paraphrase, drifted, uncited)
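The drill-down escalation above can be sketched as a budgeted tool closure. This is an assumption-laden illustration, not the project's real interface: `WikiPage`, `makeDrillDown`, and the chunk store are hypothetical names, but the mechanic matches the description — the model reads TLDRs with typed chunk-ID pointers, and may fetch raw chunks at most three times per turn.

```typescript
// Illustrative sketch of the drill-down budget (names are hypothetical).
type WikiPage = { tldr: string; chunkIds: string[] }; // typed pointers into chunks

// Stand-in for the raw-RAG chunk index.
const chunks: Record<string, string> = {
  c1: "raw source text for chunk c1",
  c2: "raw source text for chunk c2",
};

const DRILL_DOWN_BUDGET = 3; // per turn, as stated above

// Returns a per-turn drill-down tool: each call consumes budget.
function makeDrillDown() {
  let used = 0;
  return function drillDown(chunkId: string): string {
    if (used >= DRILL_DOWN_BUDGET) {
      return "budget exhausted: answer from the TLDRs already read";
    }
    used++;
    return chunks[chunkId] ?? `unknown chunk: ${chunkId}`;
  };
}
```

A fresh closure is created per turn, so the budget resets with each model turn; the fourth call within a turn degrades gracefully to a refusal string rather than an error, nudging the model back to the wiki layer.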