NURL
Neural Unified Representation Language

A programming language
built for language models.

NURL is a small, regular, LLVM-backed language whose syntax is optimised for the way LLMs read and write code — terse prefix notation, no redundant keywords, and a grammar that fits on a single page.

Open the Playground
View on GitHub

Driving an LLM agent? The compiler is also exposed as an MCP server — your assistant can build and run NURL directly.

Designed for how LLMs actually write code

Existing languages were shaped by human ergonomics. NURL drops the parts that exist only for humans and keeps what matters for correctness and compilation.


Token-efficient

Every construct is encoded in the fewest tokens that preserve full meaning. Less context burned on punctuation, more on the actual problem.

Regular grammar

One way to do each thing. The same construct always parses the same way, so generators don't have to remember exceptions.

Local semantics

A token's meaning is derivable from a handful of preceding tokens. No long-range dependencies that break mid-generation.

Deterministic compiler

Identical source produces byte-identical output. No undefined behaviour, no platform-specific surprises — code does what it says.

LLVM-backed

The compiler emits LLVM IR and delegates codegen to clang. You get optimisation and portability for free, native speed by default.

Self-hosting

The compiler is written in NURL itself. Bootstrap runs it twice over its own source and requires byte-identical IR before accepting the build.
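The byte-identical check can be sketched as below. This is an illustrative stand-in, not the real build.sh: `compile` is a hypothetical placeholder (a deterministic text transform in place of the actual NURL-to-LLVM-IR step), and only the fixed-point comparison itself is shown.

```shell
# Stand-in "compiler": a deterministic transform in place of NURL -> LLVM IR.
compile() { tr 'a-z' 'A-Z' < "$1"; }

printf 'compiler source\n' > compiler.nu
compile compiler.nu > stage1.ll   # IR from the first bootstrap pass
compile compiler.nu > stage2.ll   # IR from the second pass over the same source

# Accept the build only if both passes produced byte-identical IR.
if cmp -s stage1.ll stage2.ll; then echo "bootstrap OK"; else echo "bootstrap FAILED"; fi
```

Because the stand-in compiler is deterministic, the two outputs match and the check passes; any nondeterminism in codegen would make `cmp` fail and reject the build.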

Prefix notation, end to end

Everything follows the same shape: OP ARG1 ARG2 …. No operator precedence to memorise, no parser surprises, no hidden costs.

Add two integers · function
// One line. No keywords.
@ add i a i b  i { ^ + a b }

// Calls use the same prefix shape.
( add 3 4 )     // → 7
FizzBuzz · loops + conditionals
@ fizzbuzz i n  v {
    : ~ i i 1
    ~ <= i n {
        : b d3 == 0 % i 3
        : b d5 == 0 % i 5
        ? & d3 d5 { ( nurl_print `FizzBuzz\n` ) }
        ? d3      { ( nurl_print `Fizz\n` ) }
        ? d5      { ( nurl_print `Buzz\n` ) }
                  { ( nurl_print ( nurl_str_int i ) ) }
        = i + i 1
    }
}
Algebraic data types & pattern matching · enum + ??
// Tagged enum: sum type with payloads.
: | Expr {
    Num  i
    Add  *Expr  *Expr
    Mul  *Expr  *Expr
}

@ eval *Expr e  i {
    ^ ?? . e 0 {
        Num n      n
        Add l r    + ( eval l ) ( eval r )
        Mul l r    * ( eval l ) ( eval r )
    }
}
Closures & higher-order functions · lambda
// Function-type literal: (@ ret_ty arg_tys)
: (@ i i) square \ i x  i { * x x }

( square 7 )     // → 49

// Pass closures to higher-order helpers.
@ apply (@ i i) f i x  i { ^ ( f x ) }

Fewer tokens, same program

An LLM pays for every token it reads and writes. NURL drops boilerplate without losing information — the compiler still produces fully-typed native code.

| Language | Sum 1…N (approx. tokens) | Runtime | Targets |
|----------|--------------------------|---------|---------|
| Python | ~46 | Interpreted | Host platform |
| C | ~30 | Native | Many (per port) |
| NURL | ~13 | Native (LLVM) | Any LLVM target |
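For a sense of what the Sum 1…N column measures, a NURL version might look like the sketch below. It is pieced together from the constructs shown in the FizzBuzz example above (`: ~` mutable declaration, `~` while loop, `=` assignment, `^` return) and has not been verified against the compiler.

```
// Illustrative sketch, not verified: sum the integers 1…n.
@ sum i n  i {
    : ~ i s 0
    : ~ i k 1
    ~ <= k n {
        = s + s k
        = k + k 1
    }
    ^ s
}
```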

Write once, target everything

The compiler emits LLVM IR. Any backend clang supports is reachable in principle — the platforms below ship with build scripts and are tested on every release.

Linux x86_64 · native
Windows x86_64 · native
macOS x86_64 · Mach-O
WebAssembly · wasm32-wasi
macOS ARM64 · via clang
Embedded · planned

The browser playground compiles your code to wasm32-wasi and runs it directly in the page — no install, no server-side execution. The self-hosting compiler also builds to wasm, so the entire toolchain can run in a sandbox.

Start writing NURL

Three paths, in order of friction.

Step 1 · Easiest

Try it in the browser

An online editor with examples, build, and run — everything compiles to WebAssembly and runs locally in your tab.

Open the Playground
Step 2 · Local build

Clone and bootstrap

Requires Python 3 and clang. The build script bootstraps the self-hosting compiler twice and verifies byte-identical IR.

git clone https://github.com/nurl-lang/nurl
cd nurl
./build.sh
./nurl.sh examples/fizzbuzz.nu
View on GitHub
Step 3 · Editor

Syntax highlighting

A VS Code / Windsurf extension lives under tooling/vscode-nurl/. The playground ships the same Monaco tokenizer, so you'll feel at home in either.

Get the extension

Plug it into your AI assistant

NURL exposes the entire compiler toolchain as a hosted Model Context Protocol server. Any MCP-aware client — Claude Desktop, Claude Code, Cursor, Windsurf, Zed — can read the docs, browse examples, and build native or wasm binaries on your behalf. Nothing to install locally.

URL  https://play.nurl-lang.org/mcp

Claude Code CLI

Add it as a remote MCP server in one command.

claude mcp add --transport http \
  nurl https://play.nurl-lang.org/mcp

Claude Desktop / Cursor / Windsurf JSON config

Drop this into the client's mcpServers block.

{
  "mcpServers": {
    "nurl": {
      "type": "http",
      "url": "https://play.nurl-lang.org/mcp"
    }
  }
}

claude.ai web

Open Settings → Connectors → Add custom connector and paste the URL above. Streamable HTTP transport, no auth needed.

Open connectors →
Build tools · native Linux ELF, Windows .exe, macOS Mach-O, WebAssembly
Browse · stdlib modules, curated examples, compiler tests
Read · grammar (EBNF), README, roadmap, gotchas
Prompt · nurl_coding_assistant primes the model with the grammar

Go deeper

Pointers into the repository for the curious.

The README

A complete tour of the language, runtime, and pipeline — the canonical reference document.

github.com/nurl-lang/nurl

Formal grammar

EBNF spec lives in spec/grammar.ebnf. Historical snapshots track the language's evolution.

spec/

Examples

Curated .nu programs that the playground surfaces — fizzbuzz, calculators, ASCII demos, agent hosts, and more.

examples/

Roadmap

What's done, what's next. Maintained alongside the source so it doesn't drift from reality.

ROADMAP.md