What Concrete Makes Worse
Series note: this is the tradeoffs entry in the Concrete series. For the foundation, start with Why Concrete Exists. For the most practical demo, read When the Compiler Is the Oracle.
The previous articles in this series argued that Concrete’s design constraints are worth it. Explicit capabilities make code auditable. Linear types prevent resource leaks at compile time. No hidden behavior means the compiler can report what your program actually does. I believe all of that. But I have been writing Concrete code for long enough to know where the constraints bite, and I have not been honest enough about that in public.
This article is about what Concrete makes worse. Not in theory, not as an abstract “it’s stricter.” Specific code that is uglier, longer, or more painful to write in Concrete than in Rust or Zig. If you are considering whether these tradeoffs are worth it for your domain, you deserve to see the cost up front.
#Linear cleanup is verbose and repetitive
In Rust, RAII handles resource cleanup. You open a file, use it, and when the scope ends the compiler inserts a Drop call. You never write the cleanup. Three resources, zero cleanup lines:
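A runnable Rust sketch of that shape; the `parse_config` and `analyze` steps from the Concrete version below are simplified to inline string handling:

```rust
use std::fs::File;
use std::io::{Read, Result, Write};

// Three resources, no cleanup written anywhere: each File is closed by
// its Drop impl when it goes out of scope, in reverse declaration order.
fn process(path: &str) -> Result<String> {
    let mut config = File::open("config.toml")?;
    let mut input = File::open(path)?;
    let mut output = File::create("report.txt")?;

    let mut settings = String::new();
    config.read_to_string(&mut settings)?; // stands in for parse_config

    let mut data = String::new();
    input.read_to_string(&mut data)?;

    // stands in for analyze
    let report = format!("{} bytes of input, {} bytes of config", data.len(), settings.len());
    output.write_all(report.as_bytes())?;
    Ok(report)
} // config, input, output all closed here -- and on every early `?` return
```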
In Concrete, every resource needs explicit cleanup. Owned values must be consumed exactly once. If you forget, the program does not compile. That is the point, but here is what it looks like:
fn process(path: &String) with(File, Alloc) -> Result<Report, Error> {
let config = open("config.toml")?
defer destroy(config)
let input = open(path)?
defer destroy(input)
let output = create("report.txt")?
defer destroy(output)
let settings = parse_config(&config)
defer destroy(settings)
let data = read_all(&input)
defer destroy(data)
let report = analyze(&settings, &data)
write(&mut output, &report)?
Ok(report)
}
Five defer destroy lines. The function’s logic is the same, but five of its thirteen body lines are cleanup ceremony. This is not a cherry-picked example. This is what real Concrete code looks like when you work with multiple resources. The ratio gets worse as functions get more complex.
It is tempting to say “well, at least you can see every cleanup site.” That is true. It is the reason the alloc report works, the reason auditors can trace resource lifetimes without reading the implementation of every type, the reason the oracle experiment could identify unnecessary allocations mechanically. But when you are writing the code, you feel the weight.
In Rust, you trust that Drop runs at scope exit and you move on. In Concrete, you think about destruction order, you type defer destroy for every owning binding, and you occasionally stare at a function wondering if there is a way to factor out the ceremony. There usually is not.
The worst case is error paths. If a function opens resource A, then tries to open resource B and fails, the error propagation with ? runs the deferred cleanup for A. That part works. But if you need conditional cleanup, different paths owning different subsets of resources, the linearity checker forces you to handle every case explicitly. Rust’s Drop handles this invisibly. Concrete makes you write it out.
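What Rust handles invisibly on the error path can be sketched with a toy guard type (names illustrative, not from the article’s examples):

```rust
// A stand-in resource that performs its own cleanup in Drop.
struct Guard(&'static str);

impl Drop for Guard {
    fn drop(&mut self) {
        println!("cleaned up {}", self.0);
    }
}

fn open_b(fail: bool) -> Result<Guard, String> {
    if fail { Err("B failed to open".into()) } else { Ok(Guard("B")) }
}

// If opening B fails, `?` returns early -- and A's Drop still runs,
// with zero cleanup code written on the error path.
fn run(fail: bool) -> Result<(), String> {
    let _a = Guard("A");
    let _b = open_b(fail)?;
    Ok(())
}
```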
I think the tradeoff is correct for the domains Concrete targets. But I no longer describe it as “more annoying to write” as if it were a minor inconvenience. It is a substantial ergonomic cost that you pay on every function that manages resources.
#No closures hurts composition
In Rust, filtering a list is one line:
let active: Vec<&User> = users.iter().filter(|u| u.active).collect();
In Concrete, there are no closures. No lambdas. No anonymous functions. You write a named function and pass it:
fn is_active(user: &User) -> Bool {
return user.active
}
let active: Vec<User> = filter<User>(&users, is_active) with(Alloc)
This is fine for is_active. It is a meaningful predicate that deserves a name. But what about filtering by a threshold that changes?
In Rust:
let expensive: Vec<&Item> = items.iter().filter(|item| item.price > threshold).collect();
The closure captures threshold from the enclosing scope. One line, obvious what it does.
In Concrete, you cannot capture. The function you pass to filter can only use its arguments. If you need context, you have to restructure: pass the threshold as an additional parameter, or write a specialized function, or redesign the API to take a function pointer plus a context value. Each option is more code and more indirection for something that was one line with closures.
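The “function pointer plus a context value” option looks something like this, sketched in Rust rather than Concrete, with illustrative names:

```rust
struct Item { price: u32 }

// Closure-free filtering: the predicate takes an explicit context argument
// instead of capturing it from the enclosing scope.
fn filter_with<'a, T, C>(items: &'a [T], ctx: &C, pred: fn(&T, &C) -> bool) -> Vec<&'a T> {
    let mut out = Vec::new();
    for item in items {
        if pred(item, ctx) {
            out.push(item);
        }
    }
    out
}

fn over_threshold(item: &Item, threshold: &u32) -> bool {
    item.price > *threshold
}
```

The call site becomes `filter_with(&items, &threshold, over_threshold)`: every piece of data the predicate touches is visible in the argument list, at the cost of an extra generic parameter and a named function.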
The rationale is real. Closures are hidden captures. A closure that captures a mutable reference is implicit aliasing. A closure that captures an owned value is an implicit move. A closure that captures by clone is an implicit allocation. In Concrete, all data flow is visible: function arguments go in, return values come out. Nothing is smuggled through a captured environment.
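Those hidden flows are easy to demonstrate in Rust, where the compiler checks them but the call site does not show them (a sketch; the function names are mine):

```rust
// Implicit aliasing: nothing at the call site of `bump()` says that
// `count` is mutably borrowed by the closure's environment.
fn hidden_alias() -> i32 {
    let mut count = 0;
    let mut bump = || count += 1;
    bump();
    bump();
    count
}

// Implicit move: `name` silently relocates into the closure's environment.
fn hidden_move() -> usize {
    let name = String::from("report");
    let consume = move || name.len();
    // `name` is unusable from here on; the only visible marker is the
    // `move` keyword on the closure itself, not anything at a use site.
    consume()
}
```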
But expressiveness has a floor. Below that floor, code stops being clear and starts being bureaucratic. Simple data transformations, map/filter/reduce chains, callback patterns, event handlers, all of these are natural with closures and awkward without them. Concrete is below the floor for this class of problems.
I have thought about whether there is a restricted form of closures that preserves the properties I care about. A closure that can only capture immutable borrows, cannot allocate, and cannot outlive its scope would avoid most of the problems. I have not added it. Partly because I am not sure the complexity is worth it, partly because every feature you add to a small language makes it less small. But the pain is real enough that I keep thinking about it.
#The missing ecosystem
If you try Concrete today, you will hit walls that have nothing to do with the language’s design.
There is no package manager. Dependencies are manual. There is no formatter. Code style is whatever you decide it is. There is no LSP, so your editor gives you nothing: no autocomplete, no inline errors, no go-to-definition. The standard library has more than 30 modules, which sounds like a lot until you need something it does not cover and realize you are writing it from scratch or calling C through FFI.
Rust has crates.io, cargo, rustfmt, rust-analyzer, and a library for nearly anything. Zig has a package manager and a growing ecosystem. Concrete has a compiler and a test runner.
This is not a design problem. It is a maturity problem. The compiler works. The language is real. But the surrounding infrastructure that makes a language livable for daily work is early. If you pick Concrete for a project today, you are signing up to build some of that infrastructure yourself, or to wait.
I am not going to pretend this does not matter. Tooling is not secondary to language design. A language without a formatter means every team argues about style. A language without an LSP means slower feedback loops. A language without a package manager means dependency management is manual labor. These are not luxuries. They are the difference between a language you can advocate for and a language you use alone.
The plan is to build all of it. The formatter is next after the current compiler work stabilizes. The LSP will follow. A package manager is further out. But plans are not tools, and I would rather be honest about what exists today than let someone discover the gaps after committing.
#The cost and the payoff come from the same source
Every pain point in this article traces back to the same design decisions that make Concrete’s strengths possible.
Linear cleanup is verbose because every resource lifetime is explicit. That explicitness is why the alloc report works, why the compiler can tell you exactly where allocation happens and through which call chain.
No closures is painful because it eliminates a natural composition pattern. That elimination is why all data flow is visible, why capabilities are trackable through the call graph, why the proof surface is not contaminated by hidden captures.
The missing ecosystem is the cost of building a new language instead of extending an existing one. That independence is why the grammar is LL(1), why capabilities are built in from the start, why the compiler can be an oracle instead of a gatekeeper.
These are not separate tradeoffs. They are the same tradeoff viewed from different angles. You cannot have the reports without the verbosity. You cannot have the trackable call graph without giving up closures. You cannot have a language designed for machine reasoning without starting from scratch.
For firmware, security boundaries, cryptographic policy, safety-critical components, I still think the tradeoff is right. The previous articles in this series explain why. This one explains what it costs. Both are true at the same time.