What I want in a programming language
First, let me say this: I'm not a programming language designer, a compiler engineer, or anything of the sort.
Like most software engineers, I have opinions about what I like and don't like in the many different languages I've worked in.
In my case those are (roughly ordered by how much and how recently I've used them for something productive): C++, Python, Golang, C#, C, Java, TypeScript, JavaScript, Kotlin, Elixir, Rust, Lua.
I also have mostly theoretical familiarity with the following: Zig, Odin, Jai, D, Circle, Hylo (Val)
And I've probably looked at even more that I've since forgotten.
I've always been someone who liked to analyse and compare the tools I've used for work and tried to find out from the very beginning what pitfalls they have and which best practices I should adopt. As a result, I've been consistently disappointed with any programming language I've worked with. Some more, some less.
And, like most software engineers, I've developed some idea of the kind of programming language I'd like to work with. I've even made a few attempts to prototype some of these aspects.
A programming language is a kind of API. We do have well-known and mostly well-understood API design principles. Let's apply them to programming languages!
So, here's a list of things I'd like to see in a programming language. Most of it is vague, and some items may even conflict, but I'll try to explain the rationale behind each of them. It's an abstract requirement list - raw, unrefined and in no particular order.
Language design philosophy: six principles covering simplicity, safety, consistency, performance, expressiveness and extensibility.
- Simplicity
I believe that if something is simple and elegant in implementation and fits well with existing simple abstractions, it is more likely to be a good design and abstraction. I want the Esperanto of programming languages. It may borrow ideas from everywhere, but if the specification and implementation aren't simple in the end, something went wrong.
- Safety
Like other modern programming languages, my ideal programming language should be absolutely safe by default. The only error category a compiler should allow is logic errors. An engineer should not be able to compile undefined behaviour, potential memory errors, race conditions, deadlocks etc. Interfacing with other systems that allow such things should happen in an explicitly isolated and recognizable "unsafe" scope that demands special attention.
- Consistency
Each keyword or syntax construct has one single purpose within the language. There is one correct and obvious way of doing any particular thing. Let's apply API design best practices here!
- Performance
Performance should never be an afterthought. Efficient software matters for economics, ecology and user experience. Wasting computational resources wastes money, energy and people's time. The language should make it natural to write efficient code.
- Expressiveness
I want to be able to express clearly what I mean. Writing readable code should be natural. The language should have features that directly express intent, instead of forcing people to invent "patterns" that express common intents through boilerplate.
- Extensibility
The language itself should be its own metaprogramming language. No separate macro system, no external code generators. And beyond metaprogramming, the language syntax should lend itself to being extended from within, so that library authors can write APIs that read naturally without compiler magic.
Given all that, these are programming language design rationales that I think fit these goals:
Simplicity A language small enough to fully comprehend. Simple specification, simple implementation.
Rob Pike captured this idea well in his 2012 talk "Less is exponentially more". When the C++11 committee announced 35 new features, Pike asked: did they really believe the problem with C++ was that it didn't have enough features? That question led directly to Go, a language designed by removing things rather than adding them. The ideal is a language small enough for one person to fully comprehend: specification, compiler, and all.
Every language feature request makes the language more complex, and "just one more thing" compounds until you get C++ with its 1800-page standard. Go took this approach seriously. Its specification is around 50 pages, and it deliberately left out generics for over a decade to keep things simple. The trade-off was real (people wrote code generators and copy-pasted), but the resulting language was learnable in a weekend.
Odin's creator gingerBill articulates this well: Odin doesn't try to be fundamentally innovative. It does many things right, and the result is a language that's hard to pitch as a single "killer feature" but feels right when you use it. Karl Zylinski captures this in his post "Odin: A programming language made for me". It's the accumulation of good defaults, not any single radical idea.
A simple language also means a smaller attack surface for bugs in the compiler itself, easier formal verification, and more predictable behaviour. Simplicity isn't a concession. It's a feature.
That being said, I feel the irony of asking for a lot of features here. What I'm hoping is that the core of the language is simple, and that the advanced features I request here can be naturally built out of that core design. In other words: they should be good, zero-cost abstractions.
Context-free grammar Parseable without semantic knowledge. Trivial tooling, precise errors, small specification.
C and C++ have famously context-sensitive grammars. The expression a * b could be a multiplication or a pointer declaration depending on whether a is a type. T(x) could be a function call or a cast. This ambiguity infects every layer of tooling.
A context-free grammar means the parser can fully understand the syntax structure of a program without any semantic knowledge. No symbol tables, no type information, no forward declarations. The parser just parses.
This has real benefits:
- Tooling becomes trivial. Syntax highlighting, code formatters, tree-sitter grammars, linters: they all work perfectly without needing a full compiler front-end. This is why Go, Zig, and Odin have excellent tooling despite being much younger than C++.
- Error messages improve. When the parser doesn't need to guess context, it can give precise, unambiguous error messages.
- The specification stays small. A context-free grammar is expressible in a few pages of BNF. C++'s grammar requires hundreds of pages of prose to disambiguate.
- Incremental and parallel parsing become straightforward. IDE responsiveness depends on being able to re-parse changed regions without re-analysing the whole file.
Zig and Odin both have context-free grammars. Go's grammar is nearly context-free (with minor exceptions). Rust's grammar is mostly context-free but has some macro-related complexities. C++ is the cautionary tale of what happens when you don't prioritise this.
Strong types Algebraic data types, generics via monomorphization, no implicit conversions. Types encode invariants, the compiler enforces them.
A strong type system is the backbone of everything else on this list. Types encode invariants. The compiler enforces them. The programmer communicates intent through them.
Concretely, I want:
- Algebraic data types. Sum types (tagged unions / enums with data) and product types (structs / tuples). Sum types are what make Option and Result work. Pattern matching with exhaustiveness checking means the compiler tells you when you've forgotten a case.
- No implicit conversions. If you want to convert between types, you say so explicitly. Implicit integer widening, string-to-number coercion, truthy/falsy: all gone.
- Generics / parametric polymorphism. Functions and types that work over type parameters. Creating and using types should be natural, easy and preferred. Types might remain local to their module by default.
- Newtype / distinct types. Wrapping a primitive to give it domain meaning (a UserId is not an i64) should be cheap and natural, not ceremonial.
- Traits / type classes. Shared behaviour expressed through interfaces that the type system checks at compile time.
Rust's type system is the closest existing model to what I want. It proves that a rich type system and systems-level programming are not in conflict. The main criticism is the learning curve and the verbosity of lifetime annotations, which is a separate concern from the type system's expressiveness.
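As a concrete sketch of these ideas, here is how they look in Rust, the closest existing model named above. The domain types (`UserId`, `Session`) are hypothetical examples, not from any real codebase:

```rust
// Newtype: a UserId is not just an i64, and the wrapper costs nothing at runtime.
#[derive(Debug, Clone, Copy, PartialEq)]
struct UserId(i64);

// Sum type: a session is exactly one of these variants, each carrying its own data.
enum Session {
    Anonymous,
    LoggedIn { user: UserId },
    Expired { user: UserId, seconds_ago: u64 },
}

// Exhaustive pattern matching: adding a variant to Session makes this function
// fail to compile until the new case is handled.
fn describe(session: &Session) -> String {
    match session {
        Session::Anonymous => "anonymous".to_string(),
        Session::LoggedIn { user } => format!("user {} is logged in", user.0),
        Session::Expired { user, seconds_ago } => {
            format!("user {} expired {}s ago", user.0, seconds_ago)
        }
    }
}

fn main() {
    let s = Session::LoggedIn { user: UserId(42) };
    println!("{}", describe(&s)); // user 42 is logged in
}
```

Note that the compiler, not convention, enforces all three properties: the newtype can't be confused with a plain integer, the invalid variant combinations can't be constructed, and the forgotten match arm can't compile.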
Value-oriented programming Data as values, not objects with identity. Side effects at explicit boundaries.
In the 2015 WWDC talk "Protocol-Oriented Programming in Swift", Dave Abrahams showed how value types eliminate entire categories of bugs caused by shared mutable state. Matt Diephouse took this further in his post "Value-Oriented Programming", arguing that separating logic from effects by returning values instead of performing side effects gives you testing without mocks, composable transformations, and easier debugging.
The idea is simple: functions take values and return values. Side effects happen at explicit boundaries. You don't need TestRenderer mocks when your drawing function returns a Set<Path> that you can compare directly.
Hylo (formerly Val) takes this to its logical conclusion. Designed by Dave Abrahams and Dimi Racordon, Hylo is built entirely on mutable value semantics, a model where every value is independently owned and mutations don't affect other values. Dave Abrahams' C++ On Sea keynote (2024) demonstrates how this model achieves memory safety without a garbage collector and without Rust's reference-centric borrow checker. They've even shown that doubly linked lists with value semantics can be safe and faster than reference-based implementations.
Value-oriented programming ties directly into separation of state and logic: when data flows through transformations as values, it's naturally clear where state changes and where it doesn't. It also informs the next chapter: if data is values rather than objects with identity, the traditional OOP model of class hierarchies and inherited behaviour becomes unnecessary.
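A minimal sketch of the pattern in Rust, with hypothetical `Path` and `draw_face` stand-ins for the talk's drawing example:

```rust
// Value-oriented drawing: the logic returns a value describing *what* to draw;
// a thin outer layer performs the actual rendering effect.

#[derive(Debug, PartialEq)]
enum Path {
    Line { from: (i32, i32), to: (i32, i32) },
    Circle { center: (i32, i32), radius: i32 },
}

// Pure: takes values, returns values. No renderer, no mock needed to test it.
fn draw_face(size: i32) -> Vec<Path> {
    vec![
        Path::Circle { center: (0, 0), radius: size },
        Path::Line { from: (-size / 4, size / 4), to: (size / 4, size / 4) },
    ]
}

fn main() {
    // A test compares the returned value directly instead of intercepting
    // renderer calls through a TestRenderer mock.
    let paths = draw_face(8);
    assert_eq!(paths.len(), 2);
    println!("{:?}", paths);
}
```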
Generics Monomorphization by default for zero-cost performance. Type erasure available when ABI stability matters.
Generics should default to monomorphization, where the compiler generates a specialised copy of the function for each concrete type it's called with. This is what Rust and C++ do, and it's what Zig achieves through comptime. The result is zero-cost: the generic code compiles down to the same machine code you'd write by hand for each specific type. No vtables, no indirection, no runtime dispatch.
But monomorphization has well-known costs: binary size grows with each instantiation, and compilation slows down because the compiler does more work. When ABI stability matters (shared libraries, plugin systems, stable FFI boundaries) monomorphization is actively harmful because every change to a generic function recompiles all its instantiations.
The language should give developers a choice. Monomorphization is the default because it aligns with zero-cost abstractions. But a developer should be able to opt into type erasure (trait objects in Rust, interfaces with vtables) when they consciously decide that ABI stability, binary size, or compilation speed matters more than peak runtime performance for a given abstraction. This is a per-use-site decision, not a global one.
This fits naturally with compile-time metaprogramming: monomorphization is compile-time code generation. The generic function is essentially a comptime template that the compiler instantiates. Type erasure is the escape hatch for when you need dynamism across compilation boundaries.
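The two strategies, side by side in Rust (standard language features; the `Shape` types are illustrative):

```rust
trait Shape {
    fn area(&self) -> f64;
}

struct Square(f64);
struct Circle(f64);

impl Shape for Square { fn area(&self) -> f64 { self.0 * self.0 } }
impl Shape for Circle { fn area(&self) -> f64 { std::f64::consts::PI * self.0 * self.0 } }

// Monomorphized: the compiler emits one specialised copy per concrete T.
// Zero-cost at runtime, but each instantiation adds compile time and binary size.
fn total_area_mono<T: Shape>(shapes: &[T]) -> f64 {
    shapes.iter().map(|s| s.area()).sum()
}

// Type-erased: one compiled body, dispatch through a vtable.
// Stable across ABI boundaries and smaller binaries, at a small runtime cost.
fn total_area_dyn(shapes: &[&dyn Shape]) -> f64 {
    shapes.iter().map(|s| s.area()).sum()
}

fn main() {
    let squares = [Square(2.0), Square(3.0)];
    let mixed: [&dyn Shape; 2] = [&Square(2.0), &Circle(1.0)];
    println!("{}", total_area_mono(&squares)); // 13
    println!("{}", total_area_dyn(&mixed));
}
```

The point is that the choice sits at the use site: the same trait serves both calling conventions, and the caller decides which trade-off applies.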
Explicit optionality Replace nullable references with explicit sum types. If the type system is expressive enough, null pointers cannot exist.
In 2009, Tony Hoare called his invention of null references his "billion dollar mistake" in his QCon London talk. He was being conservative. The actual cost in bugs, security vulnerabilities, and wasted developer hours is incalculable. Hoare's key insight was that programming language designers should be responsible for the errors in programs written in their language. Null was easy to implement but shifted an enormous burden onto every programmer using the language forever after.
The solution is well-understood by now: replace nullable references with explicit sum types. Rust's Option<T>, Haskell's Maybe, Swift's optionals, C++'s std::optional. They all express the same idea. A value is either present or it isn't, and the type system forces you to handle both cases before you can access the value.
This follows directly from a strong type system. If your types are expressive enough and your compiler enforces exhaustive pattern matching, null pointers simply cannot exist. You don't need a special "null safety" feature bolted on after the fact like Kotlin's ? or C#'s nullable reference types. You need a type system that makes the absent-value case explicit from the start. That said, I'm in favor of making these types part of the language specification somehow: good code handles absent values and errors so pervasively that dedicated language support would improve the ergonomics a lot.
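In Rust this is just the standard Option type; the `find_port` helper below is a hypothetical example:

```rust
// Absence as a sum type: the compiler forces both cases to be handled
// before the inner value can be used.

fn find_port(config: &str) -> Option<u16> {
    config
        .lines()
        .find_map(|line| line.strip_prefix("port="))
        .and_then(|value| value.parse().ok())
}

fn main() {
    // There is no null that silently flows through and explodes later;
    // the "no port" case is part of the type.
    match find_port("host=local\nport=8080") {
        Some(port) => println!("listening on {port}"),
        None => println!("no port configured"),
    }
}
```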
Composition and delegation Traits, interfaces and explicit delegation instead of class hierarchies and inheritance.
Object-oriented programming as practiced (class hierarchies, virtual methods, inheritance of implementation) has well-documented limitations:
- The fragile base class problem. Changing a base class can silently break subclasses in ways that aren't caught at compile time.
- Tight coupling. Inheritance couples interface to implementation. Subclasses depend on the internal details of their parents.
- The expression problem. Adding new types is easy with OOP; adding new operations is hard. The reverse is true for functional approaches. Neither is strictly better; a good language should make both possible.
- Diamond inheritance and multiple inheritance. Languages either disallow it (Java, C#) or make it a minefield (C++).
These aren't just theoretical concerns. Decades of experience across many codebases have shown that deep inheritance hierarchies become brittle and hard to reason about. The industry has gradually moved towards composition, and modern languages reflect that.
The alternative is well-understood: composition via traits, interfaces, and delegation.
- Traits (Rust, Scala) define shared behaviour without implementation inheritance. Types can implement multiple traits. There's no hierarchy.
- Interfaces (Go, TypeScript) define contracts. Implementations are flat.
- Delegation (Kotlin's by keyword) lets you compose behaviour explicitly: "I want Foo to have the same interface as Bar, and I'll delegate the implementation to this specific instance of Bar." This gets you code reuse without inheritance.
The language should make it natural to compose types from traits and delegate to implementations rather than inheriting method implementations from a parent type.
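A sketch of composition-plus-delegation in Rust (the `Logger`/`Service` names are hypothetical):

```rust
trait Logger {
    fn log(&self, msg: &str) -> String;
}

struct ConsoleLogger;
impl Logger for ConsoleLogger {
    fn log(&self, msg: &str) -> String {
        format!("[console] {msg}")
    }
}

// Service *has* a logger (composition) and forwards calls to it (delegation),
// rather than inheriting an implementation from a base class. There is no
// hierarchy to become fragile, and swapping the logger is a type parameter.
struct Service<L: Logger> {
    logger: L,
}

impl<L: Logger> Service<L> {
    fn handle(&self, request: &str) -> String {
        self.logger.log(&format!("handling {request}"))
    }
}

fn main() {
    let svc = Service { logger: ConsoleLogger };
    println!("{}", svc.handle("ping"));
}
```

In Rust the forwarding call is written out by hand; Kotlin's by makes exactly this pattern a one-word language feature.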
Errors as values Errors tracked by the type system with first-class syntax for propagation. No exceptions, no invisible control flow.
Exceptions, as implemented in Java, Python, C# and most mainstream languages, have a fundamental problem: they introduce non-local control flow. When you call a function, you can't tell from the call site whether it might throw, what it might throw, or where the error will actually be handled. The "unhappy path" becomes invisible at the point where it matters most.
In practice, exceptions are frequently misused: as control flow mechanisms, as lazy ways to skip validation, as invisible goto statements that jump across half a codebase. The core issue is that with exceptions, thinking about the error path is opt-in, when it's ignoring the error path that should require opting in.
Errors as values, like Rust's Result<T, E>, Zig's error unions, or even Go's explicit (value, error) returns, flip this around. The error is right there at the call site. You have to make a conscious decision about it. The type system tracks it. The compiler won't let you forget.
But treating errors as values alone isn't enough. Go proved that. Its if err != nil boilerplate is universally disliked even by Go enthusiasts. The language needs first-class syntax for error propagation. Rust's ? operator is a good example: it makes the common case (propagate the error up) a single character while keeping the error path visible. Zig's try keyword serves a similar role.
Error handling is too pervasive to be a pure library implementation. It must be a first-class language aspect with dedicated syntax that makes the happy path clean while keeping errors visible and trackable through the type system.
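Rust's Result plus ? shows the shape this takes (standard library types; `parse_pair` is an illustrative helper):

```rust
use std::num::ParseIntError;

// The signature tells every caller that this can fail, and with what error type.
fn parse_pair(a: &str, b: &str) -> Result<(i64, i64), ParseIntError> {
    // `?` propagates the error upward with one character; the happy path stays
    // clean, but the error path remains visible and type-checked.
    let x: i64 = a.parse()?;
    let y: i64 = b.parse()?;
    Ok((x, y))
}

fn main() {
    match parse_pair("40", "2") {
        Ok((x, y)) => println!("sum = {}", x + y),
        Err(e) => println!("bad input: {e}"),
    }
    // The error is an ordinary value you can inspect, log, or convert.
    assert!(parse_pair("40", "oops").is_err());
}
```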
Mandatory init Every variable initialised at declaration. No out-parameters, no declare-then-assign.
Every variable must be initialised at the point of declaration. No exceptions in safe code.
This eliminates an entire class of bugs (uninitialised reads) by construction rather than by analysis. It also eliminates the need for C++-style "out" parameters, where you declare a variable, pass a pointer to it into a function, and hope the function fills it in. If a function produces a value, it returns a value. The language's calling convention and optimisation passes can make this as efficient as filling in a pointer. The programmer shouldn't need to compromise safety for performance at this level.
For the rare cases where deferred initialisation is genuinely needed (performance-critical code, FFI buffers, memory-mapped regions) the language provides an explicit uninitialized marker type. Using it is an unsafe operation. The compiler tracks the variable's initialisation state and refuses to compile any read path that hasn't provably been preceded by a write. This makes the decision conscious, auditable, and contained.
The design principle is: code structure should make it unnecessary to define a variable before its value is known. If you find yourself wanting to declare-then-assign, the language should offer a way to restructure (like if/match expressions that return values, multiple return values, structured bindings) so that the value flows to the binding directly.
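Rust already demonstrates the expression-oriented version of this (standard language behaviour):

```rust
// match is an expression: the binding is initialised on every path,
// or the program doesn't compile.
fn classify(n: i32) -> &'static str {
    let label = match n {
        i32::MIN..=-1 => "negative",
        0 => "zero",
        _ => "positive",
    };
    label
}

fn main() {
    // Declare-then-read is rejected outright in safe Rust:
    //     let x: i32;
    //     println!("{x}"); // error: used binding `x` isn't initialized
    println!("{}", classify(-5));
}
```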
Deterministic lifetimes Every resource has a known lifetime. Freed at the precise point when no longer needed, never earlier or later.
Garbage collection trades determinism for convenience. You don't know when memory will be freed. You don't know when finalizers will run. You can't reason about allocation patterns in performance-critical code. And GC pauses, even the "low latency" ones, are unacceptable in certain domains.
The alternative is deterministic lifetimes. Every resource (memory, file handles, sockets, locks) has a known lifetime, and the language ensures it is freed at the precise point when it's no longer needed.
There are several approaches in the wild:
- Rust's ownership and borrowing. The compiler tracks ownership through the type system and inserts drops automatically. Extremely powerful but comes with a steep learning curve and lifetime annotation complexity.
- Odin's allocator-based approach. Custom allocators (including arena allocators and a temporary allocator) paired with defer for cleanup. Simpler than Rust but relies more on programmer discipline. gingerBill's philosophy is that custom allocators eliminate most of the malloc/free pain.
- Hylo's mutable value semantics. Values are independently owned by construction, so lifetime tracking is simpler than Rust's reference-centric model.
- C++'s RAII. Constructors and destructors manage resources. The idea is sound; the execution in C++ is undermined by copy semantics, implicit constructors, and the lack of move-by-default.
I actually think C++ constructors and destructors are basically a good idea with a few failures in execution (I wrote about this before). Rust's Drop trait demonstrates that RAII works when the type system actually enforces ownership. The arena allocator pattern, as described by Ryan Fleury, is another powerful tool: bulk allocation and deallocation for groups of related objects, avoiding per-object lifetime tracking entirely.
As for how strict the ownership/borrowing system should be: as strict as necessary to guarantee safety at runtime. If the compiler can't prove that a piece of code is safe, it shouldn't compile. This is the same philosophy as the safety principle at the top of this list. The only acceptable error category is logic errors. Whether the specific mechanism is Rust-style borrow checking, Hylo-style mutable value semantics, or something new doesn't matter as much as the guarantee: if it compiles, it's safe.
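Rust's Drop makes the determinism observable. The sketch below (hypothetical `Guard` type) records release order into a shared log instead of relying on a debugger:

```rust
use std::cell::RefCell;
use std::rc::Rc;

// A stand-in resource that records exactly when it is released.
struct Guard {
    name: &'static str,
    log: Rc<RefCell<Vec<String>>>,
}

impl Drop for Guard {
    fn drop(&mut self) {
        // Runs at a statically known point: end of scope, in reverse
        // declaration order. No finalizer, no GC pause, no guesswork.
        self.log.borrow_mut().push(format!("released {}", self.name));
    }
}

fn run() -> Rc<RefCell<Vec<String>>> {
    let log = Rc::new(RefCell::new(Vec::new()));
    let _outer = Guard { name: "file handle", log: Rc::clone(&log) };
    {
        let _inner = Guard { name: "lock", log: Rc::clone(&log) };
        log.borrow_mut().push("critical section".to_string());
    } // _inner dropped exactly here: "released lock"
    log.borrow_mut().push("after scope".to_string());
    Rc::clone(&log)
} // _outer dropped here: "released file handle"

fn main() {
    println!("{:?}", run().borrow());
}
```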
Pure by default Structural separation of pure functions and state-altering code. The compiler knows which is which.
Mainstream OOP encouraged bundling state and behaviour into objects. Methods mutate this. State is scattered across object graphs. Testing requires elaborate setup. Reasoning about what a function does requires understanding the entire mutable context it has access to.
This model has well-known drawbacks. A better alternative: pure functions that take inputs and produce outputs, separate from the code that manages state transitions. This is not a new idea (functional programming has advocated for it forever) but it doesn't require a functional language to implement. It requires a language that makes the distinction structural.
Concretely:
- Functions should be pure by default. A function that takes values and returns values, with no access to mutable state, should be the easy and natural thing to write.
- State-altering code should be explicitly marked. Whether through effect annotations, mut requirements (like Rust's &mut), or a separate syntactic construct, the compiler should always know whether a function can alter state.
- This separation gives the compiler powerful capabilities: pure functions can be freely reordered, memoised, parallelised, and inlined. Lifetime analysis becomes simpler. Concurrency safety becomes provable.
Matt Diephouse's value-oriented programming post demonstrates this practically: when drawing logic returns a Set<Path> instead of calling methods on a renderer, you can test without mocks, transform the result, and inspect it in a debugger. All because you separated the "what" (values) from the "how" (effects).
This leads to an uncomfortable but honest conclusion: the language probably needs some form of effect system. I'd prefer to avoid it. Effect systems add complexity, and no mainstream language has shipped one that feels natural. But if pure functions and functions with effects are declared differently, treated differently by the compiler, and a pure function can never call an effectful function without itself becoming effectful, that is an effect system, whether you call it one or not. The key is to keep it minimal and structural rather than reaching for the full generality of algebraic effects. Something closer to Rust's unsafe propagation model than to Eff's handler-based approach: effects propagate through call chains, and the compiler enforces that you can't hide them.
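Rust's &mut is a rough, minimal approximation of such a marker (the `Account` example is hypothetical):

```rust
#[derive(Debug, PartialEq)]
struct Account {
    balance: i64,
}

// Pure: reads values, returns a value. Trivially testable, reorderable, cacheable.
fn interest(balance: i64, rate_percent: i64) -> i64 {
    balance * rate_percent / 100
}

// Effectful: the &mut in the signature is the visible, compiler-checked marker
// that this function alters state. A caller cannot accidentally share the
// account while it's being mutated.
fn apply_interest(account: &mut Account, rate_percent: i64) {
    account.balance += interest(account.balance, rate_percent);
}

fn main() {
    let mut acc = Account { balance: 1000 };
    apply_interest(&mut acc, 5);
    println!("{acc:?}"); // Account { balance: 1050 }
}
```

This isn't a full effect system (it says nothing about I/O or global state), but it shows the structural idea: the distinction between pure and state-altering code lives in the signature, where the compiler can enforce it.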
Zero-cost abstractions What you don't use, you don't pay for. What you do use compiles to optimal code.
The term "zero-cost abstractions" was coined by Bjarne Stroustrup for C++, originally formulated as two rules: "What you don't use, you don't pay for. And further: What you do use, you couldn't hand code any better."
withoutboats elaborated on this in a blog post on zero-cost abstractions (2019), arguing that there's actually a third requirement: a zero-cost abstraction must also improve the user's experience compared to handwriting the equivalent code. Otherwise, why bother? This is important because zero-cost abstractions compete on two fronts. They must be better than handwriting the low-level code and within spitting distance of the ergonomics of non-zero-cost abstractions.
Rust's great zero-cost abstractions are well known: ownership/borrowing (memory safety without GC), iterator/closure APIs (map/filter compiling to the same code as handwritten C loops), and async/await (futures without allocating per-operation). The unsafe and module boundary system is what withoutboats calls "the zero-cost abstraction that is the mother of all other zero-cost abstractions in Rust", the ability to locally break the rules to extend the system beyond what the type checker handles.
It's worth being honest: truly achieving all three criteria (no global cost, optimal performance, better UX) is rare and extremely difficult. Most abstractions achieve two out of three. The aspiration is still the right one though.
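The canonical example in Rust is the iterator chain, which optimising builds compile to essentially the same machine code as the hand-written loop:

```rust
// High-level: composed iterator adaptors.
fn sum_of_even_squares_iter(xs: &[i64]) -> i64 {
    xs.iter().filter(|&&x| x % 2 == 0).map(|&x| x * x).sum()
}

// Low-level: the loop you'd write by hand.
fn sum_of_even_squares_loop(xs: &[i64]) -> i64 {
    let mut total = 0;
    for &x in xs {
        if x % 2 == 0 {
            total += x * x;
        }
    }
    total
}

fn main() {
    let xs = [1, 2, 3, 4, 5];
    // Same result, and with optimisations on, essentially the same assembly:
    // the abstraction costs nothing and reads better.
    assert_eq!(sum_of_even_squares_iter(&xs), sum_of_even_squares_loop(&xs));
    println!("{}", sum_of_even_squares_iter(&xs)); // 4 + 16 = 20
}
```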
Safe concurrency Whatever the model, the compiler prevents data races and deadlocks. No function coloring.
I don't have a strong opinion on the specific concurrency model yet. There are several well-proven approaches:
- Channels as in Go, communicate by sharing messages, not by sharing memory
- Actors as in Erlang/Elixir, isolated processes with message passing
- Ownership-based concurrency as in Rust, the type system prevents data races at compile time
- Structured concurrency as explored by Hylo and Java's Project Loom, concurrency scopes that mirror lexical scopes
Each comes with trade-offs in ergonomics, performance and expressiveness. What I do care about is that whatever model is chosen, it must be safe by default. The compiler must prevent data races and deadlocks structurally, not through convention or runtime checks.
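As one concrete data point, here is the channel model in Rust's standard library, where ownership transfer makes the safety structural (`squared_sum` is an illustrative helper):

```rust
use std::sync::mpsc;
use std::thread;

// Message passing with std::sync::mpsc: the value is *moved* into the channel,
// so the sender can no longer mutate it behind the receiver's back. Data races
// on the sent value are ruled out by the type system, not by convention.
fn squared_sum(n: i32) -> i32 {
    let (tx, rx) = mpsc::channel();

    let worker = thread::spawn(move || {
        for i in 0..n {
            tx.send(i * i).unwrap();
        }
        // tx is dropped here, which closes the channel.
    });

    // The receiving iterator ends when the channel closes.
    let total: i32 = rx.iter().sum();
    worker.join().unwrap();
    total
}

fn main() {
    println!("{}", squared_sum(5)); // 0 + 1 + 4 + 9 + 16 = 30
}
```

Note that this guarantees freedom from data races, not from deadlocks; the stronger structural guarantee the text asks for goes beyond what any mainstream model currently delivers.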
One specific problem worth calling out: the function coloring problem, described in Bob Nystrom's essay "What Color Is Your Function?". In languages with async/await (JavaScript, Python, C#, Rust), every function is either "sync" or "async," and async functions can only be called from other async functions. This infects entire call chains and splits the ecosystem into two worlds.
I'm not convinced that coroutines as implemented in most current languages are a good solution. Go solves this by making everything concurrent-capable through goroutines, so there's no coloring because all functions are the same "color." Zig similarly avoids coloring. Hylo claims to achieve this through structured concurrency with "no function coloring." Java's virtual threads (Project Loom) are another approach that makes blocking calls cheap, avoiding the need for async/await entirely.
Maybe concurrency shouldn't be implemented at the function level at all. Effect systems and algebraic effects, as explored in languages like Eff and Koka, offer a different model where concurrency is an effect that can be handled at scope boundaries rather than infecting function signatures.
Unambiguous call sites If two functions do different things, they get different names. No default arguments, no overloading.
Default arguments look harmless but introduce real ambiguity:
- At the call site, you can't tell which arguments were explicitly passed and which were defaulted without consulting the function signature.
- When defaults change, every call site that relied on the old default changes behaviour silently.
- In combination with overloading, they create an explosion of possible resolutions that make call sites genuinely ambiguous.
Function overloading shares the same core problem: the same function name does different things. The resolution rules are always complex (C++ overload resolution is notoriously Byzantine), and at the call site you need to mentally run the resolution algorithm to know which function you're actually calling.
If two functions do different things, they should have different names. If you want to avoid writing boilerplate, functions call each other. A three-argument version calls the five-argument version with defaults. This is explicit, readable, and unambiguous.
Zig, Odin, and Go all take this stance. It looks like a limitation until you use it, and then you realise how much simpler every call site becomes when there's exactly one function with that name.
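The "functions call each other" pattern looks like this in Rust, echoing the standard library's own Vec::new vs Vec::with_capacity convention (the `Buffer` type is hypothetical):

```rust
struct Buffer {
    capacity: usize,
    zeroed: bool,
}

impl Buffer {
    // The fully explicit constructor: every option spelled out.
    fn with_options(capacity: usize, zeroed: bool) -> Buffer {
        Buffer { capacity, zeroed }
    }

    // The short forms call the full form with explicit values. Every call site
    // names exactly the behaviour it gets; nothing is defaulted silently.
    fn new() -> Buffer {
        Buffer::with_options(1024, false)
    }

    fn zeroed(capacity: usize) -> Buffer {
        Buffer::with_options(capacity, true)
    }
}

fn main() {
    let b = Buffer::zeroed(4096);
    println!("capacity={}, zeroed={}", b.capacity, b.zeroed);
}
```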
Compile-time introspection All reflection at compile time. Runtime type introspection replaced by type-safe code generation.
Runtime reflection lets code inspect and manipulate types, methods, and fields at runtime. Java's java.lang.reflect, C#'s System.Reflection, Python's getattr/setattr. They all share the same fundamental problems:
- They break static guarantees. The whole point of a type system is to catch errors at compile time. Reflection bypasses this, turning type errors into runtime errors.
- They prevent optimisation. The compiler can't inline, devirtualise, or dead-code-eliminate code that's discovered at runtime.
- They create invisible coupling. Code that uses reflection couples to the internal structure of types in ways that aren't expressed in any interface. Rename a field and something breaks at runtime three layers away.
- They're a security surface. Runtime reflection can access private fields, invoke private methods, and bypass access control.
The common counterargument is: "what about serialization?" Compile-time reflection solves this entirely. Zig's @typeInfo and @Type, Jai's compile-time type introspection, and Circle's compile-time reflection all let you generate serialization code at compile time that's fully type-checked and fully optimised. The generated code is indistinguishable from handwritten code.
Even OOP best practices have always been clear that runtime reflection beyond debugging undermines the design principles it claims to support. In my experience, production uses of runtime reflection consistently point to a missing abstraction that would be better solved at the type level.
Integrated tooling Build system, formatter, test runner, package manager ship with the language. Written in the language itself.
The build system should be written in the language itself. Full stop. Not in CMake, not in Make, not in some YAML/TOML DSL, not in a separate scripting language.
Zig does this with build.zig, where your build script is a regular Zig program. Jai takes it further: Jonathan Blow's #run directive lets you execute arbitrary code at compile time, and the build system is simply a Jai program that runs during compilation.
This is popular for good reason:
- You already know the language. No second set of syntax, semantics and quirks to learn.
- Build logic can use the same type system, error handling, and libraries as the rest of your code. We are building a great language here - why would I want to use something else?
- Tooling (LSP, debugger, etc.) works on your build scripts the same as on your application code.
- No impedance mismatch between what the build system can express and what the language can express.
Everyone who has used CMake or dealt with Gradle's Groovy DSL knows the pain of fighting a build system that's almost but not quite a real programming language. Just make it one.

Beyond the build system, the language should ship with all essential project management tools: a formatter, a test runner, and a package manager. Go and Zig both demonstrate that integrated tooling reduces friction enormously. With all the safety and correctness features proposed here, the need for external linters should be minimal; the compiler itself catches most of what linters traditionally flag. The goal is that a developer can be productive with nothing beyond the language's own toolchain, without reaching into the internet for third-party build tools or project scaffolding.
Extensibility The language is its own metaprogramming tool. Library authors write APIs that read like DSLs.
In concrete terms, extensibility means two things:
Compile-time metaprogramming along the lines of Zig's comptime or Jai's #run. The language itself should be powerful enough to generate code, perform static checks, and transform data structures during compilation, without a separate macro language or build-time code generator. Zig's approach, where the same language constructs work at both compile time and runtime, is particularly elegant. The article "Zig's comptime is bonkers good" captures why this matters. At the same time, matklad's "Things Zig comptime won't do" is a useful reminder that deliberate limitations (e.g. no I/O at comptime) exist for good reasons.
Language-level ergonomics for expressive code. I especially like what Kotlin does here. A few examples:
- Extension functions let you add methods to existing types without inheritance or wrappers. You can write `"hello".removePrefix("he")` as if `removePrefix` were built into `String`, but it's just a function you defined.
- Infix functions let you write `1 to "one"` instead of `1.to("one")`, making DSL-like syntax possible without parser changes.
- Trailing lambdas let you write `list.filter { it > 0 }` instead of `list.filter({ it > 0 })`. This small syntactic choice makes builder patterns and scoping constructs read like language keywords.
- Destructuring declarations let you unpack data classes and maps naturally: `val (name, age) = person`.
These features serve expressiveness directly: they let you name and structure operations in ways that match the domain rather than fighting the language's syntax. The result is that Kotlin libraries can create APIs that read almost like DSLs (Gradle's Kotlin DSL, Ktor, Exposed) without requiring any actual code generation. The language should adopt similar patterns, not as clever tricks, but as first-class design considerations for readability.
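All four features can be shown in a few lines of real Kotlin. The names `shout`, `describedAs`, and `Person` are made up for this sketch (Kotlin's stdlib already defines `to`, so the infix example uses a different name):

```kotlin
// Extension function: adds a method to String without modifying the type.
fun String.shout(): String = uppercase() + "!"

// Infix function: callable without dot or parentheses.
infix fun Int.describedAs(label: String) = Pair(this, label)

data class Person(val name: String, val age: Int)

fun main() {
    println("hello".shout())                            // extension call
    println(1 describedAs "one")                        // infix call
    println(listOf(-1, 2, 3).filter { it > 0 })         // trailing lambda
    val (name, age) = Person("Ada", 36)                 // destructuring declaration
    println("$name is $age")
}
```

None of this requires macros or code generation; each feature is ordinary function definition plus a small, well-defined piece of call-site syntax.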
Supply chain safety Compile-time sandboxing and permissions for third-party packages. Batteries-included standard library.
In order for large systems to be buildable and maintainable, a simple yet safe and easily expandable module system needs to exist. Modules should provide clear boundaries, explicit exports, and straightforward dependency management.
But beyond the basics, any modern language needs to address supply chain security, and that problem gets worse every year. Supply chain attacks happen almost daily now. A compromised or malicious third-party package can exfiltrate data, spawn processes, or modify the file system, and in most ecosystems the developer who pulls it in has no idea what it's doing under the hood.
The language should implement compile-time sandboxing and a permission model for third-party packages:
- Every package must declare what kind of code it contains and what system interactions it requires (file system access, network, process spawning, native FFI).
- Any interaction with system APIs must be explicitly approved by the developer integrating the package.
- If a package that previously didn't spawn processes starts doing so after an update, the build should refuse to compile until the developer reviews and approves the new permission.
- The standard library should provide the canonical, sandboxed APIs for system interactions. If a package uses native FFI to escape the language and interact with the OS directly, the compiler should only compile that code if explicitly approved.
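As a sketch, the permission declarations could live in the package manifest, where both the compiler and a human auditor can read them. The format and every field name below are invented; no such manifest exists in any current language:

```toml
# Hypothetical package manifest -- invented syntax, for illustration only.
[package]
name = "image-codec"
version = "2.1.0"

[permissions]
filesystem = "read"   # may read files, never write
network = "none"      # any socket use fails to compile
processes = "none"    # spawning a process fails to compile
ffi = ["libjpeg"]     # native calls restricted to this one library
```

An update that adds, say, `network = "outbound"` would then show up as a manifest diff and require explicit re-approval before the build proceeds.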
Deno's permission model is an inspiration here, but integrated at the language and compiler level rather than only at the runtime. This is a hard problem to solve well, but the industry desperately needs safe-by-default guardrails.
The standard library plays a big role here. It should be comprehensive and well-designed, closer to Python or Go's "batteries included" philosophy than to Rust's minimal std. A rich standard library means fewer third-party dependencies, which directly reduces supply chain attack surface. Networking, HTTP, JSON, cryptography, file system operations, compression, date/time: these should all be there and be good enough that reaching for a third-party crate is a deliberate choice, not a necessity. That's a large maintenance commitment, but it's the right trade-off for a language that takes supply chain security seriously. And with the permission system described above, the cases where developers do pull in third-party code become safer by default.
FFI sandbox C interop exists but lives in clearly marked unsafe scopes with granular permissions.
For a systems language without GC, native code interop with C (and by extension, the entire existing software ecosystem) is table stakes. There's no way around it.
But it needs to be isolated. Calling into C means leaving behind every safety guarantee the language provides. The approach should be similar to Rust's unsafe blocks: FFI code lives in clearly marked scopes, the developer explicitly opts into unsafety, and the rest of the codebase remains provably safe.
There's room to do better than Rust, though. Rust's unsafe is powerful but somewhat coarse-grained. It's an all-or-nothing switch that turns off multiple safety checks at once. A more granular model could distinguish between "this accesses raw memory" and "this calls a C function with an unchecked ABI." The sandbox concept from the module/package security model could extend here too: FFI usage could require explicit permissions declared at the module or package level, making it visible in dependency audits without having to grep through code.
Zig, Odin and Hylo all invest heavily in C interop through their foreign function import systems. Odin's foreign import is notably ergonomic. Whatever the specific mechanism, the key principle is: interop must exist, but it must be clearly separated, explicitly approved, and never accidentally pulled into safe code.
Memory layout Explicit control over struct layout, alignment, padding and SOA/AOS strategies.
When you need performance, you need control over how data is laid out in memory. Cache-friendliness, SIMD alignment, and choosing between Array-of-Structs (AOS) and Struct-of-Arrays (SOA) representations can make orders-of-magnitude differences in tight loops.
Odin makes #soa a first-class language construct. You annotate a type and the compiler rearranges its layout for data-oriented access patterns. Zig gives you packed structs, explicit alignment control, and the ability to examine memory layout at comptime. Jai similarly provides SOA transformations.
The programmer should have explicit control over struct layout, alignment, padding, and data layout strategies. This doesn't mean every programmer needs to use these features all the time, but when you need them, they should be there without having to drop into unsafe code or platform-specific hacks.
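The AOS/SOA distinction is easy to show by hand. The Kotlin sketch below spells out manually what a construct like Odin's `#soa` automates at the language level (`Particle` and `Particles` are names invented for this example; Kotlin itself offers no layout control, which is exactly the gap being argued for):

```kotlin
// AOS: one object per particle; the fields of one particle sit together.
data class Particle(val x: Float, val y: Float)

// SOA: one array per field; all x values are contiguous, which is what a
// cache-friendly or SIMD-friendly loop over a single field wants.
class Particles(capacity: Int) {
    val x = FloatArray(capacity)
    val y = FloatArray(capacity)
}

fun main() {
    val aos = listOf(Particle(1f, 2f), Particle(3f, 4f))
    val soa = Particles(2)
    aos.forEachIndexed { i, p -> soa.x[i] = p.x; soa.y[i] = p.y }

    // A loop that only touches x streams through one contiguous array in the
    // SOA layout, instead of striding over whole Particle objects.
    println(soa.x.sum())  // 4.0
}
```

In a language with first-class layout control, the transposition in the middle would be a single annotation on the type rather than hand-written bookkeeping.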
Fast compilation Simple languages compile fast. Never at the expense of safety.
Fast compilation is something to aspire to. A tight edit-compile-run cycle matters enormously for productivity.
But it must not come at the expense of safety or zero-cost abstractions. A compiler that skips important analyses to be fast is doing its users a disservice.
I believe that a simple specification and a simple implementation are reasonably fast to compile by nature. Languages that compile slowly (C++ being the poster child) tend to do so because of accumulated specification complexity: templates, overload resolution, header includes, macro expansion. If the language design is clean and the grammar is context-free, compilation speed follows as a natural consequence rather than requiring heroic optimisation effort.
Comments