I hate deadlocks. Maybe you do too. Back at Fission, whenever someone would suggest a mutex we'd start a chant of "I say mutex, you say deadlock: Mutex! DEADLOCK! Mutex! DEADLOCK!". Deadlocks lurk: perfectly invisible in code review, happy to pass CI a thousand times, then lock your system up at 3am under a request pattern that no one anticipated.
They have their own tradeoffs, but I miss TVars from Haskell. AFAICT there's no way to do proper TVars in languages that have no way to enforce purity. We "should" prefer lock-free programming, but mutexes are very common in standard Rust style. I often hear that actors eliminate deadlocks, but as someone who's written her fair share of Elixir, this is 100% a lie (though deadlocks are less frequent there).
Rust famously catches data races at compile time, but for deadlocks? You get a Mutex, a pat on the back, and "good luck, babe". There are some fairly good code-analysis tools out there, but I want feedback during development. I've been thinking about a better approach to this problem for a while, looked at a bunch of other attempts, and arrived at what I hope is a decent ergonomic balance covering many common use cases in Rust: surelock, a deadlock-freedom library. If your code compiles, it doesn't deadlock. No Result, no Option, no runtime panic on the lock path. Every acquisition is either proven safe by the compiler or rejected at build time¹.
NOTE
This is a pre-release. I think the core design is solid, but I wouldn't be surprised if there are rough edges. Feedback and contributions welcome!
TL;DR
- Deadlocks happen when all four Coffman Conditions occur simultaneously
- Surelock breaks one of them (circular wait) using two complementary mechanisms
- Same-level locks are acquired atomically in a deterministic order (LockSet)
- Cross-level locks are acquired incrementally with compile-time ordering enforcement (Level<N>)
- The entire public API is safe: unsafe is confined to the raw mutex internals
- no_std compatible, zero required runtime dependencies²
Why Not Just Be Careful?
The honest answer is that being careful doesn't scale. It's easy to shoot yourself in the foot while composing code. But wait, isn't this the kind of thing that Rust is supposed to help us with? The borrow checker catches data races at compile time; why shouldn't we expect the same for deadlocks?

The fundamental problem is well-understood. Coffman et al. identified four necessary conditions for deadlock back in 1971:
| Condition | Meaning |
|---|---|
| Mutual exclusion | At least one resource is held exclusively |
| Hold and wait | A thread holds one lock while waiting for another |
| No preemption | Locks canβt be forcibly taken away |
| Circular wait | A cycle exists in the wait-for graph |
Break any one of these, and deadlocks become impossible. Mutual exclusion is the whole point of a mutex. Preemption introduces its own class of bugs. That leaves hold-and-wait and circular wait.
QUOTE
A language that doesnβt affect the way you think about programming is not worth knowing.
β Alan Perlis
We identified the solution space over 50 years ago(!). It's a thorny enough problem that we have no single agreed solution, and yet mutexes are sufficiently useful that we still use them. This suggests that until something comes along that completely replaces mutexes, we're in the domain of improving the ergonomics of using mutexes safely.
Safety techniques only rarely match safe use exactly. Type systems somewhat famously either allow some unsound behaviour or rule out legitimate uses (sometimes both); hence all of the effort going into fancier type systems that more directly admit every safe use while ruling out unsafe ones. The trick is lining those up as closely as possible while keeping the model easy to work with: minimal ceremony and/or easy to reason about. I would argue that we want our tools to help us think about the problem.
The Key Metaphor
Surelock is built around a physical-world analogy: to interact with locks, you need a key. In our case, surelock holds on to that key while the mutex is in use. You only get the key back when you unlock.
We call this a MutexKey: a linear³ scope token. You get one when you enter a locking scope. When you call .lock(), the key is consumed and a new one is returned alongside the guard. The new key carries a type-level record of what you've already locked, so the compiler knows what you're still allowed to acquire. Try to go backwards and the code doesn't compile.
💡 This is the core trick: by making the key a move-only value that threads through every acquisition, we get a compile-time witness of the current lock state. No global analysis, no runtime tracking; just the type checker doing what it does best.
This analogy only goes so far: MutexKey actually grants you the ability to lock multiple mutexes together atomically. Locks in surelock may be grouped into levels to enable incremental acquisition, and locking returns an attenuated key that can lock fewer levels.
Prior Art
Two existing libraries informed the design, and I want to give them proper credit.
happylock
happylock by botahamec introduced the key insight that a capability token combined with sorted multi-lock acquisition can prevent deadlocks. Surelock borrows this pattern directly.
Where the approaches diverge: happylock breaks the hold-and-wait condition. When you lock through the collection, your key is consumed; you can't go acquire more locks at all until it's released. This is safe, but it means you MUST acquire all of your locks at once. You can't do things like "lock the config, read which account to update, then lock that account". In concurrent systems that need incremental acquisition, this is a real limitation: when you need incremental locks, you really need incremental locks.
happylock also sorts locks by memory address, which is not stable across Vec reallocations or moves. The library goes to some length to maintain address stability with unsafe (Box::leak, transmute), and that unsafety propagates to the public API.
lock_tree
lock_tree from Googleβs Fuchsia project introduced compile-time lock ordering via a LockAfter trait. Declare that level A comes before level B, and the compiler rejects any code that tries to acquire them in the wrong order.
Surelock extends this in a few ways: same-level multi-lock (which lock_tree can't express), per-instance level assignment via new_higher, and scope-uniqueness enforcement. Surelock also deliberately drops lock_tree's DAG-based ordering in favour of a strict total order; more on why below.
The Dual Mechanism Design
Surelock uses two mechanisms that cover complementary, non-overlapping cases:
| Mechanism | When | Acquisition | Enforcement |
|---|---|---|---|
| LockSet | Multiple locks at the same level | Atomic (all-or-nothing) | Runtime sort by stable ID |
| Levels (Level<N>) | Locks at different levels | Incremental | Compile-time trait bounds |
Same Level: LockSet
Every Mutex gets a unique, monotonically increasing LockId from a global AtomicU64 counter at creation time. The ID lives inside the mutex and moves with it; no address stability needed.
When you need multiple locks at the same level, LockSet pre-sorts them by ID at construction time. Two threads requesting the same locks in opposite order both sort to the same acquisition order. No cycle can form.
```rust
let alice = Mutex::new("alice");
let bob = Mutex::new("bob");

// Regardless of argument order, acquisition order is deterministic
let set = LockSet::new((&alice, &bob));

key_handle.scope(|key| {
    let (a, b) = set.lock(key);
    // Both locked, no deadlock possible
});
```

```
Thread A                                Thread B
   │                                       │
   ▼                                       ▼
LockSet::new((&acct_1, &acct_2))    LockSet::new((&acct_2, &acct_1))
   │                                       │
   ▼                                       ▼
sorted: [acct_1, acct_2]            sorted: [acct_1, acct_2]
   │                                       │
   ├─ takes acct_1 lock                    │
   │                                       ├─ waits for acct_1 lock
   ├─ takes acct_2 lock                    │
   │                                       │
~~~~~~~~~~~~~~~~~~~~ TIME PASSES ~~~~~~~~~~~~~~~~~~~~
   │                                       │
   ├─ release acct_2                       │
   │                                       │ (still waiting for acct_1 first)
   ├─ release acct_1                       │
   │                                       ├─ takes acct_1 lock
   │                                       ├─ takes acct_2 lock
   │                                       │
   ▼                                       ▼
  OK                                      OK (no cycle possible)
```

Different Levels: Compile-Time Ordering
When locks represent different kinds of resources, say a config guard vs. a per-account lock, you assign them to different levels. Level<N> types with LockAfter trait bounds enforce strictly ascending acquisition:

```rust
use surelock::level::{Level, lock_scope};

type Config = Level<1>;
type Account = Level<2>;

let config: Mutex<_, Config> = Mutex::new(AppConfig::default());
let account: Mutex<_, Account> = Mutex::new(Balance(0));

lock_scope(|key| {
    let (cfg, key) = config.lock(key);   // Level 1 -> ok
    let (acct, key) = account.lock(key); // Level 2 after 1 -> ok
    // account.lock(key) then config.lock(key) -> compile error
});
```

The key (ha!) is MutexKey. It's consumed by each lock() call and re-emitted at the new level. If you try to go backwards (acquiring a Level<3> followed by a Level<1>), the LockAfter bound isn't satisfied and the compiler rejects it with a helpful (custom) error message.
Why a Total Order, Not a DAG?
This is a deliberate design decision. lock_tree uses a DAG, which lets you declare that branches A and B are independent: neither needs to come before the other. Sounds great, but it has a subtle problem: if thread 1 acquires A then B, and thread 2 acquires B then A, and both orderings are valid in the DAG, you have a deadlock that the compiler happily approved.
Locks are sorted in a total order by (Level, LockId). A total order is more restrictive but actually sound: if two locks participate in the system at all, their relative order is fixed. We could in principle linearise a DAG (much like we do in many op-based CRDTs), but that is harder to reason about and adds a lot of additional machinery.
Getting a Key
Both mechanisms, LockSet and levels, use MutexKey. You can think of the key as a kind of scope: it's consumed by each lock() call and re-emitted at the new level. It's branded with an invariant lifetime (same technique as std::thread::scope) so it can't escape the closure, and it's !Send so it stays on the thread that created it.
There are two ways to get one: implicit and explicit.
Implicit: lock_scope
I expect this is what most people will reach for. On std, lock_scope handles all the setup internally; you just write:
```rust
use surelock::{Mutex, lock_scope};

let data = Mutex::new(42);

lock_scope(|key| {
    let (guard, key) = data.lock(key);
    assert_eq!(*guard, 42);
});
```

Under the hood, lock_scope manages a thread_local! KeyHandle for you. The KeyHandle's scope method takes &mut self, which means the borrow checker prevents nested scopes: you can't start a new locking scope while one is already active.
Explicit: Locksmith and KeyVoucher
For no_std, embedded, or cases where you want tighter control over which core gets which capability, there's a fully explicit chain:
```
Locksmith            Program-wide singleton (!Send)
    |
    v
KeyVoucher           Transferable to other threads (Send)
    |
    v
KeyHandle            Per-thread / claimed once (!Send)
    |
    v
MutexKey<'scope>     Branded lifetime, consumed-and-re-emitted
    |
    v
MutexGuard           RAII access to the data
```

The Locksmith is a singleton, enforced via AtomicBool rather than convention. It is !Clone, !Copy, and importantly !Send, an extra layer of safety that prevents it from being duplicated among threads. It issues KeyVouchers, which are Send, so you can distribute keys among threads during setup. A voucher gets redeemed on its destination thread for a KeyHandle, which is !Send and stays put.

This might sound like a lot of ceremony, but it's mainly type machinery to enforce safety invariants. On bare metal with no thread_local!, you want this level of control: you're deciding at init time exactly which core gets a key, and the type system makes sure no one else can conjure one out of thin air.
Diagrams
Here is an example diagrammed out. You'll notice that the lock levels are in a strict compile-time sequence, but the lock IDs, being assigned at runtime, show up in arbitrary order. This is completely fine; they merely need to be consistent across all callers.
Here we start with the root key that can lock any level ("above level 0"). We acquire locks across two levels, sort them by ID, and acquire them in that order (for the reasons described in the timeline earlier). This consumes the root key, and emits a key that can lock above level 2. In the future, we can lock level 3 and onwards, until we unlock and recover the earlier keys.
```
                              Key{Level > 0}
                                    │
                                    │
┌─────────Level 1──────────┐        ▼
│                          │  ┌─────────LockSet────────┐
│  lock_1                  │  │                        │
│  lock_3 ─────────────────┼──┼──▶ lvl_1-lock_3 ───────┼────────▶ guard-3
│                          │  │                        │
│  lock_8 ─────────────────┼──┼──▶ lvl_1-lock_8 ───────┼────────▶ guard-8
│                          │  │                        │
└──────────────────────────┘  │                        │
                              │                        │
┌─────────Level 2──────────┐  │                        │
│                          │  │                        │
│  lock_5 ─────────────────┼──┼──▶ lvl_2-lock_5 ───────┼────────▶ guard-5
│                          │  │                        │
│  lock_7                  │  │                        │
│                          │  │                        │
└──────────────────────────┘  └────────────────────────┘
                                    │
                                    │
                                    ▼
                              Key{Level > 2}
                                    │
                                    │
                                    ▼
┌─────────Level 3──────────┐  ╌ ╌ ╌ ╌ ╌ ╌ ╌ ╌ ╌ ╌ ╌ ╌
│                          │  ╌
│                          │  ╌
│  lock_2                  │  ▼ ╌ ╌ ╌ ╌ ╌ ──▶ ╌
│  lock_4                  │  ╌
│                          │  ╌
│  lock_6                  │  ╌
│                          │  ╌
└──────────────────────────┘  ╌ ╌ ╌ ╌ ╌ ╌ ╌ ╌ ╌ ╌ ╌ ╌
```
no_std and Embedded
The crate is #![no_std] at its core, requiring only alloc. On std, you get StdMutex as the default backend and lock_scope as a convenient entry point. On no_std, you bring your own backend β anything implementing lock_api::RawMutex works via a blanket impl behind a feature flag.
It also supports targets without native CAS (e.g. Cortex-M0) via the portable-atomic and critical-section features. I think this matters because the places where you most need deadlock freedom (bare-metal firmware with no debugger attached) are exactly the places where current tooling helps you least.
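For a feel of what a bring-your-own backend involves, here is a freestanding spinlock with the same basic surface as lock_api's RawMutex (lock / try_lock / unlock), written against std atomics only so it runs anywhere. This is a sketch of the shape, not a recommended production spinlock, and it is not wired into lock_api's actual trait here.

```rust
use std::sync::atomic::{AtomicBool, Ordering};

// Minimal spinlock: the kind of raw primitive a no_std target might
// supply as a backend. Acquire/Release orderings make the critical
// section's memory effects visible to the next holder.
struct RawSpinlock {
    locked: AtomicBool,
}

impl RawSpinlock {
    const fn new() -> Self {
        RawSpinlock { locked: AtomicBool::new(false) }
    }

    fn lock(&self) {
        while self
            .locked
            .compare_exchange_weak(false, true, Ordering::Acquire, Ordering::Relaxed)
            .is_err()
        {
            std::hint::spin_loop();
        }
    }

    fn try_lock(&self) -> bool {
        self.locked
            .compare_exchange(false, true, Ordering::Acquire, Ordering::Relaxed)
            .is_ok()
    }

    fn unlock(&self) {
        self.locked.store(false, Ordering::Release);
    }
}

fn main() {
    let raw = RawSpinlock::new();
    raw.lock();
    assert!(!raw.try_lock()); // already held
    raw.unlock();
    assert!(raw.try_lock()); // free again
    raw.unlock();
}
```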
Escape Hatch
Real-world code sometimes needs to break the rules. Surelock provides Mutex::unchecked_lock() behind the escape-hatch feature flag. It's feature-gated so that if you use it, it's visible in Cargo.toml, it's grep-able, and it stands out in code review. IMO this is better than the alternative: developers silently reaching for parking_lot::Mutex alongside their surelock mutexes and defeating the whole system.
Wrap Up
Deadlocks are a solved problem in theory: we've known how to prevent them since 1971. The challenge is making that prevention ergonomic enough that people actually use it. Surelock is my attempt at that: lean into Rust's type system to make the correct thing the easy thing, and make the wrong thing a compiler error.
The crate is on Codeberg and published to crates.io. Dual-licensed MIT/Apache-2.0. I'd love feedback, especially from folks working on embedded or no_std targets; that's where I've had the least real-world testing so far.
Footnotes
1. With one exception: acquiring the SAME mutex atomically more than once. I'm not happy that this edge case exists, but it's also not common to try to take the same mutex alongside itself. ↩
2. thiserror is the sole runtime dependency, and it is zero-cost at runtime. The default std backend wraps std::sync::Mutex directly. ↩
3. Technically affine in Rust's type system (can be dropped but not duplicated), but the intent is linear: you're expected to use it, and the API is designed so that dropping it early just ends the scope. ↩