
Designing Solidity Contracts That Don’t Fight Formal Verification
This guide provides concrete examples of patterns that are difficult for the Certora Prover to reason about, and helps teams shape their Solidity smart-contract code to be more verification-friendly while maintaining good engineering practices.
Author: Pamina Georgiou

When verifying Solidity contracts, we rely heavily on summaries in CVL to over-approximate function behavior. A summary captures what a function does: how it reads or modifies storage, returns values, and interacts with other contracts, without requiring the prover to reason about its full implementation every time.
Many Solidity constructs blur the boundary between what a function does and how it does it. Patterns that mix computation with storage mutation, hide storage access behind assembly, or entangle contracts through complex call graphs make it difficult to write sound summaries. In these cases, the prover must inline behavior, track low-level effects, or reason about tightly coupled components, significantly degrading performance.
A related challenge is "munging the code": situations where otherwise-correct code must be refactored purely to accommodate verification. This usually indicates that computation, state mutation, and control flow are too entangled for robust verification. The goal of this guide is to help teams avoid patterns that force verification-driven refactoring.
Issue: Functions that compute values and update storage in the same block introduce side-effects, making summarization harder.
Hard Example (Avoid):
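A sketch of the anti-pattern; the names and the interest formula are illustrative, not from any particular protocol:

```solidity
// Computation and storage mutation entangled in one function: to know
// what debtOf and totalDebt become, the prover must inline the math.
function accrueDebt(address borrower) external returns (uint256) {
    uint256 principal = debtOf[borrower];
    uint256 elapsed = block.timestamp - lastAccrual[borrower];
    uint256 interest = (principal * ratePerSecond * elapsed) / 1e18;
    debtOf[borrower] = principal + interest;
    totalDebt += interest;
    lastAccrual[borrower] = block.timestamp;
    return debtOf[borrower];
}
```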
Preferred (Split Logic):
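The same logic split so the math is a pure function and all storage writes live in a thin wrapper (same illustrative names):

```solidity
// Pure math: no storage reads or writes, trivially summarizable.
function calculateAccruedDebt(
    uint256 principal,
    uint256 rate,
    uint256 elapsed
) public pure returns (uint256) {
    return principal + (principal * rate * elapsed) / 1e18;
}

// Thin stateful wrapper: the storage effects are simple and local.
function accrueDebt(address borrower) external returns (uint256 newDebt) {
    uint256 elapsed = block.timestamp - lastAccrual[borrower];
    newDebt = calculateAccruedDebt(debtOf[borrower], ratePerSecond, elapsed);
    totalDebt += newDebt - debtOf[borrower];
    debtOf[borrower] = newDebt;
    lastAccrual[borrower] = block.timestamp;
}
```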
CVL Summary (for the pure function):
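One plausible shape of such a summary in a CVL methods block (the ghost name is illustrative):

```cvl
methods {
    // Treat the pure function as an uninterpreted ghost function,
    // axiomatized with only the facts the rules need.
    function calculateAccruedDebt(uint256 p, uint256 r, uint256 t)
        internal returns (uint256) => accruedDebtGhost(p, r, t);

    // Or, when the math is irrelevant to the property being checked,
    // simply return a nondeterministic value of the right type:
    // function calculateAccruedDebt(uint256, uint256, uint256)
    //     internal returns (uint256) => NONDET;
}

ghost accruedDebtGhost(uint256, uint256, uint256) returns uint256;
```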
Why it helps: (1) The prover can summarize calculateAccruedDebt as a pure mathematical function with precise math of the right type, and the stateful wrapper becomes trivial to verify. (2) It also lets us abstract away hard math that may be irrelevant to a given property and would otherwise slow down the SMT solvers unnecessarily: we can summarize the function to simply return a nondeterministic value of the correct return type.
Issue: Complex call chains, especially cycles like Vault→Strategy→Vault, complicate modular reasoning and can lead to reentrancy vulnerabilities.
Hard Example (Avoid):
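An illustrative cycle, with hypothetical IVault/IStrategy interfaces:

```solidity
interface IStrategy { function deposit(uint256 amount) external; }
interface IVault { function reportProfit(uint256 profit) external; }

contract Vault {
    IStrategy public strategy;
    uint256 public totalAssets;

    function invest(uint256 amount) external {
        strategy.deposit(amount); // Vault -> Strategy ...
    }

    function reportProfit(uint256 profit) external {
        require(msg.sender == address(strategy), "only strategy");
        totalAssets += profit;    // ... state mutated mid-call from outside
    }
}

contract Strategy {
    IVault public vault;

    function deposit(uint256 amount) external {
        uint256 profit = amount / 100;  // placeholder yield logic
        vault.reportProfit(profit);     // Strategy -> Vault: closes the cycle
    }
}
```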
Preferred (Unidirectional Dependencies):
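The same flow with the callback replaced by a return value, so calls only ever go Vault -> Strategy:

```solidity
interface IStrategy {
    function deposit(uint256 amount) external returns (uint256 profit);
}

contract Strategy {
    function deposit(uint256 amount) external returns (uint256 profit) {
        profit = amount / 100; // placeholder yield logic
    }
}

contract Vault {
    IStrategy public strategy;
    uint256 public totalAssets;

    function invest(uint256 amount) external {
        // One direction only: the Vault consumes the result itself.
        uint256 profit = strategy.deposit(amount);
        totalAssets += profit;
    }
}
```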
Why it helps: Single-direction call graphs (Vault → Strategy, never reverse) allow modular verification. Each contract can be verified independently.
Issue: Inline assembly with sload/sstore creates opaque behavior that breaks storage analysis and summary generation.
Hard Example (Avoid):
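A sketch of the anti-pattern (the slot number and names are illustrative):

```solidity
uint256 private _balance; // happens to live at slot 0

function withdraw(uint256 amount) external {
    assembly {
        // Raw slot access: storage analysis cannot connect this
        // sload/sstore back to the _balance variable.
        let bal := sload(0)
        if lt(bal, amount) { revert(0, 0) }
        sstore(0, sub(bal, amount))
    }
}
```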
Preferred (Restrict Assembly to Memory Operations):
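Storage access stays in Solidity; assembly, if used at all, touches only memory. A common keccak256 micro-optimization is shown as the benign case:

```solidity
uint256 private _balance;

function withdraw(uint256 amount) external {
    // Transparent storage access the prover can track and hook.
    require(_balance >= amount, "insufficient balance");
    _balance -= amount;
}

function hashPair(bytes32 a, bytes32 b) internal pure returns (bytes32 h) {
    // Memory-only assembly: no storage effects, nothing to havoc.
    assembly {
        mstore(0x00, a)
        mstore(0x20, b)
        h := keccak256(0x00, 0x40)
    }
}
```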
Issue: Assembly blocks that replicate behavior Solidity can express natively force the prover into opaque reasoning or require manual "munging" (rewriting production code just for verification). This is especially common in gas-optimized token standards.
Hard Example (Avoid):
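A sketch loosely modeled on gas-optimized token libraries; the seed constant and slot-derivation scheme are illustrative, not any specific library's layout:

```solidity
function transfer(address to, uint256 amount) public returns (bool) {
    assembly {
        // Derive balance slots by hashing, mirroring what Solidity
        // does internally for mapping access -- but opaquely.
        mstore(0x20, 0xdeadbeef) // illustrative mapping seed
        mstore(0x00, caller())
        let fromSlot := keccak256(0x00, 0x40)
        let fromBal := sload(fromSlot)
        if gt(amount, fromBal) { revert(0, 0) }
        sstore(fromSlot, sub(fromBal, amount))
        mstore(0x00, to)
        let toSlot := keccak256(0x00, 0x40)
        sstore(toSlot, add(sload(toSlot), amount))
        mstore(0x00, 1)
        return(0x00, 0x20)
    }
}
```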
Preferred (Plain Solidity):
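The equivalent in plain Solidity:

```solidity
mapping(address => uint256) public balanceOf;

function transfer(address to, uint256 amount) public returns (bool) {
    // Functionally identical, fully visible to storage analysis.
    require(balanceOf[msg.sender] >= amount, "insufficient balance");
    balanceOf[msg.sender] -= amount;
    balanceOf[to] += amount;
    return true;
}
```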
Why it helps: The assembly version manipulates the free memory pointer and raw memory slots, which can be opaque to the prover. The prover then often has to assume that any memory could be affected (a HAVOC), so variables it tracks may change arbitrarily. The Solidity version is functionally identical, fully transparent, and requires zero summarization. The ~200 gas savings is negligible in virtually all real-world usage but creates a significant verification burden. When you encounter assembly like this in code you're verifying, this is the kind of "munge" that's often needed: replacing the assembly with equivalent Solidity so the prover can reason about it.
Issue: Parameters with overloaded meaning (e.g., sign determines mode) make summarization ambiguous.
Hard Example (Avoid):
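A sketch of sign-overloading (illustrative):

```solidity
// The sign of `delta` selects the operation: positive deposits,
// negative withdraws. Any summary must case-split on the sign and
// reason about int256 -> uint256 conversions (note the edge case
// at type(int256).min, where -delta overflows).
function adjustPosition(int256 delta) external {
    if (delta >= 0) {
        balance[msg.sender] += uint256(delta);
    } else {
        balance[msg.sender] -= uint256(-delta);
    }
}
```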
Preferred (Explicit Actions):
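One explicit shape, using an enum so each mode is syntactically distinct and unsigned amounts stay unsigned:

```solidity
enum Action { Deposit, Withdraw }

function adjustPosition(Action action, uint256 amount) external {
    if (action == Action.Deposit) {
        balance[msg.sender] += amount;
    } else {
        balance[msg.sender] -= amount;
    }
}
```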
Why it helps: Each action has unambiguous semantics. The prover can reason about Action.Deposit independently of withdrawal logic.
Issue: Packed encoding via abi.encodePacked creates ambiguous argument boundaries that can cause hash collisions and is hard to reason about.
Hard Example (Avoid):
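The classic collision, shown on a hypothetical ID scheme:

```solidity
// abi.encodePacked("ab", "c") and abi.encodePacked("a", "bc") produce
// the same bytes: dynamic types lose their argument boundaries.
function positionId(string memory owner, string memory market)
    public pure returns (bytes32)
{
    return keccak256(abi.encodePacked(owner, market));
}
```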
Preferred:
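The same scheme with abi.encode:

```solidity
// abi.encode length-prefixes and pads each argument, so boundaries
// are unambiguous and distinct inputs cannot collide.
function positionId(string memory owner, string memory market)
    public pure returns (bytes32)
{
    return keccak256(abi.encode(owner, market));
}
```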
Issue: Inline complex math requires the prover to reason about overflow, precision, and rounding, often intractable without manual summaries.
Hard Example (Avoid):
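A sketch of the anti-pattern, using an illustrative ERC-4626-style conversion:

```solidity
// Hand-rolled ceiling division inlined at the call site: every rule
// touching this function inherits the nonlinear mul/div reasoning,
// plus the overflow and rounding analysis that comes with it.
function convertToShares(uint256 assets) public view returns (uint256) {
    return (assets * totalShares + totalAssets - 1) / totalAssets;
}
```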
Preferred (Use Standard Library + Wrap):
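The same conversion routed through a single audited primitive, with OpenZeppelin's Math.mulDiv shown as one option:

```solidity
import {Math} from "@openzeppelin/contracts/utils/math/Math.sol";

// One call site for all the hard math: summarize Math.mulDiv once
// in CVL and every caller benefits.
function convertToShares(uint256 assets) public view returns (uint256) {
    return Math.mulDiv(assets, totalShares, totalAssets);
}
```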
CVL Summary:
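One plausible shape of the summary (the axiom is an example; keep only what your rules actually need):

```cvl
methods {
    function Math.mulDiv(uint256 x, uint256 y, uint256 d) internal
        returns (uint256) => mulDivGhost(x, y, d);
}

ghost mulDivGhost(uint256, uint256, uint256) returns uint256 {
    // Example axiom: multiplying and dividing by d is the identity.
    axiom forall uint256 x. forall uint256 d.
        d != 0 => mulDivGhost(x, d, d) == x;
}
```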
Issue: Storing structs at computed slots via assembly (ERC-7201 style) makes storage opaque. When access happens through libraries, the prover can't resolve address fields for linking or hook on struct members.
Hard Example (Avoid):
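A sketch of the ERC-7201 pattern implemented in a library; the namespace string is illustrative:

```solidity
library VaultStorageLib {
    struct Layout {
        address asset;       // prover can't resolve this for linking
        uint256 totalAssets; // nor hook on it through the library
    }

    function layout() internal pure returns (Layout storage l) {
        // ERC-7201 slot formula over an illustrative namespace.
        bytes32 slot = keccak256(
            abi.encode(uint256(keccak256("example.vault.storage")) - 1)
        ) & ~bytes32(uint256(0xff));
        assembly {
            l.slot := slot
        }
    }
}
```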
Preferred (Single Contract-Level Accessor):
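The same layout with one private accessor in the contract itself; the slot constant below is a placeholder you would precompute per ERC-7201:

```solidity
contract Vault {
    struct VaultStorage {
        address asset;
        uint256 totalAssets;
    }

    // Placeholder constant: precompute the real ERC-7201 slot off-chain.
    bytes32 private constant VAULT_STORAGE_SLOT =
        0x1111111111111111111111111111111111111111111111111111111111111100;

    // The single place the storage layout is resolved: field reads
    // below become trackable, and `asset` can be linked.
    function _getVaultStorage() private pure returns (VaultStorage storage $) {
        bytes32 slot = VAULT_STORAGE_SLOT;
        assembly {
            $.slot := slot
        }
    }

    function totalAssets() external view returns (uint256) {
        return _getVaultStorage().totalAssets;
    }
}
```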
Why it helps: A single accessor in the contract (not a library) gives the prover one place to reason about storage layout. Field access becomes trackable.
Issue: SMT solvers don't reason about all of math in one unified engine. Internally, they decompose problems into specialized theories, each an independent decision procedure optimized for a specific domain. Two relevant theories whose combination is, to this day, a hard and unsolved problem are:

Nonlinear integer arithmetic (NIA): multiplication, mulmod, and modular arithmetic. Already undecidable in general (per Matiyasevich's theorem), so the solver uses heuristics, i.e. incomplete search that can time out.

Bitvectors (BV): shifts (>>, <<), bitwise AND/OR/XOR, and two's complement overflow. Decidable but exponential in bit-width: for 256-bit values the search space is enormous.

When both theories appear in the same formula, the solver must run a theory combination procedure (typically Nelson-Oppen or similar). This means the NIA solver and the BV solver exchange equalities and disequalities about shared variables, iterating until they agree. Each exchange can trigger re-solving in both theories, and because NIA is already using heuristics, the cross-theory communication often pushes it into paths it can't resolve, leading to timeouts even on functions that look short.
This is a common pattern in DeFi math libraries that implement gas-efficient mulDiv with full 512-bit precision. These functions typically use mulmod for the high-precision product, then bit-shift and mask to perform Newton's method for division: mixing nonlinear arithmetic (multiplication, modular reduction) with bitvector operations (shifts, AND, two's complement tricks) in a single block. The solver sees both theories entangled in every constraint, and the theory combination overhead compounds with the already-difficult nonlinear reasoning.
Example: A mixed-theory mulDiv with bitwise rounding
Consider a real-world pattern from DeFi math libraries: a mulDiv that uses mulmod for the full-precision product, then applies a branchless bitwise rounding trick:
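A simplified sketch of the shape (it assumes x * y fits in 256 bits; real libraries carry the full 512-bit product, but the theory mix is identical):

```solidity
function mulDivRound(uint256 x, uint256 y, uint256 d)
    internal pure returns (uint256 result)
{
    // NIA: nonlinear multiplication and modular reduction...
    uint256 remainder = mulmod(x, y, d);
    result = (x * y) / d;
    // ...immediately entangled with BV: shift, OR, XOR, AND.
    // needsRound is 1 iff 2 * remainder >= d (the top-bit OR
    // covers the case where the shift overflows).
    uint256 needsRound = (remainder >> 255) | ((remainder << 1) >= d ? 1 : 0);
    unchecked {
        uint256 mask = 0 - needsRound; // 0 or all ones
        // Branchless select: result + 1 when rounding, else result.
        result = result ^ (mask & (result ^ (result + 1)));
    }
}
```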
The solver must encode mulmod (NIA theory) and the bitwise |, >>, ^ (BV theory) into a single satisfiability query. Every shared variable (like remainder and needsRound) forces the Nelson-Oppen procedure to synchronize the two solvers: the NIA solver proposes a value for remainder, the BV solver checks whether the bit operations are consistent, disagrees, and sends back constraints, and the loop repeats. With 256-bit values, the BV solver alone faces 2²⁵⁶ possible states per variable, and coordinating with NIA's heuristic search on top of that is what causes the blowup.
Advanced Technique: Decompose, Verify Separately, Prove Equivalence
When the function can't be simplified (it's a battle-tested math library you don't want to modify), there's a more powerful approach: decompose the function into theory-pure stages, verify each stage in isolation, and then prove that their composition equals the original.
This doesn't always work: it requires the function to have a natural decomposition point where the arithmetic and bitwise stages are sequential rather than deeply interleaved. But when it applies, it lets you verify functions that would otherwise be intractable for SMT solving.
Step 1: Decompose into theory-pure functions
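A sketch over an illustrative mulDivRound of the kind described above (again assuming x * y fits in 256 bits so the stages stay simple):

```solidity
// Stage 1: nonlinear arithmetic only (NIA).
function _quotientAndRemainder(uint256 x, uint256 y, uint256 d)
    internal pure returns (uint256 q, uint256 r)
{
    q = (x * y) / d;
    r = mulmod(x, y, d);
}

// Stage 2: a linear comparison plus bitvector operations only (BV).
function _roundHalfUp(uint256 q, uint256 r, uint256 d)
    internal pure returns (uint256 result)
{
    // 1 iff 2 * r >= d; the top-bit OR covers shift overflow.
    uint256 needsRound = (r >> 255) | ((r << 1) >= d ? 1 : 0);
    unchecked {
        uint256 mask = 0 - needsRound; // 0 or all ones
        result = q ^ (mask & (q ^ (q + 1))); // branchless q or q + 1
    }
}

// Composition mirrors the original control flow.
function mulDivRoundDecomposed(uint256 x, uint256 y, uint256 d)
    internal pure returns (uint256)
{
    (uint256 q, uint256 r) = _quotientAndRemainder(x, y, d);
    return _roundHalfUp(q, r, d);
}
```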
Step 2: Verify properties of each stage independently
Because each stage stays in one theory, the solver handles them efficiently:
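A sketch of stage-local rules, assuming a harness contract exposes the two internal stages as envfree external functions:

```cvl
// NIA-only query: no bitvector operations appear anywhere.
rule remainderInRange(uint256 x, uint256 y, uint256 d) {
    require d != 0;
    uint256 q; uint256 r;
    q, r = quotientAndRemainder(x, y, d);
    assert r < d;
}

// BV-and-linear-only query: no mulmod, no nonlinear terms.
rule roundingAddsAtMostOne(uint256 q, uint256 r, uint256 d) {
    require d != 0 && r < d;
    uint256 out = roundHalfUp(q, r, d);
    assert out == q || to_mathint(out) == q + 1;
}
```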
Step 3: Prove equivalence with the original
The key step — prove that the decomposed version produces identical output to the original for all inputs:
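Sketched in CVL, with both versions exposed through a harness:

```cvl
rule decompositionEquivalence(uint256 x, uint256 y, uint256 d) {
    require d != 0;

    uint256 original = mulDivRound(x, y, d);
    uint256 staged = mulDivRoundDecomposed(x, y, d);

    // For all inputs, the staged pipeline matches the original.
    assert original == staged;
}
```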
This equivalence rule still mixes theories (it references both implementations), but it's a much simpler query: the solver just needs to check whether two uint256 return values can ever differ, not understand why either implementation is correct. In practice, the solver can often discharge this by observing that the decomposed version is just an inlined rewriting of the original — the intermediate variables are identical.
Once equivalence is established, you can summarize the original function using the properties you proved on the decomposed stages. In subsequent rules that call mulDivRound, the prover uses the ghost summary and never touches the mixed-theory implementation again:
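One plausible shape of that summary (attach as axioms only the properties actually proved in Step 2):

```cvl
ghost mulDivRoundGhost(uint256, uint256, uint256) returns uint256;

methods {
    // All later rules see only the ghost, never the implementation.
    function mulDivRound(uint256 x, uint256 y, uint256 d) internal
        returns (uint256) => mulDivRoundGhost(x, y, d);
}
```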
When this technique applies: The function must have a natural decomposition point: a place where the arithmetic result is fully computed before bitwise operations begin. This works for mulDiv variants, fee calculations with rounding, and price conversions with bit-packed precision. It does not work when arithmetic and bitwise operations are deeply interleaved (e.g., Newton's method where each iteration mixes both), though even then you can sometimes decompose at the iteration boundary.
When it doesn't apply: Fall back to a pure ghost summary, asserting only the properties your higher-level rules need. You lose the ability to verify the complete implementation's correctness, but you can still verify the protocol-level properties that use the function.

By adopting these patterns and avoiding the problematic ones, engineering teams make their systems far easier to verify formally. The result: faster verification cycles, stronger guarantees, and clearer communication between developers and verification engineers.
Key takeaway: If you find yourself extensively refactoring code just to make verification work, that's a signal the original patterns were too entangled. Write code that admits simple, natural summaries from the start.