From Hash Trees to BLS Signatures: The Stack Behind On-Chain Validator Proofs
You're building a smart contract that needs to know whether the caller actually controls an Ethereum validator. Not an EOA claiming to be one, not a wallet with a 48-byte string pasted in as a "BLS public key". An active validator with 32 ETH staked on the beacon chain, provably linked to the caller's address.
How do you verify that on-chain? No oracle, no allowlist, no off-chain script running somewhere that you have to trust.
For most of Ethereum's history you couldn't. Smart contracts run on the execution layer. Validator information lives on the consensus layer (the beacon chain). These are two different chains with two different cryptographic setups, and for years they couldn't see each other. Some projects tried to hack around this by asking validators to write a marker into block.extraData, but that stopped working on mainnet the moment MEV-boost took over block production.
That changed in two hard forks. Dencun (March 2024) gave the EVM a trust-minimized window into beacon chain state via EIP-4788. Pectra (May 2025) added BLS12-381 precompiles via EIP-2537, letting contracts verify consensus signatures directly. Combined with SSZ Merkle proofs (the format Ethereum uses to hash its data structures), these three pieces form a complete stack for proving validator identity on-chain.
This post builds that stack from the bottom up. I'll assume you know what a hash function does and what a public/private key pair is. Everything else (ECDSA, BLS, Merkle trees, SSZ, generalized indices, the beacon state layout) gets explained from scratch. By the end, you'll understand exactly how a single smart contract call can prove the caller runs an active validator on the beacon chain.
Table of Contents
- The problem: proving validator identity on-chain
- Ethereum 101: two chains, two key types
- Public key cryptography in one page
- BLS signatures: what makes them special
- EIP-2537: BLS on the EVM
- Merkle trees from scratch
- SSZ: Ethereum's serialization and Merkleization format
- The beacon state
- EIP-4788: the beacon root in the EVM
- Putting it together: proving a validator on-chain
- Real-world example: EigenLayer's EigenPod
- What this unlocks
- Limitations and gotchas
- Takeaways
- References
The problem: proving validator identity on-chain
Imagine you're designing a protocol that coordinates off-chain work done by Ethereum validators. Maybe it's a decentralized indexer where each validator runs a sidecar that indexes blocks. Maybe it's a restaking platform where validators pledge their stake to secure additional services. Maybe it's a committee-based bridge. The specifics don't matter. What matters is one requirement: the protocol has to verify, on-chain, that whoever is registering is actually a real, active Ethereum validator.
"Real" matters because your entire security model leans on it. The protocol assumes each participant has 32 ETH at stake that can be slashed if they misbehave. If random EOAs can walk in and claim to be validators without that stake, an attacker can spin up thousands of fake identities, each pretending to be backed by capital that doesn't exist, and take over your network for the cost of a few transaction fees.
So how do you check?
The easy answer is off-chain. Run a script that queries the beacon chain, maintain an allowlist, done. But this pushes trust onto whoever maintains the allowlist. It doesn't scale (you need manual processes to add new validators, remove slashed ones, handle key rotations), and the whole thing is invisible to anyone auditing your contract.
The better answer is on-chain. Put the check inside the smart contract itself, so it's cryptographically enforced and publicly verifiable. But this runs into a wall: smart contracts run on the execution layer, and validator information lives on the consensus layer. These are two different chains with two different cryptographic setups.
Three building blocks make an on-chain check possible today:
- EIP-4788 exposes beacon block roots to the EVM. It gives a smart contract a trust-minimized anchor into the consensus layer's state.
- SSZ Merkle proofs let you prove any field inside the beacon state against one of those anchors, without submitting the whole state.
- EIP-2537 adds BLS12-381 precompiles so contracts can verify signatures made with consensus-layer keys.
Each piece is useless alone. Together, they let a smart contract prove msg.sender runs a specific validator on the beacon chain, with no oracle in between.
The rest of this post explains each piece from first principles, then assembles them into a working proof.
Ethereum 101: two chains, two key types
Before we get to the proofs, it helps to understand why Ethereum has this gap between execution and consensus in the first place.
Execution layer vs consensus layer
"Ethereum" is actually two chains running in parallel.
The execution layer is what most people think of when they hear "Ethereum". It runs the EVM, processes transactions, stores account balances, executes smart contracts, emits logs. When you send a transaction, pay gas, or interact with a DeFi protocol, you're talking to the execution layer. It's been running since 2015.
The consensus layer, also called the beacon chain, is the proof-of-stake brain. It decides which execution blocks are valid, coordinates validators, handles attestations, manages the deposit queue, and slashes misbehaving validators. It runs in parallel to the execution layer and was introduced at the Merge in September 2022. Validators live here. Your 32 ETH deposit lives here. All the "who is currently staking, who just joined, who was slashed, what's everyone's balance" state lives here.
Each execution block is paired with a beacon block, and the pairing is enforced cryptographically: the beacon block commits to the execution block's hash. Together they form one Ethereum block. But from inside a smart contract, you can only see execution-layer state. Account balances, storage slots, event logs. The beacon chain was invisible until EIP-4788 changed that.
secp256k1 vs BLS12-381
The two layers don't just store different data. They use different cryptography.
| Execution layer | Consensus layer | |
|---|---|---|
| Curve | secp256k1 | BLS12-381 |
| Signature scheme | ECDSA | BLS |
| Private key size | 32 bytes | 32 bytes |
| Public key size | 33 or 65 bytes | 48 bytes |
| Signature size | 65 bytes | 96 bytes |
| Aggregatable? | No | Yes |
| EVM support | Native (ecrecover since day one) | Added in Pectra via EIP-2537 |
secp256k1 is the same elliptic curve Bitcoin uses. Every Ethereum account (EOA) is controlled by a secp256k1 private key, and every transaction is signed using ECDSA over that curve. The EVM has had an ECDSA precompile (ecrecover at address 0x01) since the very beginning.
BLS12-381 is a different curve, designed specifically for applications that need signature aggregation. Validators on the beacon chain sign every attestation with a BLS key, and their public keys are BLS pubkeys. Until Pectra in May 2025, the EVM had no way to verify these signatures at all.
The next sections unpack what ECDSA and BLS actually are, and why the choice of curve matters.
Why validators have two separate identities
Here's a weird consequence of the two-layer design: an Ethereum validator has two completely different keys.
- A consensus key, which is a BLS12-381 private key. Used to sign attestations on the beacon chain. This is the validator's identity on the consensus layer.
- A withdrawal key or address, which lives on the execution layer. This is the account that receives withdrawal funds when the validator exits or when rewards are swept.
Post-Shapella (April 2023), the standard setup is to use a regular Ethereum execution address as the withdrawal target. The validator's withdrawal_credentials field on the beacon chain gets set to 0x01 followed by 11 zero bytes followed by that 20-byte execution address. This "type 0x01" credential is the bridge between the two layers. It's how a beacon-chain validator is linked to an execution-layer account that a smart contract can reason about.
This is the key fact we'll come back to in Section 10. The withdrawal_credentials field, stored inside the beacon state, is what lets us say "the validator at index N is controlled by the execution address A". If we can read that field from inside a smart contract and check A == msg.sender, we've proven the caller controls the validator.
Everything in this post is a story about how to read that field.
Public key cryptography in one page
You know what a private key and a public key are. What you might not know is what actually happens when you "sign" or "verify" with them. This section fills in the gap.
What a signature proves
A digital signature proves two things at once:
- Authenticity: the message was signed by someone holding a specific private key.
- Integrity: the message hasn't been modified since it was signed.
The setup: you have a private key (a large random number that you keep secret) and a public key (derived from the private key through a one-way function, safe to share). Anyone holding your public key can verify signatures you produced with your private key. Nobody can go the other direction and recover the private key from the public key or from a signature.
To sign a message:
- Hash the message with a cryptographic hash function to get a fixed-size digest.
- Run an algorithm that takes the digest and the private key as input, producing a signature.
To verify a signature:
- Hash the same message to get the same digest.
- Run a different algorithm that takes the digest, the signature, and the public key. If the math is consistent, the signature is valid.
The exact math depends on the signature scheme. Ethereum uses two schemes that work very differently under the same conceptual interface: ECDSA on the execution layer, BLS on the consensus layer.
ECDSA and secp256k1
ECDSA stands for Elliptic Curve Digital Signature Algorithm. It's what Ethereum (and Bitcoin) use for regular wallet transactions.
The math is built on elliptic curves. A curve is defined by an equation like y² = x³ + ax + b, evaluated in a finite field (think: arithmetic that wraps around a very large prime number). The set of (x, y) points satisfying the equation forms a mathematical structure with a special property: you can "add" two points on the curve to get a third point on the curve, and you can "multiply" a point by a scalar (a regular integer) to get another point. This arithmetic is what ECDSA uses.
Ethereum's specific curve is called secp256k1. It was standardized in 2000, adopted by Bitcoin in 2009, and inherited by Ethereum in 2015. The "256" refers to the size of the underlying prime field (about 2^256), and the "k1" is a codename from the SEC standards body.
Sizes on secp256k1:
- Private key: 32 bytes. A random number between 1 and the curve's order.
- Public key: a point (x, y), where x and y are each 32 bytes. The uncompressed form is 64 bytes of data (plus a 1-byte prefix to mark it as uncompressed, giving 65 bytes total). The compressed form is 33 bytes: the 32-byte x coordinate plus 1 byte indicating which of the two possible y values to use (since for any x on the curve, there are two valid y values).
- Signature: 65 bytes, usually written as
(r, s, v). r and s are 32 bytes each; v is a 1-byte "recovery id" that lets verifiers recover the signer's public key from the signature itself. - Security level: about 128 bits. Brute-forcing a private key would take roughly 2^128 operations, which is infeasible on any hardware we know of.
The internal math of ECDSA is beyond the scope of this post, but the shape is: a signature is a pair of numbers (r, s) computed from the message hash, the private key, and a per-signature random value. To verify, you do more elliptic curve operations with r, s, the message hash, and the public key, and check whether the result matches a specific value. If yes, valid.
From public key to address
Here's a detail that trips up almost everyone the first time they think about it: an Ethereum address is not the same thing as a public key. The address is derived from the public key by hashing and truncating.
The exact recipe:
- Take the uncompressed public key: 64 bytes (the
(x, y)point, without the0x04prefix byte that marks it as uncompressed). - Hash it with keccak256. The output is 32 bytes.
- Keep only the last 20 bytes of that hash. Throw away the first 12.
Those 20 bytes are the Ethereum address. In Solidity:
bytes memory pubkey = ...; // 64 bytes
address addr = address(uint160(uint256(keccak256(pubkey))));
Three things to notice:
The address is a fingerprint, not the key itself. You can go public key → address (hash and truncate), but you cannot go address → public key. The hash function is one-way. This is why ecrecover works the way it does: given a signature and a message hash, it uses the math of ECDSA to recover the full public key, then derives the address from that pubkey. The address alone isn't enough to verify anything.
20 bytes is a deliberate tradeoff. It's 160 bits, which is enough collision resistance (finding two public keys that hash to the same address would take about 2^80 operations, infeasible). Storing 20 bytes per account instead of 64 saves a lot of space across hundreds of millions of accounts. Bitcoin does the same thing, though with RIPEMD160 instead of keccak256.
This is why withdrawal_credentials stores an address, not a pubkey. The 32-byte withdrawal_credentials field looks like 0x01 || 0x00 * 11 || <20 bytes>. The 20-byte portion is the execution-layer address derived from some private key, following the same recipe above. When a smart contract later checks msg.sender, it's comparing 20-byte addresses to 20-byte addresses. Both sides were produced by the same derivation rule, so the comparison is meaningful.
The whole point of the 0x01 credential format is that it lets the beacon chain encode an execution-layer address inside a 32-byte slot, so execution-layer smart contracts can compare it against msg.sender directly. If the beacon chain stored the raw 64-byte public key instead, the contract would have to recompute the address derivation before comparing, adding cost and complexity for no benefit.
Can you reverse the derivation?
Not from the address alone. keccak256 is a one-way hash function, so finding a public key that maps to a given address would require brute-forcing around 2^160 candidates. Infeasible on any hardware that exists or is likely to exist.
But you can recover the public key from any signature that the address has produced. That's what ecrecover actually does: ECDSA includes a 1-byte recovery parameter (the v in (r, s, v)) that lets a verifier reconstruct the full (x, y) public key from the signature alone, without needing it passed in separately. Ethereum saves 64 bytes per transaction by using this trick instead of storing the pubkey alongside each signature, the way Bitcoin's original format does.
A practical consequence: an address that has never signed a transaction has no public key visible on-chain. An address that has signed even once has its pubkey effectively public, because anyone can pull any of its past signatures and run the recovery math. This matters for post-quantum security. Shor's algorithm on a sufficiently large quantum computer could recover a private key from a public key, so addresses that have never signed are shielded by an extra keccak256 hash layer that first-time signers lose the moment they broadcast a transaction. This is one of the arguments for moving funds to fresh addresses before a hypothetical quantum break, and part of why ongoing Ethereum research is looking at post-quantum signature schemes.
Why Ethereum's transaction signatures use ECDSA
The EVM exposes ECDSA verification via a precompile at address 0x01, called ecrecover. Given a message hash and a signature (v, r, s), it returns the 20-byte Ethereum address that signed the hash.
address signer = ecrecover(messageHash, v, r, s);
This is why smart contract wallets, meta-transactions, EIP-712 typed data signing, EIP-2612 permit, ERC-4337, and essentially every "prove a user signed something" pattern on Ethereum boils down to one ecrecover call. It's fast, it's cheap (fixed cost around 3,000 gas), and it's been part of the EVM since launch.
ECDSA has one crucial limitation that matters for the next section: you cannot combine multiple ECDSA signatures into one. If you want to prove that 1,000 people signed the same message, you have to submit 1,000 separate signatures and verify each one. At ~3,000 gas per verification, that's 3 million gas just for signature checking. This doesn't scale for consensus.
BLS signatures: what makes them special
BLS is named after Boneh, Lynn, and Shacham, the three researchers who published the scheme in 2001. Like ECDSA, it's a digital signature scheme built on elliptic curves. Unlike ECDSA, it has a property that makes BLS ideal for large-scale consensus: signatures can be aggregated.
The aggregation superpower
Say you have 1,000 validators. Each one signs a message attesting that a block is valid. You want to prove on-chain that at least 667 of them signed.
With ECDSA, you have 1,000 separate 65-byte signatures. At ~3,000 gas per ecrecover, that's 3,000,000 gas spent just on signature verification, plus 65 kB of calldata to pass them in. Completely impractical for the beacon chain, which produces thousands of attestations per slot.
With BLS, you can combine all 1,000 signatures into a single aggregate signature of 96 bytes. You can also combine the 1,000 public keys into a single aggregate public key of 48 bytes. Then you verify once: does the aggregate signature match the aggregate public key against the message? One verification, regardless of how many signers you started with.
This property is called signature aggregation. It's the whole reason Ethereum uses BLS on the beacon chain. Without it, running proof-of-stake at Ethereum's scale would be impossible.
How pairing-based crypto enables aggregation
The aggregation trick relies on a mathematical object called a pairing. A pairing is a function e(P, Q) that takes two elliptic curve points and returns a number. It has a special property: it's bilinear, meaning you can "move" scalar multiplications across it.
Concretely, if a and b are scalars and P and Q are curve points:
e(aP, bQ) = e(P, Q)^(a*b)
That algebraic identity is what lets you combine signatures. If signature σ1 corresponds to private key sk1, and σ2 corresponds to sk2, then the "sum" of σ1 and σ2 (with the right kind of multiplication on the curve) is a valid signature for the combined public key pk1 + pk2. Verify the aggregate against the combined pubkey, and you've verified both individual signatures at once.
The curve Ethereum uses for BLS is BLS12-381. There's a confusing naming collision worth clearing up right away: the "BLS" in BLS12-381 is not the same BLS trio as the signature scheme. BLS curves are a family of pairing-friendly elliptic curves described by Barreto, Lynn, and Scott in 2002. Ben Lynn is the only person whose name appears in both "BLS" groups (Ben Edgington calls this out explicitly in his book's BLS Signatures chapter). The "12" in BLS12-381 is the curve's embedding degree, a technical property that controls pairing efficiency and security. The "381" is the number of bits in the curve's field modulus. The specific BLS12-381 curve Ethereum uses was designed by Sean Bowe in 2017 for the Zcash project, and the Ethereum Foundation later adopted it for the consensus layer because it balances security, performance, and pairing efficiency. Ben Edgington's "BLS12-381 For The Rest Of Us" is a solid beginner-friendly introduction if you want to dig into the curve itself.
BLS12-381 has two groups of points, G1 and G2. They're linked by the pairing function e: G1 × G2 → GT. Ethereum's beacon chain uses the minimal-pubkey-size variant of BLS: public keys live in G1 (the smaller group, 48 bytes compressed) and signatures live in G2 (the larger group, 96 bytes compressed). This split minimizes the cost of aggregating public keys, which is the hot path for beacon chain verification. Reference: Ben Edgington, Upgrading Ethereum, BLS Signatures chapter.
Key sizes on BLS12-381 as used by Ethereum:
- Private key: 32 bytes.
- Public key (in G1): 48 bytes compressed.
- Signature (in G2): 96 bytes compressed.
- Security level: targeted at 128 bits (this was the original design goal; later analysis by NCC Group estimates the actual level at around 117 to 120 bits, which is still considered ample for production use).
Compared to secp256k1, BLS public keys are 50% larger and signatures are 50% larger per signer. The win comes from aggregation: if you aggregate 1,000 signatures into one, you save 999 × 96 bytes of data and 999 verifications, which crushes the size overhead.
The verification equation for a single signature is:
e(G1_generator, signature) == e(pubkey, H(message))
Where G1_generator is the fixed generator point of G1, pubkey is in G1, signature is in G2, and H(message) is a hash-to-curve function that maps the message into G2. Both sides of the equation are pairings of a G1 element with a G2 element, so the dimensions line up. If the two pairing values are equal, the signature is valid.
Why Ethereum picked BLS for consensus
On the beacon chain, every 12 seconds a subset of validators called a committee produces attestations. On mainnet, with over a million validators, committees at any given slot contain tens of thousands of signers. Every signature produced by the committee is BLS, and they all get aggregated into one or a handful of compact aggregate signatures per slot.
The result: the beacon chain's consensus activity compresses from "millions of signatures per epoch" to "a manageable number of aggregates". Block producers propagate these aggregates, validators verify them, and the chain moves forward.
BLS has one tradeoff that matters for smart contracts: there is no equivalent of ecrecover. With ECDSA, the signature contains enough information to recover the public key directly. With BLS, you must pass the public key in explicitly to verify. That's fine on the beacon chain (which already knows which validators are in each committee), but it was a problem for the EVM until Pectra added dedicated BLS precompiles, which is what the next section covers.
EIP-2537: BLS on the EVM
The EVM had no way to verify BLS signatures until the Pectra hard fork shipped on May 7, 2025. EIP-2537 fixed that by adding seven new precompiles that expose BLS12-381 curve operations directly to smart contracts.
What a precompile is
A precompile is a piece of functionality built into the Ethereum client itself, not the EVM bytecode. Precompiles live at low addresses (0x01 through 0x11 so far) and you call them like any other contract: use staticcall with the right input, get back the output. The current list and gas costs are documented at evm.codes.
The reason they exist is performance. Some operations (ECDSA recovery, SHA-256, modular exponentiation, elliptic curve pairings) are extremely expensive to implement in Solidity but fast when written directly in Go or Rust by the client authors. Instead of making you pay millions of gas to run the operation in the EVM, the client implements it natively and charges a fixed, much lower gas price.
Before Pectra, the list of precompiles included things like ecrecover (0x01), sha256 (0x02), ripemd160 (0x03), a bn254 pairing check (used for zk-SNARKs), and modular exponentiation. There was nothing for BLS12-381.
The seven BLS precompiles
EIP-2537 added seven precompiles at addresses 0x0b through 0x11. You don't need to know all of them in detail, but here's what each one does and why it's needed:
| Address | Name | What it does | Why you need it |
|---|---|---|---|
0x0b | BLS12_G1ADD | Adds two points in G1 | Aggregate two public keys |
0x0c | BLS12_G1MSM | Multi-scalar multiplication in G1 | Weighted aggregation of many pubkeys |
0x0d | BLS12_G2ADD | Adds two points in G2 | Aggregate two signatures |
0x0e | BLS12_G2MSM | Multi-scalar multiplication in G2 | Weighted aggregation of many signatures |
0x0f | BLS12_PAIRING_CHECK | Checks a pairing equation | Verify a BLS signature |
0x10 | BLS12_MAP_FP_TO_G1 | Maps a field element to G1 | Hash a message to a curve point |
0x11 | BLS12_MAP_FP2_TO_G2 | Maps a field element to G2 | Hash a message to a signature-sized point |
G1 and G2 are the two groups of points that make up BLS12-381. By convention, Ethereum public keys live in G1 (48 bytes compressed) and signatures live in G2 (96 bytes compressed). The pairing function e(., .) takes one point from each group.
The most important one for our purposes is BLS12_PAIRING_CHECK at 0x0f. A pairing check takes a list of G1 and G2 point pairs and returns 1 if the product of all pairings equals the identity, 0 otherwise. To verify a single BLS signature you submit two pairs: (G1_generator, signature) and (-pubkey, H(message)). If the pairing check returns 1, it means e(G1_generator, signature) == e(pubkey, H(message)), which is exactly the BLS verification equation.
Gas costs
Gas costs are non-trivial but bounded and predictable. Exact values from the EIP-2537 spec:
| Operation | Gas cost |
|---|---|
| G1 point addition | 375 |
| G2 point addition | 600 |
| G1 multi-scalar multiplication | 12,000 per pair, discounted for batches |
| G2 multi-scalar multiplication | 22,500 per pair, discounted for batches |
| Pairing check | 37,700 base + 32,600 per pair |
| Map field to G1 | 5,500 |
| Map field to G2 | 23,800 |
MSM is the multi-scalar multiplication operation that computes sum(s_i * P_i) for a list of scalars and points. The base cost (k * mul_cost) is discounted by a lookup table for larger batches, so aggregating many signatures gets cheaper per-signature as the batch grows.
A single BLS signature verification needs two pairings (one for each side of the equation), so BLS12_PAIRING_CHECK at k=2 costs 37,700 + 2 * 32,600 = 102,900 gas. That's the cost of one signature check. Not cheap compared to the ~3,000 gas for ecrecover, but cheap enough to run inside a user-facing transaction without blowing the block limit.
What this unlocks
Once the EVM can verify BLS signatures, a bunch of previously impossible things become possible:
- On-chain beacon chain light clients: a contract can verify beacon chain sync committees, which sign aggregate BLS signatures.
- Trustless bridges: one chain can verify signatures produced by a committee on another chain without an oracle.
- Restaking verifiers: contracts can check whether a validator's BLS key signed a specific off-chain message.
- Attestation checks: a contract can verify that a specific validator (or group of validators) attested to something on the beacon chain.
EIP-2537 by itself doesn't let you prove a validator exists on the beacon chain. It just lets you check their signatures. To prove that a particular BLS pubkey belongs to an active validator, you still need to read the beacon state, and that requires Merkle proofs. Which is what the next three sections are about.
Merkle trees from scratch
Before we get to SSZ and the beacon state, we need one more building block: Merkle trees.
The problem Merkle trees solve
Say you have a list of 1,000 items. You want to be able to prove to someone that a specific item is in the list, without forcing them to download all 1,000 items. You also want the proof to be tamper-evident: if anyone modifies the list after you commit to it, any proof against the modified list should fail to verify.
A Merkle tree is the standard solution. The idea: hash every item, then pair hashes up and hash them together, layer by layer, until you have a single hash at the top (the root). That root is a tiny (32 byte) commitment to the entire list. Anyone holding just the root can verify any item's membership by checking a short proof.
Building the tree
Start with 4 items as the simplest non-trivial example. Call them A, B, C, D.
Hash each item into a leaf:
leafA = hash(A)
leafB = hash(B)
leafC = hash(C)
leafD = hash(D)
Pair leaves up and hash the pairs together:
nodeAB = hash(leafA || leafB)
nodeCD = hash(leafC || leafD)
Finally, hash the two inner nodes together to get the root:
root = hash(nodeAB || nodeCD)
The tree looks like this:
root
/ \
nodeAB nodeCD
/ \ / \
leafA leafB leafC leafD
| | | |
A B C D
For a list of N items, the tree has log2(N) layers (rounded up, and with padding to the next power of 2 if N isn't already one). A tree over 1,000 items is 10 layers deep. A tree over 1,048,576 items is 20 layers deep. This logarithmic growth is what makes Merkle proofs efficient.
Verifying membership with a proof
Now suppose you know only the root, and someone tells you "item B is in the list". How do you verify without downloading the whole list?
They send you a Merkle proof: the sibling hashes along the path from their leaf up to the root. For leaf B, the siblings are leafA (at the bottom layer) and nodeCD (at the next layer up). Two hashes, 64 bytes total.
You verify by recomputing the root:
step 1: computed_nodeAB = hash(leafA || hash(B))
step 2: computed_root = hash(computed_nodeAB || nodeCD)
step 3: check computed_root == stored_root
If the recomputed root matches the root you already trust, B is definitely in the list. If B had been tampered with, its leaf hash would be different, which would change computed_nodeAB, which would change computed_root, and the check would fail.
Three things to note:
- The verifier only stores the root. Everything else comes from the prover.
- Proof size grows with the log of the tree size. 10 hashes for a tree of 1,000 items, 20 for a tree of a million.
- The verifier needs to know which side is which (whether to hash
left || rightorright || left) at each layer. This is usually encoded in the proof or derived from the index of the leaf.
Why Merkle trees matter for our use case
The beacon state is huge. It contains every validator, every pending deposit, every committee assignment, every slashing record. Hundreds of megabytes on a busy chain. Nobody wants to pass that into a smart contract.
But the beacon state is organized as a (very large, very structured) Merkle tree. Its root is a single 32-byte hash. If a smart contract knows that root and someone wants to prove that a specific validator has a specific field value, they can do it with a short Merkle proof. The contract verifies by recomputing upward, exactly like the 4-leaf example above.
The catch is that the beacon state's Merkle tree isn't a plain binary hash tree. It follows a specific schema called SSZ, and the shape of the tree depends on the shape of the underlying data structure. That's the next section.
SSZ: Ethereum's serialization and Merkleization format
SSZ stands for SimpleSerialize. It's the format Ethereum's consensus layer uses to serialize data structures and compute their Merkle tree roots. Everything on the beacon chain (blocks, states, validators, attestations) is SSZ-serialized and SSZ-Merkleized.
Why not just use regular Merkle trees? Two reasons:
- Schema awareness. Every field in the beacon state has a known type and a known position. SSZ hashes the structure in a deterministic way that lets verifiers construct proofs against specific fields without ambiguity.
- Update-friendliness. The beacon state changes every slot. SSZ is designed so that small updates only touch a small part of the tree, which means proofs for unchanged fields stay valid and updates are cheap to recompute.
This section explains the parts of SSZ you need to understand the beacon state proofs. There's a lot more to SSZ (variable-length types, unions, bitlists, bitvectors), but the core ideas are simple.
Types and hash_tree_root
SSZ defines a function called hash_tree_root that takes any SSZ type and returns a 32-byte hash. The hash is computed by:
- Serializing the type's data into 32-byte chunks.
- Arranging the chunks as leaves of a binary Merkle tree, padded with zero chunks to the next power of 2.
- Computing the Merkle root over those leaves.
For primitive types (uint64, bytes32), the "chunks" are just the type's byte representation. For containers (structs), each field is recursively hash_tree_root-ed and the resulting hashes become the leaves of the container's tree. For lists and vectors, each element is hashed and the results become leaves, with an extra step for lists (which have a variable length).
Here's a minimal concrete example. Consider a container with four 32-byte fields a, b, c, d:
struct Example {
a: bytes32,
b: bytes32,
c: bytes32,
d: bytes32,
}
Its hash_tree_root is the root of a 4-leaf Merkle tree where each leaf is one field's bytes32 value:
root
/ \
hash(a,b) hash(c,d)
/ \ / \
a b c d
For a container with three fields, SSZ pads to four leaves by adding a zero chunk:
root
/ \
hash(a,b) hash(c,0)
/ \ / \
a b c 0
The number of leaves is always rounded up to the next power of 2. This is how SSZ keeps the tree structure predictable: the tree depth for a container with N fields is ceil(log2(N)), regardless of what's actually in the fields.
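The three steps can be sketched in a few lines of Python (the consensus specs themselves are written in Python, and SHA-256 is the hash SSZ uses). This is a simplified version of the spec's `merkleize` helper, ignoring the `limit` parameter the real one takes:

```python
from hashlib import sha256
from math import ceil, log2

ZERO_CHUNK = b"\x00" * 32

def hash_pair(left: bytes, right: bytes) -> bytes:
    return sha256(left + right).digest()

def merkleize(chunks: list[bytes]) -> bytes:
    """Merkle root over 32-byte chunks, zero-padded to the next power of 2."""
    width = 1 if len(chunks) <= 1 else 2 ** ceil(log2(len(chunks)))
    layer = chunks + [ZERO_CHUNK] * (width - len(chunks))
    while len(layer) > 1:
        layer = [hash_pair(layer[i], layer[i + 1]) for i in range(0, len(layer), 2)]
    return layer[0]

a, b, c, d = (bytes([i]) * 32 for i in (1, 2, 3, 4))

# Four fields: root = hash(hash(a,b), hash(c,d)), exactly the diagram above.
assert merkleize([a, b, c, d]) == hash_pair(hash_pair(a, b), hash_pair(c, d))

# Three fields: the fourth leaf is a zero chunk.
assert merkleize([a, b, c]) == hash_pair(hash_pair(a, b), hash_pair(c, ZERO_CHUNK))
```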
Lists, vectors, and variable-length data
Vectors (fixed-length arrays) work just like containers: each element becomes a leaf, pad to next power of 2, compute the root.
Lists (variable-length arrays with a maximum length) have an extra step. The list elements are Merkleized as if they were a vector of the maximum length, producing a root. Then the root is mixed in with the actual length of the list:
list_root = hash(vector_root, uint256(actual_length))
This is important because the beacon state's validators field is a list. Its hash_tree_root depends on both the contents (the validator structs) and the current count of validators, so the root changes any time a new validator is added.
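In Python, with the length serialized as a little-endian uint256 (which is what the spec's `mix_in_length` does):

```python
from hashlib import sha256

def hash_pair(left: bytes, right: bytes) -> bytes:
    return sha256(left + right).digest()

def mix_in_length(vector_root: bytes, length: int) -> bytes:
    """list_root = hash(vector_root, length as a little-endian uint256)."""
    return hash_pair(vector_root, length.to_bytes(32, "little"))

# Two lists with the same element tree but different declared lengths
# (trailing elements zero vs. absent) produce different roots.
contents_root = bytes(32)  # stand-in for the merkleized element tree
assert mix_in_length(contents_root, 2) != mix_in_length(contents_root, 3)
```

This is why adding a validator changes the `validators` root even before any of its fields are read: the length chunk changes.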
Generalized indices
Here's the concept that ties Merkle proofs together: generalized indices.
A generalized index is a way to address any node in a binary Merkle tree using a single positive integer. The root is index 1. The root's left child is 2, its right child is 3. The left child's children are 4 and 5. The right child's children are 6 and 7. And so on.
1
/ \
2 3
/ \ / \
4 5 6 7
The formula is: the left child of node i is 2*i, and the right child is 2*i + 1. To get from the root to any specific node, you write the generalized index in binary (dropping the leading 1 bit) and read the bits left-to-right: 0 means "go left", 1 means "go right".
For example, take node 12. Its binary representation is 1100. Strip the leading 1 and you get 100. Read left-to-right: 1 means right child, 0 means left child, 0 means left child. So from the root: right, left, left. You pass through nodes 1 → 3 → 6 and land on 12. Check: 2*6 = 12.
Why does this matter? Because once you know the generalized index of a specific field in a nested SSZ structure, you know exactly which sibling hashes you need to build a Merkle proof, and exactly how many layers deep the field is. SSZ tooling computes these indices automatically from the schema.
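Both directions of that mapping fit in a few lines of Python. The helper names here are mine, not spec functions:

```python
def path_from_generalized_index(gindex: int) -> list[str]:
    """Drop the leading 1 bit, then read: 0 = go left, 1 = go right."""
    bits = bin(gindex)[3:]  # strip '0b' and the leading 1
    return ["right" if b == "1" else "left" for b in bits]

def sibling_indices(gindex: int) -> list[int]:
    """Generalized indices of the sibling hashes a Merkle proof needs,
    ordered from the leaf's sibling up to the child of the root."""
    siblings = []
    while gindex > 1:
        siblings.append(gindex ^ 1)  # flip the last bit: 12 -> 13, 6 -> 7, 3 -> 2
        gindex //= 2
    return siblings

assert path_from_generalized_index(12) == ["right", "left", "left"]
assert sibling_indices(12) == [13, 7, 2]
```

`sibling_indices` also answers "how long is the proof": the list has exactly one entry per tree layer.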
Generalized indices in the beacon state
Here's the magic of SSZ: the generalized index of any field in any nested SSZ type can be computed purely from the schema. You don't need to look at the actual data; you just need to know the type definitions.
For example, EigenLayer's BeaconChainProofs.sol defines these constants directly:
uint256 internal constant VALIDATOR_TREE_HEIGHT = 40;
uint256 internal constant VALIDATOR_PUBKEY_INDEX = 0;
uint256 internal constant VALIDATOR_WITHDRAWAL_CREDENTIALS_INDEX = 1;
uint256 internal constant VALIDATOR_EXIT_EPOCH_INDEX = 6;
Those numbers come straight from the SSZ schema of the beacon state. The validators list has a height of 40 (meaning it supports up to 2^40 validators, way more than will ever exist). Inside each validator struct, the pubkey field is at leaf index 0 of the validator's own tree, withdrawal_credentials is at leaf index 1, exit_epoch is at leaf index 6. These are stable and don't change with the state.
To prove a specific validator's withdrawal_credentials field, you submit two pieces:
- The validator's field values (as a fixed-size array of 32-byte chunks).
- A Merkle proof path that walks from those chunks, up through the validator's own tree, up through the validators list, up through the beacon state, to the beacon state root.
The contract recomputes the root using the chunks and the proof, and checks it against the beacon state root it already trusts. That's the whole idea.
How does the contract come to trust a beacon state root in the first place? That's what EIP-4788 is for. But before we get there, one more piece: what's actually in the beacon state?
The beacon state
The beacon state is the consensus layer's equivalent of the execution layer's world state. It's the complete snapshot of everything the beacon chain tracks at a given slot: validators, balances, committees, finalization data, pending deposits, randomness seeds, historical roots, and more. A new version is produced every slot (every 12 seconds).
For this post, the only part that matters is the validators list and the Validator struct inside it. But it's worth understanding the full shape briefly so the Merkle tree geometry makes sense.
Top-level fields
The beacon state is an SSZ container. The phase 0 version had 21 fields; each hard fork since then has added more. Post-Pectra (Electra), the beacon state has 37 top-level fields. Here are the ones that matter for validator proofs:
| Field | Type | What it contains |
|---|---|---|
| genesis_time | uint64 | Unix timestamp of genesis |
| genesis_validators_root | bytes32 | Root of the validator list at genesis |
| slot | uint64 | Current slot number |
| fork | Fork | Current fork version |
| latest_block_header | BeaconBlockHeader | Header of the latest beacon block |
| block_roots | Vector[bytes32, 8192] | Recent block roots |
| state_roots | Vector[bytes32, 8192] | Recent state roots |
| historical_roots | List[bytes32] | Older block/state roots |
| eth1_data | Eth1Data | Data from the execution layer |
| eth1_data_votes | List[Eth1Data] | Votes for the next eth1 data |
| eth1_deposit_index | uint64 | Index of the next deposit to process |
| validators | List[Validator] | The validators list. This is what we care about. |
| balances | List[uint64] | Validator balances (separate from the Validator struct for update efficiency) |
| randao_mixes | Vector[bytes32, 65536] | RANDAO seeds |
| slashings | Vector[uint64, 8192] | Slashing amounts per epoch |
| ... | ... | ... |
Each top-level field is a leaf of the beacon state's own Merkle tree. The tree is padded to the next power of 2, which is where the tree depth comes from. Deneb had 28 fields, which fits into a tree of 32 leaves (depth 5). Electra added 9 new fields for compounding credentials, pending deposits, and consolidations, bringing the total to 37. That no longer fits in 32 leaves, so the tree grows to 64 leaves (depth 6). The root of whichever tree shape is current is the beaconStateRoot.
EigenLayer encodes these fork-specific tree heights as a ProofVersion enum and picks the right one at verification time. The constants match: DENEB_BEACON_STATE_TREE_HEIGHT = 5 and PECTRA_BEACON_STATE_TREE_HEIGHT = 6. Any contract verifying beacon state proofs needs the same versioning, because fork upgrades can keep changing the tree shape.
The Validator struct
The Validator struct is a container with 8 fields, defined in the Phase 0 consensus specs:
class Validator(Container):
pubkey: Bytes48 # index 0
withdrawal_credentials: Bytes32 # index 1
effective_balance: uint64 # index 2
slashed: boolean # index 3
activation_eligibility_epoch: Epoch # index 4
activation_epoch: Epoch # index 5
exit_epoch: Epoch # index 6
withdrawable_epoch: Epoch # index 7
Eight fields, which is already a power of 2, so no padding. The tree has depth 3 and looks like:
                     validator_root
                    /              \
                  h                  h
                /   \              /   \
               h     h            h     h
              / \   / \          / \   / \
            pk  wc eb  sl      aee  ae ee  we

where pk = pubkey, wc = withdrawal_credentials, eb = effective_balance, sl = slashed, aee = activation_eligibility_epoch, ae = activation_epoch, ee = exit_epoch, we = withdrawable_epoch.
Three layers, which means a proof for any one field inside a validator is three sibling hashes.
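To make the geometry concrete, here's a toy Python sketch that builds the eight leaves per the SSZ rules from earlier (a Bytes48 spans two chunks, so the pubkey leaf costs one extra hash; uint64s are little-endian and zero-padded to 32 bytes), computes the validator root, and verifies a three-hash proof for withdrawal_credentials. All field values are invented for the example:

```python
from hashlib import sha256

def h(a: bytes, b: bytes) -> bytes:
    return sha256(a + b).digest()

def pad32(b: bytes) -> bytes:
    return b + bytes(32 - len(b))

def u64_leaf(v: int) -> bytes:
    return pad32(v.to_bytes(8, "little"))

def validator_leaves(pubkey, wc, eff_bal, slashed, aee, ae, ee, we):
    # Bytes48 packs into two 32-byte chunks, merkleized into one leaf.
    pubkey_leaf = h(pubkey[:32], pad32(pubkey[32:]))
    return [pubkey_leaf, wc, u64_leaf(eff_bal), pad32(bytes([slashed])),
            u64_leaf(aee), u64_leaf(ae), u64_leaf(ee), u64_leaf(we)]

def root8(leaves):
    l2 = [h(leaves[i], leaves[i + 1]) for i in range(0, 8, 2)]
    return h(h(l2[0], l2[1]), h(l2[2], l2[3]))

FAR_FUTURE_EPOCH = 2**64 - 1
leaves = validator_leaves(bytes(48), bytes([1]) + bytes(31), 32 * 10**9, 0,
                          0, 0, FAR_FUTURE_EPOCH, FAR_FUTURE_EPOCH)
root = root8(leaves)

# A proof for withdrawal_credentials (leaf index 1) is three sibling hashes:
proof = [leaves[0], h(leaves[2], leaves[3]),
         h(h(leaves[4], leaves[5]), h(leaves[6], leaves[7]))]
node = leaves[1]
node = h(proof[0], node)   # leaf 1 is a right child at the bottom layer
node = h(node, proof[1])   # its parent is a left child
node = h(node, proof[2])   # and so is the next parent
assert node == root
```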
But proving a specific validator inside the full beacon state takes way more layers than that. You need to walk:
- 3 layers to get from the target field to the validator's own root.
- 40 layers to get from the validator's root through the validators list to the list's root (since the list has a max size of 2^40).
- One more layer to mix in the list length and produce the validators list's root.
- 6 layers to get from the validators list leaf through the beacon state's top-level tree to the beacon state root.
Total: around 50 sibling hashes in the proof. At 32 bytes each, that's about 1.6 kB of calldata. Not tiny, but not absurd either.
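The layer count, as quick arithmetic (the state depth is hardcoded to the post-Pectra value of 6 here):

```python
validator_depth = 3             # 8 fields -> depth-3 subtree
validators_list_depth = 40 + 1  # 2^40 max entries, plus the length mix-in layer
state_depth = 6                 # post-Pectra top-level tree: 64 leaves

total = validator_depth + validators_list_depth + state_depth
calldata_bytes = total * 32
print(total, calldata_bytes)    # 50 sibling hashes, 1600 bytes
```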
The 0x01 withdrawal credentials format
The withdrawal_credentials field is 32 bytes. Its first byte is a prefix that tells you what kind of withdrawal address it is:
- 0x00: BLS withdrawal credentials. Legacy format. The rest of the bytes are the hash of a BLS withdrawal public key. You can't withdraw to these directly; they must be converted to 0x01 first via a BLSToExecutionChange message on the beacon chain.
- 0x01: Execution layer withdrawal address. The rest of the bytes are an 11-byte zero padding followed by a 20-byte Ethereum execution address. Introduced with Capella/Shapella (EIP-4895).
- 0x02: Compounding credentials. Added in Pectra (EIP-7251). Same 20-byte execution address layout as 0x01, but signals the validator has opted into the new 2,048 ETH maximum effective balance and auto-compounding rewards.
Nearly every validator post-Shapella (April 2023) uses 0x01 or 0x02 credentials. This is the bridge between the two layers: a specific validator on the beacon chain is linked to a specific execution address by exactly these 32 bytes.
If you control the execution address, you control where the validator's funds go. Which means if you can prove that a beacon chain validator has withdrawal_credentials = 0x01 || 0x00*11 || msg.sender, you've proven that msg.sender is the execution-layer owner of that validator. This is the linkage we're going to exploit in Section 10.
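Constructing the expected 32 bytes off-chain is a one-liner. A Python sketch (the function name is mine):

```python
def execution_credentials(address_hex: str, prefix: int = 0x01) -> bytes:
    """prefix byte (0x01 or 0x02) || eleven zero bytes || 20-byte address."""
    addr = bytes.fromhex(address_hex.removeprefix("0x"))
    assert len(addr) == 20, "execution addresses are 20 bytes"
    return bytes([prefix]) + bytes(11) + addr

creds = execution_credentials("0x" + "ab" * 20)
assert len(creds) == 32
assert creds[0] == 0x01 and creds[1:12] == bytes(11)
```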
All we need now is a way for a smart contract to trust the beacon state root. That's EIP-4788, up next.
EIP-4788: the beacon root in the EVM
Smart contracts have their Merkle proof machinery. Validators have their SSZ tree. The last missing piece is a trusted starting point: how does the contract know which beacon state root is real?
EIP-4788, shipped in the Dencun hard fork in March 2024, provides exactly that. It makes every block's parent beacon block root accessible from inside the EVM, automatically and trustlessly.
The mechanism
At the start of every execution block, before any user transactions run, the client automatically calls a special system contract at address 0x000F3df6D732807Ef1319fB7B8bB8522d0Beac02. This call is not a user transaction. It's hardcoded into block processing by the consensus rules. The call stores two things in the contract:
- The execution block's timestamp.
- The parent beacon block root (the root of the beacon block from the previous slot).
The system contract keeps these in a fixed-size ring buffer with room for 8191 slots of data. 8191 is a prime, chosen so no timestamp can collide with an earlier one until the buffer has fully wrapped around. At 12 seconds per slot, that's about 27 hours of history.
Any smart contract can query the contract with staticcall, passing a 32-byte timestamp. If the timestamp is in the ring buffer, the contract returns the beacon block root stored for that timestamp. If not (too old, too new, or never stored), the call reverts.
Because the root comes from the client's automatic system call and not from a user, and because it's fixed at block processing time, there's no way for anyone to fake it. The beacon roots contract (technically a system contract with ordinary bytecode, though it's often loosely called a precompile) is a trust-minimized window into consensus state.
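A toy Python model of the storage discipline EIP-4788 specifies: a timestamp ring and a root ring, both indexed by timestamp % 8191, with a getter that reverts unless the stored timestamp matches the query. This mimics the contract's behavior, not its actual bytecode:

```python
BUFFER_LENGTH = 8191  # prime; 8191 * 12 s is roughly 27 hours of history

class BeaconRootsModel:
    """Behavioral model of the EIP-4788 system contract's storage."""
    def __init__(self):
        self.timestamps = [0] * BUFFER_LENGTH
        self.roots = [b"\x00" * 32] * BUFFER_LENGTH

    def system_set(self, timestamp: int, parent_beacon_root: bytes):
        idx = timestamp % BUFFER_LENGTH
        self.timestamps[idx] = timestamp
        self.roots[idx] = parent_beacon_root

    def get(self, timestamp: int) -> bytes:
        idx = timestamp % BUFFER_LENGTH
        if timestamp == 0 or self.timestamps[idx] != timestamp:
            raise ValueError("revert: unknown timestamp")  # too old or never stored
        return self.roots[idx]

m = BeaconRootsModel()
m.system_set(1_700_000_003, b"\x11" * 32)
assert m.get(1_700_000_003) == b"\x11" * 32

# A write BUFFER_LENGTH slots later lands on the same index and evicts the old root:
m.system_set(1_700_000_003 + BUFFER_LENGTH * 12, b"\x22" * 32)
try:
    m.get(1_700_000_003)
except ValueError:
    pass  # the old root has aged out of the ring buffer
```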
Calling it from Solidity
The minimal Solidity to fetch a beacon block root:
address constant BEACON_ROOTS = 0x000F3df6D732807Ef1319fB7B8bB8522d0Beac02;
function _getBeaconBlockRoot(uint64 timestamp) internal view returns (bytes32) {
(bool success, bytes memory data) = BEACON_ROOTS.staticcall(
abi.encode(timestamp)
);
require(success && data.length == 32, "beacon root not found");
return bytes32(data);
}
That's the entire interface. 32-byte input, 32-byte output, reverts if the timestamp isn't known.
Which timestamp to use
The interface is simple, but picking the right timestamp takes a little care. The contract stores one beacon root per execution block, keyed by that execution block's timestamp. So if you want the beacon block root for beacon slot N, you need the timestamp of the execution block that was produced alongside beacon slot N+1 (because EIP-4788 stores the parent beacon root at each slot).
In practice, you don't compute this yourself. The off-chain proof generator (whatever builds the Merkle proof) already knows which slot it built against and reports the matching execution timestamp. Your contract just trusts the inputs and passes the timestamp to the precompile.
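For intuition, the arithmetic in Python. GENESIS_TIME is mainnet's beacon chain genesis timestamp; the N+1 offset is the parent-root rule described above:

```python
SECONDS_PER_SLOT = 12
GENESIS_TIME = 1_606_824_023  # Ethereum mainnet beacon genesis (Dec 1, 2020)

def timestamp_for_slot(slot: int) -> int:
    return GENESIS_TIME + slot * SECONDS_PER_SLOT

def query_timestamp_for_proof_slot(proof_slot: int) -> int:
    # EIP-4788 stores the *parent* beacon root, so the root of slot N is
    # keyed by the execution timestamp of the block at slot N + 1. (If slot
    # N + 1 had no block, tooling must find the next slot that did.)
    return timestamp_for_slot(proof_slot + 1)

assert query_timestamp_for_proof_slot(9_000_000) == timestamp_for_slot(9_000_000) + 12
```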
Why 27 hours?
The ring buffer size is a deliberate tradeoff. Longer history means more storage overhead on every block. Shorter history forces contracts to submit proofs quickly. 8191 slots balances these: long enough that batch submission and human latency are fine, short enough that the per-block storage cost stays bounded.
For most applications this is plenty. A validator generates a proof and submits the transaction within minutes. The 27-hour window only becomes a constraint for things that need historical state (bridges settling disputes weeks later, archival proofs), and those cases have a different answer: ZK light clients like SP1 Helios, which verify arbitrary beacon state roots without relying on the ring buffer. EIP-4788 is the right primitive when you need a recent snapshot, which covers the validator proof use case.
Putting it together: proving a validator on-chain
We now have every piece:
- EIP-4788 gives us a trusted beacon block root from inside the EVM.
- SSZ Merkle proofs let us verify any field in the beacon state against that root.
- The 0x01 withdrawal credentials format binds a specific validator to an execution address.
Combining them gives us a trustless on-chain proof that msg.sender controls a specific active validator.
The four-step proof chain
The proof walks down from the trusted beacon block root to the target field:
1. Beacon block root → beacon state root. A small Merkle proof walks from BeaconBlock.state_root up through the beacon block header to the beacon block root. The beacon block header is a container with 5 fields (slot, proposer_index, parent_root, state_root, body_root), so this proof is only a few hashes.
2. Beacon state root → validator fields. A deeper Merkle proof walks from the target validator's fields, up through the validators list, up through the beacon state's top-level container, to the beacon state root. About 50 hashes for a validators list of size 2^40 nested inside a container of depth 6.
3. Withdrawal credentials binding check. Once the fields are verified, the contract reads validatorFields[1] (the withdrawal_credentials field) and checks it equals 0x01 || 0x00*11 || msg.sender. If yes, the caller is cryptographically proven to be the execution-layer withdrawal address of the validator at validatorIndex.
4. Liveness check. Also from validatorFields, the contract reads exit_epoch and rejects validators whose exit epoch isn't FAR_FUTURE_EPOCH (2^64 - 1). This prevents exited or about-to-exit validators from registering.
The Solidity
Here's a minimal implementation. It uses a helper library (like EigenLayer's BeaconChainProofs) for the SSZ Merkle verification math:
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;
import {BeaconChainProofs} from "./lib/BeaconChainProofs.sol";
contract ValidatorRegistry {
address constant BEACON_ROOTS = 0x000F3df6D732807Ef1319fB7B8bB8522d0Beac02;
uint64 constant FAR_FUTURE_EPOCH = type(uint64).max;
// SSZ field indices inside a Validator container.
uint256 constant VALIDATOR_PUBKEY_INDEX = 0;
uint256 constant VALIDATOR_WITHDRAWAL_CREDENTIALS_INDEX = 1;
uint256 constant VALIDATOR_EXIT_EPOCH_INDEX = 6;
mapping(address => uint40) public registeredValidator;
function register(
uint64 beaconTimestamp,
uint40 validatorIndex,
bytes32[] calldata validatorFields,
BeaconChainProofs.StateRootProof calldata stateRootProof,
bytes calldata validatorFieldsProof
) external {
// 1. Fetch the beacon block root committed by EIP-4788.
bytes32 beaconBlockRoot = _getBeaconBlockRoot(beaconTimestamp);
// 2. Verify beacon state root against beacon block root.
BeaconChainProofs.verifyStateRoot(beaconBlockRoot, stateRootProof);
// 3. Verify validator fields against beacon state root.
BeaconChainProofs.verifyValidatorFields(
BeaconChainProofs.ProofVersion.PECTRA,
stateRootProof.beaconStateRoot,
validatorFields,
validatorFieldsProof,
validatorIndex
);
// 4. Bind msg.sender to the validator's withdrawal address.
bytes32 creds = validatorFields[VALIDATOR_WITHDRAWAL_CREDENTIALS_INDEX];
bytes32 expected = bytes32(
abi.encodePacked(bytes1(0x01), bytes11(0), msg.sender)
);
require(creds == expected, "withdrawal credentials mismatch");
// 5. Liveness check: reject exited validators.
uint64 exitEpoch = uint64(uint256(
validatorFields[VALIDATOR_EXIT_EPOCH_INDEX]
));
require(exitEpoch == FAR_FUTURE_EPOCH, "validator exited");
// 6. Record the registration. msg.sender is now proven.
registeredValidator[msg.sender] = validatorIndex;
}
function _getBeaconBlockRoot(uint64 timestamp) internal view returns (bytes32) {
(bool success, bytes memory data) = BEACON_ROOTS.staticcall(
abi.encode(timestamp)
);
require(success && data.length == 32, "beacon root not found");
return bytes32(data);
}
}
Read it once and notice how every building block shows up:
- BEACON_ROOTS is the EIP-4788 precompile address from Section 9.
- Step 1 calls the precompile to get a trusted beacon block root.
- Step 2 Merkle-verifies the beacon state root against that block root (Sections 6 and 7).
- Step 3 Merkle-verifies the validator's fields against the beacon state root (Sections 7 and 8).
- Step 4 reads withdrawal_credentials from the verified fields and enforces the 0x01 binding (Section 8).
- Step 5 reads exit_epoch and rejects exited validators (Section 8).
- Step 6 stores the registration, at which point msg.sender is cryptographically proven to control the validator at validatorIndex.
What the caller provides
Off-chain tooling (usually a sidecar running next to the validator) generates the proof bundle before calling register. The inputs are:
- beaconTimestamp: the execution timestamp that matches the target beacon slot. Must be within the last 8191 slots.
- validatorIndex: the validator's numeric index in the beacon chain's validators list.
- validatorFields: the eight 32-byte fields from the validator's SSZ container, in order.
- stateRootProof: the Merkle path from the beacon state root up to the beacon block root.
- validatorFieldsProof: the Merkle path from the validator's fields at validatorIndex up to the beacon state root.
Generating the proofs needs access to a Beacon Node API (the consensus client's REST interface at port 5052 by default). You fetch the target beacon block and state, SSZ-Merkleize the parts you care about, and output the proofs as calldata. Libraries like fastssz (Go) do the heavy lifting, and EigenLayer ships a Go prover you can vendor or reference.
Gas costs
A full register call for a single validator costs on the order of 100,000 gas on mainnet. This is an order-of-magnitude estimate based on the proof shape (about 50 SHA-256 Merkle steps plus an EIP-4788 staticcall plus a storage write). The exact number depends on the implementation's memory layout and calldata packing; EigenLayer's production implementation and madlabman's reference library both land in the same ballpark per single-validator proof.
At 10 gwei and $3,000 ETH, 100,000 gas is roughly $3 per registration. For a one-time on-boarding action, entirely reasonable. Batching multiple validators into one verifyWithdrawalCredentials call (as EigenLayer does with arrays) amortizes the state root proof and drops the per-validator cost significantly.
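The back-of-the-envelope conversion, as a quick Python check (gas price and ETH price are the illustrative numbers above):

```python
def cost_usd(gas: int, gas_price_gwei: int, eth_usd: float) -> float:
    # gas * price(gwei) -> gwei spent; / 1e9 -> ETH; * price -> USD
    return gas * gas_price_gwei / 1e9 * eth_usd

registration = cost_usd(100_000, 10, 3000.0)
print(f"${registration:.2f} per registration")  # about $3 at these prices
```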
Real-world example: EigenLayer's EigenPod
The largest production user of this pattern is EigenLayer. Their core primitive is the EigenPod: a smart contract that lets a validator point their withdrawal credentials at the pod and then reuse their staked ETH as collateral for additional services (this is what "restaking" means).
verifyWithdrawalCredentials
The main entry point is EigenPod.verifyWithdrawalCredentials(). Its signature:
function verifyWithdrawalCredentials(
uint64 beaconTimestamp,
BeaconChainProofs.StateRootProof calldata stateRootProof,
uint40[] calldata validatorIndices,
bytes[] calldata validatorFieldsProofs,
bytes32[][] calldata validatorFields
) external
onlyOwnerOrProofSubmitter
onlyWhenNotPaused(PAUSED_EIGENPODS_VERIFY_CREDENTIALS);
Notice the arrays: the same function verifies many validators at once, amortizing the single state root proof across all of them. The verification logic is otherwise identical to the register example in the previous section, wrapped in a loop.
When the verification passes, the function records each validator as active in the pod and awards the pod owner restaked shares equal to each validator's effective balance. Those shares can then be delegated to operators running additional services: oracles, sequencers, bridges, DA committees, whatever. The validator's stake is on the line for both Ethereum consensus and the additional service.
This is restaking. The primitive that makes it possible is the exact proof chain this post is about. Without EIP-4788 plus SSZ Merkle proofs plus the 0x01 binding, you'd need an oracle or a trusted committee to vouch for which validators control which execution addresses. With them, the whole thing is trustless.
Production address and audit trail
EigenLayer uses a proxy pattern: each staker has their own EigenPod proxy, and all proxies delegate to a single implementation contract on mainnet at 0x5c86e9609fbBc1B754D0FD5a4963Fdf0F5b99dA7. That's the contract where verifyWithdrawalCredentials actually lives. It has been audited multiple times by Certora, Sigma Prime, Cantina, and Consensys Diligence, and is currently securing billions of dollars in restaked ETH. The reference Solidity library is BeaconChainProofs.sol in their contracts repo. Most other projects implementing this pattern either copy it directly or vendor it with modifications.
Handling fork upgrades
One detail from their code worth highlighting: EigenLayer uses a ProofVersion enum to handle hard forks. The beacon state tree depth changed from 5 to 6 between Deneb and Electra (Pectra added a new top-level field to BeaconState). Rather than hardcoding the tree height, EigenLayer parameterizes it, so the same contract can verify proofs against pre-Pectra or post-Pectra states.
Any contract verifying beacon proofs over a long time horizon needs a version switch like this. Future hard forks will keep changing the layout. The schema is stable enough that the switch is small (one enum, a handful of constants per version), but it has to be there.
What this unlocks
Now that we've seen the full stack, here's what you can actually build with it.
Native restaking
EigenLayer, Symbiotic, Karak, and similar platforms all rely on this pattern. Validators prove their consensus-layer stake to a smart contract without giving up custody. The contract then grants delegatable shares representing that stake, which operators use to secure additional services. Slashing logic lives on the execution layer. The underlying stake lives on the consensus layer. The proof chain is what links them.
Off-chain network registration
Any protocol that coordinates off-chain work by validators needs a way to verify that participants are real validators. This includes:
- Decentralized indexers where each validator runs a sidecar that indexes blocks and serves queries.
- Data availability committees that vote on data off-chain and commit to results on-chain.
- Oracle networks where validator identity determines committee sampling.
- L2 sequencer sets that want validator-backed sequencing.
- Actively Validated Services (AVSs) in EigenLayer terminology.
In all of these, the on-chain registration step is the same pattern from Section 10. Sidecar software generates the proof, calls register, and the contract binds the off-chain identity to a proven validator.
Beacon chain light clients
A smart contract can verify the beacon chain's sync committee by checking BLS signatures through EIP-2537, then trust the state roots those committees sign. This is slower than EIP-4788 for recent state (you're reproducing consensus verification instead of using the precompile's hardcoded store), but it works for arbitrary historical state. Projects like SP1 Helios combine this with ZK proofs to make verification cheap at scale.
Trustless cross-chain bridges
If one chain can verify signatures and state from another chain, you have a trustless bridge. The Ethereum-to-rollup direction already works via execution proofs. The rollup-to-Ethereum direction for optimistic rollups traditionally uses a multisig or a fraud-proof window. With on-chain BLS verification, light clients, and Merkle proofs of state, you can replace that multisig with a consensus-backed committee whose behavior is verifiable.
Slashing evidence on-chain
If a validator signs two conflicting attestations, they should be slashed. With on-chain BLS verification and beacon state Merkle proofs, you can submit evidence of double-signing directly to a smart contract, which can then take action (kick the validator from a restaking pool, burn their restaked collateral, revoke their registration). This closes a gap where slashing evidence historically had to be processed off-chain.
Validator-gated governance
A DAO where only active Ethereum validators can vote, with voting power proportional to their effective balance. The DAO contract uses the proof chain to register voting rights. No snapshot oracle, no allowlist, just verified beacon state.
The pattern under all of these is the same: anywhere you currently trust an oracle or a multisig to tell you which validators are active, you can replace it with a trustless on-chain proof.
Limitations and gotchas
The stack is powerful, but there are sharp edges. If you're building on top of it, these are the things that trip people up.
27-hour freshness window
EIP-4788's ring buffer holds 8191 slots. That's about 27 hours. If your proof references a beacon timestamp older than that, the precompile reverts and your transaction fails. Proofs generated a week ago are useless.
In practice this means off-chain tooling has to generate a proof and submit the transaction reasonably close together. For a one-shot action like validator registration this is fine (generate, submit, done). For anything that needs to prove historical state (long-running disputes, archival attestations, settlement weeks after the fact), you need a different mechanism. ZK light clients like SP1 Helios verify arbitrary beacon roots without the window, at the cost of extra complexity.
Fork upgrades change the tree layout
SSZ is schema-aware, which means every hard fork that adds a field to BeaconState changes the top-level tree's layout. The Deneb state had depth 5. The Electra state has depth 6. Future forks will keep adding fields.
If your contract hardcodes a specific tree depth, it will silently break at the next fork. Or worse, it will keep validating proofs but against the wrong layout. EigenLayer's ProofVersion enum is the standard answer: parameterize the layout, bump the version on each fork, keep the old versions around for backward compatibility. Any contract you write needs the same.
There's a subtler version of this problem inside the Validator struct itself. The 8 fields we listed in Section 8 are stable at the time of writing (Pectra), but nothing guarantees they stay that way. If a future fork adds a field to Validator, the leaf indices for existing fields won't move (SSZ appends at the end), but the tree depth will change, which changes the proof length. Your contract's constants will need to be updated.
Off-chain proof generation is non-trivial
The contract side is the easy half. The hard half is generating the proof bundles off-chain. You need:
- A Beacon Node API endpoint. Most validators already run a consensus client locally, but confirm this before promising end users a smooth flow.
- An SSZ library that supports tree construction and generalized-index math. fastssz (Go) is the most mature; there are Rust and TypeScript options with varying completeness.
- Code that walks the SSZ schema and produces the Merkle proof bytes in the format your contract expects. This code is small but fiddly, especially around list length mixing and padding.
EigenLayer's open-source prover is the easiest starting point. Vendor it, adapt the input and output formats to match your contract, and you're mostly done.
Gas costs at peak prices
~100,000 gas is cheap at 10 gwei. At 100 gwei during a network congestion spike, the same proof costs ~$30. For a one-time registration that's tolerable. For anything that needs to happen frequently or at unpredictable times, you should design around the cost: batch proofs (like EigenLayer's multi-validator arrays), use layer 2s for frequent operations, or amortize the proof across many actions.
Pairing checks via EIP-2537 are more expensive. A single BLS signature verification (two pairings plus hashing the message to the curve) runs on the order of 100-200k gas on top of any Merkle proof work, so a contract that does both (signature verification plus state proof) is looking at 200k+ gas per call.
L1-only
BEACON_ROOTS is a precompile on Ethereum L1. It does not exist on Arbitrum, Optimism, Base, zkSync, or any other L2. Those chains have their own execution environments and no relationship with the beacon chain.
If you want to prove an Ethereum validator's identity from inside an L2 contract, you need a different approach: bridge the beacon root over, use a cross-chain message, or run an L1 escape hatch that bridges the registration result. The proof chain in this post is fundamentally L1. This is fine if your protocol registers validators on L1 and uses the result on L1 (which most restaking platforms do), but worth knowing up front.
Snapshot, not continuous
The proof verifies the validator's state at one specific slot. It doesn't monitor the validator going forward. If a validator passes verification, then exits the next day, the registration contract still thinks they're active unless you build an unregister path.
EigenLayer handles this with periodic re-verification and explicit exit proofs. Any production system needs some version of the same. "Prove you're active right now" is not the same as "prove you'll stay active forever".
Only 0x01 and 0x02 credentials work
The binding check withdrawal_credentials == 0x01 || 0x00*11 || msg.sender only matches validators using type-0x01 execution credentials. Type-0x02 compounding credentials carry the same 20-byte execution address in the same position, so supporting them just means also accepting the 0x02 prefix byte. Validators still using the legacy BLS withdrawal credentials (type 0x00) can't pass this check. They'd need to convert to 0x01 first via a BLSToExecutionChange message on the beacon chain, which has been available since Capella/Shapella.
In practice, almost all mainnet validators have already migrated to 0x01, but a handful of old ones haven't. Your tooling should produce a clear error message for this case instead of silently rejecting the registration.
Takeaways
The stack is three independent pieces that only work together. EIP-4788 gives the EVM a trusted beacon block root. SSZ Merkle proofs let a contract verify any field of beacon state against that root. EIP-2537 lets the same contract verify BLS signatures. Remove any one of them and the chain breaks.
The binding between a validator and an execution address is 32 bytes. withdrawal_credentials = 0x01 || 0x00*11 || <20-byte execution address>. Post-Shapella, this is how a beacon-chain identity maps to something a smart contract can reason about. Every proof in this post is ultimately a proof about this field.
Proving an active validator on-chain costs about 100,000 gas. That's roughly $3 at typical gas prices, within a 27-hour freshness window, using the EIP-4788 precompile and a ~50-hash SSZ Merkle proof. Cheap enough to run inside a user-facing transaction.
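The dollar figure is back-of-envelope arithmetic. A quick sketch, assuming ~10 gwei gas and ~$3,000 ETH (both assumptions chosen to illustrate the estimate, not values from the post):

```python
# Gas-to-USD estimate for the ~100k-gas proof, under assumed prices.
gas_used = 100_000
gas_price_gwei = 10        # assumption: typical base fee
eth_price_usd = 3_000      # assumption: round ETH price

cost_eth = gas_used * gas_price_gwei / 1e9  # 0.001 ETH
cost_usd = cost_eth * eth_price_usd
print(f"{cost_eth} ETH = ${cost_usd:.2f}")
```

At 50 gwei during congestion the same proof is ~$15, so the economics still work, just less comfortably.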
EigenLayer's BeaconChainProofs.sol is the reference implementation. Multiple audits, billions of dollars in TVL, live on mainnet, handles fork upgrades via a ProofVersion enum. If you're building this, vendor it or depend on it directly. Don't reimplement SSZ Merkle verification from scratch unless you have a very good reason.
BLS signature verification and validator existence are different problems. EIP-2537 handles the first. SSZ Merkle proofs against beacon state handle the second. A contract that wants full validator attestation verification needs both: prove the BLS key is in the active validator set, then verify the signature the key produced.
This unlocks a category of applications that couldn't exist before Dencun and Pectra. Restaking, trustless off-chain network registration, beacon light clients, validator-gated governance, on-chain slashing evidence. Anywhere you currently trust an oracle or multisig to vouch for validator membership, you can now use a cryptographic proof instead.
References
Specifications
- EIP-4788: Beacon block root in the EVM
- EIP-2537: Precompile for BLS12-381 curve operations
- EIP-4895: Beacon chain push withdrawals as operations (Shapella)
- EIP-7251: Increase the MAX_EFFECTIVE_BALANCE (0x02 compounding credentials)
- Ethereum Consensus Specs: SSZ Merkle proofs
- Ethereum Consensus Specs: Beacon chain (Phase 0)
- Ethereum Consensus Specs: BLSToExecutionChange (Capella)
Mainnet deployments and reference implementations
- EigenLayer contracts repository (Layr-Labs/eigenlayer-contracts)
  - BeaconChainProofs.sol (production Solidity library)
  - EigenPod.sol (main entry point, verifyWithdrawalCredentials)
- EigenLayer EigenPod design doc
- EigenLayer BeaconChainProofs design doc
- EigenPod implementation on mainnet (0x5c86...9dA7)
- Layr-Labs/eigenpod-proofs-generation (Go prover)
- madlabman/eip-4788-proof (smaller reference library)
- axiom-crypto/beacon-blockhash-verifier
Background reading
- Pectra mainnet announcement (Ethereum Foundation)
- ethereum.org: Dencun upgrade overview
- Consensys: Ethereum Evolved, Dencun Part 3 (EIP-4788)
- Lido CSM and EIP-4788 integration writeup
- ethereum.org: Withdrawal credentials
- ethereum.org: Keys in proof-of-stake Ethereum
- Ben Edgington, Upgrading Ethereum: BLS Signatures
- Wikipedia: BLS digital signature
- ethresear.ch: Slashing Proofoor (on-chain slashed validator proofs)
- Ben Edgington: BLS12-381 For The Rest Of Us
- Ethereum Foundation: Shapella mainnet announcement
- evm.codes: Ethereum precompiled contracts reference
EigenLayer audits
The reference implementation has been reviewed by Certora, Sigma Prime, Cantina, and Consensys Diligence across multiple engagements between 2023 and 2026. If you plan to ship this pattern to production, reading those reports is time well spent. They're in the EigenLayer contracts repository under audits/.