What is a Rollup?
Learn what a rollup is, how rollups scale blockchains, how optimistic and ZK rollups work, and why data availability and blobs matter.

Introduction
A rollup is a blockchain scaling system that executes transactions outside a base chain and then sends compressed results back to that base chain for settlement. The reason this matters is simple: blockchains are expensive not mainly because moving balances is conceptually hard, but because asking thousands of nodes to execute and store everything is scarce, duplicated work. A rollup changes that division of labor. It lets a faster layer handle most execution, while the base chain acts as the final court that stores key data, verifies commitments, and protects withdrawals.
That design sounds almost contradictory at first. If transactions happen somewhere else, why is the system still secure? And if the base chain still has to be involved, where do the savings come from? The answer is that security and execution are not the same job. A rollup tries to keep the expensive part (global consensus over every computation) off the critical path, while still preserving the invariant that users can verify what happened and recover funds through the base layer.
This is why rollups became central to Ethereum’s scaling approach, but the idea is broader than Ethereum. More generally, a rollup is part of a modular architecture: one layer specializes in execution, while another provides settlement, consensus, and often data availability. You can see that pattern in Ethereum-based rollups, in validity-rollup ecosystems such as zkSync, and in sovereign-rollup tooling that posts data to a separate data-availability layer such as Celestia. The details differ, but the core question is always the same: what must the base layer do so users do not have to trust the rollup operator?
How do rollups move execution offchain while preserving onchain verification?
| Execution location | Verification role | Data on L1 | Cost per tx | Finality speed |
|---|---|---|---|---|
| L1 (on-chain) | On every full node | Full transaction and state | High per-transaction cost | On-chain finality |
| Rollup (batching) | L1 verifies commitments | Batched data and commitments | Shared amortized cost | Challenge-window or immediate |
The cleanest way to understand a rollup is to start from the bottleneck it is trying to remove. On a base chain like Ethereum, every full node either executes transactions directly or at least verifies the results under strict consensus rules. That gives strong shared security, but it also makes throughput scarce and fees high. A rollup improves this by taking a batch of user transactions, executing them offchain, and posting the batch’s transaction data and a commitment to the resulting state back to the base chain.
The key compression point is this: the base chain does not need to redo the entire life of the rollup to secure it. It needs enough information to enforce correctness. In practice, that usually means a rollup publishes data about the transactions and some compact representation of the new state, such as a state root or output root. Once that information is anchored on the base chain, users and independent nodes can reconstruct the rollup state, check the computation, and use the bridge contracts on the base layer to deposit or withdraw.
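The idea of a compact state commitment can be sketched with a toy Merkle root over account balances. This is an illustration only: the hash function, leaf encoding, and account names are assumptions, and real rollups use far richer state trees (for example, Merkle-Patricia or Verkle-style structures).

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves: list[bytes]) -> bytes:
    """Binary Merkle root over a list of leaves (toy: duplicates the last
    node on odd-sized levels instead of using real padding rules)."""
    if not leaves:
        return h(b"")
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2 == 1:
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

# A "state root" committing to the rollup's balances after a batch executes.
state = {"alice": 90, "bob": 110}
leaves = [f"{acct}:{bal}".encode() for acct, bal in sorted(state.items())]
state_root = merkle_root(leaves)
print(state_root.hex())
```

The point of the commitment is that it is tiny (32 bytes) yet binds every balance: changing any leaf changes the root, so the base chain can anchor the whole rollup state by storing just this value.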
A simple narrative example makes the mechanism concrete. Imagine ten thousand users trading, sending payments, minting tokens, and interacting with applications on a rollup during the day. Instead of sending ten thousand expensive L1 transactions, the rollup’s operator orders them, executes them in the rollup environment, compresses the resulting transaction data, and submits a much smaller number of L1 transactions that represent the batch. Ethereum stores the posted data, records the state commitment, and enforces the rollup’s bridge rules. Users get lower per-transaction costs because the expensive L1 overhead is being shared across the whole batch.
That is the core economic reason rollups work. If a batch of N transactions can share one L1 posting cost, then each user pays roughly a fraction of that cost instead of the full cost of separate L1 execution. The exact fee model varies by implementation, but the mechanism is always some form of amortization: many users share one settlement footprint.
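The amortization arithmetic is simple enough to write down. All the numbers below are illustrative assumptions, not measured fees; the mechanism, not the magnitudes, is the point.

```python
# Illustrative numbers only: these are assumptions, not real fee data.
l1_posting_cost = 0.50       # dollars to post one batch (data + commitment) to L1
l2_execution_cost = 0.0005   # assumed per-transaction offchain execution cost
batch_size = 10_000          # N transactions sharing one settlement footprint

# Each user pays a 1/N share of the L1 cost plus their own execution cost.
per_tx_fee = l1_posting_cost / batch_size + l2_execution_cost
print(f"per-tx fee: ${per_tx_fee:.6f}")
```

With these assumptions the L1 share per user is $0.00005, so the batch turns one expensive L1 footprint into ten thousand cheap transactions. As `batch_size` grows, the settlement share of each fee shrinks toward zero while the execution share stays flat.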
Why is data availability essential for rollups and what breaks if data is withheld?
| DA option | Availability guarantee | Trust assumption | Cost | Best for |
|---|---|---|---|---|
| On-chain calldata | Permanent on L1 | L1 security only | Highest per-byte cost | Maximal trustlessness |
| EIP-4844 blobs | Temporary on consensus layer | L1 security during window | Much lower short-term cost | Cheap rollup settlement |
| External DA (Celestia) | DA via separate chain | DA-chain honest majority | Lower than L1 calldata | Sovereign rollups |
| DA committee (Validium) | Off-chain committee availability | Committee honesty required | Lowest cost | Very high throughput with trust tradeoff |
At first glance, it may seem like the proof or state root is the whole story. It is not. The part many readers underestimate is data availability: the transaction data has to be available so that anyone can reconstruct the rollup state independently.
This matters because a commitment by itself is too small to tell you what happened. A state root can say, in effect, “the system moved from state A to state B,” but without the underlying transaction data, outside observers cannot replay the transition. That means they cannot verify whether the operator processed transactions honestly, and in many designs they cannot prove the balances needed to withdraw safely. A rollup is therefore not just “offchain execution plus a hash on L1.” It is offchain execution plus enough posted or otherwise guaranteed data so independent parties can check the chain.
This is why Ethereum documentation emphasizes that optimistic rollups post transaction data to mainnet as calldata or blobs. The reason is mechanical, not cosmetic: if the rollup’s execution is based on posted transactions, then anyone can use that data to reconstruct state and verify or challenge the result. If the operator withholds data, users may be unable to verify the chain or exercise trustless exits. In other words, the system stops being a true rollup and starts looking more like a trust-based commit-chain or validium-style design.
An analogy helps here. Think of a rollup as a company that submits its books to an auditor. The summary statement matters, but the receipts matter more, because the receipts are what let someone independently check the summary. The analogy explains why data availability is essential. It fails, however, in one important way: a rollup does not rely on a single trusted auditor. The design goal is that anyone can verify from publicly available data.
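The "receipts" role of posted data can be made concrete with a toy replay check: given the previous state and the posted batch, anyone can re-execute and compare against the operator's claimed root. Everything here (the transfer format, the state encoding, the account names) is an assumption for illustration.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def state_root(state: dict) -> bytes:
    # Toy commitment: hash of the sorted account/balance pairs.
    encoded = ",".join(f"{a}:{b}" for a, b in sorted(state.items())).encode()
    return h(encoded)

def apply_tx(state: dict, tx: tuple) -> None:
    sender, receiver, amount = tx
    assert state.get(sender, 0) >= amount, "insufficient balance"
    state[sender] -= amount
    state[receiver] = state.get(receiver, 0) + amount

def verify_batch(prev_state: dict, posted_txs: list, claimed_root: bytes) -> bool:
    """Replay the posted transaction data and check the operator's root."""
    state = dict(prev_state)
    for tx in posted_txs:
        apply_tx(state, tx)
    return state_root(state) == claimed_root

prev = {"alice": 100, "bob": 100}
batch = [("alice", "bob", 10)]
honest_root = state_root({"alice": 90, "bob": 110})
print(verify_batch(prev, batch, honest_root))   # True: the data lets anyone re-check
print(verify_batch(prev, batch, h(b"forged")))  # False: a bad root is detectable
```

Notice what breaks if `batch` is withheld: `verify_batch` simply cannot run, and the claimed root becomes unfalsifiable. That is the data-availability failure mode in miniature.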
Optimistic vs ZK rollups: how do their security and finality models differ?
| Rollup type | How correctness is proven | Finality delay | Monitoring need | Proof cost | Best tradeoff |
|---|---|---|---|---|---|
| Optimistic rollup | Fraud proofs during a challenge window | Challenge window (≈7 days) | Active honest watchers required | Low upfront, disputes expensive | Lower prover cost, slower finality |
| Validity (ZK) rollup | Cryptographic validity proofs | Near-immediate after proof verification | Minimal active monitoring | High upfront proving, cheap verification | Fast finality, higher prover cost |
Once the transaction data is available, there are two broad ways to enforce correctness. The organizing principle is simple: either the rollup posts batches and assumes they are valid unless challenged, or it proves validity up front.
An optimistic rollup takes the first approach. It posts transaction data and state commitments to the base chain, and the system initially treats those commitments as valid. If someone believes the operator published an incorrect state transition, they can challenge it during a dispute window using a fraud proof or fault proof. Ethereum’s developer documentation describes this as an optimistic assumption: transactions are presumed valid unless proven otherwise. The security requirement is that at least one honest participant is watching the chain and willing to challenge invalid assertions.
This is where the withdrawal delay in optimistic rollups comes from. If a user wants to withdraw assets from L2 back to L1, the base chain has to wait long enough to be confident no valid fraud proof will overturn the posted state. In practice, that often means a challenge period of roughly seven days. Optimism’s documentation describes withdrawals in stages, with finalization only after the challenge window ends. So the delay is not an arbitrary annoyance. It is the direct consequence of a design that checks correctness only if needed.
The dispute process is designed to keep L1 work manageable. Rather than replaying an entire disputed batch onchain, many optimistic systems use interactive protocols that narrow the disagreement step by step until the base chain only has to adjudicate a tiny piece of computation. Ethereum’s optimistic-rollup docs describe this as a multi-round interactive proving process with bisection and one-step proofs. Arbitrum Nitro similarly centers its design around interactive fraud proofs, with the prover switching into a special dispute mode only when challenged.
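The narrowing step can be sketched as a binary search. In a real dispute game the two parties exchange midpoint commitments over several onchain rounds; here both execution traces are local, so plain bisection captures the idea under that simplifying assumption: O(log n) comparisons locate the single step that L1 must adjudicate.

```python
def first_disputed_step(honest_trace: list, claimed_trace: list) -> int:
    """Bisect to the first execution step where two parties disagree.

    Assumes they agree on the starting state (index 0) and disagree on
    the final state, as a real challenge protocol would require.
    """
    lo, hi = 0, len(honest_trace) - 1
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if honest_trace[mid] == claimed_trace[mid]:
            lo = mid        # agreement at mid: the dispute lies after it
        else:
            hi = mid        # disagreement at mid: the dispute lies at or before it
    return hi               # only this one step needs onchain adjudication

honest = [0, 1, 2, 3, 4, 5, 6, 7]
claimed = [0, 1, 2, 9, 9, 9, 9, 9]   # operator's trace diverges at step 3
print(first_disputed_step(honest, claimed))  # -> 3
```

This is why fraud proofs stay cheap for L1: instead of replaying millions of steps, the base chain re-executes one instruction-sized step and rules on it.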
A validity rollup, often called a ZK rollup, takes the second approach. Instead of saying “assume this batch is valid unless someone proves otherwise,” it says “here is a cryptographic proof that this batch is valid.” The base chain verifies that proof before accepting the new state commitment. The immediate consequence is that finality can be much faster, because the system does not need a long challenge period to wait for disputes. zkSync Era, for example, is explicitly described as a ZK rollup that uses zero-knowledge proofs to scale Ethereum.
The tradeoff is different. Validity proofs can dramatically reduce the need for social monitoring during settlement, but generating those proofs is computationally intensive and operationally complex. You shift cost from dispute monitoring to proof generation. So the difference between optimistic and validity rollups is not that one has security and the other does not. Both can inherit strong security from the base layer if designed correctly. The real difference is when and how correctness is established.
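The "verify before accept" flow can be sketched as a toy settlement contract. The `verify_proof` callable below deliberately stands in for a real SNARK/STARK verifier; its string-matching check and the class name are pure illustration, not any production bridge's API.

```python
class ValidityBridge:
    """Toy settlement contract: a new state root is accepted only if the
    attached proof verifies. No challenge window is needed because an
    invalid batch can never advance the state in the first place."""

    def __init__(self, genesis_root: str, verify_proof):
        self.root = genesis_root
        self.verify_proof = verify_proof

    def submit_batch(self, new_root: str, proof: str) -> bool:
        if not self.verify_proof(self.root, new_root, proof):
            return False          # invalid proof: state never advances
        self.root = new_root      # accepted immediately, final at once
        return True

# Stand-in verifier (NOT real cryptography): accepts a fixed proof format.
fake_verify = lambda old, new, proof: proof == f"proof({old}->{new})"

bridge = ValidityBridge("r0", fake_verify)
print(bridge.submit_batch("r1", "proof(r0->r1)"))  # True: finalized up front
print(bridge.submit_batch("r2", "bogus"))          # False: root stays at r1
print(bridge.root)                                 # r1
```

Contrast this with the optimistic flow, where `submit_batch` would always succeed and a separate challenge path would be needed to roll back bad roots during the dispute window.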
What roles do sequencers play and how does their centralization affect censorship and ordering?
The cryptographic design is only part of the story. In day-to-day operation, most rollups have a component that receives user transactions, orders them, builds blocks, and submits batches to the base chain. That component is usually called the sequencer.
A sequencer is important because ordering is the critical path for user experience. If there were no active sequencer, users might need to post directly to L1 much more often, which is slower and more expensive. A sequencer can provide fast confirmations, keep fees lower, and make the rollup feel like a normal high-throughput chain. But this convenience introduces a concentration of power. Ethereum’s docs note that many rollups were bootstrapped with centralized sequencers, and Optimism’s protocol overview says block production on OP Mainnet is primarily managed by a single sequencer.
That centralization creates two distinct risks. The first is censorship: the sequencer can refuse to include your transaction, or go offline. The second is ordering power: the sequencer can choose transaction order, which affects MEV, liquidation priority, and other economic outcomes. These are not side issues. They are part of the core trust model of many current rollups.
Good rollup designs therefore include escape valves. For example, Optimism distinguishes between direct submission to the sequencer, which is cheaper but not censorship resistant, and L1 deposits, which are censorship resistant because they derive from L1 blocks. More broadly, the base layer and canonical bridge are supposed to ensure that even if the sequencer misbehaves, users can eventually force inclusion, prove state, or withdraw according to the rollup’s rules. The exact mechanism varies by implementation, but the invariant is the same: the sequencer should improve liveness and UX, not become the final custodian of funds.
Still, current deployments often fall short of the ideal. Ethereum’s scaling roadmap explicitly notes that centralized sequencers and small pools of known provers create censorship and centralization risks, and that decentralizing these roles remains an important unfinished task. So when someone says a rollup “inherits Ethereum security,” that should be read carefully. The asset safety model may inherit L1 security, while transaction inclusion and day-to-day operation may still depend heavily on a small operator set.
Why do rollup transaction fees depend on L1 data costs?
A common misunderstanding is that because rollups execute offchain, their fees should be almost independent of the base chain. In practice, that is not true. The dominant cost has often been the price of posting data to L1.
Ethereum’s roadmap material states this directly: over 90% of rollup transaction cost comes from data storage. This is an important clue about what rollups are really buying from the base layer. They are not mainly buying execution. They are buying settlement and data availability. If the cost of data publication on L1 is high, rollup fees remain meaningfully exposed to L1 conditions even when L2 computation itself is cheap.
This also explains why protocol upgrades that look “below” the application layer can have huge effects on rollup UX. If a rollup transaction fee is mostly the user’s share of published data, then a cheaper data channel can make the whole L2 dramatically cheaper without changing the application at all.
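A back-of-the-envelope fee split shows why. The 16-gas-per-nonzero-byte calldata cost is Ethereum's real EIP-2028 pricing, but the byte count, gas price, ETH price, and execution share below are illustrative assumptions.

```python
# Illustrative fee split for one rollup transaction; prices are assumptions.
batch_bytes_per_tx = 100       # compressed data one tx contributes to the batch
calldata_gas_per_byte = 16     # EIP-2028 cost for a nonzero calldata byte
l1_gas_price_gwei = 20         # assumed L1 gas price
eth_price_usd = 3000           # assumed ETH price

data_cost_usd = (batch_bytes_per_tx * calldata_gas_per_byte
                 * l1_gas_price_gwei * 1e-9 * eth_price_usd)
execution_cost_usd = 0.0005    # assumed offchain execution share

total = data_cost_usd + execution_cost_usd
print(f"data: ${data_cost_usd:.4f} ({data_cost_usd / total:.0%} of total fee)")
```

Under these assumptions the data component dominates the fee, which matches the intuition behind the "over 90% from data storage" figure: halving the cost of the data channel nearly halves the user's whole fee, while making L2 execution cheaper barely moves it.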
What are blobs (EIP-4844) and how do they lower rollup data costs?
That is exactly what Ethereum’s Proto-Danksharding, implemented through EIP-4844, was designed to do. The idea is straightforward: give rollups a cheaper place to put batch data than ordinary calldata.
EIP-4844 introduced a new transaction type carrying blobs. A blob is a large piece of data attached to a transaction; its contents are not accessible to EVM execution, though a commitment to the blob is. This distinction matters because the blob is not meant to be smart-contract state. It is meant to be a data-availability channel for protocols like rollups. The consensus layer persists blobs for a limited time, while the execution layer does not treat them like normal contract-accessible payload.
The economic trick is that blob data is temporary. Ethereum.org’s roadmap explains that blobs are cheaper because they are not permanent and can be deleted once no longer needed. The optimistic-rollup docs describe blob data as non-persistent and pruned after roughly eighteen days. That means Ethereum no longer promises to store all rollup transaction data forever. Instead, it guarantees availability for long enough that the rollup’s security machinery can function, while long-term archival responsibility shifts outward to rollup operators, indexers, exchanges, and other infrastructure providers.
This is a good example of scaling by asking what property is actually necessary. For fraud proofs or independent reconstruction, the key need is that data be available during the relevant verification window, not that every byte be stored forever by every node. By relaxing permanence while preserving temporary availability, Ethereum created a much cheaper data lane for rollups.
The effect is large because it targets the true bottleneck. Ethereum’s roadmap says Proto-Danksharding enables a substantial (greater than 100x) increase in throughput and corresponding reductions in rollup transaction costs. It was deployed in the Cancun-Deneb, or Dencun, upgrade in March 2024, after which rollups began using blob storage and processing millions of transactions in blobs.
There are limits, though. EIP-4844 is explicitly a stop-gap on the path toward fuller danksharding. It introduces a conservative blob capacity, separate blob-gas accounting, and a sidecar propagation model, but it does not solve everything. Full danksharding requires additional protocol and networking machinery, including proposer-builder separation and data availability sampling, and remains an ongoing engineering and research effort rather than a completed endpoint.
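The separate blob-gas accounting mentioned above has its own fee curve. The sketch below follows the `fake_exponential` helper and constants given in the EIP-4844 specification, which derive the blob base fee from the running excess of blob gas; treat it as a reading of the spec, not client code.

```python
# Constants from the EIP-4844 specification.
MIN_BASE_FEE_PER_BLOB_GAS = 1
BLOB_BASE_FEE_UPDATE_FRACTION = 3338477

def fake_exponential(factor: int, numerator: int, denominator: int) -> int:
    """Integer approximation of factor * e^(numerator / denominator),
    computed by Taylor-series accumulation as in the EIP."""
    i = 1
    output = 0
    numerator_accum = factor * denominator
    while numerator_accum > 0:
        output += numerator_accum
        numerator_accum = (numerator_accum * numerator) // (denominator * i)
        i += 1
    return output // denominator

def blob_base_fee(excess_blob_gas: int) -> int:
    """Blob base fee (wei per blob gas) given the accumulated excess."""
    return fake_exponential(MIN_BASE_FEE_PER_BLOB_GAS,
                            excess_blob_gas,
                            BLOB_BASE_FEE_UPDATE_FRACTION)

print(blob_base_fee(0))           # 1 wei per blob gas at or below target usage
print(blob_base_fee(10_000_000))  # rises exponentially with sustained excess
```

Because this market is priced independently of ordinary execution gas, blob fees can stay near the one-wei floor even when L1 execution is congested, which is exactly the cheap data lane the upgrade was designed to open.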
How do rollups work with external data-availability layers and sovereign rollups?
Rollups are often introduced through Ethereum, but the underlying architectural idea is broader: separate execution from the layers that provide ordering, settlement, and data availability.
Celestia’s modular-chain framing makes this explicit. In that model, rollups specialize in execution while another layer supplies consensus and data availability. Rollkit follows that logic by spinning up sovereign rollups that collect transactions into blocks and post them to Celestia. The tradeoff changes, because a sovereign rollup does not settle disputes on Ethereum in the same way an Ethereum rollup does. But the family resemblance is still clear: execution happens in one place, while another layer guarantees that data exists and can be checked.
This broader view also helps avoid a common confusion with other L2 designs. Not every offchain scaling system is a rollup. Cardano's Hydra Heads, for example, are multi-party state channels: powerful for certain settings, but structurally different from rollups because they rely on a bounded participant set and channel-style interaction rather than a globally ordered batch chain that posts data back for broad verification. Rollups are best understood not as "anything on layer 2," but as a specific answer to a specific problem: how can many users share offchain execution without giving up trust-minimized access to the base layer?
What failure modes and assumptions should users watch for with rollups?
A rollup’s promise depends on a few assumptions that are easy to gloss over until something goes wrong.
The first is that data must actually be available. If transaction data is withheld, users may not be able to reconstruct state, challenge bad transitions, or prove balances for withdrawal. This is why onchain calldata, blobs, or a carefully designed external data-availability layer are such central design choices. Lower costs from offchain data systems often come with new trust assumptions, such as reliance on a data availability committee.
The second is that someone must enforce the rules. In optimistic rollups, that means at least one honest watcher must monitor the system and submit fraud proofs when necessary. If nobody challenges an invalid assertion, the safety model weakens sharply. This is not a theoretical nitpick; it is part of the fundamental security argument.
The third is that operational centralization still matters. Real rollups have sequencers, batch posters, provers, and bridges run by organizations and software stacks that can fail. The Arbitrum sequencer outage in December 2023, for example, illustrated how a centralized sequencing and posting pipeline can become a practical single point of failure under unusual load. Optimism also experienced a major mainnet outage tied to an unsafe head stall in February 2024. These incidents do not mean rollups are insecure in the narrow asset-custody sense, but they do show that liveness, ordering, and UX depend on infrastructure that is not yet fully decentralized.
The fourth is that "inherits L1 security" is not a blanket statement. It usually means the rollup's final settlement and withdrawal guarantees are anchored to the base chain under specified assumptions. It does not mean every part of the user experience is equally decentralized or equally guaranteed, including:
- fast confirmations
- censorship resistance
- sequencing fairness
- archival access to historical blob data
Conclusion
A rollup works by making a precise trade: it moves execution away from the base chain, but keeps enough data and enforcement on the base chain that users do not have to simply trust the operator. That is why rollups can be much cheaper than L1 without becoming ordinary custodial systems.
If you remember one idea tomorrow, make it this: **a rollup is not just "transactions somewhere else." It is offchain execution plus onchain accountability.** Everything else follows from how a system tries to maintain that accountability while scaling, through mechanisms such as:
- fraud proofs
- validity proofs
- sequencers
- blobs
- withdrawal delays
- decentralization roadmaps
How does this part of the crypto stack affect real-world usage?
Rollup properties (how data is posted, the withdrawal delay, and who orders transactions) directly affect how quickly you can move funds, how censorship-resistant transfers are, and how long you must wait to withdraw. Use Cube Exchange to fund and trade assets that live on or bridge to rollups, but first run a short checklist that maps the rollup's technical tradeoffs into concrete risks and timings.
- Fund your Cube account with fiat or transfer a supported crypto (USDC or ETH) into Cube.
- Check the rollup's documentation or explorer to confirm its data-availability method (calldata, EIP-4844 blobs, or external DA), the typical blob retention (≈18 days for EIP-4844), and whether it posts state commitments to L1 or an external DA layer.
- Verify operational details that affect UX: the sequencer model (centralized vs. decentralized), whether direct L1 deposits are supported for censorship resistance, and the withdrawal finality model (optimistic rollups often have ~7-day challenge windows; ZK rollups finalize much faster).
- On Cube, open the market for the asset you want, choose an order type (use a limit order if you need price control during potential L2 congestion), submit the trade, and then monitor on-chain batch inclusion and the rollup's bridge status before initiating withdrawals.
Frequently Asked Questions
- If transactions execute offchain, why does a rollup still need the base chain?
- Because rollups separate execution from final settlement: the base chain stores compact commitments and enough transaction data so anyone can reconstruct and verify the offchain work, acting as the final court that enforces withdrawals and prevents theft while the rollup handles bulk execution.
- What's the practical difference between optimistic rollups and ZK (validity) rollups?
- Optimistic rollups post batches and assume they are valid unless someone proves otherwise using a fraud proof during a challenge window (which is why withdrawals are delayed, typically on the order of days), while validity or ZK rollups attach a cryptographic proof that the batch is correct, so the base chain verifies correctness up front and finality can be much faster.
- Why is data availability considered non-negotiable for rollups, and what happens if data is withheld?
- Data availability is essential because a state root alone doesn't show how the state changed; if the operator withholds the underlying transaction data, external parties cannot replay or verify the rollup and users may be unable to perform trustless withdrawals, effectively turning the system into a trust-based commit-chain or validium.
- How did EIP-4844 (blobs) lower rollup costs, and what are the limits or retention rules for blob data?
- EIP-4844 (Proto-Danksharding) introduced blob transactions as a cheaper, temporary data lane for rollups: blobs are stored separately from EVM-accessible calldata, are intended to be pruned after a limited retention (the sidecar model targets about 18 days), and initial blob capacity is conservative (target ≈0.375 MB per block, max ≈0.75 MB), which materially lowers L1 data costs but is a stop-gap toward full danksharding.
- If rollups do execution offchain, why are rollup transaction fees still affected by L1 conditions?
- Because the dominant cost for many rollups is publishing transaction data to L1, rollup fees remain tightly linked to base-chain data-storage costs. Ethereum documentation states over 90% of rollup transaction cost can come from data storage, so cheaper L1 data channels directly reduce L2 fees.
- Are sequencers centralized on rollups, and what are the security or UX risks of that centralization?
- Many rollups start with a single sequencer or a small set of sequencers that order and post batches, and that concentration creates practical risks: sequencers can censor transactions or exploit ordering (MEV) and can become single points of failure if they go offline, so designs include escape valves, but full decentralization of sequencing remains an open roadmap item.
- Does a rollup truly inherit all of L1's security guarantees?
- Saying a rollup "inherits L1 security" is qualified: final settlement and asset guarantees can be anchored to the base chain under specified assumptions, but day-to-day liveness, censorship resistance, sequencing fairness, and archival access often depend on operators and infrastructure that are not yet fully decentralized.
- How do rollups that use external data-availability layers (like Celestia) differ from Ethereum-based rollups?
- Sovereign rollups that post data to an external DA layer (for example, Celestia) follow the same separation of execution from DA and consensus but change the settlement and dispute tradeoffs: they rely on the external DA provider for availability rather than Ethereum calldata or blobs, which alters the assumptions and mechanisms for proving or forcing withdrawals.
- What concrete assumptions must hold for a rollup to remain trustless and secure?
- The main operational assumptions are: transaction data must actually be available; someone (or some set of watchers) must enforce the rules by submitting fraud proofs or validity proofs when needed; and centralized operational roles (sequencers, provers, batch posters) must not become custodians. If these assumptions fail, the rollup's trust-minimized guarantees weaken.
- Is EIP-4844 the final solution for rollup scaling, or are there remaining technical milestones (like danksharding) to watch for?
- EIP-4844 and blobs were designed as an interim step; full danksharding requires additional protocol and networking features (for example, proposer-builder separation and data availability sampling), and the roadmap and timing for those remaining pieces and for sequencer and prover decentralization are unresolved research and engineering tasks.
Related reading