EIP-7594, officially titled PeerDAS - Peer Data Availability Sampling, is a core Ethereum Improvement Proposal that introduces a networking protocol to implement Data Availability Sampling (DAS). [1] Its primary objective is to significantly scale Ethereum's data availability layer, building upon the foundation established by EIP-4844 (Proto-Danksharding). By allowing network nodes to verify the availability of large quantities of data (known as "blobs") while only downloading a small fraction, PeerDAS is designed to increase data throughput for Layer 2 rollups, thereby lowering their operational costs without increasing the hardware requirements for individual nodes. [2] This enhancement supports Ethereum's scalability while preserving decentralization. The proposal was a key component of Ethereum's "Pectra" network upgrade. [1]
EIP-7594 addresses the next major step in scaling Ethereum's data layer following the Dencun upgrade's implementation of EIP-4844. [3] While EIP-4844 introduced blobs as a cost-effective way for rollups to post data to Ethereum, its initial design still required nodes to download the full content of all blobs in a block. This "full replication" model has inherent scalability limits, with an approximate throughput ceiling of 31 KB per second. [4] To move towards "Full Danksharding" and increase the number of blobs per block from a target of 3 to potentially 64 or 128, a robust DAS mechanism became necessary. [3]
The core problem PeerDAS solves is the "data availability problem." This refers to a scenario where a malicious block producer could publish a block header but withhold the actual transaction data contained within the blobs. For a Layer 2 rollup, this is a critical security risk, as it could prevent users from verifying state changes, challenging fraudulent transactions, or executing forced withdrawals of their funds. [4]
PeerDAS mitigates this risk by enabling nodes to gain high statistical confidence that all data for a block is available without having to download the entire dataset. [1] The design philosophy prioritized simplicity and security by leveraging existing, "battle-tested" peer-to-peer components from Ethereum's consensus layer, such as the discv5 discovery protocol and the gossipsub message propagation protocol. [3]
The concept for PeerDAS was first introduced as a "sketch" on the Ethereum Research forum in September 2023 by key Ethereum Foundation researchers. [3] The proposal was formalized and created as EIP-7594 on January 12, 2024. [1] The EIP's listed authors are a group of prominent Ethereum researchers: Danny Ryan, Dankrad Feist, Francesco D'Amato, Hsiao-Wei Wang, Alex Stokes, Anton Nashatyrev, Csaba Kiraly, Dmitry S., Suphanat Chunhapanya, Daniel Lubarov, and protolambda. [1] [3] [2]
The EIP-7594 proposal moved to "Last Call" status in early 2024, with a review deadline of October 28, 2025. [1] Ethereum core developers confirmed its inclusion in the Pectra network upgrade, which was anticipated for deployment in 2025. [2] The Pectra upgrade was a significant event in Ethereum's roadmap, combining PeerDAS with other major proposals like EIP-7251 (MaxEB) and EIP-7702 (Account Abstraction). [2] Some media outlets also referred to the upgrade carrying EIP-7594 by a speculative name, "Fusaka," projected for December 2025. [4]
The implementation plan for PeerDAS was designed to be gradual to ensure network stability. The initial phase was planned to launch with a capacity of 10 blobs per block, with plans to increase this to 14 and eventually a target of 48 blobs per block in subsequent, smaller hard forks. These future adjustments were expected to be minor configuration changes facilitated by companion proposals like EIP-7892. [4]
PeerDAS achieves its goals through a combination of data processing techniques and a sophisticated peer-to-peer networking protocol.
The protocol builds directly on the blob data structure introduced by EIP-4844 but adds several layers of processing to enable sampling.
Each blob is extended with erasure coding and divided into smaller segments called cells, each accompanied by a cell KZG proof. These are cryptographic proofs that allow a node to verify that a specific cell belongs to its corresponding blob's KZG commitment without downloading the entire blob. [1] Blob transactions carry cell_proofs for all blobs in the transaction. A key efficiency in this design is that the computationally expensive task of generating these proofs is outsourced to the blob transaction sender; the block producer and other network nodes only need to perform the much cheaper proof verification, keeping block production on the critical path lean. [1]

The core of PeerDAS is its networking layer, which organizes nodes to distribute, custody, and serve data samples.
Each node is deterministically and pseudo-randomly assigned a specific set of data to maintain, or "custody." This assignment is a public function of the node's unique ID, the current epoch, and a custody_size parameter. This ensures that the responsibility for storing the data is distributed across the entire network. While there is a minimum CUSTODY_REQUIREMENT for honest nodes, participants can voluntarily custody and serve more data than the minimum. They can advertise this higher capacity in their Ethereum Node Record (ENR), allowing other peers to discover them. Nodes that choose to custody 100% of all data are known as "super-full nodes" or "DAS providers." [3]
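The custody assignment described above can be sketched as a pure function of public inputs, so any peer can recompute and verify another node's claimed custody. The following Python sketch is illustrative only: the constants and the hash-to-column scheme are assumptions for this example, not the normative algorithm from the consensus specs.

```python
import hashlib

NUMBER_OF_COLUMNS = 128   # illustrative column count, not the normative value
CUSTODY_REQUIREMENT = 4   # assumed minimum number of columns an honest node custodies

def custody_columns(node_id: int, epoch: int, custody_size: int) -> list:
    """Deterministically derive the set of columns a node must custody.

    Because the inputs (node ID, epoch, advertised custody_size) are public,
    any peer can recompute this set and know which samples to request from whom.
    """
    assert CUSTODY_REQUIREMENT <= custody_size <= NUMBER_OF_COLUMNS
    columns = set()
    counter = 0
    while len(columns) < custody_size:
        # Hash the public inputs plus a counter to pseudo-randomly pick columns.
        seed = hashlib.sha256(f"{node_id}:{epoch}:{counter}".encode()).digest()
        columns.add(int.from_bytes(seed[:8], "big") % NUMBER_OF_COLUMNS)
        counter += 1
    return sorted(columns)
```

A node advertising a larger custody_size in its ENR simply evaluates the same function with the larger parameter, yielding a superset of responsibilities that peers can verify independently.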
Nodes use a DHT-based mechanism like discv5 (already used in Ethereum's beacon chain) to find peers, specifically searching for those that advertise custody for the data samples they need. To distribute the samples themselves, the network establishes dedicated gossipsub topics for each row and column (e.g., row_5, column_8). A node joins the gossip subnets corresponding to the data it is assigned to custody, receiving and serving verifiable data samples on those specific channels. [3]
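The per-column topic layout can be illustrated with a minimal sketch; the topic strings and column count below are assumptions for illustration, not the normative identifiers defined in the consensus specs.

```python
NUM_COLUMNS = 128  # illustrative column count, not the normative value

def column_topic(index: int) -> str:
    # Hypothetical topic string, e.g. "column_8", mirroring the article's examples.
    return f"column_{index}"

def subscriptions(custody_columns: set) -> list:
    """Gossipsub topics a node joins, given its custody assignment."""
    return [column_topic(i) for i in sorted(custody_columns)]
```

A node custodying columns {5, 8}, for instance, would join the column_5 and column_8 subnets and serve verifiable samples on those channels.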
The verification process occurs through sampling. Each slot, a node queries its peers for a set number of randomly selected samples to verify data availability. By using the public custody assignment function, a node can identify which of its peers should possess a given sample. If a node successfully receives its required number of unique samples for a block, it gains high statistical confidence that the entire dataset was made available by the block producer. [2]
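The per-slot sampling loop can be sketched as follows. This is a simplified model: the sample count is an assumption, and peers' custody sets are passed in precomputed (in the real protocol each node derives them from the public custody function and then requests the cell plus its KZG proof from a holder).

```python
import random

SAMPLES_PER_SLOT = 8  # assumed number of samples per slot, for illustration only

def sample_availability(peer_custody: dict, num_columns: int, rng: random.Random) -> bool:
    """Query peers for randomly chosen columns; return True only if every
    sampled column is obtainable from at least one custodying peer.

    peer_custody: dict mapping peer id -> set of columns that peer custodies
                  (recomputable by anyone from the public custody function).
    """
    targets = rng.sample(range(num_columns), SAMPLES_PER_SLOT)
    for column in targets:
        holders = [p for p, cols in peer_custody.items() if column in cols]
        if not holders:
            # No peer can serve this sample: availability cannot be confirmed.
            return False
        # In the real protocol the node would now fetch the cell and verify
        # its KZG proof against the blob commitment before accepting it.
    return True
```

If all requested samples arrive and verify, the node treats the block's data as available; a single unobtainable sample is enough to withhold that confidence.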
PeerDAS includes a powerful network self-healing mechanism. If a node successfully acquires a sufficient portion of samples for a row or column (e.g., more than 50%), it can use erasure coding to reconstruct the missing samples locally. Once reconstructed, the node "cross-seeds" these newly recovered samples to the corresponding orthogonal subnets. For example, if a node reconstructs row_5 and is also part of the column_8 subnet, it will then gossip the sample at coordinate (5, 8) to the column_8 subnet, helping to repair any gaps in the network's data. [3]
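The reconstruction step above can be illustrated with a toy Reed–Solomon-style code: a row's data is treated as the coefficients of a polynomial and published as twice as many field evaluations, so any half of the points determines the rest. This sketch uses a small toy prime field for readability; the real protocol operates over the BLS12-381 scalar field with KZG commitments.

```python
P = 2**31 - 1  # toy prime modulus; illustrative only

def eval_poly(coeffs, x):
    """Evaluate a polynomial with the given coefficients at x, mod P (Horner's rule)."""
    y = 0
    for c in reversed(coeffs):
        y = (y * x + c) % P
    return y

def extend(data):
    """Treat `data` (length k) as polynomial coefficients and publish 2k evaluations."""
    return [eval_poly(data, x) for x in range(2 * len(data))]

def reconstruct(known, missing_xs):
    """Lagrange-interpolate the missing evaluations from k known points.

    known: list of (x, y) pairs; any k of the 2k published points suffice,
    because a degree < k polynomial is fully determined by k evaluations.
    """
    out = {}
    for x in missing_xs:
        total = 0
        for xi, yi in known:
            num, den = 1, 1
            for xj, _ in known:
                if xj != xi:
                    num = num * ((x - xj) % P) % P
                    den = den * ((xi - xj) % P) % P
            # Divide via the modular inverse (Fermat's little theorem).
            total = (total + yi * num * pow(den, P - 2, P)) % P
        out[x] = total
    return out
```

With the 2x extension, holding just over half of a row is enough to recover the remainder locally; the node can then cross-seed recovered cells (for example, the cell at coordinate (5, 8)) into the orthogonal column_8 subnet.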
The implementation of PeerDAS introduced a new hard limit of 6 blobs per transaction. This rule must be enforced by clients at all stages, including transaction submission, network gossip, and block processing. Additionally, validators are subject to a higher custody requirement than regular full nodes, creating a more robust backbone for the network under the assumption they have greater resources. [1]
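The per-transaction blob limit is a simple validity check applied identically at every stage. A minimal sketch (the function name and signature are hypothetical):

```python
MAX_BLOBS_PER_TX = 6  # hard limit introduced by EIP-7594

def validate_blob_count(blob_hashes) -> None:
    """Reject a blob transaction exceeding the per-transaction limit.

    The same check is enforced at submission, network gossip, and
    block processing, so an over-limit transaction can never propagate.
    """
    if len(blob_hashes) > MAX_BLOBS_PER_TX:
        raise ValueError(
            f"transaction carries {len(blob_hashes)} blobs; "
            f"maximum is {MAX_BLOBS_PER_TX}"
        )
```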
The design of PeerDAS was guided by specific technical choices intended to optimize for efficiency, security, and ease of implementation.
One key decision was to organize sampling by columns instead of by rows. Column sampling, where a node retrieves a vertical slice across all blobs, was chosen for two main reasons. First, it simplifies reconstruction, as nodes are likely to already have pieces of many different blobs from the public transaction mempool. Second, it allows the heavy computational workload of data extension and proof generation to be performed by the transaction sender "off-critical-path," before block construction. A row-based sampling approach would have required this computation to be done by the block producer, adding significant load and latency to the block production process. [1]
The use of a peer-based sampling system provides redundancy, as a node can simply query a different peer if one is malicious or offline. It also enables transparent scaling: nodes with more resources can voluntarily custody more data, organically enhancing the network's performance and robustness without requiring protocol-level changes. [1]
PeerDAS is designed to secure the network against data withholding attacks and enforce honest participation through network-level rules.
The primary security risk PeerDAS addresses is a Data Withholding Attack, where a malicious block producer publishes a valid block header but withholds the underlying blob data. The defense against this is a pseudo-randomized sampling scheme. Each node independently and randomly requests an adequate number of samples, making it statistically improbable for an attacker to hide a significant portion of data without being detected by numerous nodes. [1]
The security is mathematically quantifiable and extremely high. According to the EIP's analysis with mainnet-like parameters (a network of 10,000 sampling nodes), the upper-bound probability of an attacker deceiving just 200 nodes (2% of the network) is approximately 10⁻²⁰. The probability of deceiving 300 nodes (3%) is negligible at 10⁻¹⁰¹. This makes a successful large-scale data withholding attack a practical impossibility. [1]
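The shape of this analysis can be reproduced with a binomial tail calculation. The sketch below assumes a per-node deception probability of (1/2)^8, i.e. a node drawing 8 random samples when less than half of the data is available; these parameters are assumptions for illustration, and the calculation is a simplified model that does not reproduce the EIP's exact figures.

```python
from math import exp, lgamma, log

def log_binom_pmf(n: int, k: int, p: float) -> float:
    """Natural log of the Binomial(n, p) probability mass at k."""
    return (lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)
            + k * log(p) + (n - k) * log(1 - p))

def log10_tail(n: int, m: int, p: float) -> float:
    """log10 of P(X >= m) for X ~ Binomial(n, p), summed in log space
    (log-sum-exp) to avoid underflow on astronomically small tails."""
    logs = [log_binom_pmf(n, k, p) for k in range(m, n + 1)]
    mx = max(logs)
    return (mx + log(sum(exp(l - mx) for l in logs))) / log(10)

# Assumed model: 10,000 sampling nodes, each deceived independently
# with probability (1/2)^8 when under half of the data was published.
p_deceived = 0.5 ** 8
```

Even under this crude model, the probability that hundreds of nodes are simultaneously deceived collapses super-exponentially, which is the qualitative point of the EIP's analysis.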
The protocol is enforced through a combination of consensus rules and peer-to-peer incentives.
In relation to EIP-4844, PeerDAS builds on the blob data structure introduced in that proposal and implements the full DAS mechanism required to securely scale blob capacity, realizing the next phase of the Danksharding roadmap. [2] EIP-7594 was designed to be fully backward compatible with EIP-4844: it extends the functionality of blobs and their associated transactions without breaking any existing features implemented in the Dencun upgrade. [1]