September 10, 2024 - 7 min read
Supra optimizations set a new gold standard for the performance and reliability of next-gen blockchain applications.
A distributed consensus protocol that combines high throughput, low latency, and scalability provides significant benefits in both performance and reliability. Such protocols are designed to efficiently handle a large number of transactions per second (throughput) while ensuring quick confirmations (low latency), making them essential for real-time applications and efficiency-critical systems such as financial networks and blockchain platforms. Scalability enables these protocols to support extensive networks and a growing number of participants without a noticeable decline in performance, making them well-suited to expanding and dynamic environments.
The synergy between low latency and high throughput improves user experience by shortening transaction confirmation times and enhancing service delivery, particularly in scenarios with high interaction rates. Additionally, these protocols promote resource efficiency by reducing operational costs and optimizing bandwidth usage, which is particularly advantageous in networks with limited communication resources. They also enhance system robustness and resilience by incorporating fault tolerance mechanisms that maintain consensus even in the presence of faulty or malicious (Byzantine) nodes.
In this post, we introduce the core architecture of Supra, a system engineered to deliver high throughput and low latency in large-scale, geo-replicated environments. Supra achieves its high throughput by decoupling transaction dissemination from the consensus process, building on concepts established in previous works like Narwhal. By refining this approach, Supra distributes transactions to smaller committees, ensuring their availability with high probability, which allows the system to scale without degrading performance.
For consensus, Supra leverages our Moonshot consensus protocol, which achieves a theoretically optimal commit latency of just 3 message delays (md). The seamless integration of our transaction dissemination layer with the consensus mechanism results in an end-to-end latency of only 5.5 md, making low latency a core feature of the system’s design.
We refer to the entire set of n nodes as a tribe, where at most f < n/3 nodes are Byzantine faulty. To create a scalable solution, we uniformly sample a sub-committee of size n_c such that, except with a negligible probability of failure, at most f_c < n_c/2 of its members are Byzantine. We refer to this sub-committee as a clan.
Typically, the value of n_c is much smaller than 2f+1. For example, a simple hypergeometric probability calculation shows that a clan of only 116 parties is sufficient to ensure an honest majority within the clan when n=300 (instead of 198 parties), with a negligible error probability of 10⁻⁶. Similarly, a clan of only 66 parties suffices to ensure an honest majority under the same conditions, with an error probability of 10⁻³.
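The calculation behind these figures can be reproduced with a standard hypergeometric tail computation. The sketch below (Python with SciPy; the function and parameter names are ours) estimates the probability that a uniformly sampled clan of size n_c, drawn from a tribe of n nodes containing f Byzantine nodes, fails to have an honest majority.

```python
import math
from scipy.stats import hypergeom

def dishonest_majority_prob(n: int, f: int, n_c: int) -> float:
    """Probability that a uniformly sampled clan of n_c nodes, drawn from a
    tribe of n nodes of which f are Byzantine, contains f_c >= n_c/2
    Byzantine members (i.e. fails to have an honest majority)."""
    threshold = math.ceil(n_c / 2)                 # smallest clan-breaking Byzantine count
    return hypergeom.sf(threshold - 1, n, f, n_c)  # P(X >= threshold)

# Tribe of n = 300 with f < n/3, i.e. up to 99 Byzantine nodes.
print(dishonest_majority_prob(300, 99, 116))  # on the order of 10^-6
print(dishonest_majority_prob(300, 99, 66))   # on the order of 10^-3
```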
At Supra, we use BLS multi-signatures instead of EdDSA or ECDSA signatures. Although EdDSA and ECDSA signatures are considerably cheaper to create and verify, they cannot be aggregated. Consequently, nodes would need to multicast a quorum of these signatures during consensus, which results in substantial bandwidth usage as the system size increases. In contrast, BLS signatures can be aggregated into shorter signatures, making the multicasting of these signatures more bandwidth-efficient.
We employ BLS multi-signatures with signatures in group G1, which are faster to create, verify, and aggregate. For example, on e2-standard instances on Google Cloud Platform (GCP), it takes about 0.2 ms to create a signature, 1.89 ms to verify it, and around 0.007 ms to aggregate two signatures. Verifying the aggregated BLS signature likewise takes approximately 1.89 ms.
We partition our system into k clans, ensuring an honest majority within each clan with very high probability. Within each clan, nodes collect client transactions, group them into a batch (denoted as B), and disseminate this batch exclusively within the clan. The nodes in the clan verify the transactions in batch B and multicast a signed vote for B to all the nodes in the tribe.
Upon collecting f_c + 1 signed votes for batch B, a tribe node aggregates these signatures into a single BLS signature and verifies the aggregate. Notably, the system does not verify the individual signature shares (i.e., the signed votes for B), but instead optimistically aggregates these shares and verifies only the resulting aggregated signature.
This approach bypasses the costly verification of individual signature shares in the good case when all nodes sign correctly, thus reducing latency. If faulty nodes produce incorrect signatures and the aggregated signature fails verification, the system can then verify the individual signature shares to identify the culprit for appropriate penalization.
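As an illustration of this optimistic path, the sketch below uses the py_ecc BLS implementation (which, following the Ethereum convention, places signatures in G2 rather than G1 as we do; the control flow is the same). The function and variable names are ours, not Supra's.

```python
from py_ecc.bls import G2ProofOfPossession as bls

def optimistic_aggregate(batch_digest: bytes, votes):
    """votes: list of (public_key, signature) pairs over the same batch digest.
    Returns (aggregate_signature, culprits)."""
    pks = [pk for pk, _ in votes]
    sigs = [sig for _, sig in votes]

    agg = bls.Aggregate(sigs)
    # Good case: a single verification of the aggregate, no per-share checks.
    if bls.FastAggregateVerify(pks, batch_digest, agg):
        return agg, []

    # Bad case: fall back to per-share verification to identify the culprits.
    culprits = [pk for pk, sig in votes if not bls.Verify(pk, batch_digest, sig)]
    return None, culprits

# Example with three honest clan members signing the same batch digest.
secret_keys = [1, 2, 3]
digest = b"digest-of-batch-B"
votes = [(bls.SkToPk(sk), bls.Sign(sk, digest)) for sk in secret_keys]
agg, culprits = optimistic_aggregate(digest, votes)
assert agg is not None and not culprits
```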
When the aggregated signature for batch B is formed, it serves as the data availability proof for B: with high probability, at least one honest node in the clan has received B, so the batch can be downloaded later when needed. We refer to this aggregated signature as the data availability certificate for B.
The data availability certificate for batch B is then fed into our Moonshot consensus protocol which is executed by the tribe. Within the protocol, the block proposer gathers multiple such data availability certificates into a block and proposes it during their turn as leader. The transactions within batch B are considered final once the consensus protocol commits the block containing B.
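The sketch below illustrates this decoupling with hypothetical data structures (the field names are ours): blocks proposed in consensus carry compact data availability certificates rather than the batches themselves, which remain with their clans until needed.

```python
from dataclasses import dataclass, field

@dataclass
class DataAvailabilityCertificate:
    batch_digest: bytes         # hash of batch B; the batch itself stays within its clan
    clan_id: int                # clan that disseminated and voted on B
    aggregate_signature: bytes  # BLS aggregate over f_c + 1 clan votes

@dataclass
class Block:
    height: int
    parent_digest: bytes
    certificates: list[DataAvailabilityCertificate] = field(default_factory=list)  # DA certs queued since the last proposal
```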
Finally, we utilize optimistic signature verification within the Moonshot consensus protocol, focusing exclusively on aggregated signatures. This method speeds up verification in the good case and contributes to reduced latency.
Moonshot achieves an optimistic consecutive-proposal latency (the minimum latency between two block proposals) of 1 message delay (md) and a commit latency of 3 md. Batch dissemination and data availability certificate generation cumulatively add 2 md. Since blocks are proposed every network hop, data availability certificates must be queued until the next block proposal, introducing an average queuing latency of 0.5 md. Consequently, the overall end-to-end latency of our system is 5.5 md.
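A quick back-of-the-envelope check of this budget, assuming the roughly 100 ms average inter-region message delay reported in the evaluation below:

```python
# Latency budget in message delays (md), as described above, with an
# assumed average one-way message delay of ~100 ms between GCP regions.
MD_MS = 100

budget_md = {
    "batch dissemination + DA certificate generation": 2.0,
    "average queuing until the next block proposal": 0.5,
    "Moonshot commit": 3.0,
}
total_md = sum(budget_md.values())  # 5.5 md
print(f"end-to-end: {total_md} md ≈ {total_md * MD_MS:.0f} ms")  # ≈ 550 ms
```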
We conducted extensive evaluations on the Google Cloud Platform (GCP), distributing nodes evenly across five distinct regions: us-east1-b (South Carolina), us-west1-a (Oregon), europe-north1-a (Hamina, Finland), asia-northeast1-a (Tokyo), and australia-southeast1-a (Sydney).
In our setup, clients are co-located with the consensus nodes. Each transaction consists of 512 random bytes, and the batch size is set to 500 KB. Each experimental run lasts 180 seconds. For end-to-end latency, we measure the average time from a transaction's creation to its commit by all non-faulty nodes. Throughput is measured as the number of committed transactions per second.
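For concreteness, these metrics reduce to the following (hypothetical record fields; a transaction's commit time is taken as the moment it has been committed by all non-faulty nodes):

```python
def end_to_end_latency_ms(committed_txs) -> float:
    """Average time from a transaction's creation to its commit by all non-faulty nodes."""
    return sum(tx.commit_ms - tx.create_ms for tx in committed_txs) / len(committed_txs)

def throughput_tps(committed_txs, run_duration_s: float = 180.0) -> float:
    """Committed transactions per second over the run."""
    return len(committed_txs) / run_duration_s
```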
We initially evaluated our architecture using a network of 125 nodes, partitioned into k = 5 clans of 25 nodes each (with 5 nodes from each GCP region). Although clans of size 25 in a 125-node system carry a relatively high probability (0.291) of containing a dishonest majority, our goal here was to demonstrate that data dissemination within a clan combined with consensus across the tribe can deliver high throughput and low latency.
For this experimental evaluation, we utilized e2-standard-16 machines, each equipped with 16 vCPUs, 64 GB of memory, and up to 16 Gbps of egress bandwidth.
We observed the throughput and end-to-end latency depicted in the graph above. Notably, we achieved sub-second end-to-end latency at a throughput of 500 KTps, and a throughput of approximately 330 KTps at a latency of around 500 ms. Given that the average message delay (md) across these GCP regions is around 100 ms, the system delivers 330 KTps at a latency close to the theoretical limit of our architecture.
We subsequently evaluated our architecture with a network of 300 nodes, partitioned into k = 5 clans of 60 nodes each (12 from each GCP region). In this configuration, clans of size 60 in a 300-node network have a probability of 0.0107 of containing a dishonest majority. Here, our objective was to demonstrate that our architecture maintains high throughput and low latency even at larger system sizes.
For this experimental evaluation, we used e2-standard-32 machines, each equipped with 32 vCPUs, 128 GB of memory, and up to 16 Gbps of egress bandwidth.
We observed throughput and end-to-end latency, as shown in the graph above. As before, we achieved a sub-second end-to-end latency with a throughput of 500 KTps, but with a lower failure probability due to the increased clan size. Additionally, we reached a throughput of 300 KTps with a latency of approximately 650 ms, closely aligning with the theoretical limits of our architecture.
Supra’s novel architecture represents significant performance optimizations for distributed consensus systems, particularly for large-scale, geo-replicated environments. Our experimental evaluations demonstrate its ability to achieve high throughput, low latency, and exceptional scalability, making it well-suited for a broad range of dApps that demand both speed and reliability. Supra’s innovative approach to reducing latency and maximizing throughput sets a new benchmark for high-performance blockchain architecture, where every optimization is vital for driving real-world adoption.