
Supra Oracle Upgrade Delivers Sub-Second Data Freshness

April 01, 2024 - 9 min read

DORA 2.0 embodies the ultimate interoperability-service aggregator with batch-optimized price feeds and industry-leading 600-900ms data freshness.


Introduction

Web3 has given developers incredible new abilities to create decentralized applications that can’t be shut down, are resistant to corruption, and are publicly verifiable via their on-chain immutability. As a result, a massive amount of value has flocked into the space to secure and participate in the operations of these protocols. With these innovations, users are enjoying an increasingly interoperable and verifiable world when it comes to ownership of their digital assets and online data.

Practically speaking, users can now easily and affordably interact with DeFi services in which the solvency and user base of the protocol are all on-chain. Pretty soon they’ll be able to record their home ownership, buy disaster insurance, and even maintain or share medical records by leveraging layers of cryptographic primitives (like oracles).

One of the speed bumps along the way to this transition has been the difficulty developers face in safely and efficiently integrating external systems and data with on-chain applications like automated market makers. This is one of the many challenges that Supra alleviates by providing a data and service aggregation layer, in the form of a decentralized SMR, for developers to leverage in the creation and implementation of publicly verifiable dApps.

Now that it’s become feasible to integrate these concepts within existing business models, retail users will soon experience, and then demand, that they be present throughout the online and even offline experience. Financial institutions give us evidence of this every day, as they now seem to be in a race to integrate digital assets or launch their own services connected to blockchains. Take JP Morgan, for instance, which recently announced an eye-catching IoT use case in the form of the first tokenized asset transfer between satellites in space.

It’s thus becoming more obvious by the day that an interoperable Web3 integrating with our daily lives is inevitable. The full impacts of this are incalculable, yet they need to be built layer by layer. To build these kinds of high-integrity systems and markets, securing the flow of assets and data across systems is paramount, which is where Supra Oracles comes into play. Here’s a quick overview of the protocol and the optimizations of its latest upgrade.

Conceptualizing Supra Oracles (DORA)

Supra is a first-of-its-kind decentralized oracle service, focused on creating a viable and efficient solution to the aptly-named “blockchain oracle dilemma.” The underlying technology is called DORA, which is short for Distributed Oracle Agreement. DORA is a cutting-edge design which utilizes a number of elegant incentives and Byzantine-resistant methods to make its oracle network robust and to render external or internal manipulation essentially impossible.

Supra’s oracles, powered by DORA, are designed for efficiency and for resiliency against attacks and collusion. Instead of using static nodes with predictable tasks to perform, Supra uses a Tribes and Clans model in which all Tribe nodes are randomly reshuffled across Clans (smaller subsets of nodes in the Tribe).

At any given moment, one Clan is responsible for running Supra’s DORA protocol to power FX, commodity, and cross-chain data feeds. Nodes in the DORA Clan are periodically reshuffled across other clans, so fresh nodes will be running DORA all the time.

Furthermore, each node’s assigned tasks (which data sources to query and which commodity or currency pairs to report) are also periodically rotated, making its behavior unpredictable. This unpredictability (and the resulting difficulty for any rogue nodes attempting to collude) eliminates attack vectors that are very much present in static node designs.
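As a rough illustration of the idea (not Supra’s actual implementation; the node names, clan size, and epoch mechanics below are invented), a periodic reshuffle can be sketched as:

```python
import random

# Sketch of a Tribes-and-Clans reshuffle: the tribe's nodes are randomly
# re-partitioned into clans each epoch, so no node can predict which clan
# (or task assignment) it will end up in next.
def reshuffle(tribe, clan_size, rng):
    nodes = list(tribe)
    rng.shuffle(nodes)                      # random permutation of the tribe
    return [nodes[i:i + clan_size]          # cut the permutation into clans
            for i in range(0, len(nodes), clan_size)]

tribe = [f"node{i}" for i in range(12)]
clans = reshuffle(tribe, clan_size=4, rng=random.Random(7))
# Every epoch yields a fresh, unpredictable partition of the same 12 nodes.
```

Because the partition changes every epoch, an attacker cannot target the specific clan that will run DORA next, which is the property the static-node designs lack.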

In addition to these security guarantees, DORA benefits from extremely high throughput (measured in transactions per second, or TPS) and fast finality (the point at which transactions can no longer be reversed), which means users get quick results and see them on-chain in a sub-second fashion. Overall, this reduces gas costs and significantly reduces slippage, optimizing the protocol’s efficiency.

However, in the end, the most important thing about DORA is its perpetual commitment to decentralization and randomization. Blockchain data is only as secure and accurate as it is decentralized, making DORA a true evolution when it comes to secure and efficient cross-chain communications. This is the basis upon which things like tokenized real-world assets (RWAs) rely (coming soon).

How DORA Works in Normal Conditions

With blockchain oracles, multiple sources submit unsigned data for validation, often in the form of prices for a variety of digital and real-world assets. To be an oracle validator, a node must stake crypto assets secured by the DORA protocol, which provides the incentive and disincentive structures for DORA participants. Supra’s decentralized oracle aggregates this incoming data into a single representative value, filters out erroneous outliers, computes the relevant S-values, validates them, and posts the outputs for consumption by smart contracts, all while retaining strong Byzantine fault tolerance.

In the absence of any Byzantine faults, a single honest validator node gathering information from multiple sources would be sufficient. In the presence of Byzantine faults, however, blockchain oracles need multiple nodes, just like blockchains do. When every honest node can provide one value as an input, yet some of the nodes are Byzantine, the challenge for the honest nodes to agree upon a single representative value is called the DORA problem.

[Figure: What are oracles]

While computing the average, if even a single node submits a Byzantine value, the output can deviate arbitrarily from the average of the honest values. The median offers a robust alternative as a representative value, since it is a statistical aggregator that effectively tolerates more corrupted data.

So, when dApps create pull requests for off-chain or cross-chain data, DORA is at their service. First, every node in the clan is assigned a set of data sources from which to obtain the prices of certain assets, with every node utilizing multiple sources of data. Once the oracle nodes receive prices from multiple data sources, they compute the median values for them.

[Figure: Supra data gathering]

Next, clan nodes sign their computed median values and send them to aggregators. Since we can’t know which nodes are Byzantine, a clan of aggregators drawn from the tribe is used, such that there is an extremely high probability of having at least one honest aggregator within it. Multiple aggregators are used to avoid delays resulting from the use of a potentially Byzantine aggregator.
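A back-of-the-envelope sketch of why this works (the tribe and clan sizes below are illustrative assumptions, not Supra’s actual parameters):

```python
from math import comb

# Probability that a randomly sampled clan of aggregators contains at least
# one honest node, via the hypergeometric distribution: 1 minus the chance
# that every sampled node is Byzantine.
def p_at_least_one_honest(tribe_size, byzantine, clan_size):
    if byzantine < clan_size:
        return 1.0  # not enough Byzantine nodes to fill the whole clan
    return 1 - comb(byzantine, clan_size) / comb(tribe_size, clan_size)

# Example: a 625-node tribe with roughly a third of its nodes Byzantine.
p = p_at_least_one_honest(tribe_size=625, byzantine=208, clan_size=25)
print(p)  # vanishingly close to 1
```

Even with a third of the tribe corrupted, the chance that an entire 25-node aggregator clan is Byzantine is astronomically small, which is the property the protocol relies on.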

[Figure: DORA aggregation]

The aggregators then compute the mean value by leveraging the formation of a coherent cluster (CC): a set of values that all fall within a set agreement distance of one another, coalescing around a value in agreement amongst themselves. Deviations within the coherent cluster are, of course, bounded by the agreement distance parameter.
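A simplified sketch of cluster detection (the function name and agreement-distance value are assumptions for illustration, not Supra’s parameters):

```python
# Find the largest group of reported medians whose spread fits within the
# agreement distance; the aggregator then averages over that cluster.
def coherent_cluster(values, agreement_distance):
    s = sorted(values)
    best = []
    for i in range(len(s)):
        # all values within agreement_distance above s[i] (s is sorted)
        window = [v for v in s[i:] if v - s[i] <= agreement_distance]
        if len(window) > len(best):
            best = window
    return best

medians = [100.1, 100.2, 99.9, 250.0, 100.0]  # one wildly deviant report
cc = coherent_cluster(medians, agreement_distance=0.5)
print(cc)                 # [99.9, 100.0, 100.1, 100.2]
print(sum(cc) / len(cc))  # the mean the aggregator would propose
```

The deviant report at 250.0 simply fails to join any cluster, so it never influences the proposed value.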

This anchors the honest median values submitted by oracle nodes; the aggregators simply wait long enough for a coherent cluster to form before sending the mean to all clan nodes for final validation. Once consensus has been reached, a quorum certificate is signed and the S-value is posted to the blockchain. Notably, DORA is resilient to a higher percentage of Byzantine faults (51%) than the conventional 33%.

[Figure: DORA publish]

Of course, the process we just reviewed is what takes place along the “happy path,” i.e., under normal circumstances. When markets are volatile, or the network comes under attack, robust fallback mechanisms must be in place to prevent unwanted outcomes. This is crucial for preventing catastrophic losses during extreme events, and for preserving the network’s longevity by maintaining data fidelity and resisting attacks (or collusion).

How DORA Works in Adverse Conditions

So far, we have covered how DORA works under normal circumstances, in which the values from most of the data sources are close to one another and form a coherent cluster. When conditions become adverse, whether from market volatility or Byzantine nodes, it is possible that the data inputs are not close enough to form a coherent cluster. To handle this case, all of the clan nodes start a fallback timer as soon as they send their values to the aggregators.

When a node’s fallback timer runs out, it sends a fallback vote to all the aggregators. Eventually, an aggregator receives enough fallback votes to form a quorum certificate (QC) for the fallback event and publishes it on the blockchain. Any validator node observing the fallback message for a round automatically switches to the fallback protocol.
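The vote counting can be sketched as follows (the 2f+1 threshold is the standard BFT quorum; the function and node names are illustrative assumptions):

```python
# An aggregator forms a quorum certificate (QC) for the fallback event once
# 2f + 1 distinct nodes have voted, guaranteeing at least f + 1 honest votes
# even if f of the voters are Byzantine.
def fallback_qc_formed(votes, f):
    return len(set(votes)) >= 2 * f + 1

votes = ["node1", "node3", "node4", "node6", "node7"]
print(fallback_qc_formed(votes, f=2))      # True: 5 >= 2*2 + 1
print(fallback_qc_formed(votes[:3], f=2))  # False: not yet a quorum
```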

[Figure: Fallback protocol]

At this point, DORA employs the entire tribe to compute fallback S-values. Similar to the happy path protocol, the tribe nodes gather data from the data sources, compute the median of the set of prices they received, sign it, and send it off to the aggregators before it is sent to RPC nodes and made available for consumption.

When a coherent cluster can’t be formed, the aggregator instead waits for the first 2ft+1 tribe nodes to send their values. Here, we assume the tribe size to be 3ft+1 with at most ft nodes that may turn Byzantine. Out of 2ft+1 values, at least ft+1 of these values must be from honest nodes. Therefore, the median of these values is used since it would be bound by honest values.
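Under those assumptions, the bound can be illustrated with a small sketch (the arrival order and prices are invented):

```python
import statistics

# With tribe size 3f+1 and at most f Byzantine nodes, the median of the first
# 2f+1 values to arrive is bracketed by honest values: at least f+1 of those
# values are honest, so a minority of corrupted reports cannot move the
# median outside the honest range.
f = 2
received = [100.0, 10_000.0, 99.9, 0.0, 100.1]  # first 2f+1 = 5 arrivals
s_value = statistics.median(received)
print(s_value)  # 100.0, despite two extreme Byzantine submissions
```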

[Figure: Fallback tribe]

The aggregator proposes the computed median as the S-value for the round, along with the set of 2ft+1 digitally signed values it received. The entire tribe of nodes then validates and sends approval votes back to the aggregator, who forms a QC with 2ft+1 votes and posts the validated S-values to whichever free nodes are requesting data from Supra.

DORA 2.0 Updates Deliver Sub-Second Data

So, what’s new with DORA 2.0? As a whole, the protocol has become even more efficient than before, running at least twice as fast, and even up to 2.5x faster depending on the geographical distances of the free nodes requesting data. Oracle requests can also be served in more efficient batches now, optimizing gas efficiency and encouraging dApps to continuously maintain their data freshness. Previously, DORA validated commodity data and recorded it on Supra’s SMR before delivering it to free nodes for consumption, a process that took 2-3 seconds in all.

Following the DORA 2.0 update, validated commodity data now bypasses the Supra SMR altogether and is delivered straight to the requesting free nodes for consumption on the destination chain, this time in a sub-second fashion. To be more precise, the average round trip for a free node to request data and have it validated and delivered takes about 600-900 milliseconds. With this move, DORA 2.0 sets a new gold standard when it comes to data freshness.

As expected, DeFi and its accompanying dApps are maturing nicely, and financial institutions are adopting crypto assets and integrating them with legacy financial systems. Retail users are likewise demanding more transparency, asset ownership, and a better user experience in the modern era. To all the builders out there, the tools are in our hands and our task is laid out before us. All that is left is to grind hard and wait for these trees to bear fruit; the time is nigh.


©2024 Supra | Entropy Foundation (Switzerland: CHE.383.364.961). All rights reserved.