LightLink Protocol: Deep Dive
This documentation offers a comprehensive technical look at the protocol architecture, exploring how its components integrate to form a secure, fast, and scalable layer two network.
Our novel layer two design comprises several interconnected components, each playing a crucial role within the ecosystem. LightLink harnesses the Ethereum platform as its settlement layer, housing the Canonical State Chain. Operating at the execution layer is our custom client, heavily inspired by the renowned open-source Ethereum client Geth. LightLink builds upon the remarkable advancements achieved by the Celestia team to provide secure, modular data availability.
The architecture diagram in Figure 1 illustrates the complete data flow within the protocol. For most users, interaction primarily revolves around light clients or replicator nodes. These key components facilitate seamless submission of new transactions and enable direct querying of the layer two blockchain through their JSON-RPC APIs. Transactions submitted to a replicator node or light client undergo eventual ordering by the sequencer, culminating in their inclusion within a block.
The pivotal role of consolidating multiple layer two blocks into a single bundle rests with the Hummingbird Node running in publisher mode. Subsequently, this bundle serves as the foundation for crafting new roll-up block headers on layer one. The publisher's duties extend to posting bundle data to Celestia, while concurrently issuing new block headers to the Canonical State Chain contract with each bundle iteration. Lastly, the Hummingbird Smart Contracts serve as the linchpin, orchestrating seamless integration within the system.
Data availability (DA) refers to a user's confidence that the data required to verify a block is available to all network participants.
For LightLink replicator nodes, this is relatively simple, as replicator nodes download every new block from layer two and independently verify every transaction and block by re-executing them. However, Hummingbird validator nodes validate block data directly from layer one Ethereum. This means that even if the L2 network was offline, users could still withdraw their funds from LightLink by submitting their withdrawal transactions directly to the L1 smart contract. Validator nodes can then validate the withdrawal based on the state of the available historical data. Users can exit the network as long as a single validator is in operation and the historic chain data is available.
Since anyone can easily run a validator, the main concern is ensuring the data is always available and ready to be downloaded whenever required. Early layer two scaling solutions achieved data availability by compressing and storing the transaction data on their layer one network, such as Ethereum. This is the obvious solution. However, it has some drawbacks:
Cost: It is very expensive to store data on Ethereum. This cost is usually borne by the end user of the layer two protocol and increases the overall transaction fee.
Congestion: Submitting batches of transactions directly to Ethereum can result in unnecessary network congestion, resulting in higher fees.
LightLink is one of the first layer two blockchains to leverage Celestia's novel data availability protocol. Celestia is a modular data availability network enabling consensus around whether a piece of data is available. Celestia Blobstream is the first data availability solution for Ethereum that securely scales with the number of users. Blobstream relays commitments to Celestia's data root to an on-chain light client on Ethereum for integration by developers into L2 contracts. This enables Ethereum developers to build high-throughput L2s using Celestia's optimised DA layer, the first with Data Availability Sampling (DAS). LightLink was designed to work with Celestia from the ground up, enabling cost-effective data availability and therefore, cheaper fees for LightLink users.
Initially, the sequencer node is hosted and maintained by the LightLink team. In the future, it is possible for the sequencer node to be randomly distributed amongst peers with some form of staking and slashing mechanism to enforce honesty.
The sequencer node assumes the pivotal role of constructing layer two blocks within the system. Tasked with receiving and validating transactions from replicator nodes, it proceeds to order and process them accordingly. Upon successfully processing a transaction and embedding it within a layer two block, the state of the layer two chain progresses accordingly. Subsequently, the newly formed L2Block is disseminated across all peers within the layer two network.
Transactions are prioritised for inclusion in the next L2Block based on their gas price, with those offering the highest gas price given precedence. Furthermore, the sequencing of transactions within the block is determined by their nonce and gas price.
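As a rough illustration of that policy (the sequencer's actual mempool implementation is not published in this document, so the TypeScript sketch below is an assumption):

```typescript
// Illustrative ordering: transactions from each sender must stay in nonce order,
// and among the "next" transaction of every sender, the highest gas price is chosen.
interface PendingTx {
  sender: string;
  nonce: number;
  gasPrice: bigint;
}

function orderForInclusion(pending: PendingTx[]): PendingTx[] {
  // Queue each sender's transactions by nonce.
  const bySender = new Map<string, PendingTx[]>();
  for (const tx of pending) {
    const queue = bySender.get(tx.sender) ?? [];
    queue.push(tx);
    bySender.set(tx.sender, queue);
  }
  for (const queue of bySender.values()) queue.sort((a, b) => a.nonce - b.nonce);

  // Greedily take the highest-priced head-of-queue transaction until none remain.
  const ordered: PendingTx[] = [];
  let candidates = [...bySender.values()].filter((q) => q.length > 0);
  while (candidates.length > 0) {
    candidates.sort((a, b) =>
      b[0].gasPrice > a[0].gasPrice ? 1 : b[0].gasPrice < a[0].gasPrice ? -1 : 0
    );
    ordered.push(candidates[0].shift()!);
    candidates = candidates.filter((q) => q.length > 0);
  }
  return ordered;
}
```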
Presently, the sequencer produces a new L2Block every 500 milliseconds, with these blocks cryptographically appended to the layer two chain. This 500ms production rate is deemed reasonable; however, it remains adjustable to suit evolving requirements. The sequencer populates each new L2Block until it reaches the predetermined gas limit, presently set at 15,000,000 gas. Plans are underway to significantly raise this block gas limit as the network matures, thereby increasing the protocol's theoretical throughput from 1428 transactions per second (TPS) to over 10,000.
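For context, the quoted 1428 TPS figure is consistent with these parameters if every transaction is assumed to be a minimal 21,000-gas transfer:

```typescript
// Back-of-the-envelope throughput from the stated parameters, assuming
// every transaction is a simple transfer costing 21,000 gas.
const blockGasLimit = 15_000_000;   // gas per L2Block
const gasPerTransfer = 21_000;      // minimum gas for a plain transfer
const blocksPerSecond = 1000 / 500; // one block every 500 ms

const txPerBlock = Math.floor(blockGasLimit / gasPerTransfer); // ≈ 714
const tps = txPerBlock * blocksPerSecond;                      // ≈ 1428
console.log({ txPerBlock, tps });
```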
Transactions committed to L2Blocks attain a level of "soft finality", signifying their status as final as long as the sequencer operates with integrity. The veracity of transactions processed by the sequencer undergoes independent validation by community validators, ensuring the sequencer's continued honesty and integrity.
In the event of sequencer downtime, seamless continuity is assured through the deployment of a new sequencer. The chain persists, with the new sequencer inheriting the chain's history from existing replicators and seamlessly assuming the task of block production. The LightLink DAO is responsible for updating the sequencer.
Replicator nodes are currently in operation with RPC endpoints provided here.
Replicator nodes are full nodes, meaning they download every L2Block from the sequencer and independently validate and process the transactions within. Replicators allow the LightLink network to scale as the number of replicators increases. Each replicator stores a local copy of the entire LightLink chain.
For a replicator to connect to the sequencer, it must first be safe-listed by the sequencer. The replicator will then download the chain from the sequencer and process every historic transaction. Every replicator keeps its own copy of the layer two chain state, transactions and blocks.
Replicators provide their own JSON-RPC server, which can be used to query the chain. Replicators will also validate the integrity of the blocks by verifying the block hashes and signatures.
Light Clients are on the LightLink development roadmap.
As the LightLink network grows, the hardware requirements to run a full node will increase. Most users will opt to use third-party node infrastructure providers; however, some will want to verify the data they receive for themselves. Light clients connect to replicator full nodes and only download and validate the L2Block headers from them. Instead of downloading the L2Blocks' transactions, the light client requests data from replicator nodes as required and verifies this data via Celestia's fraud-proving system.
Hummingbird is the name for the set of components responsible for building and securing the layer two roll-up Canonical State Chain on layer one. The components comprise the Hummingbird Client, which can run in multiple roles (rollup publisher, challenge defender, or validator/challenger), and a number of smart contracts on both layer one and layer two. These contracts are responsible for important tasks such as passing messages between layers one and two, updating the Canonical State Chain on layer one, and proving the integrity of the Canonical State Chain.
As mentioned, most users only need to interact with the layer two network over JSON-RPC via a replicator node or light client. They will never have to worry if the sequencer is behaving honestly. The protocol only requires at least one honest validator to confirm the sequencer and publisher are behaving honestly. A single validator has the capability to prove:
State execution was performed correctly.
The data in each roll-up block was made available and is consistent with layer two.
Each roll-up block header is valid according to the protocol parameters.
That no valid transactions have been censored.
If the sequencer or publisher behaves dishonestly and does not follow the protocol rules, the layer two Canonical State Chain will be rolled back to preserve integrity.
The sequencer is responsible for building and processing new layer two blocks. The Hummingbird publisher is responsible for bundling those blocks and making the block's data available on Celestia. The publisher is the only person allowed to append new block headers to the Canonical State Chain on layer one.
Hummingbird's novel design allows LightLink to scale Ethereum efficiently and transparently while maintaining security. This is achieved by providing provable data availability and correctness via Celestia's Blobstream data availability oracle on layer one and provable state execution via our on-chain MIPS EVM.
Hummingbird will transform LightLink into a chain leveraging an Optimium architecture, merging the features of LightLink with the security of Ethereum. To achieve this, we created a novel rollup system rather than simply forking the Optimism engine, as many others have done.
The upcoming system will maintain LightLink's high speed, ultra-low transaction fees and best-in-class enterprise features, while also ensuring that the correctness of Layer 2 is provable on Layer 1, that Layer 2 transactions cannot be censored, and that the network data is provably available.
Hummingbird is currently undergoing development and testing; as a result, details of the system might change.
Hummingbird comprises a collection of smart contracts, as well as a client application.
The Client is responsible for publishing rollup blocks, performing validation and interacting with challenges. To fulfill its purpose it must interact with three blockchains: A LightLink Chain (Layer 2), Celestia (data availability layer), and Ethereum (Layer 1).
The Contracts define the rollup chain, store rollup blocks, and contain methods for validating the integrity of Layer 2.
The Hummingbird client first downloads a bundle of Layer 2 blocks from the LightLink sequencer. This bundle is then uploaded to Celestia (the data availability layer). Finally, a rollup header is created, including a pointer to the data on Celestia, and is submitted to the CanonicalStateChain contract on Ethereum (Layer 1).
Each rollup block body is a bundle containing up to 5000 LightLink blocks. This approach allows ultra-low Layer 2 fees, but requires special care when it comes to data availability and on-chain validation.
Layer 2 blocks are uploaded to Celestia as shares, and Celestia validators attest to the availability of a binary Merkle root which contains these shares. These attestations can be verified on Ethereum using the Blobstream contracts. If a Celestia validator withholds data or produces incorrect blocks, it can be detected by anyone running a light node and slashed.
Hummingbird provides a ChainOracle contract for Layer 1 which allows the cost-effective loading of Layer 2 headers and transactions from Celestia shares. All items loaded through the ChainOracle are proven to correspond to the Celestia pointer inside a submitted rollup block. The proofs also include the binary Merkle proof showing the shares are part of an attested Celestia data root committed to Blobstream.
*Potential fallback challenges are being considered for the unlikely event of extreme collusion among Celestia validators.
Anybody can act as a validator of the network by interacting with the challenge contracts on Ethereum.
The Challenge contracts can be broken into three groups: Data availability challenges, Censorship challenges and Execution challenges. Each challenge is typically a two-party game where the challenger pays a fee to initiate it, the defender must provide on-chain verifiable proof of correctness, and the winner receives the fee.
The data availability challenges rely heavily upon the ChainOracle to prove that data is available and has been since the rollup block was submitted.
The Censorship challenge relies upon the ChainOracle, Cannon and TxQueue contracts to check a queued transaction was included and valid.
The Execution challenges implement Cannon to compute correct state roots and compare the results.
The network prevents the sequencer from censoring transactions by allowing the user to submit Layer 2 transactions directly to a TxQueue contract on Layer 1. These queued transactions must be included within the first block of the rollup bundle whose epoch corresponds to the Layer 1 block in which the transaction was submitted.
If the transaction was not included, the user can challenge the rollup bundle which should have included it:
If the transaction is valid and was not included in the correct rollup block, the block can be rolled back following a censorship challenge. This forces the sequencer to include it; otherwise, the sequencer will be replaced by one that will.
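As an illustration of forced inclusion (the TxQueue function name and arguments below are assumptions, not the published interface), a user could queue a Layer 2 transaction from Layer 1 roughly like this:

```typescript
import { ethers } from "ethers";

// Hypothetical ABI fragment: the real TxQueue interface may differ.
const txQueueAbi = ["function enqueueTransaction(bytes rlpEncodedL2Tx) payable"];

async function forceInclude(rlpEncodedL2Tx: string) {
  const provider = new ethers.JsonRpcProvider(process.env.L1_RPC_URL);
  const signer = new ethers.Wallet(process.env.PRIVATE_KEY!, provider);
  const txQueue = new ethers.Contract(process.env.TX_QUEUE_ADDRESS!, txQueueAbi, signer);

  // Submit the raw Layer 2 transaction on Layer 1. The sequencer must include it
  // in the first block of the bundle whose epoch covers this Layer 1 block.
  const tx = await txQueue.enqueueTransaction(rlpEncodedL2Tx);
  await tx.wait();
}
```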
Hummingbird implements George Hotz's Cannon as a fault-proof system. The core of Cannon is a MIPS emulator written in Solidity, which allows us to run an EVM (or any state transition engine) on-chain. You can think of it as LightLink's EVM running on the Ethereum EVM.
With this system we can derive the correct Layer 2 state roots directly on Layer 1. (The Execution challenge contract only needs to run a single step of this execution on-chain, as an intermediate state root can be derived and checked to be part of the tree producing the final state root).
A single publisher is allowed to submit new blocks to Layer 1.
Each rollup block submitted has a window of time (1 day) in which it can be rolled back following a successful challenge. If the current publisher accrues 3 rollbacks, it can be replaced by the DAO. This prevents a malicious publisher from halting the network indefinitely.
The LightLink sequencer and replicator nodes dynamically respond to rollbacks, and will continue working from the new chain head.
The hummingbird smart contracts repository can be found here.
The Canonical State Chain (CSC) can be considered the source of truth for the layer two chain. All layer two blocks will eventually be bundled up by the hummingbird publisher and published to the CSC. Blocks in the CSC are pending for seven days. This gives validators enough time to challenge an incorrect block and roll back the chain if necessary. Only one publisher is allowed to push new blocks to the CSC contract. If a bad block is submitted, leading to a rollback, the LightLink DAO can initiate the election of a new publisher.
A roll-up block header has the following anatomy:
Every header contains enough information about a bundle of layer two blocks to be independently verified on layer one. The celestiaHeight, celestiaShareStart and celestiaShareLen values provide details of where the block data is stored in Celestia so that it can be retrieved or proven to be available.
The stateRoot provides the expected state hash of the chain after applying all of the blocks contained within the data bundle. The blockRoot and txRoot allow us to easily verify if a given block or transaction is contained within the bundle.
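For illustration, the header fields described above can be sketched as a type; the field names are taken from this document, but the exact on-chain struct layout and Solidity types may differ:

```typescript
// Sketch of a roll-up block header, assuming the fields described above.
// The actual CanonicalStateChain struct may use different names or types.
interface RollupHeader {
  epoch: bigint;              // Layer 1 block height at publication time
  l2Height: bigint;           // height of the Layer 2 blocks in this bundle
  prevHash: string;           // hash of the previous roll-up block header
  stateRoot: string;          // expected L2 state root after applying the bundle
  blockRoot: string;          // root used to prove a block is in the bundle
  txRoot: string;             // root used to prove a transaction is in the bundle
  celestiaHeight: bigint;     // Celestia block height where the bundle data lives
  celestiaShareStart: bigint; // index of the first share holding bundle data
  celestiaShareLen: bigint;   // number of shares occupied by the bundle
}
```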
The canonical state chain contract has the ability to roll back the Canonical State Chain to a specific block. The challenge contract, which is defined within the canonical state chain contract, is the only contract allowed to initiate a chain rollback in the event of a successful challenge.
The Challenge contract allows anyone to challenge the validity of a block. As mentioned, if a block is proven to be invalid, the chain is rolled back to the previous block. Most challenges require a challenge fee, which is paid to the winner of the challenge. The fee mechanism serves three purposes: it incentivises good challenges, disincentivises frivolous challenges, and reimburses the defender for the cost of providing proof (gas). Challenges must be initiated within a valid challenge window, a period that starts when the block is published and ends after a set amount of time has passed. If a challenge is made outside of this window, it will be rejected. The window may differ between challenge types.
The challenge contract is used as a base for all challenges in the protocol. As seen below, a multitude of different types of challenges are imported into the challenge contract.
6.1.3 ChallengeHeader.sol
Contract that lets anyone challenge a block header against some basic validity checks. The following checks are made:
The epoch is greater than the previous epoch.
The l2Height is greater than the previous l2Height.
The prevHash is the previous block hash.
The bundle size is less than the maximum bundle size.
If any of these checks fail, the chain is rolled back to the previous block.
Just like with all challenges, the challenge window must be open; however, there is no challenge fee.
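A minimal sketch of these basic checks, assuming the header shape described earlier (illustrative only; ChallengeHeader.sol is the authoritative implementation):

```typescript
// Minimal header shape for this check; bundleSize here stands in for however
// the bundle length is encoded on-chain.
interface HeaderForCheck {
  epoch: bigint;
  l2Height: bigint;
  prevHash: string;
  bundleSize: bigint;
}

const MAX_BUNDLE_SIZE = 5000n; // assumed protocol parameter

function headerPassesBasicChecks(
  header: HeaderForCheck,
  prev: HeaderForCheck,
  prevHeaderHash: string
): boolean {
  return (
    header.epoch > prev.epoch &&          // the epoch is greater than the previous epoch
    header.l2Height > prev.l2Height &&    // the l2Height is greater than the previous l2Height
    header.prevHash === prevHeaderHash && // the prevHash is the previous block hash
    header.bundleSize < MAX_BUNDLE_SIZE   // the bundle size is less than the maximum
  );
}
```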
This is a challenge game; anybody can challenge the DA of a block.
Once initiated, the defender (block publisher) must provide proof within a short time window. (This proof is verified via the Blobstream contract). If they fail to do so, the challenger wins the challenge, and the chain is rolled back to the previous block.
The window on this challenge starts 80 mins after the block is published, and ends 6 hours after the block is published. The delay in the challenge window's start gives Celestia enough time to validate the data and publish the proof. The shortened end window gives time for subsequent challenges after data availability is proven.
Anyone can challenge data availability by calling challengeDataRootInclusion().
If a challenge is detected, a Hummingbird Defender will query the Celestia pointer info from the block header.
The defender then queries a Celestia node to generate a data inclusion proof.
That proof is submitted to defendDataRootInclusion() via Challenge.sol.
Challenge.sol uses the Celestia Blobstream DAOracle to verify the proof, and the challenge is marked as completed ("defender won").
If the DA challenge is not defended within a set time period, anyone can call settleDataRootInclusion(); the challenger wins and the CSC is rolled back.
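A hedged sketch of the challenger's side using ethers; the function names come from this document, but the argument lists and fee amount are assumptions:

```typescript
import { ethers } from "ethers";

// Assumed ABI fragments: the real Challenge.sol signatures may take different arguments.
const challengeAbi = [
  "function challengeDataRootInclusion(uint256 blockIndex) payable",
  "function settleDataRootInclusion(uint256 blockIndex)",
];

async function challengeDA(blockIndex: number) {
  const provider = new ethers.JsonRpcProvider(process.env.L1_RPC_URL);
  const signer = new ethers.Wallet(process.env.PRIVATE_KEY!, provider);
  const challenge = new ethers.Contract(process.env.CHALLENGE_ADDRESS!, challengeAbi, signer);

  // Initiate the DA challenge, paying the challenge fee (amount assumed here).
  const tx = await challenge.challengeDataRootInclusion(blockIndex, {
    value: ethers.parseEther("0.1"), // assumed fee
  });
  await tx.wait();

  // Later, if the publisher never defends within the window, anyone can settle
  // the challenge; the challenger wins and the CSC is rolled back:
  // await challenge.settleDataRootInclusion(blockIndex);
}
```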
This contract enables any user to directly upload valid layer 2 blocks from the data availability layer onto layer 1.
Data is loaded in two parts:
Celestia shares are loaded, along with the required merkle proofs and validator attestations.
Stored shares can then be decoded into Layer 2 headers and transactions.
Once loaded, the headers and transactions can be fetched from the ChainOracle by their respective hashes. This mechanism is crucial for the other challenges listed below.
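A rough illustration of this two-step loading flow; every function name below is a hypothetical stand-in, as the real ChainOracle interface is defined in the hummingbird-contracts repository:

```typescript
import { ethers } from "ethers";

// All names below are hypothetical placeholders for the ChainOracle interface.
const chainOracleAbi = [
  "function provideShares(bytes[] shares, bytes attestationProof)",  // step 1: load attested Celestia shares
  "function provideHeader(bytes32 shareKey, uint256[2] shareRange)", // step 2: decode an L2 header from stored shares
  "function headers(bytes32 headerHash) view returns (bytes)",       // fetch a decoded header by its hash
];

const provider = new ethers.JsonRpcProvider(process.env.L1_RPC_URL);
const chainOracle = new ethers.Contract(process.env.CHAIN_ORACLE_ADDRESS!, chainOracleAbi, provider);
```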
This is a challenge game; anybody can call challengeL2Block() to validate the correctness of a layer 2 block:
To initiate a challenge, a user must provide the number of the layer 2 block which they are challenging.
The Defender will then have to load that layer 2 block using the ChainOracle.
The Defender will then call defendL2Block(). The challenge contract will fetch the headers and transactions from the ChainOracle and check the following fields are correct: transactionsRoot, parentHash, number, timestamp.
Note: The stateRoot is not checked here. The stateRoot is validated by the challenge below.
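As a rough picture of what the defence verifies (not the actual Solidity; the specific comparisons below are assumptions based on the listed fields):

```typescript
// Sketch of the consistency check performed during an L2 block defence,
// assuming the challenged header and its parent were already loaded via the ChainOracle.
interface L2Header {
  number: bigint;
  timestamp: bigint;
  parentHash: string;
  transactionsRoot: string;
}

function headerIsConsistent(
  header: L2Header,
  parent: L2Header,
  parentHash: string,
  computedTxRoot: string
): boolean {
  return (
    header.transactionsRoot === computedTxRoot && // matches the root recomputed from the loaded transactions
    header.parentHash === parentHash &&           // links to the parent header
    header.number === parent.number + 1n &&       // heights are contiguous
    header.timestamp >= parent.timestamp          // time does not go backwards
  );
}
```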
Incomplete. This is an implementation of Cannon, the fault-proof engine. The primary function of this contract is to verify that the stateRoot of a layer 2 block is correct by executing the block on layer 1, calculating the correct stateRoot and comparing the results.
A challenge can be initiated by anyone by doing the following:
Load the challenged block and its parent's header into the Canonical State Chain.
Load the initial state that will be accessed during execution into MipsMemory.sol.
Call challengeExecution() to initiate the challenge.
LightLink's state transition engine (EVM) is compiled to the MIPS instruction set.
The challenger and defender do a binary search until they find the exact MIPS operation where their reported state executions differ.
The operation is run in MIPS.sol, an on-chain MIPS emulator. The emulated EVM calculates the correct output of the operation.
If the output matches the challenger's expectation, the challenger wins. Otherwise, the defender wins.
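A simplified sketch of the bisection idea; the real challenge is an interactive on-chain protocol, but the search itself reduces to a standard binary search over MIPS steps:

```typescript
// Illustrative bisection over MIPS steps: the parties agree on the state at step `lo`
// and disagree at step `hi`. The search narrows to the single step whose execution
// they dispute, and only that step is re-executed on-chain by the MIPS emulator.
function findDisputedStep(
  agreeAtStep: (step: number) => boolean, // true while both parties report the same intermediate state root
  lo: number,                             // last step known to be agreed upon
  hi: number                              // first step known to be disputed
): number {
  while (hi - lo > 1) {
    const mid = Math.floor((lo + hi) / 2);
    if (agreeAtStep(mid)) {
      lo = mid; // disagreement lies after `mid`
    } else {
      hi = mid; // disagreement lies at or before `mid`
    }
  }
  return hi; // the single MIPS operation to be executed in MIPS.sol
}
```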
The hummingbird client is a tool for creating and validating rollups. It interacts with all three layers: Layer 1, Layer 2 and the data availability layer to roll up, inspect, challenge and defend. The repository can be found here.
It can be run in three different modes:
Rollup: Publishing new bundles to Celestia and the Canonical State Chain.
Challenge: Prepare and initiate new challenges.
Defender: Listen for new challenges and attempt to defend them.
Rollups can be created using the following commands:
The client begins by retrieving the latest state of the rollup from the CanonicalStateChain smart contract deployed on Layer 1, to ensure synchronization.
Following this, it will fetch a set of blocks from the Layer 2 network. These blocks are bundled up, encoded (RLP) and uploaded to the data availability layer, Celestia.
Finally, the rollup block is assembled to encapsulate key information, including: epoch, the current Layer 1 height; l2Height, the height of the Layer 2 blocks in this bundle; and fields which point to the block bundle on Celestia. The rollup block is then published to the `CanonicalStateChain` contract on Layer 1.
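The flow above could be sketched roughly as follows; apart from `CanonicalStateChain`, the contract method, Celestia helper and field values are assumptions for illustration (the real Hummingbird client is a separate application):

```typescript
import { ethers } from "ethers";

// Stand-in for the Celestia node API used by the real client (hypothetical).
declare function uploadToCelestia(
  bundle: string
): Promise<{ height: bigint; shareStart: bigint; shareLen: bigint }>;

// Hypothetical ABI fragment: the actual CanonicalStateChain publishing method may differ.
const cscAbi = [
  "function pushBlock(tuple(uint256 epoch, uint256 l2Height, bytes32 prevHash, bytes32 stateRoot, bytes32 blockRoot, bytes32 txRoot, uint256 celestiaHeight, uint256 celestiaShareStart, uint256 celestiaShareLen) header)",
];

async function publishBundle(l2Blocks: Uint8Array[]) {
  const l1 = new ethers.JsonRpcProvider(process.env.L1_RPC_URL);
  const publisher = new ethers.Wallet(process.env.PUBLISHER_KEY!, l1);
  const csc = new ethers.Contract(process.env.CSC_ADDRESS!, cscAbi, publisher);

  // 1. RLP-encode the bundle of Layer 2 blocks.
  const bundle = ethers.encodeRlp(l2Blocks.map((b) => ethers.hexlify(b)));

  // 2. Upload the bundle to Celestia and receive a pointer to its shares.
  const pointer = await uploadToCelestia(bundle);

  // 3. Assemble the rollup header and publish it to the CanonicalStateChain.
  //    The roots and prevHash are placeholders here; the real client derives them
  //    from the bundle and the current chain head.
  const header = {
    epoch: await l1.getBlockNumber(),
    l2Height: 0n,
    prevHash: ethers.ZeroHash,
    stateRoot: ethers.ZeroHash,
    blockRoot: ethers.ZeroHash,
    txRoot: ethers.ZeroHash,
    celestiaHeight: pointer.height,
    celestiaShareStart: pointer.shareStart,
    celestiaShareLen: pointer.shareLen,
  };
  await (await csc.pushBlock(header)).wait();
}
```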
Challenges are used to validate the integrity of the rollups. Some challenges, like challenging execution, require complex preparation, multiple calls and watching for responses, while others are simple and require only a single step of initialization.
Challenges can be triggered manually from the CLI
Some challenges may be baseless and require a defender to submit proofs of correctness. Defending may require fetching the source data of the challenged block from Celestia, providing data to the ChainOracle, and generating proofs.
Defences can be triggered manually or automatically via the CLI: