
mina_bridge 🌉

Zero-knowledge state bridge from Mina to Ethereum

About

This project implements the generation, posting, and verification of proofs of the validity of Mina states on an EVM chain, which will serve as a foundation for token bridging.

This MVP verifies Mina state opening proofs in Ethereum without taking into account consensus validation and Pickles verification.

⚠️ We're currently focused on orchestrating the end-to-end integration of Mina proof verification, from extraction on the node to final validation on the Sepolia Ethereum testnet, including the verification of account inclusion in the stored state.

  • Circuit for verification algorithm. 🚫 All the tasks described below can be started once the new circuit framework and its API in Rust are ready to be used.
    • Integrate the current Kimchi partial verification in o1js with the verifier circuit. The rework in Rust should take 2 weeks, depending on the new API.
    • Complete the Kimchi verifier circuit: final verification. (This will stay pending, since we need some improvements in o1js for the MSM.)
  • KZG Prover (state proof wrapper):
    • Connect Kimchi + KZG prover with current verifier circuit.
  • Verifier Smart Contract:
    • Inputs deserialization.
    • Partial verification.
    • Final verification.
  • Account state utility: A user will be able to query their account state and check that the retrieved state corresponds to the last verified Mina state in Ethereum.
    • Verification of Merkle proof in Solidity. Done ✅
    • Integration between Merkle proof verification and Mina state proof verification. WIP 🔨

Design objectives

mina_bridge will include:

  1. Backend service for periodically wrapping and posting Mina state proofs to an EVM chain.
  2. A “wrapping” module for Mina state proofs to make them efficient to verify on the EVM.
  3. The Solidity logic for verifying the wrapped Mina state proofs on an EVM chain.
  4. Browser utility for smart contract users: a Mina address is provided as input. The state is looked up against Mina and then shared as a Mina state lookup Merkle proof wrapped inside an efficient proof system.
  5. A Solidity contract utility that smart contract developers or users can execute on an EVM chain to feed in a Mina state lookup proof, which will be checked against the latest posted Mina state proof to verify that this Mina state is valid.

⚠️ Disclaimer

mina_bridge is a PoC; it currently lacks essential features, and correct functionality is not guaranteed.

⭐ How to run the PoC script ⭐

Proof wrapper and Verifier contract

In the root folder, run:

make

This will run the polling service, the verifier circuit, the KZG prover and the Ethereum smart contract verifier.

Account state utility

Server

Go to the state_utility/server folder and run:

cargo run

This will start the Account state utility server on port 3000. The server exposes one method, account_state, which receives the following arguments:

  • mina_public_key: The public key associated with a Mina account.
  • mina_rpc_url: URL of the Mina node GraphQL API.
  • eth_rpc_url: URL of the Ethereum gateway.
  • verifier_address: Address of the Mina bridge verifier Ethereum contract.

This method returns the following JSON:

{
  "balance": <mina_account_balance>,
  "verified": <mina_account_state_validity>
}

Where:

  • mina_account_balance is the balance of the Mina account that corresponds to <mina_public_key>.
  • mina_account_state_validity: true if the state of the Mina account that corresponds to <mina_public_key> is present in the ledger hash stored in the Verifier Ethereum contract with address <verifier_address>. false otherwise.
Examples

We assume the Account state utility server, the Mina node and the Ethereum node are running on localhost.

To fetch the state of a Mina account in Devnet:

> curl "http://localhost:3000/account_state/B62qpWKzx7e1mmwVf8dJAPFQPGVpZP9pJaobhvg8iagqU2r1bEyndMa/http%3A%2F%2Flocalhost%3A3085%2Fgraphql/http%3A%2F%2Flocalhost%3A8545/0xB7f8BC63BbcaD18155201308C8f3540b07f84F5e"
{"valid":true,"balance":20000000000000}

This means that the Mina account with the public key B62qpWKzx7e1mmwVf8dJAPFQPGVpZP9pJaobhvg8iagqU2r1bEyndMa has 20000000000000 nanoMINA stored in Devnet and its state is valid according to the Mina bridge.

Integration test

In the root folder, run:

make check_account

This will run the account state utility as an integration test. NOTE: Run the Proof wrapper and the Verifier contract first.

Architecture

This is subject to change.

    flowchart TB
        MINA[(Mina)]-->A(Periodic proof poller)
        -->|Kimchi + IPA + Pasta proof| B(State proof wrapper)
        -->|Kimchi + KZG + bn254 proof| B3

        subgraph EB["EVM Chain"]
        direction LR
        B1["Block 1"] --> B2["Block 2"] 
            --> B3["Block 3"] --> B4["Block 4"]
        end

        U((User))<-->WEBUI{{Web UI}}<-->MINA
        U<-->S{{Solidity verifier utility}}
        B3-->|Proof request| S

Data flow

    flowchart LR
        MINA(Mina node)
        --> |Mina proof,\nVerification key| POLLING[Polling service]
        --> |JSON Proof| CIRCUIT[Verifier circuit]
        --> |Gates| KZG[KZG prover]
        --> |KZG proof,\nVerification key| CONTRACT[Verifier contract]
        --> |is_valid_proof?| U(User)

Note that, in the second diagram, the Verification key taken from the Mina node will differ for each Mina state proof, while the Verification key generated from the Verifier circuit should always be the same.

Usage

Integration test

In the root folder, run:

MINA_RPC_URL=<rpc_url> make

Where <rpc_url> is the URL of the Mina node GraphQL API used to query the Mina state proof.

This command will:

  • Invoke the polling service and query the last Mina chain state and a Pasta+IPA proof, then send both to the public_input_gen/ crate.
  • This crate will compute needed data (SRS, public inputs) for feeding the state proof into an o1js verifier circuit.
  • The circuit will verify the proof and output the gate system to a KZG prover.
  • The KZG prover will generate another proof (BN254+KZG) of this verification. This makes it suitable to verify in an Ethereum smart contract. The final proof including the embedded state will be sent to the Solidity verifier.
  • The verifier will be deployed in Anvil (a local test blockchain) and a bash script will send a transaction with the state+proof data for running the final verification. If successful, the contract will store the state data and will expose an API for the user to retrieve it, knowing that this data was zk-verified.

Account state utility

In the root folder, run:

MINA_RPC_URL=<rpc_url> sh state_utility/run.sh <public_key>

Where <rpc_url> is the URL of the Mina node GraphQL API used to query the Mina state proof.

This will return the balance of the account associated with the <public_key> passed as argument.

Roadmap

For the Bridge MVP, the utility will be implemented as a CLI that will receive a public key as an argument. The development stages are defined below:

  1. User queries the balance of one of its accounts. The service queries the Mina node without validating the state in Ethereum.
  2. User queries the balance of one of its accounts. The service queries the Mina node and the related Mina state opening proof. The service queries the Bridge to fetch its stored opening proof to compare the proofs and check that the retrieved Mina state was validated in Ethereum. If the Mina state is valid, the service returns the balance.

The diagram below shows the sequence of a user sending a request to the service implemented as stated in the development stage 2:

sequenceDiagram
 User->>Utility: balance?
 Utility->>Mina: balance?
 Mina->>Utility: balance
 Utility->>Mina: related proof?
 Mina->>Utility: related proof
 Utility->>Bridge: stored proof?
 Bridge->>Utility: stored proof
 Utility->>Bridge: are proofs equal?
 Bridge->>Utility: yes/no
 Utility->>User: balance (if yes)

Components of this Repo

This repository is composed of the following components:

Polling service

This module contains a script to fetch the latest Mina state proof from a Mina node and parse the proof into a JSON file.

Dependencies

You need to have access to a Mina node. To run a node, follow the installation steps.

Running

In polling_service/, run:

MINA_RPC_URL=<rpc_url> ./run.sh

This script will:

  1. Fetch the latest Mina state proof from a Mina node
  2. Clone our Mina mainnet fork (1.4.0)
  3. Use the cloned fork to parse the fetched proof into a JSON file

Verifier circuit

This module contains the o1js circuit used to recursively verify Mina state proofs. A proof of the circuit will be constructed in subsequent modules for validating the state.

The code is written entirely in TypeScript using the o1js library and is heavily based on Kimchi's original verifier implementation.

Running Verifier circuit

In verifier_circuit/, run:

make


This command will create:

  • the KZG proof,
  • the public inputs,
  • the prover index and
  • the SRS

The proof will be used in the next step.

Testing

npm run test
npm run testw # watch mode

will execute Jest unit and integration tests of the module.

Verifier circuit Structure

  • poly_commitment/: Includes the PolyComm type and methods used for representing a polynomial commitment.
  • prover/: Proof data and associated methods needed by the verifier. The Fiat-Shamir heuristic is included here (ProverProof.oracles()).
  • serde/: Mostly deserialization helpers for using data from the verifier_circuit_tests/ module, like a proof made over a testing circuit.
  • util/: Miscellaneous utility functions.
  • verifier/: The core code used for verifying a Kimchi + IPA + Pasta proof. Here:
    • batch.ts includes the partial verification code used for verifying a batch of proofs.
    • verifier.ts has the main circuit for verification; it currently executes a minimal final verification over a batch of partially verified proofs.
    • sponge.ts has a custom sponge implementation which extends the Poseidon.Sponge type from o1js.
  • test/: JSON data used for testing, which are derived from the verifier_circuit_tests/.
  • SRS.ts contains a type representing a Universal Reference String (but uses the old Structured Reference String name).
  • polynomial.ts contains a type used for representing and operating with polynomials.
  • alphas.ts contains a type representing a mapping between powers of a challenge (alpha) and different constraints. The linear combination resulting from these two will get you the main polynomial of the circuit.
  • main.ts is the main entrypoint of the module.

Verifier circuit tests

Contains a Rust crate that has Kimchi as a dependency and runs some of its components to generate data for feeding and comparing tests inside the verifier circuit.

For executing the main integration flow, do:

cargo run

This will run the verification of a test circuit defined in Kimchi and will export some JSON data into verifier_circuit/src/test.

For executing unit tests, do:

cargo test -- --nocapture

This will execute some unit tests and output results that can be used as reference values in analogous tests inside the verifier circuit.

KZG Prover

This module contains a Rust crate for generating a KZG proof. This proof is used in the eth_verifier module.

It receives the state proof and the public inputs from the verifier_circuit module and:

  • calculates the verifier index
  • calculates the linearization

and sends them, together with the proof, the public inputs and the SRS, to the verifier contract.

Verifier smart contract

eth_verifier/ holds the Mina state verifier in Solidity, implemented using Foundry. The contract exposes an API for retrieving zk-verified data from the last Mina state.

Install dependencies by running:

make setup

Verifying a proof on testnet

This repo has commands to easily deploy the Verifier contract and verify a proof on Sepolia Testnet. To deploy the Verifier contract to Sepolia, run:

PRIVATE_KEY=<your_private_key> ETH_RPC_URL=<rpc_url> make sepolia.deploy

Where:

  • <your_private_key> is the private key of the account you will use to sign and pay for the deployment transaction.
  • <rpc_url> is the URL of the Sepolia endpoint.

To upload the proof generated with kzg_prover, run:

PRIVATE_KEY=<your_private_key> ETH_RPC_URL=<rpc_url> CONTRACT_ADDRESS=<verifier_address> make sepolia.upload_proof

Where <verifier_address> is the address of the deployed Verifier contract. It reads the proof, its public inputs and the index linearization and uploads them to the testnet so that the verifier contract stores them.

To verify the uploaded proof, run:

PRIVATE_KEY=<your_private_key> ETH_RPC_URL=<rpc_url> CONTRACT_ADDRESS=<verifier_address> make sepolia.verify

This will run the partial and final verification over the uploaded proof and return the verification result as a boolean: true if the uploaded proof is valid, false otherwise.

Local usage and deployment

The contract will be deployed and verification should execute after running the whole project workflow (executing make in the root) to query, parse and serialize the last Mina state data and proof.

To do this manually, follow the steps below. The files proof.mpk and state.mpk must be present in eth_verifier/.

Start the local chain with:

make run_node

and then deploy and verify the data:

make deploy_and_verify

After deployment, Anvil will return a list of deployed contracts, since it also deploys the libraries needed by the verifier:

...

##### anvil-hardhat
✅  [Success]Hash: 0x312036beb087e610e6ba100a1ef0653c31c28db4a924aee13e6550a4181a31ed
Contract Address: 0x67d269191c92Caf3cD7723F116c85e6E9bf55933
Block: 17
Paid: 0.005753394722296 ETH (1825720 gas * 3.1513018 gwei)


Transactions saved to: eth_verifier/broadcast/Deploy.s.sol/31337/run-latest.json

Sensitive values saved to: eth_verifier/cache/Deploy.s.sol/31337/run-latest.json

The last contract deployed is the verifier; its address can be used for interacting with the contract. Check whether verification was successful by running:

cast call <CONTRACT_ADDR> 'is_state_available()(bool)'

If true, you can retrieve state data from the contract storage:

cast call <CONTRACT_ADDR> 'retrieve_state_creator()(string)'
cast call <CONTRACT_ADDR> 'retrieve_state_hash()(uint256)'
cast call <CONTRACT_ADDR> 'retrieve_state_height()(uint256)'

Testing Verifier Smart Contract

Just run:

make test

Notes related to cast usage

  • For invoking non-view functions in the contract, you need to publish a transaction via cast send. Getter functions can be invoked with cast call.
  • Some commands may require you to encode the calldata before sending. In this case you can use cast calldata.

For more information on Ethereum transactions and encoding you can visit the following resources:

Merkle proof verifier

This component is composed of:

  • Ledger hash parser: Fetches the ledger hash represented as a Merkle root and parses it to be EVM-friendly. Its logic is in state_utility.
  • Merkle tree parser: Fetches the account state represented as a Merkle leaf and its corresponding Merkle path and parses them to be EVM-friendly. Its logic is in merkle_path.
  • Verifier contract: Verifies the account state inclusion by verifying the Merkle proof in Solidity. Its logic is in eth_verifier.

Other components

  • kzg_prover: Rust code for generating a KZG proof. This proof is used in the eth_verifier.
  • public_input_gen/: Rust code for generating a Mina state proof. This proof is used in the verifier_circuit.
  • srs/: Contains test SRSs for the Pallas and Vesta curves.
  • test_prover/: Typescript code using o1js library. This is a test prover for the Kimchi proof system. It's a PoC and will be removed in the near future.

Kimchi proving system

Kimchi is a zero-knowledge proof system that’s a variant of PLONK.

Kimchi is a series of enhancements, optimizations, and modifications implemented on top of PLONK. For example, it removes PLONK's trusted setup requirement by incorporating a bulletproof-style polynomial commitment into the protocol. In this manner, there is no need to rely on the honesty of the participants of a trusted setup.

Kimchi increases PLONK's register count from 3 to 15 by adding 12 registers. With more registers, Kimchi can incorporate gates that accept multiple inputs, as opposed to just two. This unveils new opportunities; for instance, a scalar multiplication gate requires a minimum of three inputs: a scalar and two coordinates for the curve point.

New proof systems resembling PLONK employ custom gates to efficiently represent frequently used functionalities, as opposed to connecting a series of generic gates. Kimchi is among these innovative protocols.

In Kimchi, a gate can directly record its output onto the registers used by the subsequent gate.

Another enhancement in Kimchi involves the incorporation of lookups for performance improvement. Occasionally, certain operations can be expressed in a tabular form, such as an XOR table.

Kimchi begins as an interactive protocol, which is converted into a non-interactive one through the Fiat-Shamir transform.
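To make the idea concrete, here is a minimal, illustrative Rust sketch of a Fiat-Shamir transcript: prover messages are absorbed into a running hash, and verifier challenges are derived from it. It uses SHA-256 from the sha2 crate purely for illustration; Kimchi's actual transcript uses a Poseidon-based sponge, and none of these names come from the codebase.

```rust
use sha2::{Digest, Sha256};

/// Illustrative Fiat-Shamir transcript: each prover message is absorbed
/// into a running hash, and verifier challenges are derived from it, so
/// no interaction with a real verifier is needed.
struct Transcript {
    hasher: Sha256,
}

impl Transcript {
    fn new(domain_separator: &[u8]) -> Self {
        let mut hasher = Sha256::new();
        hasher.update(domain_separator);
        Transcript { hasher }
    }

    /// Absorb a prover message (e.g. a serialized commitment).
    fn absorb(&mut self, message: &[u8]) {
        self.hasher.update(message);
    }

    /// Derive a challenge deterministically from everything absorbed so far.
    fn challenge(&mut self) -> [u8; 32] {
        let digest = self.hasher.clone().finalize();
        // Re-absorb the digest so successive challenges differ.
        self.hasher.update(&digest);
        digest.into()
    }
}

fn main() {
    let mut transcript = Transcript::new(b"kimchi-example");
    transcript.absorb(b"witness commitment bytes");
    // Stands in for a verifier challenge such as beta or gamma.
    let beta = transcript.challenge();
    println!("beta = {:02x?}", &beta[..4]);
}
```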

Proof Construction & Verification

Sequence diagram linked to proof-systems/kimchi/src/verifier.rs

Commitments to secret polynomials

Links to the associated code.

public input & witness commitment

beta

gamma

permutation commitment


Commitments to quotient polynomials

Links to the associated code.

alpha

quotient commitment


Verifier produces an evaluation point

Links to the associated code.

zeta

change of sponge

recursion challenges


Prover provides needed evaluations for the linearization - 1

Links to the associated code.

zeta

negated public input

15 register/witness - 6 sigmas evaluations


Prover provides needed evaluations for the linearization - 2

Links to the associated code.

TODO


Batch verification of evaluation proofs

Links to the associated code.

v,u

polynomials that have an evaluation proof


Pickles - Mina’s inductive zk-SNARK composition system

Pickles uses a pair of amicable curves called Pasta in order to deliver incremental verifiable computation efficiently.

The two curves Pallas and Vesta (pa(llas ve)sta) were created by the Zcash team. Each curve's scalar field is the other curve's base field, which is practical for recursion.

These curves are referred to as “tick” and “tock” within the Mina source code.

  • Tick - Vesta (a.k.a. Step), constraint domain size 2¹⁸ [block and transaction proofs]
  • Tock - Pallas (a.k.a. Wrap), constraint domain size 2¹² [signatures]

The Tock prover does less (only performs recursive verifications and no other logic), so it requires fewer constraints and has a smaller domain size. Internally Pickles refers to Tick and Tock as Step and Wrap, respectively.

One curve handles the current proof, while the other is used to verify previous proofs.

Tock is used to prove the verification of a Tick proof and outputs a Tick proof. Tick is used to prove the verification of a Tock proof and outputs a Tock proof. In other words,

  • Prove_tock(Verify(Tick)) = Tick_proof

  • Prove_tick(Verify(Tock)) = Tock_proof

Description

Both Tick and Tock can verify at most 2 proofs of the opposite kind, though theoretically more are possible.

Currently, in Mina we have the following situation.

  • Every Tock always wraps 1 Tick proof.
  • Tick proofs can verify 2 Tock proofs:
    • The blockchain SNARK takes a previous blockchain SNARK proof and a transaction proof.
    • Verifying two Tock transaction proofs.

Pickles works over Pasta, a cycle of curves consisting of Pallas and Vesta, and thus it defines two generic circuits, one for each curve. Each can be thought of as a parallel instantiation of the Kimchi proof system. These circuits are not symmetric and have somewhat different functions:

  • Step circuit: this is the main circuit that contains application logic. Each step circuit verifies a statement and potentially several (at most 2) other wrap proofs.
  • Wrap circuit: this circuit merely verifies the step circuit, and does not have its own application logic. The intuition is that every time an application statement is proven it’s done in Step, and then the resulting proof is immediately wrapped using Wrap.

Both Step and Wrap circuits additionally do a lot of recursive verification of the previous steps. Without getting too technical, Step (without loss of generality) does the following:

  1. Execute the application logic statement (e.g. the Mina transaction is valid)
  2. Verify that the previous Wrap proof is (first-)half-valid (perform only the main checks that are efficient for the curve)
  3. Verify that the previous Step proof is (second-)half-valid (perform the secondary checks that were inefficient to perform when the previous Step was wrapped)
  4. Verify that the previous Step correctly aggregated the previous accumulator, e.g. acc_2 = Aggregate(acc_1, π_{step,2})

Step-Wrap Diagram

Accumulator

The accumulator is an abstraction introduced for the purpose of this diagram. In practice, each Kimchi proof consists of (1) commitments to polynomials, (2) evaluations of these polynomials, and (3) the opening proof.

What we refer to as accumulator here is actually the commitment inside the opening proof. It is called sg in the implementation and is semantically a polynomial commitment to h(X) (b_poly in the code) — the poly-sized polynomial that is built from IPA challenges.

It’s a very important polynomial – it can be evaluated in log time, but the commitment verification takes poly time, so the fact that sg is a commitment to h(X) is never proven inside the circuit. For more details, see Proof-Carrying Data from Accumulation Schemes, Appendix A.2, where sg is called U.

In pickles, what we do is that we “absorb” this commitment sg from the previous step while creating a new proof.

That is, for example, Step 1 will produce this commitment, denoted as acc1 on the diagram, as part of its opening proof, and Step 2 will absorb this commitment. This "absorption" is what Wrap 2 will prove (and, partially, Step 3 will also refer to the challenges used to build acc1, but this detail is completely avoided in this overview). In the end, acc2 will be the result of Step 2, so in a way acc2 "aggregates" acc1, which somewhat justifies the language used.

Analysis of the Induction (recursion) method applied in Pickles

The Verifier is divided into 2 modules, one part Slow and one part Fast.

Figure 1

S0 is the initial statement, U is the Update algorithm, the Pi are the proofs, and the S's are the updated statements.

Figure 2

On top of each Pi proof, we run a Fast verifier. With the Pi proof and the cumulative Statement from the previous step, the U algorithm is applied and a new updated Statement is created. This new updated Statement is the input of the Slow part of the Verifier, but we don't run the Slow Verifier until we reach the end of the whole round.


Execution of the Slow Verifier (which is very slow) can be deferred in sequences: the current Slow Verifier always accumulates onto the previous statement. This implicitly 'runs Vs on S1' as well.


Remember that the S's are statements that accumulate, so each one has information from the previous ones.

Figure 3

When we reach the last round, we see that the intermediate Slow Verifiers disappear, as they are no longer useful to us.

Figure 4

Attention!! We haven't executed any Slow Verifier yet; we have only run the Fast Verifier in each round.

Therefore, in the last step, we execute the current Fast Verifier on its Pi, and the last Slow Verifier on the final S. This may take 1 second, but it accumulates all the previous ones.

Figure 5


Everything inside the large red square in the following figure has already been processed by the time we reach the last round.

Figure 6


Let's now see how the Verifier Fast is divided.

Figure 7

Vf corresponds to field operations in a field F, and Vg corresponds to group operations in a group G.

Figure 8

The proof Pi is divided into 2 parts: one corresponds to group operations in G, and the other is exposed, as a public input to the circuit, being the part of the proof that is necessary to execute Vf.

Pickles Technical Diagrams

The black boxes are data structures that have names and labels following the implementation.
MFNStep/MFNWrap are abbreviations of MessagesForNextStep and MessagesForNextWrap, used for brevity. Most other datatypes are exactly the same as in the codebase.

The blue boxes are computations. Sometimes, when the computation is trivial or only vaguely indicated, it is denoted as a text sign directly on an arrow.

Arrows are blue by default and denote moving a piece of data from one place to another with no (or very little) change. Light blue arrows denote a witness query that is implemented through the handler mechanism. The "chicken foot" connector means that the arrow accesses just one field in an array: such an arrow could connect e.g. an input field of type old_a: A in a structure Vec<(A,B)> to an output new_a: A, which just means that we are inside a for loop and this computation is done for all the elements in the vector/array.

Figure

Consensus

Mina employs Ouroboros Samasika as its consensus mechanism, subsequently denoted as Samasika. It provides three essential commitments:

  • High decentralization - Self-bootstrap, uncapped participation and dynamic availability
  • Succinctness - Constant-time synchronization with full-validation and high interoperability
  • Universal composability - Proven security for interacting with other protocols, no slashing required

Joseph Bonneau, Izaak Meckler, Vanishree Rao, and Evan Shapiro collaborated to create Samasika, establishing it as the initial succinct blockchain consensus algorithm.
The complexity of fully verifying the entire blockchain is independent of chain length.
Samasika takes its name from the Sanskrit term meaning small or succinct.

Chain selection rules

Samasika uses two consensus rules: one for short-range forks and one for long-range forks.

Short-range fork rule

This rule is triggered whenever the fork is such that the adversary has not yet had the opportunity to mutate the block density distribution.
A fork is considered short-range if it took place within the last m blocks. The straightforward implementation of this rule involves consistently storing the most recent m blocks, yet, in the context of a succinct blockchain, this is undesirable. Mina's Samasika instead follows a methodology that requires information about only two blocks: the concept involves a decentralized checkpointing algorithm.

Long-range fork rule

When a malicious actor generates a long-range fork, it gradually distorts the leader selection distribution, resulting in a longer adversarial chain. At the start, the dishonest chain will have a reduced density, but eventually, the adversary will work to elevate it. Therefore, the only factor we can depend on is the variation in density in the initial slots after the fork, which is known as the critical window.
The reasoning is that the critical window of the honest chain is very likely to have a higher density, because this chain has the most stake.

Decentralized checkpointing

Samasika employs decentralized checkpointing to discern the nature of a fork, categorizing it as either short-range or long-range.

  • Start checkpoint - State hash of the first block of the epoch.
  • Lock checkpoint - State hash of the last known block in the seed update range of an epoch (not including the current block)

Remember, a fork is categorized as short-range if either:

  • The fork point of the candidate chains is in the same epoch.
  • The fork point is in the previous epoch with the same lock_checkpoint.

As Mina prioritizes succinctness, it implies the need to maintain checkpoints for both the current and the previous epoch.

Short-range fork check

Keep in mind that short-range forks are those whose fork point occurs after the lock_checkpoint of the previous epoch; otherwise, the fork qualifies as a long-range fork.
The position of the previous epoch is measured relative to a block's perspective. In cases where candidate blocks belong to distinct epochs, each will possess distinct current and previous epoch values.
Alternatively, if the blocks belong to the same epoch, they will both reference the identical previous epoch. Thus we can simply check whether the blocks have the same lock_checkpoint in their previous epoch data.
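A minimal Rust sketch of the same-epoch case of this check (types and field names are illustrative, not Mina's actual ones):

```rust
/// Checkpoint data a block keeps for one epoch.
struct EpochData {
    epoch: u64,
    lock_checkpoint: [u8; 32], // state hash
}

/// The consensus fields of a block relevant to the fork check.
struct BlockEpochInfo {
    current_epoch: EpochData,
    previous_epoch: EpochData,
}

/// Same-epoch short-range fork check: blocks in the same epoch share their
/// previous epoch, so comparing lock checkpoints suffices. The cross-epoch
/// case needs extra care and is elided here.
fn is_short_range(a: &BlockEpochInfo, b: &BlockEpochInfo) -> bool {
    a.current_epoch.epoch == b.current_epoch.epoch
        && a.previous_epoch.lock_checkpoint == b.previous_epoch.lock_checkpoint
}
```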

Sliding window density

Let's describe Mina's succinct sliding window density algorithm used by the long-range fork rule: in detail, how windows are represented in blocks and how to compute the minimum window density.

Nomenclature
  • We say a slot is filled if it contains a valid non-orphaned block.
  • A w-window is a sequential list of slots s1,...,sw of length w.
  • A sub-window is a contiguous interval of a w-window.
  • The density of a w-window (or sub-window) is the number of non-orphaned blocks within it.
  • We use the terms window, density window, sliding window and w-window synonymously.
  • v is the length in slots by which the window shifts (the shift parameter): slots_per_sub_window.
  • w is the window length in slots (the sliding window is a w-long window that shifts v slots at a time).

The Samasika research paper presents security proofs that determine the secure values for v, w, and sub-windows per window.
A sliding window can also be viewed as a collection of sub-windows.
Rather than storing a window as clusters of slots, Samasika focuses solely on the density of each sub-window.
The density of a window is computed as the sum of the densities of its sub-windows.

Given a window W that is a list of sub-window densities, the window density is: density(W) = sum(W)
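As a minimal Rust sketch of this rule (the flat list-of-densities layout is an assumption for illustration):

```rust
/// Window density, per the rule above: the sum of the sub-window densities.
fn density(sub_window_densities: &[u32]) -> u32 {
    sub_window_densities.iter().sum()
}
```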

Window structure

We use the phrase "window at sub-window s" to refer to the window W whose most recent global sub-window is s.
In the Samasika paper, the window structure actually consists of the 11 previous sub-window densities, the current sub-window density, and the minimum window density: a total of 13 densities.
The most recent sub-window may be a previous sub-window or the current sub-window.

Minimum window density

The minimum window density at a given slot is defined as the minimum window density observed over all previous sub-windows and previous windows, all the way back to genesis.
When a new block B with parent P is created, the minimum window density is computed like this:
B.min_window_density = min(P.min_window_density, current_window_density)
where current_window_density is the density of B's projected window.
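A one-line Rust sketch of this update rule (argument names are illustrative):

```rust
/// Minimum window density update when block B is created on parent P,
/// mirroring B.min_window_density = min(P.min_window_density, current_window_density).
fn update_min_window_density(parent_min_window_density: u32, current_window_density: u32) -> u32 {
    parent_min_window_density.min(current_window_density)
}
```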

The relative sub-window i of a sub-window sw is its index within the window.

Ring-shift

When we shift a window [d0, d1, ..., d10] in order to add in a new sub-window d11, we could evict the oldest sub-window d0 by shifting down all of the other sub-windows. Unfortunately, shifting a list in a SNARK circuit is very expensive.
It is more efficient (and also equivalent) to just replace the sub-window we wish to evict by overwriting it with the new sub-window, like this: sub_window_densities: d11 | d1 | d2 | d3 | d4 | d5 | d6 | d7 | d8 | d9 | d10
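A small Rust sketch of this in-place eviction, assuming 11 sub-windows per window and indexing the ring buffer by the incoming sub-window's global index (both assumptions for illustration):

```rust
const SUB_WINDOWS_PER_WINDOW: usize = 11;

/// Ring-shift: instead of shifting every density down to evict the oldest
/// sub-window, overwrite its slot with the new density in place.
fn ring_shift(
    densities: &mut [u32; SUB_WINDOWS_PER_WINDOW],
    global_sub_window: u64, // global index of the incoming sub-window
    new_density: u32,
) {
    let slot = (global_sub_window % SUB_WINDOWS_PER_WINDOW as u64) as usize;
    densities[slot] = new_density; // evicts the oldest density
}
```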

Projected window

Generating a new block and determining the optimal chain in accordance with the long-range fork rule involve the computation of a projected window.
Given a window W and a future global slot next, the projected window of W to slot next is a transformation of W into what it would look like if it were positioned at slot next.
For example, when a new block B is produced with parent block P, the height of B will be the height of P plus one, but the global slot of B will depend on how much time has elapsed since P was created.
According to the Samasika paper, the window of B must be initialized based on P's window, then shifted because B is ahead of P and finally the value of B's sub-window is incremented to account for B belonging to it.
Remember that the calculation of window density, including sub-window s, only occurs once the current sub-window is greater than s, i.e. after s becomes a previous sub-window. Therefore, if next is k sub-windows ahead of W, we must shift only k - 1 times, because we must keep the most recent previous sub-window.

Now that we know how much to ring-shift, the next question is what density values to shift in. Remember that when projecting W to global slot next, we said that there are no intermediate blocks. That is, all of the slots and sub-windows are empty between W's current slot and next. Consequently, we must ring-shift in zero densities. The resulting window W is the projected window.

Recall this diagram:

Suppose window W's current sub-window is 11, whose density is d11, and d1 is the oldest sub-window density.

Now imagine we want to project W to global slot next = 15. This is k = 15 - 11 = 4 sub-windows ahead of the most recent sub-window. Therefore, we compute shift_count = min(max(k - 1, 0), sub_windows_per_window) in this case: shift_count = min(max(4 - 1, 0), 11) = 3

Ring-shift in 3 zero densities to obtain the projected window.
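Putting the pieces together, a sketch of the projection that reuses ring_shift and the constant from the sketch above (names and signatures are illustrative, not the actual implementation):

```rust
/// Project window `densities` from `current_sub_window` to the future
/// global sub-window of slot `next`, shifting in zero densities because
/// every intermediate slot is empty.
fn project_window(
    densities: &mut [u32; SUB_WINDOWS_PER_WINDOW],
    current_sub_window: u64,
    next_sub_window: u64,
) {
    let k = next_sub_window.saturating_sub(current_sub_window);
    // shift_count = min(max(k - 1, 0), sub_windows_per_window):
    // keep the most recent previous sub-window, so shift only k - 1 times.
    let shift_count = k.saturating_sub(1).min(SUB_WINDOWS_PER_WINDOW as u64);
    for i in 1..=shift_count {
        ring_shift(densities, current_sub_window + i, 0);
    }
}
```

With the worked example above (current sub-window 11, next = 15), k = 4 and shift_count = 3, so three zero densities are shifted in.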

We can derive some instructive cases from the general rule.

Genesis window

Anything related to Genesis windows is not involved in the Mina Bridge.

Relative minimum window density

When Mina engages in "chain selection" in the long-range fork rule, it doesn't directly employ the minimum window densities found in the current and candidate blocks.
Rather than that, Mina opts for the relative minimum window density...

Remember that the minimum window density consistently decreases. Consequently, if a peer has been offline for a while and wants to reconnect, their current best chain might exhibit a higher minimum window density compared to the canonical chain candidate. Additionally, the long-range fork rule dictates that the peer choose the chain with the superior minimum density.
The calculation of the minimum window density does not take into account the relationship between the current best chain and the canonical chain with respect to time.
Within Samasika, time is encapsulated and safeguarded by the notions of slots and the VRF. When computing the minimum window density, it is imperative to factor in these elements as well.
The relative minimum window density solves this problem by projecting the joining peer's current block's window to the global slot of the candidate block.
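A sketch of this projection-based comparison, building on the helpers above; the ChainTip layout is an illustrative stand-in, not Mina's actual block structure:

```rust
/// Minimal stand-in for the consensus data carried by a chain tip.
struct ChainTip {
    global_sub_window: u64,
    min_window_density: u32,
    sub_window_densities: [u32; SUB_WINDOWS_PER_WINDOW],
}

/// Relative minimum window density of `tip` with respect to `candidate`:
/// project the tip's window to the later of the two sub-windows, then
/// take the minimum with the stored minimum window density.
fn relative_min_window_density(tip: &ChainTip, candidate: &ChainTip) -> u32 {
    let max_sub_window = tip.global_sub_window.max(candidate.global_sub_window);
    let mut projected = tip.sub_window_densities;
    project_window(&mut projected, tip.global_sub_window, max_sub_window);
    tip.min_window_density.min(projected.iter().sum())
}
```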

Protocol

This section outlines the consensus protocol in terms of two events: Initialize consensus and Select chain.

In the following description, dot notation is used to refer to the local data members of peers. For example, given peer P, we use P.genesis_block and P.tip to refer to the genesis block and the currently selected chain, respectively.

Initialize consensus

To initialize consensus, a peer MUST load the genesis block, get the tip, bootstrap, and catch up.
Bootstrapping consensus requires the ability to synchronize epoch ledgers from the network.
All peers MUST have the ability to load both the staking epoch ledger and the next epoch ledger, either from disk or by downloading them. P2P peers MUST also make these ledgers available to other peers.

Select chain

Each time a peer's chains receive an update, the select chain event takes place.
A chain is said to be updated anytime a valid block is added or removed from its head. The chain selection algorithm also incorporates certain tiebreak logic.
Supplementary tiebreak logic becomes necessary when assessing chains with identical length or equal minimum density.

Let P.tip refer to the top block of peer P's current best chain. Assuming an update to either P.tip or P.chains, P must update its tip similar to this:

The following selectSecureChain algorithm receives the peer's current best chain P.tip and its set of known valid chains P.chains and produces the most secure chain as output.

And the selectLongerChain algorithm:
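The referenced figures are not reproduced here, but the following sketch captures the shape of the two algorithms as described above, reusing ChainTip and relative_min_window_density from the previous sketches; length, the short-range predicate and all tiebreak logic are simplified placeholders:

```rust
struct Chain {
    length: u64,
    tip: ChainTip, // from the sketch above
}

/// selectSecureChain sketch: walk the known chains, using the longer-chain
/// rule for short-range forks and relative minimum window densities for
/// long-range forks. Tiebreaks on equal length or density are elided.
fn select_secure_chain<'a>(
    mut best: &'a Chain,
    chains: &'a [Chain],
    is_short_range: impl Fn(&Chain, &Chain) -> bool,
) -> &'a Chain {
    for candidate in chains {
        let candidate_wins = if is_short_range(candidate, best) {
            // selectLongerChain: the longer chain wins on short-range forks.
            candidate.length > best.length
        } else {
            // Long-range fork rule: higher relative minimum window density wins.
            relative_min_window_density(&candidate.tip, &best.tip)
                > relative_min_window_density(&best.tip, &candidate.tip)
        };
        if candidate_wins {
            best = candidate;
        }
    }
    best
}
```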

Maintaining the k-th predecessor epoch ledger

The staking and next epoch ledgers MUST be finalized ledgers, and can only advance when there is sufficient depth to achieve finality. Peers are required to retain the epoch ledger of the k-th predecessor from the tip, where k represents the depth of finality.
Due to the security prerequisites of Ouroboros, the gap in slots between the staking and next epoch ledgers may be great. Consequently, at any given moment, we essentially have three "pointers": staking s, next n, and finality k.
The final_ledger (epoch ledger of the k-th predecessor from the tip) is updated each time chain selection occurs, i.e., for every new tip block appended.

Getting the tip

For a joining peer to discover the head of the current chain, it MUST not only obtain the tip, but also the min(k, tip.height - 1)-th block back from the tip. For the latter, the peer MUST check the block's proof of finality.
Peers perform the proof of finality check by verifying two zero-knowledge proofs, one for the tip and one for the root, and a Merkle proof for the chain of protocol state hashes between them.