Open source is a key element of Linea’s strategy, for multiple reasons. We want to allow everyone to verify that the code does what it is supposed to do and to identify any potential security issues. We also want to allow anyone to fork the software and run their own version without having to provide any justification. Finally, we aim to have multiple people from independent organizations, with various motivations and degrees of involvement, work together on the same code.
On this last point, the idea is as simple as the execution is difficult, especially in a blockchain world where a contributor who looks absolutely genuine can actually be a seasoned black hat (other software, such as Linux, is a target too, as is any open-source software used to run critical infrastructure). It is nevertheless very important for the decentralization of the network: if the contributions come from a single organization, then the protocol is actually centralized. While, thanks to the open-source nature, another organization can still theoretically step in if things go badly, the goal should always be to ensure that nobody is in a position of full control at any time. A secondary objective is to keep the protocol alive. In our Layer 2 world, many trade-offs are still unclear and many truths are yet to be discovered: we don’t want to be stuck at a local maximum.
The first part of the solution is to decouple the governance of the network from the governance of the code repository. The governance of the network decides what code the protocol runs; if there are multiple networks, they make these decisions independently. The open-source code, meanwhile, evolves at its own pace. Technically, this translates to multiple (git) branches, but this alone is not enough: if all the branches diverge, it becomes a set of forks put in the same place, not a set of teams working together. The Apache Software Foundation provides interesting ways to tackle this, with its notions of contributor levels (contributor, committer, PMC member) and decision processes (lazy consensus, voting).
So, to make Linea a successful and true open-source project with a fast pace of development, we need: (1) contributors with a reasonable trust relationship between themselves; (2) an agreed decision-making process; (3) an architecture that allows for differences of choice and experimentation; (4) audits; (5) formal verification and fuzzing; (6) an efficient test suite.
On (1), trust: this takes time, but the Apache levels mentioned above work well here, combined with peer review of all changes.
On (2), decision making: the Apache way can also be applied.
On (3), the recipes for the needed architecture are known: modules with proper interfaces, plugins (e.g., as in Besu), and feature flags are key; the sketch below illustrates the idea.
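As a minimal illustration (hypothetical names, not the actual Besu plugin API): a module boundary expressed as a Java interface, two implementations living side by side on the same branch, and a feature flag deciding which one runs.

```java
import java.util.Map;

/** Module boundary: any trace generator must satisfy this interface. */
interface TraceGenerator {
    String generateTrace(byte[] blockPayload);
}

/** The current, battle-tested implementation. */
final class LegacyTraceGenerator implements TraceGenerator {
    public String generateTrace(byte[] blockPayload) {
        return "legacy-trace";
    }
}

/** An experimental implementation, developed on the same branch. */
final class ParallelTraceGenerator implements TraceGenerator {
    public String generateTrace(byte[] blockPayload) {
        return "parallel-trace";
    }
}

final class TraceGeneratorFactory {
    /** The feature flag selects the implementation at runtime; flipping
     *  it back is the rollback plan if the experiment misbehaves. */
    static TraceGenerator create(Map<String, String> flags) {
        boolean parallel = Boolean.parseBoolean(
                flags.getOrDefault("experimental.parallel-traces", "false"));
        return parallel ? new ParallelTraceGenerator()
                        : new LegacyTraceGenerator();
    }
}

class FeatureFlagDemo {
    public static void main(String[] args) {
        // Enable the experimental path for this run only.
        TraceGenerator generator = TraceGeneratorFactory.create(
                Map.of("experimental.parallel-traces", "true"));
        System.out.println(generator.generateTrace(new byte[0])); // parallel-trace
    }
}
```

Two teams can then disagree on the right trace generator without forking the repository: both implementations are reviewed and tested on the same branch, and each network’s governance picks the flag values it wants to run.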
On (4), audits: rollup funds are held in a smart contract, and there is no way to avoid slow and expensive audits, with at least two independent audits per release.
On (5), formal verification and fuzzing are not yet a magic wand for smart contracts or for “regular” code, but zk-rollups have an interesting property: the circuit and the smart contract are enough to guarantee the safety of the system (i.e., that the funds are safe). Everything else (block production, trace generation, proof generation, etc.) can impact liveness (e.g., slow block production), but not the safety of the funds. These components are still very critical (and slow block production can indirectly put the funds at risk, through liquidations for example). The circuit itself still needs to be audited like the smart contract, but its relative simplicity (compared to everything else) makes it a very good candidate for formal methods, especially to evaluate the soundness of the circuit (i.e., whether a valid proof can be generated for a false statement).
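To make the soundness target concrete (notation ours, not taken from any Linea specification): writing Verify for the verifier embedded in the smart contract, soundness is the property that

    for every statement s and candidate proof π:  Verify(s, π) = accept  ⇒  s is true.

No prover, however buggy or malicious, should be able to produce an accepting proof of a false statement. Completeness, the converse direction, is what the reference tests discussed below are after.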
On (6), an efficient test suite is also a key element. The circuit and the smart contract are a very small part of the system. The block production, the consensus layer, the trace generation, and the prover account for maybe 90% of the code, and this code is critical for performance (i.e., transaction cost and finality, in other words, user experience) and reliability. Audits still help, but the volume and complexity of this code limit their efficiency. It is common to reject or postpone very beneficial changes out of fear of breaking things. A proper test suite solves this, with the approach “if it passes the tests, it can go into production.” The test suite must be fast, which requires fake components, and must also test fault tolerance, which requires chaos-engineering tools integrated within the unit test suite (a sketch follows). When it comes to the EVM, an efficient reference test suite helps to guarantee the completeness of the system (i.e., can we generate a valid proof for every valid behavior of the EVM).
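As a sketch of what fake components plus fault injection can look like inside an ordinary unit test (hypothetical names; JUnit 5 assumed; not Linea’s actual code): the fake prover makes the test instant, and a programmable failure count exercises the retry path deterministically.

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.*;

interface Prover {
    byte[] prove(byte[] trace) throws Exception;
}

/** In-memory fake: instant “proofs”, plus a switch to inject failures. */
final class FakeProver implements Prover {
    private int failuresRemaining;
    FakeProver(int failuresBeforeSuccess) { this.failuresRemaining = failuresBeforeSuccess; }
    public byte[] prove(byte[] trace) throws Exception {
        if (failuresRemaining-- > 0) throw new Exception("injected prover crash");
        return new byte[] {42}; // stand-in for a real proof
    }
}

/** Component under test: retries the prover a bounded number of times. */
final class ProvingPipeline {
    private final Prover prover;
    ProvingPipeline(Prover prover) { this.prover = prover; }
    byte[] proveWithRetry(byte[] trace, int maxAttempts) throws Exception {
        Exception last = null;
        for (int i = 0; i < maxAttempts; i++) {
            try { return prover.prove(trace); } catch (Exception e) { last = e; }
        }
        throw last;
    }
}

class ProvingPipelineTest {
    @Test
    void recoversFromTwoInjectedProverCrashes() throws Exception {
        ProvingPipeline pipeline = new ProvingPipeline(new FakeProver(2));
        // Two injected failures, then success: the retry logic must absorb them.
        assertArrayEquals(new byte[] {42}, pipeline.proveWithRetry(new byte[0], 3));
    }

    @Test
    void givesUpAfterMaxAttempts() {
        ProvingPipeline pipeline = new ProvingPipeline(new FakeProver(5));
        assertThrows(Exception.class, () -> pipeline.proveWithRetry(new byte[0], 3));
    }
}
```

Because the fake runs in memory, such tests execute in milliseconds, which is what makes it realistic to keep fault-tolerance scenarios in the unit test suite rather than in a slow, separate integration stage.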
Independently of all this, the multi-prover is also a critical element of both security and decentralization, once again by reducing the importance of a single team.
In other words, open source is hard and demanding. But the resulting system, capable of executing thousands of transactions per second without requiring trust in anyone, including the developers of the system, and without anyone being in a position of power, will make a long-lasting difference.