We tend to think of new technology as bursting on a scene and suddenly transforming everything it touches. In that critical moment, it leaves society resplendent in a new era where everything is so much better.
But that is something of a compression of reality.
If you look at the history of new technology, there’s always a longish period that is required to transition from prototype to mass adoption. Almost without exception, there’s a growth phase decades after the original technology has been pioneered on a lab bench, where the technology, whatever it is, needs time to find solutions to all sorts of manufacturing and technical problems of scale. Only then can it become a commercial hit.
You can see this pattern with the transistor: invented in 1947, it only began replacing bulky, hot valve technology in the mid-1950s.
Reinforced concrete was invented in the middle of the 19th century, but only in the 20th century was it ready to become a go-to material for major construction. Even then, it was something of a drawn-out process.
And so it seems with the adoption of blockchain technology.
The theory of blockchain has been with us for a while now, but nutting out the best way to scale it up has proved surprisingly elusive.
Vitalik Buterin's term 'Blockchain Trilemma' suggests that blockchain is up against a three-way performance trade-off between decentralisation, security and scalability.
A blockchain needs to be distributed and decentralised to minimise the chances of bad actors getting together and launching a concerted ‘51%’ attack on the installation.
But, argues Buterin, increasing the distributed element of the design has a knock-on effect: the more nodes there are, the more effort is required to achieve consensus and validation. So greater decentralisation generally increases security but decreases speed, and decreased speed means decreased scalability.
So you can have a secure system that plods along and serves a handful of people, a less secure system that serves plenty of people quickly, or a secure system that serves plenty of people very slowly.
Such a three-way compromise is essentially a feature of a technology that has yet to fully mature. What it comes down to is that the performance needs to improve.
Of course everyone is looking for ways to mature it, and many Ethereum enthusiasts are confident there is a clear path to doing so.
Essentially, the approach is analogous to a steam train struggling up a hill: the solution to a locomotive that doesn’t go fast enough is to couple a second locomotive to it to help out.
What this looks like in blockchain is a second blockchain, called a ‘Layer 2’, that takes over some of the workload of the first blockchain, the ‘Layer 1’. Everything can then go more quickly while piggybacking on the first layer’s security.
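To make that division of labour concrete, here is a toy sketch of the rollup-style pattern in Python. All the names here (Layer1, Layer2, merkle_root, the batch size) are illustrative inventions, not any real protocol’s API: transactions are executed off-chain on the Layer 2, and only a compact cryptographic commitment to each batch is anchored on the Layer 1.

```python
import hashlib
import json

def merkle_root(items):
    """Compute a simple Merkle root over a list of serialised transactions."""
    if not items:
        return hashlib.sha256(b"").hexdigest()
    layer = [hashlib.sha256(tx.encode()).digest() for tx in items]
    while len(layer) > 1:
        if len(layer) % 2:                      # duplicate last node if odd
            layer.append(layer[-1])
        layer = [hashlib.sha256(layer[i] + layer[i + 1]).digest()
                 for i in range(0, len(layer), 2)]
    return layer[0].hex()

class Layer1:
    """Stands in for the base chain: it stores only batch commitments."""
    def __init__(self):
        self.commitments = []

    def post_commitment(self, root):
        self.commitments.append(root)

class Layer2:
    """Executes transactions off-chain, then anchors each batch on Layer 1."""
    def __init__(self, layer1, batch_size=4):
        self.layer1 = layer1
        self.batch_size = batch_size
        self.pending = []

    def submit(self, tx):
        self.pending.append(json.dumps(tx, sort_keys=True))
        if len(self.pending) >= self.batch_size:
            self.layer1.post_commitment(merkle_root(self.pending))
            self.pending = []

l1 = Layer1()
l2 = Layer2(l1)
for i in range(8):
    l2.submit({"from": f"user{i}", "to": "app", "amount": 1})
print(len(l1.commitments))  # eight transactions become two anchored batches
```

The point of the sketch is the asymmetry: the Layer 1 sees two 64-character hashes instead of eight transactions, which is where the speed-up comes from, and also where the finality question below comes in.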
And there’s no shortage of people who like this solution. Judging from the conversations at Crypto Valley’s premier conference, the CV Summit, held in early October in Zug, there’s plenty of enthusiasm about it.
But when you get down to the detail, there’s something rather inelegant about Layer 2s, something lacking in simplicity.
For one thing, they introduce a lot more friction: users have to buy and manage another native token attached to the Layer 2, and pay gas fees on it, which is an additional cost for application users. For another, the bridge between Layer 1 and Layer 2 can get hacked. Yet another problem is that a Layer 2 can go down, so you increase the number of parts that can break. And coordinating the two protocols adds latency, slowing everything and everybody down.
And to cap it all, there’s the niggle that no matter how good the Layer 2 is, you are never really sure everything is confirmed properly unless it is done on Layer 1. In blockchain parlance, ‘finality’ is only achieved on the base layer.
But there is an obvious question here which needs answering.
The alternative to coupling up two locomotives is simply to have a better locomotive in the first place, and the same is true of blockchains: build a better blockchain at Layer 1. Preferably do it before the system enters service, though, otherwise you’re rebuilding the locomotive while it’s still moving. And that is always going to be an accident waiting to happen.
Rather than cascade Layer 1s with workarounds like ZK rollups and the like, all you really need to do is build a better Layer 1 in the first place.
That really is the concept behind third-generation blockchains: by processing transactions in parallel, you can speed the whole process up so that no extra layer is required. Such technology already exists; Solana is one example.
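The parallel-processing idea can be sketched in miniature. In Solana-style runtimes, transactions declare up front which accounts they touch, so transactions touching disjoint accounts can run at the same time. The toy scheduler below is an illustrative Python sketch under that assumption, not any real runtime’s API: it greedily groups non-conflicting transactions into batches that could each execute in parallel.

```python
def schedule_parallel(transactions):
    """Greedily group transactions into batches whose account sets don't
    overlap, so every transaction in a batch could run simultaneously.
    Each transaction is a (name, set_of_accounts_it_touches) pair."""
    batches = []  # list of (names_in_batch, accounts_locked_by_batch)
    for name, accounts in transactions:
        placed = False
        for batch, locked in batches:
            if accounts.isdisjoint(locked):   # no shared account: no conflict
                batch.append(name)
                locked |= accounts
                placed = True
                break
        if not placed:                        # conflicts with every batch
            batches.append(([name], set(accounts)))
    return [batch for batch, _ in batches]

txs = [
    ("pay_alice",  {"alice", "shop"}),
    ("pay_bob",    {"bob", "cafe"}),    # different accounts: can run alongside
    ("pay_alice2", {"alice", "bank"}),  # conflicts with pay_alice: next batch
]
print(schedule_parallel(txs))  # [['pay_alice', 'pay_bob'], ['pay_alice2']]
```

The first two payments share no accounts and land in one batch; the third touches alice again and must wait. Scaled up to thousands of transactions per slot, that is the kind of concurrency a parallel Layer 1 is claiming, without any second layer.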
You might think that progression to a third-generation blockchain would be a no-brainer, and that the fledgling industry would want to progress to smooth and simple rather than get bogged down with patchy and complex fixes of a legacy technology. But it's sometimes not an easy choice for applications to move because there’s so much familiarity and investment in the legacy technology.
Meanwhile, many newcomers assessing the best technology on which to build applications find that Solana is the only blockchain capable of supporting many of their new ideas.
Perhaps there’s too much investment in second-generation blockchains for anyone in the second-generation camp to see the wood for the trees, but the history of technology shows complex workarounds don’t usually stay the course.
If the data is mainly stored on Layer 2, the Layer 1 is ultimately reduced to a decentralised repository of cryptographic proofs. Whereas if a decentralised Layer 1 can do all the work itself, you keep those records of cryptographic proofs on-chain at no additional cost.
Either way it could be an extended tussle as the best system wins through in the end.
But the world needs blockchain technology to perform if we are to move to fully fledged adoption of it. With the new third-generation blockchain foundation, many new use cases can be, and are being, contemplated that were not possible before because of the restrictions of legacy technology.
The problem with Layer 2s is that you have two problems. The leaders are seeing that and adopting highly performant Layer 1s from the get-go.
Maybe a new trilemma will emerge in the future, resetting the bar; the current one, however, seems to have run out of road.