Blockchain scalability is a limit
In modern blockchains, scalability is the main obstacle to meaningful planetary adoption.
To date, scalability limits blockchain as an infrastructure for large-scale use; several efforts are underway to improve it enough to compete with the major electronic payment networks.
Native blockchain technology is still limited in the number of transactions per second (TPS) it can handle.
To clarify the problem, it is sufficient to cite the best-known blockchain as an example: Bitcoin processes 4.6 TPS, while Visa, one of the largest electronic payment circuits, can process about 1,736 TPS (with peaks of up to 47,000 TPS).
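As a back-of-the-envelope illustration, the TPS figures quoted above can be turned into clearing times; the daily transaction volume used below is a purely hypothetical assumption, not a measured figure:

```python
# Illustrative comparison based on the TPS figures cited in the text.
# DAILY_TX is a hypothetical daily volume for a global payment network.
DAILY_TX = 150_000_000

for name, tps in [("Bitcoin", 4.6), ("Visa (average)", 1736), ("Visa (peak)", 47000)]:
    hours = DAILY_TX / tps / 3600
    print(f"{name:>15}: {hours:,.1f} hours to clear {DAILY_TX:,} transactions")
```

At 4.6 TPS, clearing such a volume would take thousands of hours, which makes the gap with dedicated payment circuits tangible.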
The technology is driven by an idea of decentralization that makes scalability difficult, since transactions must be broadcast to the entire network.
When a new block is generated, a crucial factor to consider is the time required to propagate it to all the nodes in the network.
Any improvement to blockchain scalability inevitably runs up against its main side effect, reduced security, so the right trade-off must always be found in order to scale.
Currently, the most common approach to achieving a scalable blockchain is “Layer 2,” a technological framework capable of handling off-chain transactions, reducing the load on the blockchain and consequently speeding up transfers.
Types of Layer 2
L2 channels essentially create direct or indirect communication channels between off-chain nodes; transactions between “connected” nodes are handled on Layer 2, and only two of them (the one that “opens” the channel and the one that “closes” it) are recorded on the main chain.
For example, the Lightning Network for Bitcoin and the Raiden Network for Ethereum are based on L2 state channels.
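To make the idea concrete, here is a minimal toy sketch of a payment channel; it is not the actual Lightning or Raiden protocol (real channels involve signed commitment transactions and dispute periods, omitted here), and the class name and amounts are invented for illustration:

```python
# Toy payment channel: only the opening and closing transactions touch the
# main chain; every intermediate balance update is exchanged off-chain.

class PaymentChannel:
    def __init__(self, deposit_a, deposit_b):
        self.balances = {"A": deposit_a, "B": deposit_b}
        self.onchain_txs = 1          # the funding transaction opens the channel
        self.offchain_updates = 0

    def pay(self, sender, receiver, amount):
        # Off-chain: the parties simply agree on a new balance state.
        assert self.balances[sender] >= amount, "insufficient channel balance"
        self.balances[sender] -= amount
        self.balances[receiver] += amount
        self.offchain_updates += 1

    def close(self):
        # On-chain: the latest agreed state settles the final balances.
        self.onchain_txs += 1
        return self.balances

channel = PaymentChannel(deposit_a=100, deposit_b=100)
for _ in range(1000):
    channel.pay("A", "B", 0.05)
final = channel.close()
print(channel.onchain_txs, "on-chain txs for", channel.offchain_updates, "payments")
```

One thousand payments cost exactly two on-chain transactions, which is the whole point of the channel construction.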
L2 sidechains, on the other hand, are based on “daughter” blockchains anchored to the main chain, running in parallel with it.
Here the idea is similar to that of channels, but the major difference is that, in sidechains, off-chain transactions are executed on the blockchain (while communication channels are not based on a blockchain).
L2 has the advantage that no changes are needed to the main chain and that off-chain handling of transactions is, in a sense, independent of the main “Layer 1” chain. The dependency is limited to recording a “summary of Tx” on the chain; apart from that, the blockchain is unaware of what happens on Layer 2.
Among blockchain scalability solutions, sharding also deserves a mention: it divides the main chain into subsets of nodes, each responsible for a portion of the entire network, so that each node processes only the information belonging to its own shard.
This solution certainly goes in the direction of improving scalability: “dividing” the load of the chain among different partitions leads to something similar to separate blockchains, which are characterized by higher transaction speed because they are lighter.
This technique is generally referred to as a “layer 1 solution,” meaning that all transactions are still handled on the blockchain itself.
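The shard-assignment idea can be sketched in a few lines; the shard count, account names, and hash-based mapping below are illustrative assumptions, not any specific network’s rule:

```python
import hashlib

NUM_SHARDS = 4  # illustrative shard count

def shard_of(address: str) -> int:
    # Deterministically map an account to a shard by hashing its address.
    digest = hashlib.sha256(address.encode()).digest()
    return int.from_bytes(digest[:4], "big") % NUM_SHARDS

# Each node only processes the transactions that fall in its own shard.
txs = [{"from": f"acct{i}", "amount": i} for i in range(12)]
shards = {s: [] for s in range(NUM_SHARDS)}
for tx in txs:
    shards[shard_of(tx["from"])].append(tx)

for s, batch in shards.items():
    print(f"shard {s}: {len(batch)} txs")
```

Because the mapping is deterministic, any node can tell which shard is responsible for a given account without coordinating with the others.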
Blockchain scalability is a limit, rollups may be a solution
We recently had a chance to write and elaborate on the future of off-chain transactions, and in particular on Rollups, a framework for Ethereum proposed in 2018 and considered a “hybrid” solution between L1 and L2 scaling.
Unlike the Lightning Network, Rollups publish some information about each individual transaction sent on L2 to the chain, reducing congestion and fees on the main one.
All information is always retrievable from the main chain, which is considered secure and always available.
A unique feature of Rollups is the ability to perform transactions outside the Rollup contract itself: this is to support transactions whose input comes from outside or whose output is destined for outside.
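A toy sketch of the batching idea may help; the field names are invented and the real compression and proof machinery is omitted, but it illustrates how a rollup publishes a compact record of each off-chain transaction on L1:

```python
import hashlib
import json

# Transactions execute off-chain, but a minimal record of each one is
# published on the main chain, so the state remains reconstructible from L1.

def compress(tx):
    # Keep only the fields needed to replay the transaction.
    return (tx["from"], tx["to"], tx["amount"])

offchain_txs = [
    {"from": "alice", "to": "bob", "amount": 5, "memo": "coffee"},
    {"from": "bob", "to": "carol", "amount": 2, "memo": "book"},
]

batch_data = [compress(tx) for tx in offchain_txs]
batch_commitment = hashlib.sha256(json.dumps(batch_data).encode()).hexdigest()

# One L1 transaction carries the whole batch instead of one entry per transfer.
print("published on L1:", batch_data, batch_commitment[:16])
```

A single L1 transaction thus amortizes its fee over every transfer in the batch, which is where the congestion and fee reduction comes from.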
There are currently two Rollups solutions: the ZK Rollup and the Optimistic Rollup.
The ZK Rollup solution in particular is based on the concepts of validity proofs and zero-knowledge proofs: it uses SNARK proofs, which allow observers to immediately verify the validity of a batch.
The SNARK compares a snapshot of the blockchain before the transfers with a snapshot after them (i.e., the wallet values) and reports only the changes, in a verifiable hash, to the main network.
Although verification is inexpensive, generating the proof is computationally expensive. ZK Rollups are therefore well suited to transaction management, but not yet fully suitable for executing complex contracts.
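The before/after snapshot idea can be illustrated with a toy state commitment; here a plain SHA-256 hash stands in for the SNARK, so this shows only *what* is committed on-chain, not how validity is actually proven:

```python
import hashlib
import json

def state_root(balances: dict) -> str:
    # Hash a canonical encoding of all wallet balances.
    return hashlib.sha256(json.dumps(sorted(balances.items())).encode()).hexdigest()

before = {"alice": 10, "bob": 5}
root_before = state_root(before)

# A batch of off-chain transfers is applied...
after = dict(before)
after["alice"] -= 3
after["bob"] += 3
root_after = state_root(after)

# ...and only the two roots (plus the proof) go to the main chain.
print(root_before[:16], "->", root_after[:16])
```

Anyone replaying the same transfers over the same starting state obtains the same roots, which is what makes the committed hash verifiable.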
The Hotmoka solution for the Takamaka Proof of Stake network
We know that Rollups are a scalable and secure solution, but expensive in terms of computation and therefore unattractive for smart contracts, while Hotmoka is presented as a cost-effective, low-fee solution even for smart contract (SC) computation.
Takamaka is, for all intents and purposes, a native PoS network in which computing power is dedicated to the proper management of the network; excluding PoW has minimized management costs.
In addition, the data storage, calculation, and transaction management system is highly optimized.
Having an L0 that aggregates contract execution and an L1 that enables contract execution effectively reduces costs by freeing L1 from storage activities and trivial transactions such as payment management, blobs, and PoS.
Thus, the algorithm that determines transaction costs assigns an extremely small value to CPU instructions, allowing for complex logic and, most importantly, making smart contract integrity checks non-invasive.
Just as with Rollups, an L2 layer with off-chain transactions can be implemented on Takamaka. Depending on the need, the off-chain part can be connected directly to L0, for storage purposes, or to L1 if interaction with a supporting smart contract is required.
L1 should handle a load similar to L0’s, that is, about 280 transactions per second. This is an average estimate, since L1 is a fully Turing-complete processing layer and the transaction volume cannot be guaranteed for every usage scenario.