The blockchain and crypto technology landscape has evolved considerably over the last five years. We occupy a different world from when we first set out to build Polkadot. Though much has diverged from our original vision, many of our founding theses have become canon. For example, our early bet on interoperability and cross-chain composability has progressed from theory to practice, and from speculation to fact – a multichain future is now table stakes.
Additionally, Parachains (as originally described, these were essentially optimistic rollups), Shared Security, and Data Availability as laid out in the vision paper from 2016 have sent ripples through the world of ideas since, and have been a source of architectural inspiration for projects both new and old. We have progressed from a universe of a few chains to one with an abundance of chains. But our goal was never to maximize the number of blockchains for its own sake; rather, it was to maximize the amount of work a decentralized network can do – in other words, to solve for scalability. The number of blockchains is related to scalability but not identical to it, and this piece will clarify the difference.
When we began Polkadot, we set out to create a system maximizing transaction throughput without compromising security guarantees and censorship-resistance. This aim has not changed, but progress at the application layer now allows us to lend more color and nuance to this vision. Application and protocol developers alike face new challenges in a multi-chain world. They must balance the requirements of secure execution, censorship-resistance, usability, costs, and composability. The emerging concept of blockspace serves as an abstraction and primitive which encompasses these requirements and goals.
In this piece, I’ll dive deeper into the definition and qualities of blockspace and how to evaluate different blockspace offerings in the market. Furthermore, I make a case for why we are shifting our perspective from blockchains to blockspace, and why Polkadot is architecturally well-suited to be the strongest generalized blockspace producer.
What is Blockspace?
“Blockspace is the best product to be selling in the 2020s.”
Chris Dixon, a16z, on the Bankless podcast
Blockspace is the capacity of a blockchain to finalize and commit operations. The term has risen to prominence lately and deserves some unpacking. In some sense, blockspace is the primary product of the decentralized consensus systems running today. It’s an abstraction for reasoning about what blockchains actually produce: whether that capacity is allocated to balance transfers, smart contracts, or computation is a concern for the application layer. At a high level, blockspace is a key ingredient for unstoppable applications, which rely on decentralized systems for payment, consensus, or settlement. As such, the application layer is the prime consumer of blockspace as a good. As with any business, applications and their developers should be concerned with both the quality and availability of the goods in their supply chain.
Blockspace is an ephemeral good. When you intend to commit an operation to a chain, you need blockspace in the moment: not yesterday’s, not tomorrow’s. Blockspace is either utilized or it is not. When a chain runs below capacity, consensus resources are wasted on producing unutilized blockspace.
Ethereum was the first major innovator in blockspace offerings. By introducing a virtual machine into the protocol and metering available blockspace via ‘gas’, it allowed the blockspace within a single block to be quantitatively parceled out to programs on the basis of the amount of computation performed and storage used. Since then, many projects have set out to expand the types of blockspace on offer. This lens provides insight into the key differentiators between Polkadot, Ethereum, Avalanche, Cosmos, Solana, and newer projects like EigenLayer or AltLayer.
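To make the metering model concrete, here is a minimal sketch in Rust of how a gas limit parcels out a block’s capacity among transactions. The types and the greedy packing rule are illustrative assumptions, not any client’s actual implementation.

```rust
// A sketch of gas-metered blockspace allocation in the style Ethereum
// introduced. Numbers and types are illustrative, not real protocol
// parameters.

struct Transaction {
    gas_limit: u64, // the maximum gas this transaction may consume
}

/// Pack transactions into a block until its gas limit - the block's
/// blockspace, measured in gas - is exhausted.
fn fill_block(pool: &[Transaction], block_gas_limit: u64) -> Vec<&Transaction> {
    let mut gas_used = 0u64;
    let mut included = Vec::new();
    for tx in pool {
        if gas_used + tx.gas_limit <= block_gas_limit {
            gas_used += tx.gas_limit;
            included.push(tx);
        }
    }
    included
}
```

Every project innovating on blockspace is, in effect, changing what plays the role of the gas limit and the packing rule here: what is metered, how it is priced, and who gets to fill the block.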
The blockchain scaling trilemma tells us that out of security, latency, and throughput you can only pick two under heavy load. In Polkadot, our approach at the base layer has always been to maximize both security and throughput when we are forced to make a choice. While the trilemma is helpful in evaluating the theoretical utility of a base-layer protocol, the notion of blockspace allows us to reason better about how that throughput and security are allocated to the application layer.
Blockspace is not a commodity but rather a class of commodities. Blockspace produced by different systems will vary in quality, availability, and flexibility. The quality of blockspace can be judged by the security guarantees that the blockchain provides - the more secure, the higher the quality. Without a sufficient supply of blockspace, applications run into congestion or downtime, leading users to experience high fees, long wait times, or front-running. Without high-quality blockspace, applications are hacked and drained: low-quality blockspace is vulnerable to 51% attacks and toxic shock. Both kinds of failure will be familiar to readers who have spent time observing the blockchain ecosystem. These characteristics of blockspace are the key factors application developers must consider when choosing where to deploy.
Characteristics of Blockspace
Let’s dive deeper into the three main characteristics of blockspace as a good: Quality, Availability, and Flexibility.
Quality – As with any good, quality is a major factor for consumers of blockspace to consider. High-quality goods fulfill their purpose, and the purpose of blockspace is to be converted into a permanent record of state-machine execution. Within this framing, quality is equivalent to security, in crypto-economic parlance. I will use the two descriptions interchangeably going forward. Insecure or low-quality blockspace is vulnerable to 51% attacks and consensus faults. Under the hood, security is determined by two factors: the consensus protocol which is used to secure it, and the amount of real economic security (i.e. mining power or stake) utilized in the production and commitment of blockspace.
Availability – The availability of blockspace is determined by supply and demand. The supply of blockspace is driven by the throughput and liveness of the system producing it: blockchains that stall, halt, or require manual intervention and operation will have an intermittent supply of blockspace. Blockchains which don’t maximize throughput will cap out their supply at lower scales. Blockchains which run on insecure consensus mechanisms will deliver blockspace without strong guarantees of permanence.
Flexibility – Flexibility is the ability of blockspace to be used for different types of operations. Bitcoin and Ethereum blockspace is somewhat flexible, in that it can be allocated to arbitrary user-submitted transactions. However, their blockspace mechanisms are completely transactive: they can act only on user-submitted transactions, not on proactive operations performed without user input. Most blockchains have not advanced beyond this reactive model. Even most rollup protocols are primarily focused on user-driven balance transfers and smart contract invocations. The transaction formats, account models, and scripting languages supported by most blockchains are limited.
Highly flexible blockspace focuses entirely on execution, storage, and data consumption, and leaves it up to the consumer of blockspace how to allocate those base resources to reactive and proactive operations. Blockspace consumers should be able to prioritize first-class application logic relative to user-submitted transactions so they can make meaningful progress even in the absence or overabundance of user-submitted transactions. This is not to say that transactive models are bad. In fact, it’s the opposite: transactive execution models can be used to good effect in combination with autonomous execution models. The underlying product behind both is blockspace, and blockspace can only support both models when it is maximally flexible. Flexible blockspace is a prerequisite for deep blockspace markets.
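To illustrate the distinction, here is a minimal Rust sketch of a state machine that spends blockspace on both proactive and reactive operations. It is loosely inspired by Substrate-style lifecycle hooks; the trait, names, and scheduling rule are illustrative assumptions rather than a real API.

```rust
// A sketch of flexible blockspace consumption: part of each block goes to
// proactive, first-class logic and the rest to reactive, user-submitted
// transactions. All names and signatures are illustrative assumptions.

type Weight = u64; // abstract measure of consumed blockspace

struct Transaction;

trait StateMachine {
    /// Proactive logic that runs every block, with or without user input
    /// (e.g. settling an auction or advancing a protocol round).
    fn on_initialize(&mut self) -> Weight;

    /// Reactive execution of a single user-submitted transaction.
    fn apply_transaction(&mut self, tx: Transaction) -> Weight;
}

/// Build a block: autonomous logic is scheduled first, so the chain makes
/// progress even with an empty transaction pool; user transactions then
/// consume whatever blockspace remains.
fn build_block<M: StateMachine>(machine: &mut M, pool: Vec<Transaction>, capacity: Weight) {
    let mut used = machine.on_initialize();
    for tx in pool {
        if used >= capacity {
            break; // out of blockspace; leave the rest for the next block
        }
        used += machine.apply_transaction(tx);
    }
}
```

A purely transactive chain only has the loop; flexible blockspace lets the application decide how much of each block the autonomous portion deserves.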
To add more nuance, we should acknowledge that modern blockchain applications are based on interoperability between state machines utilizing blockspace. Mixing low-quality blockspace with high-quality blockspace spoils entire applications and exposes users to catastrophic tail risks. If we were running a restaurant, we wouldn’t serve our customers a meal prepared mostly from high-quality ingredients mixed with a small amount of garbage. Likewise, application developers shouldn’t serve their users applications composed mostly of high-security blockspace mixed with some low-security blockspace. The low-quality ingredient ruins the rest of the dish. In the interoperable world, applications seeking to minimize risk to their users should use only high-security blockspace to deliver an end product.
Modern applications need parallel blockspace with predictable and consistently high quality. Furthermore, a class of blockspace is best suited to interoperable applications when all the blockspace in the class provides homogeneous security guarantees. In essence, interoperability is asynchronous state machine composability. A reliable network of asynchronously composed state machines unlocks super-additive value, and the classes of blockspace best suited to this thesis are those which provide standard and strong guarantees of security: creating value without incurring additional risk. These classes of blockspace are said to provide shared security for all blockspace within them.
Scaling solutions answer the blockspace supply problem. Sharding and rollups, for example, use crypto-economics to scale by introducing proof or dispute protocols where, in the default case, not every validator needs to check every state transition. Scaling solutions can be coupled with shared security architectures to address the need for both supply and quality.
Some ecosystems are now recognizing the need for shared security architectures - but they use voluntary opt-in by validators to determine how much security different blockspace products under the shared security umbrella receive. That is a poor architecture, because it enshrines particular validators as a rent-seeking special-interest group which supplicants must appeal to in order to get their project started or adequately secured. Barriers to entry between supply and demand have the potential to reduce both the availability and quality of blockspace.
Application and protocol developers should pay particular attention to these three characteristics, and structure their applications around blockspace rather than around blockchains or smart contracts. Decentralized applications and protocols can operate at a lower cost to their users or token holders by acquiring blockspace on demand instead of running a chain 24/7. It’s quite common for early-stage blockchains to leak a large number of tokens to validators producing blocks with minimal underlying usage. This is a side effect of inefficient blockspace allocation, which primarily benefits validators at the expense of application developers and token holders. Cloud computing outcompeted dedicated server space because it allocated physical resources in a more granular and adaptive manner. Similarly, blockspace-centric architectures for Web3 base layers will outcompete blockchain-centric architectures.
Polkadot: A Blockspace-Centric Architecture
Polkadot’s consensus system is, at its heart, an efficient and flexible blockspace generator. Like modern CPUs, the Polkadot network is a multi-threaded machine. This system is based around a single primitive: the Execution Core. Each core can execute one block from a state machine at a time. The network makes use of its resources, in the form of validators and bonded stake, to expose the maximum number of cores at any time. Due to the efficiency gains of Polkadot’s architecture, its validators are able to transform the real resources they consume into more blockspace than they could by simply running more standalone blockchains with the same staked value. Shared security on its own is not enough to build an effective blockspace producer: it guarantees homogeneous quality of blockspace, but it must be coupled with a scaling mechanism to guarantee supply.
As a blockspace producer, Polkadot serves the market best by opening up its services to the maximum number of users possible. Because Polkadot uses WebAssembly and a virtual machine architecture, Polkadot blockchains don’t need to convince validators to run their software. As with smart contracts, the only requirement is to post code on-chain and acquire blockspace.
Polkadot validators have no choice in which blockchain they are required to work on at any given moment - the only thing that matters to them is which core they’re assigned to at any time, and the corresponding blockchain scheduled on that core. Polkadot validators are pure service providers. They are not opinionated. They work on what the market tells them to work on. Purchasers of blockspace in Polkadot have a guarantee that validators will hold up their end of the bargain without any human intervention, and validators which fail to do so will miss out on rewards.
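A minimal Rust sketch of this arrangement, with illustrative types and a toy rotation rule rather than Polkadot’s actual scheduler, might look as follows:

```rust
// A sketch of the Execution Core abstraction and unopinionated validation
// described above: each core hosts one scheduled state machine's block at a
// time, and validator groups are rotated across cores by the protocol,
// checking whatever the market scheduled there. Types and the rotation rule
// are illustrative assumptions.

#[derive(Clone, Copy)]
struct ChainId(u32);

struct Scheduler {
    /// One entry per Execution Core: the chain currently scheduled on it.
    cores: Vec<Option<ChainId>>,
}

impl Scheduler {
    /// Validators don't choose chains: group `group` is assigned to core
    /// `(group + rotation) % n_cores` in the current round.
    fn core_for_group(&self, group: usize, rotation: usize) -> usize {
        (group + rotation) % self.cores.len()
    }

    /// The group's job this round: validate the next block of whichever
    /// chain the blockspace market placed on its core, if any.
    fn assignment(&self, group: usize, rotation: usize) -> Option<ChainId> {
        self.cores[self.core_for_group(group, rotation)]
    }
}
```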
It’s fair to consider Polkadot a rollup protocol. However, unlike rollup protocols based on smart-contract systems, the rollups are enshrined in the base layer logic via Execution Cores. When rollups are built on top of a smart-contract layer, the system, to some extent, devolves into ‘every rollup for itself’, as they compete for gas, validators, inclusion, and scheduling. Transactive, gas-based blockspace at the base layer is not well suited to allocating blockspace to rollups. In order to provide consistent guarantees about scheduling, security, and supply, we are building a system which is modular where modularity counts: at the application layer.
The architectural distinction between the Execution Cores and the actual blockchains or state machines which run upon them is of crucial importance. We see little value in maximizing the number of blockchains; this is only a proxy for what matters most: maximizing secure blockspace. Execution Cores are the engine of blockspace production and the scheduling rights to those cores open a design space for blockspace allocation products.
Mechanisms for Allocating Blockspace
Efficient blockspace allocation is critical. Usage patterns of blockchains are not consistent. Blockchains experience periods of heavy load and congestion as well as periods of under-utilization and emptiness. On the one hand, applications should be able to adapt to periods of heavy load. On the other hand, applications should not pay for blockspace they are not using. The product design space here is underexplored, but Polkadot’s architecture is uniquely amenable to improving the market’s offerings.
One parallel for thinking about the design of blockspace allocation products is the cloud computing market. Cloud computing business models often have two key features: reserved instances and on-demand instances. Reserved instances are cheaper but must be committed to for a prolonged period of time. On-demand instances are more costly, available immediately, and ephemeral. Applications with predictable load will save money by purchasing reserved instances but can scale to meet demand without service outages by utilizing on-demand instances. However, reserved instances also represent a commitment - the application operator is wasting money if real demand for the application falls below the reserved supply of cloud compute resources.
Let’s make this concrete. Long-term slots are the only current mechanism for allocating Polkadot’s Execution Cores. These are akin to reserved instances, allocated either by governance or by slot auctions: the winners earn a dedicated Execution Core for a predetermined period of 6, 12, 18, or 24 months. Parathreads, which we first introduced as a concept in 2019, are pay-as-you-go blockchains: they are like on-demand instances, paying a spot price per block. Our current thinking is for this spot price to be set using an optimal controller: in simple terms, the price will go up when the cores exposed by Polkadot for parathreads are saturated and will go down when there are empty cores, as sketched below. This is just one further example of how blockspace can be allocated.
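For illustration only, a controller of this kind might look like the following in Rust; the target utilization, gain, and price floor are assumptions, not a finalized mechanism:

```rust
// A sketch of a simple controller for the parathread spot price: nudge the
// price up when on-demand cores are saturated and let it fall when they sit
// idle. All parameters are illustrative assumptions.

fn next_price(price: f64, cores_used: u32, cores_offered: u32) -> f64 {
    let utilization = cores_used as f64 / cores_offered as f64;
    let target = 0.8; // leave some headroom for demand spikes (assumption)
    let gain = 0.05;  // how strongly the price reacts per block (assumption)
    // Multiplicative update: utilization above target raises the price,
    // utilization below target decays it toward an arbitrary floor.
    (price * (1.0 + gain * (utilization - target))).max(1e-6)
}
```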
We can take this concept of allocating Execution Cores further. Polkadot’s architecture is such that a single chain can have multiple cores allocated to it simultaneously - imagine a blockchain that, instead of advancing by 1 block at a time, advances by 2 or 3. This is possible due to particularities of Polkadot’s design that allow for validation of sequential state transitions in parallel. In practical terms, the number of cores a chain can efficiently occupy is limited only by the number of cores it can acquire at a time and the rate at which the chain can produce blocks. We expect that as this market matures there will be a wave of innovation in block generation for Polkadot chains to maximize the utility of this feature. We expect that even chains with a simple sequential block authoring method, such as is currently available, would be able to make good use of 2 or 3 cores.
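As a back-of-the-envelope sketch, and assuming (purely for illustration) that each core accepts one block every 6 seconds, the bound looks like this:

```rust
// The number of cores a chain can usefully occupy is limited by how quickly
// it authors blocks relative to how often each core accepts one. The
// 6-second core slot time is an assumption for illustration.

fn max_useful_cores(chain_blocks_per_second: f64, acquired_cores: u32) -> u32 {
    let core_slot_seconds = 6.0; // assumed time for one core to process one block
    let sustainable = (chain_blocks_per_second * core_slot_seconds).floor() as u32;
    sustainable.min(acquired_cores)
}
```

Under these assumptions, a chain authoring one block every two seconds could keep three cores busy.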
As an example of how multiple cores might be used by a single chain, we could introduce another type of blockspace allocation product: short-term auctions. These auctions would fall somewhere between pure on-demand allocation and long-term reserved allocation. It is certainly possible to design an auction mechanism for allocating slots for short durations - such as one hour, one day, or one month. This could be used for something I’ve been terming “Parachain Boost” - the ability for blockchains to expand their throughput during periods of heavy load, like a highway that gets wider during rush hour.
Furthermore, by changing our perspective from blockchain-centric to blockspace-centric, it becomes clear that there is no reason a blockchain or state machine should run forever. Ephemeral blockchains are a use case I believe is highly under-explored: longer-running processes should be able to offload their computations or protocols to short-lived chains, just as programs running on a PC can offload work to background threads.
One final avenue we can pursue for Execution Core allocation is the ability to transfer claims on cores. This would create a secondary market for Polkadot blockspace: chains will be able to trade extra capacity with each other and act as re-sellers of blockspace. Chains experiencing lower or higher demand than anticipated will be able to adapt accordingly, or perhaps even speculate on future demand for blockspace.
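A minimal Rust sketch of such a transferable claim, with illustrative types rather than a concrete design, could be:

```rust
// A sketch of transferable claims on Execution Cores, the primitive behind
// the secondary market described above. The types and transfer rule are
// illustrative assumptions, not a concrete Polkadot design.

#[derive(Clone, Copy, PartialEq)]
struct ChainId(u32);

/// A claim entitles its owner to schedule blocks on a given core for a
/// bounded span of relay-chain blocks.
struct CoreClaim {
    core: u32,
    valid_from: u32,  // first block number at which the claim applies
    valid_until: u32, // last block number at which the claim applies
    owner: ChainId,
}

impl CoreClaim {
    /// A chain with excess capacity re-sells the remainder of its claim;
    /// the new owner may schedule blocks on the core from here on.
    fn transfer(&mut self, to: ChainId) {
        self.owner = to;
    }
}
```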
Reframing the Meaning of Blockchain
In my opinion, the blockchain ecosystem has been thinking too small about the multi-chain world. Blockchains that start and run indefinitely with a steady pulse are an evidently inefficient mechanism. The multi-chain of tomorrow consists of blockchains that scale and shrink on demand. It contains ephemeral chains spawned by on-chain factories, spun up by contracts and imbued with automated purpose - to complete their work a few hours later and disappear. Our goal in Web3 is not to maximize the number of blockchains. Maximizing the number of blockchains is something I would state explicitly as a non-goal, as it primarily benefits validator cartels seeking to extract value. Our goal in Web3 is to maximize the amount of blockspace that exists and ensure it is allocated to the state machines which need it most at any time: a constant generation and allocation of global consensus resources to those who need them most. An enterprise without waste. In other words: the most effective blockspace producer in the world.
Thanks to Fabian Gompf, Pranay Mohan, Björn Wagner, and Gavin Wood for review, edits, and discussion.