The concept of “block space” in Polkadot 1.0 and its future development
This year’s Polkadot Decoded Copenhagen conference was packed with insights!
After Parity CEO Björn Wagner shared the achievements of Polkadot 1.0 and Gavin Wood talked about the long-term development direction, Polkadot co-founder Rob Habermeier and Parity’s Vice President of Product Marketing, Steve Stover, discussed the concept of “block space” in Polkadot 1.0 and its future development. Below are some of the key takeaways summarized by PolkaWorld:
- Block space is a vital part of Polkadot’s transformation. It represents a shift in our mindset and differentiates between where we are now with Polkadot 1.0 and possible future directions.
- In fact, in Substrate, turning your chain into a hybrid one is very simple — just a few lines of code are needed.
- Hybrid chains can be seen as innovation engines. Teams behind these chains can incubate new use cases beyond the application logic they’ve already created and maintain.
- Polkadot’s goal is to offer a generic solution at the Core level to cater to a variety of block space requirements, thereby reaching a broader range of users or user groups.
- You shouldn’t pay for block space you don’t need and aren’t using, but you should always have the ability to scale up and improve performance.
- We need three things: scalability, security, and good scheduling. No matter how much you expand capacity, it’s pointless if all your scaling resources go to projects that aren’t making good use of them.
- As a blockchain application grows, it might progress from generating blocks on demand, to booking long-term block space, to buying block space on secondary markets, gradually expanding its reserved block space on the open market.
- Many opportunities arise when we move beyond the paradigm of “one chain, one core.” Future blockchains should be able to flexibly expand as needed, scaling up when dealing with large loads and scaling down during lighter loads.
- We should increasingly view blockchains in a way akin to the CPU model and begin to shift our understanding of applications within the blockchain world.
- In the long run, we can anticipate a vast amount of block space entering the Polkadot system, which will then be allocated precisely according to the application’s needs at any given moment through secondary markets.
Steve: Before we start, let’s briefly define the meanings of block space and ecosystem!
Rob: Block space is an essential part of Polkadot’s transformation. It represents a shift in our mindset and highlights the difference between where we are with Polkadot 1.0 and potential future directions. Essentially, it’s a vision of what product Polkadot is actually creating. We know about validators who get their compensation through staking, but what exactly are they producing? What is their basic unit of work? It’s the block space. You can think of it as a decentralized security unit. We have various primary resources, such as computing power, network, the value of staking, and storage. All these are input globally into a consensus algorithm, which then outputs these units of block space — that’s what we want to use to run applications. So, we’re transitioning from operating individual blockchains to providing block space.
Gav just mentioned the term “Coretime” in his presentation. Coretime is Polkadot’s mechanism for allocating these primary resources. We have execution cores, which are the cores of Polkadot. We assign chains or applications or program tasks to these cores to achieve block space allocation.
Steve: Going back to one of Gav’s points, the idea of running smart contracts on a Core is cool, but it will take some time to realize. You’ve always advocated for the concept of hybrid chains, the idea that every blockchain should run smart contracts. One reason is the benefit of having asynchronous and synchronous processing on the same blockchain; it also optimizes the use of block space. Can you share the concept and benefits of hybrid chains with everyone?
Rob: I wrote a blog post a while back with a slightly humorous title, “Every Chain Should Have a Smart Contract,” just to grab attention and get people thinking. However, I believe that a chain 100% dedicated to one specific thing might not be the model that decentralized economies eventually settle on. The reason is simple: communication between different programs or applications running on Polkadot, or any system, must go through channels or other bridges, so cross-chain communication has costs and takes time.
When some parts of a system are tightly coupled, you can combine specialization with generalization. This allows for immediate communication between parts, improving the system’s efficiency and response time. Given the real costs of communication, how this economic structure evolves largely depends on how we couple things, whether tightly or loosely. This is the concept of hybrid chains: you have a blockchain with specific application logic, anchored to some generic component such as smart contracts. If you’re building new utilities, your users can create and interact with them in real time. This is a concept for optimizing block space consumption. We previously discussed how block space is produced; this is about how it is consumed, ensuring the economy is as efficient as possible.
Steve: Block space and hybrid chains are new concepts, and people are moving in that direction. I just met a team that said they want to move in this direction after discussing with you. Can you share some project examples that are developing towards block space, hybrid chains, or both?
Rob: I know some teams have adopted the hybrid chain approach. I certainly can’t list all of them, but the ones I can think of include Zeitgeist and Interlay. I know there are other teams, but I can’t remember them right now. The idea has gained quite a bit of acceptance because it allows teams to build developer communities directly around what they’re doing.
In fact, in Substrate, turning your chain into a hybrid one is very easy; you only need to modify a few lines of code.
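As a rough illustration of the “few lines of code” claim: in a Substrate runtime, adding a generic smart-contract environment largely amounts to configuring and registering an existing pallet such as `pallet-contracts` alongside the chain’s application-specific pallets. The sketch below is abbreviated and version-dependent (the `Config` trait has many more associated types than shown), so treat it as a shape, not a compilable runtime.

```rust
// Hedged sketch: wiring pallet-contracts into an existing Substrate runtime.
// Exact associated types vary by Substrate release; most are elided here.
impl pallet_contracts::Config for Runtime {
    type Time = Timestamp;    // reuse the existing timestamp pallet
    type Currency = Balances; // reuse the existing balances pallet
    // ... remaining associated types (weights, schedule, limits) elided
}

construct_runtime!(
    pub enum Runtime {
        // ... the chain's existing application-specific pallets ...
        Contracts: pallet_contracts, // the generic smart-contract component
    }
);
```

The point of the hybrid model is visible in the structure: the specialized pallets and the generic contracts module live in one runtime, so they share state transitions and can call each other synchronously.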
Steve: Are they primarily looking at the benefits in terms of development? Or are they considering the product advantages they offer to the market? How does this help them achieve their goals?
Rob: I believe for these teams, it’s mainly about building a developer community that can innovate side by side with what they’re constructing. Teams can develop base components (primitives) and expand upon them, and people will choose and code extensions that work in direct conjunction with them. When developing a new app or base component, teams need to decide whether they want this app or component to be used internally within the project or externally. This decision will be based on factors such as their objectives, resources, and the specific areas they wish to expand into. If a project’s app or extension (primitive) is deployed or used outside of the project ecosystem, then the cost of maintaining and managing these apps or extensions might increase, which could include but is not limited to technical support, security, updates, and maintenance.
Therefore, you can see these as the chain’s innovation engines. From my conversations with these teams, they wish to incubate new use cases beyond their existing application logic, that is, the apps they’ve already created and maintain. These new use cases can be seen as extensions or derivative applications, and the teams want them to run independently on the remaining Cores, i.e., operate as their own state machines, rather than just relying on the original application or project.
Steve: Alright, can you delve deeper into the different types of block space allocation models in Polkadot and their advantages in terms of efficiency and usability? Some teams might be familiar with pay-as-you-go parachains or parathreads, how does this guide us towards the core allocation direction that Gav talked about?
Rob: Of course! Currently, Polkadot has the slot auction model, where you can bid for high-frequency block space lasting from six months to two years. When I say high-frequency, I mean that a large chunk of future block space, or Coretime, is allocated to a chain, and a block can be produced every 6 seconds. We’re changing this model and introducing a pay-as-you-go model. Its essence is paying the market price, for instance the market price to produce a block; you pay this to the relay chain and then have the right to produce a block, allowing for one state transition. This lets chains that aren’t high-frequency use our block space. So we’ll have two extremes: one purely pay-as-you-go and intermittent, the other long-term and high-frequency.
We can imagine a distribution of block space needs, similar to different users or applications having various needs for block space. Therefore, Polkadot aims to provide a generalized solution at the core level to cater to different block space demands, allowing for a broader spectrum of users or applications. For instance, for startups or applications that are just launching, the “pay-as-you-go” mode might come in handy; you can write your code directly, deploy, and start. If your application is scaling, you might need to reserve block space to handle potential heavy use. At the same time, these applications also need to consider their operational frequency, like how often they transact or operate. So, Polkadot is working hard to develop an appropriate market model that can meet these needs.
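The two extremes can be made concrete with a toy cost comparison: a long-term bulk reservation (fixed cost per slot, block every slot) versus buying only the blocks you produce at a spot price. All prices below are hypothetical illustration values, not real Polkadot numbers.

```rust
// Toy comparison of the two allocation extremes: a long-term reservation
// versus on-demand, pay-as-you-go blocks. Prices are invented.

/// Cost of reserving a core outright for `slots` slots.
fn bulk_cost(slots: u64, price_per_slot_reserved: u64) -> u64 {
    slots * price_per_slot_reserved
}

/// Cost of buying only the blocks you actually produce on demand.
fn on_demand_cost(blocks_produced: u64, spot_price_per_block: u64) -> u64 {
    blocks_produced * spot_price_per_block
}

fn main() {
    let slots = 1_000;      // slots in the period under consideration
    let reserved_price = 2; // hypothetical bulk price per slot (discounted)
    let spot_price = 5;     // hypothetical on-demand price per block

    // A low-frequency app producing a block in 10% of slots:
    let light = on_demand_cost(slots / 10, spot_price);
    let bulk = bulk_cost(slots, reserved_price);
    assert!(light < bulk); // intermittent apps are better off on demand

    // A high-frequency app producing a block almost every slot:
    let heavy = on_demand_cost(slots, spot_price);
    assert!(bulk < heavy); // sustained load is better off reserving

    println!("light: {light}, bulk: {bulk}, heavy: {heavy}");
}
```

The crossover point between the two regimes is exactly the “distribution of block space needs” Rob describes: where an application sits on that curve determines which model is cheaper for it.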
Steve: We currently have the slot-based model, and we will introduce the pay-as-you-go model. One day in the future, we might combine these two models to meet specific needs. So, what are your thoughts and use cases for such a model?
Rob: The primary use case is to increase throughput. There are some hybrid models, but I believe it’s mainly about performance improvement. If you look at the usage patterns of applications on the Internet, they usually have some continuous low-frequency use and then experience intermittent peaks. This is the foundation of the whole cloud business model, with reserved instances and spot instances. You have some reserved instances for your app to handle continuous, expected loads, and then suddenly your website hits the front page of Reddit, with a million users to handle every minute, and you need to scale up. This is where spot instances come in, allowing for quick scaling during demand surges.
That’s the basic idea behind mixing these models. You shouldn’t be paying for block space you don’t need and aren’t using, but you should always have the capability to scale up and improve performance. I think this should be attractive for developers. When you’re launching an ecosystem, a developer network, this mix of different allocation models can scale appropriately at a reasonable cost.
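The reserved-plus-spot mix Rob describes can be sketched as a simple scheduler: baseline demand is served from a small reservation, and only the overflow during a spike is bought on demand. The demand trace and capacity figures below are invented for illustration.

```rust
// Sketch of the "reserved instances plus spot instances" mix applied to
// block space: baseline load uses a reservation; spikes buy on demand.

/// Given per-slot demand (blocks needed) and a reserved capacity per slot,
/// return (reserved_blocks_used, on_demand_blocks_bought).
fn schedule(demand: &[u64], reserved_per_slot: u64) -> (u64, u64) {
    let mut reserved_used = 0;
    let mut on_demand = 0;
    for &d in demand {
        let covered = d.min(reserved_per_slot);
        reserved_used += covered;
        on_demand += d - covered; // overflow load goes to the spot market
    }
    (reserved_used, on_demand)
}

fn main() {
    // Steady baseline of 1 block per slot, with a front-page-of-Reddit spike.
    let demand = [1, 1, 1, 8, 9, 1, 1];
    let (reserved, spot) = schedule(&demand, 1);
    assert_eq!(reserved, 7); // baseline handled by the reservation
    assert_eq!(spot, 15);    // spike absorbed by on-demand purchases
    println!("reserved blocks: {reserved}, on-demand blocks: {spot}");
}
```

The design choice mirrors cloud capacity planning: you pay the (cheaper) reserved rate only for load you are confident will recur, and the (pricier) spot rate only when demand actually materializes.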
Steve: Some other blockchains are stricter in processing and allocating block space. How is this different? I see this as a way to effectively avoid network congestion, although at a cost. I’d like to hear your thoughts on this.
Rob: Indeed, most of the blockchain or crypto ecosystem focuses on a pure bidding market, pay-as-you-go, just like Ethereum or Bitcoin, where you simply pay the fee and submit the transaction. They build all their scaling technology on this single scheduling primitive. When there’s no guarantee about the future trend of market prices, that is, when prices fluctuate heavily, it’s a problem. It leads to inefficiency, with lots of scheduling work required, like nodes determining which transactions would pay the highest fees and which transaction execution paths are most likely to bring the most profit. This strategy also leads to frequent changes and updates, and if the market is too volatile, it can destabilize the network. So I think it’s essential to strike a balance between liquidity and stability during development, making things predictable and enhancing the system’s efficiency and stability.
As I’ve always said, we need three things: scalability, security, and good scheduling. If you allocate all the scaling resources to projects that aren’t using them well, no matter how much you scale, it’s pointless. Allocation efficiency is one of the most crucial things. We want to support a pay-as-you-go model, which may have highly variable, unpredictable prices and transaction intervals, but we also want to head in the bulk direction and let governance decide how much block space to allocate in bulk and how to further divide it. We aim for bulk allocation with high liquidity, increasing efficiency, reducing overall costs, and providing projects with greater stability and predictability.
Steve: I’d like to discuss two concepts, one being the general concept of flexibility and how it’s applied, and the other is how you view the little players and how it relates to the product lifecycle. How does the flexibility offered by Polkadot in terms of block space help innovators move forward? How does the allocation model help solve this problem?
Rob: It’s an essential perspective. The current model isn’t particularly ideal or effective for certain groups with specific objectives or needs, and this issue has been raised. However, what we’re discussing is the direction and vision we hope to move towards. From the standpoint of creating or deploying a product, there are different types of people wanting to build products, applications, software protocols, etc. Some genuinely enjoy building things from scratch; they don’t want to raise funds from anyone. They want to launch the product alone or with a few people, then make it profitable and sustainable, and keep expanding. Others do want to raise funds, pitch an idea, and obtain resources. We can cater to both these types of audiences. Broadly, we can call this audience “bootstrappers” — they’re not highly capitalized, don’t have many resources available, and don’t have a very mature product.
The way I envision it is that a team initially deploys a version: they upload the code to the Polkadot network and immediately start building on-demand parachain blocks. When there are enough transactions to cover the cost of an on-demand block, collators are incentivized to create one. The collator receives the transaction fees and pays for block generation. It’s a fair exchange, and it lets startups launch this way. So you can bootstrap in this manner; you do need to put in some resources, you operate more intermittently at first, and you need to build an audience, which may require additional resources and effort.
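The collator incentive above reduces to a simple rule: produce a block only when the fees in the queue cover the price of an on-demand slot. A minimal sketch, with all values hypothetical:

```rust
// Toy version of the on-demand incentive: a collator produces a block
// only when the transaction fees it would collect cover the price of an
// on-demand core slot. All numbers are hypothetical.

/// Fees collected from the pending transactions, in some base unit.
fn pending_fees(fee_per_tx: u64, tx_count: u64) -> u64 {
    fee_per_tx * tx_count
}

/// Decide whether producing a block is worthwhile for the collator.
fn should_produce(fees: u64, on_demand_price: u64) -> bool {
    fees >= on_demand_price
}

fn main() {
    let price = 100; // hypothetical on-demand block price
    // Too few transactions queued: wait and let more accumulate.
    assert!(!should_produce(pending_fees(10, 4), price));
    // Enough demand has built up: buy a slot and produce the block.
    assert!(should_produce(pending_fees(10, 12), price));
    println!("collator produces once pending fees reach {price}");
}
```

This is what makes the model “fair exchange” for a bootstrapping chain: blocks are only bought when user demand has already paid for them.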
As a blockchain application grows, it might transition from generating blocks on-demand, to reserving long-term block space, to buying block space on secondary markets, gradually increasing its reserved block space on the open market in this manner.
Steve: Other projects are trying to imitate Polkadot’s shared security, but they face an economic impact that may force them to make trade-offs. For instance, developers who want to achieve shared security in their projects might have to pay certain costs. How would this cost affect start-up teams of developers who don’t want to raise funds, and how would one handle this potential trade-off?
Rob: The main issue is that you have to pay validators in some way. Since they’re working and taking risks by staking, they need a reward. Therefore, for the system to be sustainable, there must be a mechanism to ensure that the party providing services or resources gets some return or benefit. You could think of it as “rent,” but it’s not necessarily the same as “rent-seeking” behavior. What we’ve seen is that this actually offers users a lower cost than directly paying the validators because they can re-stake and use their stake to protect many different applications simultaneously.
I’m not sure which specific example you’re referring to, but I have some thoughts in mind. For instance, it’s worth noting that Interchain Security on the Cosmos Hub has started charging a percentage of tokens from teams building on top of it. That’s a different model, and I believe it’s best not to purchase shared security by giving up application ownership. There are also other technical challenges. In general, I think it’s better to pay a specific cost than to exchange ownership for shared security.
Steve: Moving on to the next topic, about the concept of block space flexibility. In the world of Web3, if you aim to scale and engage with the ecosystem to deliver value, what are the different dimensions of flexibility? How are they applied to a single blockchain or use-case, or even a broader ecosystem?
Rob: If we’re talking about block space flexibility, it means that you can do a lot of different things with it. For instance, certain systems (especially zero-knowledge systems) are very restrictive in what you can do; as a programmer, you have very little freedom in writing code. Or, many smart contract systems restrict you to a very specific programming language or have high constraints on the cost of small operations, which doesn’t provide flexibility when crafting applications of varying complexity. This is where we talk about flexibility; you should be able to write code that feels like regular code, without awkward constraints and in a typical programming language that people are familiar with.
I think this point about flexibility opens the door for a lot more applications. While these other systems are powerful and can be used for various tasks, being able to customize data storage and formats is crucial, and Polkadot really does that. It abstracts all of these elements, allowing applications to function in a generic manner.
Steve: Who do you see as the real custodians of Polkadot block space in the future?
Rob: It’s hard to say, but brainstorming here, I believe there’s a concept of elastic expansion. As we move beyond the paradigm of “one chain owning one core,” a lot of opportunities emerge. This means allowing an application to grab as many cores as possible over a period of time to process as many transactions as it can. This suggests that an application, when faced with a heavy load, could temporarily expand to 10x, 20x, or even potentially 100x in the future. Conversely, there may be times when the number of cores a chain uses is much less than one. In essence, the blockchains of the future should be able to scale flexibly, expanding when facing heavy loads and shrinking during lighter periods.
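The elastic range Rob sketches, from “much less than one core” to tens of cores, can be modeled as a simple ratio of load to per-core capacity. The capacity figure below is invented for illustration.

```rust
// Sketch of elastic scaling beyond "one chain = one core": the number of
// cores a chain occupies tracks its load, from a fraction of a core at
// quiet times to many cores under a spike. Capacity numbers are invented.

/// Cores needed to process `load` transactions per slot when one core
/// handles `per_core` of them. Returned as a (numerator, denominator)
/// ratio so that "much less than one core" is representable too.
fn cores_needed(load: u64, per_core: u64) -> (u64, u64) {
    (load, per_core) // load / per_core cores, left as an exact ratio
}

fn main() {
    let per_core = 1_000; // hypothetical transactions per core per slot

    // Quiet period: 100 tx/slot is a tenth of a core, i.e. share a core
    // with others or take a block only every tenth slot.
    let (n, d) = cores_needed(100, per_core);
    assert_eq!(n * 10, d); // 100/1000 = 1/10 of a core

    // Spike: 20_000 tx/slot temporarily grabs 20 cores.
    let (n, d) = cores_needed(20_000, per_core);
    assert_eq!(n / d, 20);
    println!("elastic range: 1/10 of a core up to 20 cores");
}
```

Fractional occupancy is what the “every second or third relay block” frequency splitting later in the conversation makes possible in practice.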
Perhaps we should start rethinking our understanding of blockchains, especially how applications exist on them. In the traditional thought pattern, once a blockchain is launched, it produces blocks at regular intervals and keeps running indefinitely. But now, we might consider using the cores and block space of blockchains to run applications with shorter lifecycles. Say your application needs to perform a task that’s entirely standalone. It hands this task to the blockchain, leveraging its cores and block space to execute it. Once completed, it sends a message to the original app indicating the work is done, delivers the result, and then concludes. So, we should perhaps view blockchains more like CPU models and start shifting our understanding of applications within the blockchain world.
Steve: When it comes to allocating and utilizing block space, there might emerge a market for block space, which could potentially form a new economic model for efficient utilization of block space. What are your views or thoughts on this possible future block space market?
Rob: Yes, in Gavin’s talk he discussed NFTs corresponding to scheduling instances, and we might go with a design called “block space regions,” although the name might be rebranded. It’s essentially a descriptor of future block space usage rights, detailing the start time, end time, and how frequently the owner can access the blockchain’s core. These “block space regions” can be subdivided. For instance, a six-month region can be split into two three-month regions, or a two-month and a four-month region. It can also be divided by access frequency, for instance reducing core access from every relay chain block to every second or third block.
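Both kinds of subdivision, by time and by frequency, can be sketched with a toy region type. The field names below are invented for illustration and are not the actual Polkadot Coretime types.

```rust
// Toy model of a "block space region": a claim on a core from `start` to
// `end`, taking the core on every `stride`-th relay-chain block.
// Field names are hypothetical, not real Coretime types.
#[derive(Debug, Clone, PartialEq)]
struct Region {
    start: u64,  // first relay-chain block of the claim
    end: u64,    // end of the claim (exclusive)
    stride: u64, // use the core every `stride` relay blocks (1 = every block)
}

impl Region {
    /// Split by time: one region covering [start, at), one covering [at, end).
    fn split_at(&self, at: u64) -> (Region, Region) {
        assert!(self.start < at && at < self.end);
        (
            Region { start: self.start, end: at, stride: self.stride },
            Region { start: at, end: self.end, stride: self.stride },
        )
    }

    /// Split by frequency: two interleaved claims, each half as frequent.
    fn split_frequency(&self) -> (Region, Region) {
        (
            Region { start: self.start, end: self.end, stride: self.stride * 2 },
            Region { start: self.start + self.stride, end: self.end, stride: self.stride * 2 },
        )
    }
}

fn main() {
    // A six-month region, abstracted to relay blocks 0..6000, every block.
    let region = Region { start: 0, end: 6_000, stride: 1 };

    // A two-month plus four-month time split:
    let (two, four) = region.split_at(2_000);
    assert_eq!(two.end - two.start, 2_000);
    assert_eq!(four.end - four.start, 4_000);

    // Frequency split: two owners alternating relay blocks on one core.
    let (even, odd) = region.split_frequency();
    assert_eq!(even.stride, 2);
    assert_eq!(odd.start, 1);
    println!("{:?} / {:?}", even, odd);
}
```

Because both operations return regions of the same type, splits compose: the outputs can themselves be re-split or recombined, which is what makes the secondary-market matching described next possible.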
With this capability in place, we can envision a secondary market formed by individuals within the Polkadot ecosystem, enabling order matching. Some might say, “I need this much block space,” while others might go, “I don’t need this much; I’d like to sell some.” The system can then match orders, decomposing and re-combining based on various offers and demands in the market to cater to buyers’ needs.
In the long run, we can expect a substantial inflow of block space into the system, with the secondary market ensuring this block space is allocated in the precise quantities applications require at any particular moment.
Original video link: https://www.youtube.com/watch?v=mH2ABBErpTw&t=33s
Compiled by: PolkaWorld.
PolkaWorld is a Polkadot global community founded in 2019. We have gathered more than 40,000 Polkadot enthusiasts, and have always been committed to spreading Polkadot knowledge, training Substrate developers, and supporting Polkadot/Kusama ecosystem.