Polkadot TPS Breaks 100K! But Wait, If the Relay Chain Has Barely Any Transactions, Does TPS Even Matter?
You’ve probably heard by now — Polkadot live-streamed a TPS test on the Kusama network yesterday! Let’s dive straight into the numbers:
🚀 TPS: 128,184
⏱️ Block time: 2 seconds (there’s more to this story — find the answer below!)
💻 Cores used: 23
🌐 Environment: The real deal — a live production network on Kusama, not some testnet!
In short: as expected, Kusama soared straight to the #1 spot in TPS rankings! 🎉
Based on yesterday’s livestream, PolkaWorld has gathered all the key highlights just for you. So why not grab a cup of coffee and dive into the goodness? ☕✨
Why Spammening? And Why Twice?
If you’ve been keeping up with Polkadot, you might know that engineers from Parity and Amforc conducted their first TPS test on the Kusama and Polkadot networks just last week! The result? A solid 82,000 TPS from batch transactions.
That figure is remarkable for Kusama, a live mainnet carrying real value, unlike a simple testnet. Plus, Kusama’s relay chain is actively validating rollups (parachains), making this kind of spamming a genuine technical challenge.
But it’s not just challenging — it’s also risky.
• Hardware limitations: Kusama’s reference hardware is typically a 4-core machine — your smartphone might be more powerful! Some validators even stretch these limited resources across multiple nodes.
• Decentralization: Kusama is a fully decentralized network, which means Parity has no control over node configurations or performance. They can’t predict or manage the machines used by validators.
This makes spammening in such an unpredictable, decentralized environment both risky and exciting. But Parity sees this as the perfect opportunity to truly test the network’s resilience and performance.
Polkadot already processes millions of real transactions — over 40 million in November alone. And with one Spammening already completed, why run another on Kusama?
Here are a few reasons:
1. Showcasing potential: Polkadot and Kusama use the same technology, capable of much more than their current usage levels. These tests demonstrate the true capabilities of these networks.
2. Gathering critical data: Engineers gain valuable insights, such as how the network performs under load. While theoretical limits are known, real-world tests reveal practical performance and existing issues that can then be fixed to improve the network.
3. Addressing misconceptions: PolkaWorld suggests this might also be a response to critics who downplay Polkadot’s activity levels. By demonstrating real TPS in a production environment, they’re silencing doubters with hard evidence.
Why the Second Spammening?
While the first Spammening achieved impressive TPS numbers, it used only 15 cores, leaving room for improvement: the network wasn’t fully saturated, and the community wasn’t involved.
For the second Spammening, Parity stepped it up:
• 23 cores were allocated for Spamming.
• The community actively participated via a live dashboard at spammening.live, submitting real transactions together.
This led to last night’s massive Spammening, with over 30,000 participants online to watch Kusama being pushed to its limits!
It was not just a technical test but a community-driven experiment to explore what Kusama — and by extension, Polkadot — can truly handle. 🚀
Polkadot’s Relay Chain Has Barely Any Transactions! So Does High TPS Even Matter?
There’s a common and significant misunderstanding: some tools display Polkadot network transactions, showing TPS on the Polkadot relay chain as 0.16 or similar. This leads some to conclude, “Polkadot barely has any transactions — it’s dead.” But in reality, the relay chain was never designed to process transactions in the first place.
Polkadot is a sharded, multi-chain architecture: the relay chain’s job is to provide security for the rollup chains, not to process user transactions. Parity even plans to reduce the number of transactions on the relay chain to zero next year. So you can’t judge Polkadot’s activity by this number alone. The rollups secured by the relay chain all live within the same network and can communicate with each other efficiently. That is the essence of Polkadot’s design: a secure, efficient, and interconnected multi-chain network.
Now, what does it mean when we say 23 cores were used in this Spammening test?
In simple terms, you can think of a core as a small, decentralized computing environment. Each rollup connected to Polkadot or Kusama has its own dedicated core for processing the data submitted by the rollups.
The relay chain’s ultimate goal is zero transactions of its own; instead, the rollups connected to its cores rely on the network’s computational resources to process transactions. In this Spammening, Parity allocated 23 cores to the 11 rollup chains registered for the event. Everyone submitted transactions like crazy on these 11 chains, saturating those 23 cores and, in effect, stress-testing Kusama itself. The result? 128,184 TPS!
But here’s the thing: Kusama currently has 100 cores. If you calculate based on this test, the entire network could theoretically handle far more TPS! And since those 23 cores weren’t even fully maxed out, the actual potential could be much higher. We’ll have to wait for Parity’s report for the details.
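To see where that headline extrapolation comes from, here is a back-of-envelope sketch. The assumption that throughput scales roughly linearly with cores is ours, not Parity’s — the article itself notes the real numbers must wait for Parity’s report:

```python
# Back-of-envelope extrapolation from the Spammening numbers.
# Assumption (ours, not Parity's): throughput scales roughly
# linearly with the number of cores.

measured_tps = 128_184   # peak TPS observed in the test
cores_used = 23          # cores allocated to the 11 rollups
total_cores = 100        # cores currently available on Kusama

tps_per_core = measured_tps / cores_used
theoretical_tps = tps_per_core * total_cores

print(f"~{tps_per_core:,.0f} TPS per core")
print(f"~{theoretical_tps:,.0f} TPS if all {total_cores} cores were saturated")
```

And since the 23 cores were reportedly not even maxed out, the per-core figure here is a lower bound, so the 100-core estimate is conservative under this linearity assumption.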
So, stop saying Polkadot has no transactions! Those clueless data platforms clearly don’t understand Polkadot’s architecture or its core principles. Every L1 or L2 tracked on those platforms is built on a single-chain architecture, which inherently lacks scalability.
As Hyperbridge scientist @seunlanlege said during the live stream:
“A single chain is already insufficient for its own users, so you can’t put all these cross-chain transactions onto one chain. That’s pure fantasy — it’s never going to work. Yet, I still see many new interoperability products relying on single-chain architecture, which obviously doesn’t scale. The only way to scale is to parallelize the process. You need to run multiple cores, just like traditional software scaling by adding more machines, workers, or nodes. Regardless of the method, the essence lies in horizontal scaling.”
So, Did Kusama Crash During the Spammening?
Kusama is a canary network, an experimental environment designed to provide a real, valuable production setting for extreme performance and functionality tests. Its motto says it all: “Expect Chaos, No Promises.”
To be honest, both the community and even Parity were secretly hoping for Kusama to crash — just for the fun of it! 😂 During yesterday’s live stream, Gavin made it clear that he wasn’t at all worried about a potential crash. After all, Kusama is an experimental network, and pushing it to its limits is part of the engineers’ joy! In fact, finding issues would be a good thing because it means they can address those issues, making the system even more robust and resilient.
But as it turns out, Kusama held up like a champ under a whopping 128,184 TPS — not a scratch! The network remained super stable, with almost no noticeable impact on other parachains. The only observable effect was a slight increase in finality time to 18 seconds under the heavy load. Parity engineers noted this increase and are already working on improvements. Beyond software optimizations, they’re also considering hardware upgrades, such as running the network on more powerful machines. Currently, validator nodes only use 4-core CPUs!
And what if Kusama had crashed?
Given the complexity of the software, bugs are always a possibility, and something could have gone wrong in practice. However, Parity engineers reassured us that there’s always a way to recover a network. Kusama has demonstrated exceptional resilience in the past — even when things go seriously wrong, it usually recovers on its own.
Why Did 11 Chains Use 23 Cores?
If you’re asking this question, it might mean you’re not too familiar with Polkadot 2.0 and its roadmap!
As highlighted in a recent tweet from Parity, Elastic Scaling has officially launched on Kusama! What is elastic scaling? It’s the final key feature of the Polkadot 2.0 roadmap, alongside Agile Coretime and asynchronous backing. Elastic scaling is a dynamic resource allocation mechanism that builds on Agile Coretime. Normally, a rollup uses one core, but with elastic scaling, a rollup chain can use multiple cores when network traffic is heavy.
In this Spammening, everyone got to experience elastic scaling in action. If you checked the dashboard at spammening.live, you might have noticed some yellow flashing dots. These represent rollup chains utilizing elastic scaling. For these chains, you’d see block times of 2 seconds, enabling them to pack more blocks into less time and significantly boost transaction throughput.
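Those 2-second block times are what a rollup claiming several cores at once looks like. A minimal sketch of the arithmetic, assuming a 6-second base block time under async backing and a hypothetical rollup claiming 3 cores:

```python
# Illustration of how elastic scaling shortens effective block time.
# Assumptions: one core gives a rollup a block every 6 seconds
# (async backing); a rollup on several cores produces proportionally
# more blocks, dividing the effective interval. The 3-core figure
# is a hypothetical example.

base_block_time = 6   # seconds per block on a single core
cores = 3             # cores claimed by one rollup (hypothetical)

effective_block_time = base_block_time / cores
blocks_per_minute = 60 / effective_block_time

print(f"effective block time: {effective_block_time:.0f} s")
print(f"blocks per minute: {blocks_per_minute:.0f} "
      f"(vs {60 // base_block_time} on a single core)")
```

A 2-second effective block time means three times as many blocks in the same window, which is exactly the throughput boost the dashboard’s flashing dots were showing.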
This is a brand-new feature, just launched on Kusama! Elastic scaling is expected to roll out on Polkadot around February 2025, marking the full launch of Polkadot 2.0! 🚀
So, Does TPS Really Matter?
Many people think TPS is easily manipulated and somewhat “pseudo-scientific,” yet it remains a key metric in the industry for comparing tech stacks. So, does TPS actually matter?
Let’s take a look at what Gavin Wood said yesterday:
“For a very complex system, most linear metrics aren’t particularly useful. TPS, for instance, dates back to the Bitcoin era. I remember around 2014, people often said, ‘Bitcoin might not be very practical because it can’t handle Visa-level transaction throughput.’ Back then, the magic number thrown around was 50K TPS. If you think of blockchains purely as payment processors and equate transactions to payments, this metric makes some sense — how many payments can it process per second?
But when you think of blockchains as global computational resources, using payments as the only measure doesn’t make much sense. Payments are just one of the many things you could use this ‘computer’ for.
It makes far more sense to think in terms of computational resources — like data bandwidth, operations per second, or storage capacity. These metrics align more closely with reality, much like how we measure traditional computers. If you have a virtual machine or a global computer, you’d probably want to measure it similarly. This is the direction we’re moving in with JAM.
That said, given the industry hasn’t evolved to embrace these advanced standards yet, if we want to compare ourselves to other chains, the most straightforward way is to adopt the commonly used metric. So for now, we’re still using TPS.”
Transactions represent an expression of intent — something about to happen on-chain. While it’s true that TPS can be manipulated (as our experiment proved through system-internal spamming rather than user-driven activity), it’s still an important indicator of on-chain activity.
We’re not trying to claim these transactions represent active users but to demonstrate that under large-scale adoption, Polkadot can handle far more than initially anticipated. The reality is that Polkadot already has significant demand from rollups like Mythical Games, Origin Trail, and Frequency, which require more cores and higher performance to operate their networks.
Back to Our Vision: What Have We Built Through Scalability?
At the end of the live stream, as the TPS hit 128,184, Parity’s Shawn shared his thoughts on scalability — a perfect way to summarize the event!
Ethereum’s design was essentially about creating a protocol for “virtual hardware.” Think of it this way: many computers around the world collaborate to form another computer. This is what Gavin often refers to as a “supercomputer.”
Our goal is to build a decentralized supercomputer. But what does that really mean?
It’s about coordinating a large number of different computers to construct a virtual computer. This isn’t a physical machine but a conceptual one capable of running programs. Early Ethereum required all the computers in the network to work together, yet the result was a virtual computer with the capacity of just one single computer, or even less, as it also had to handle proof-of-work (PoW) and other processes.
Polkadot’s design takes a different approach. It coordinates many computers across the globe to create a supercomputer with multiple cores. Each core typically requires about three computers to support it. For example, with approximately 400 validator nodes currently on Polkadot, we’ve created a virtual supercomputer with the performance of 100 individual computers.
The difference from Ethereum is clear. While Ethereum operates like a single computer, Polkadot achieves 100x the performance. Even more exciting is the potential of technologies like JAM, which could push this to 330x the capacity of a single computer.
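The comparison above can be sketched as simple arithmetic. The 400-validator and roughly-3-computers-per-core figures are the approximate numbers quoted in the talk, and JAM’s 330x is a forward-looking target, not a measurement:

```python
# Sketch of the "virtual supercomputer" arithmetic from the talk.
# All figures are approximations quoted above; JAM's 330x is a
# projected target, not a measured result.

validators = 400           # approximate Polkadot validator count cited
validators_per_core = 3    # computers said to support one core

cores_supportable = validators // validators_per_core  # rough upper bound

ethereum_multiplier = 1    # full replication: the network is at best one node
polkadot_multiplier = 100  # effective cores the quote attributes to Polkadot
jam_multiplier = 330       # projected capacity under JAM

print(f"~{cores_supportable} cores supportable by {validators} validators")
print(f"Polkadot today: ~{polkadot_multiplier}x one computer "
      f"(vs {ethereum_multiplier}x for full replication); "
      f"JAM target: ~{jam_multiplier}x")
```

The contrast is the point: under full replication, adding nodes adds security but not capacity, while splitting validators across cores turns the same hardware into a machine many times faster than any one node.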
We’ve designed a protocol that enables the creation of a virtual computer with 330 times the power of any individual machine running it. That’s an incredible leap forward — and it’s hard not to be excited about the possibilities!
More Importantly, We’re Already Meeting Web3’s Long-Term Needs — But We’re Not Stopping There.
While the current advancements in scalability are enough to meet Web3’s demands for a long time, we’ve already laid plans for even greater expansion. If we think of JAM or the Polkadot protocol as the foundation for designing a “single-computer protocol,” then Polkadot Cloud’s initial form can be seen as a “supercomputer” powering the entire cloud infrastructure.
However, other cloud platforms operate with multiple computers serving diverse users. How can we achieve the same?
In the future, I believe we’ll be able to run multiple JAM virtual machines and interconnect them. This concept, which Gav and I have discussed, is referred to as the “grid.” Multiple JAM VMs, all operating on decentralized hardware, will be bridged together to create a truly multi-computer cloud.
We’re talking about starting with 1 million TPS on JAM and scaling this to world-class — and potentially intergalactic — levels as demand grows. This technology is already awe-inspiring today, and we’ve mapped out how to scale it to meet the needs of the entire ecosystem. It’s a future that’s nothing short of thrilling!
All of these JAM computers, all of these parts of the grid, are powered by the DOT token.
As one of my favorite phrases goes: “The best scalability solution maximizes shared security while maximizing execution sharding.” This is precisely what we’ve built with Polkadot — a perfect blend of all the essential components.
Some of the guests on the livestream were fantastic, representing projects like Mythical Games, Origin Trail, and Frequency — some of the largest rollup projects in Polkadot. These rollups are now officially “deployed on the Polkadot cloud”!
But there’s also an exciting new trend coming in the next few quarters: Polkadot Hub. This Hub will serve as a gathering place for developers familiar with smart contracts, innovators who enjoy experimenting, and communities building applications. It will provide a launchpad for them to gain momentum on Polkadot, potentially progressing to cloud deployment in the future.
The vision behind the Hub is to shift our focus and innovation from platforms like Ethereum to cloud platforms capable of securely hosting other blockchains and running them seamlessly. Now, we’re finally applying this technology to host our own L1 blockchain.
We’re introducing our smart contract platform, which includes all the features you’d expect — plus the benefits of the cloud. For instance, our Hub can elastically scale as transaction volumes increase. Unlike other blockchains, which struggle with rising fees, stuck transactions, or lost data, our platform and cloud services can genuinely scale, leveraging multiple cores to deliver far higher throughput than any single blockchain.
The Hub is poised to become a place for communities to gather. It’s where DOT token holders and other parachain communities can connect. It offers secure bridges to other networks, participation in governance, voting, staking, and everything becomes programmable and open.
A fair criticism of Polkadot in the past has been that it’s too difficult to access and build on.
That’s true. Our cloud-building services have required a certain level of team and developer expertise. But the Hub changes that. The Hub enables anyone with an idea — and a small amount of DOT tokens — to deploy their contracts and ideas, connecting to the entire Polkadot community.
It’s truly gratifying to see the technology we’ve been talking about validated in action! In a real production environment, it works exactly as envisioned!
Will Polkadot Have Its Own Spammening?
Following the Kusama Spammening, many are wondering: will Polkadot host a similar event?
During the livestream, Gavin clearly stated that a similar test should be run on Polkadot! Polkadot nodes generally have more robust configurations and stronger connections, which means testing on Polkadot could yield even better results. While the data from the Kusama test serves as a reference, a Polkadot Spammening might showcase even greater capabilities.
Parity engineer Robert also mentioned that if Kusama performed well, Polkadot should perform even better. Polkadot validators typically have more powerful setups compared to their Kusama counterparts, which enables them to handle heavier loads. However, as Polkadot is a production network, such a test would require extra caution.
The Parity data team is set to release a comprehensive report next week analyzing the Kusama Spammening. This report will delve into what transpired during the event and use the insights to further improve Kusama — and, by extension, the Polkadot network.
The future of Polkadot is bright! 🌟
About PolkaWorld
PolkaWorld is a global Polkadot community founded in 2019. We have gathered more than 50,000 Polkadot enthusiasts and have always been committed to spreading Polkadot knowledge, training Substrate developers, and supporting the Polkadot/Kusama ecosystem.
From 2019 to 2021, PolkaWorld was funded by the Web3 Foundation and worked alongside them to establish the Polkadot Chinese community. After June 2021, PolkaWorld collaborated with the Polkadot Treasury to establish an Open Social Contract, successfully passing 9 motions! Following the launch of OpenGov in July 2023, PolkaWorld has continued its operations, passing 3 referendums in a row and continuing to contribute to the Polkadot community.
Twitter: @polkaworld_org
Youtube: https://www.youtube.com/c/PolkaWorld
Telegram: https://t.me/polkaworld