The Future of Computation is Sharing

When Uber enters a new market, it doesn’t go out and invest in taxis. Instead, it turns every car on the road into a potential taxi. So why is it that the tech industry, one of the most progressive sectors in the economy, is responding to the inability of fixed data centers to supply the accelerating compute requirements of the AI industry by investing in more fixed data centers?

The single largest barrier AI developers face in bringing products to market is the cost and availability of the compute required for machine learning. The $186 billion cloud computing industry provides computational capacity through massive centralized data centers operated by Amazon, Microsoft, Google and IBM. If you’re an AI developer, it’s likely you’re renting GPU capacity from one of these providers. However, with compute requirements from AI companies doubling every 3.5 months — roughly five times faster than Moore’s Law — the big four can’t build fast enough. The result: the cost of the compute needed to develop AI rises every quarter, putting ever-increasing pressure on innovation.
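To put those doubling rates in perspective, here is a rough back-of-the-envelope sketch. It assumes the article's 3.5-month figure for AI compute demand and an 18-month doubling period for Moore's Law (the "five times" comparison is the ratio of doubling periods, 18 / 3.5 ≈ 5.1); the annualized gap is far larger:

```python
# Rough illustration (figures assumed from the article): compare the annual
# growth factor implied by a given doubling period.

def annual_growth(doubling_period_months: float) -> float:
    """Growth factor over 12 months for a given doubling period."""
    return 2 ** (12 / doubling_period_months)

ai_demand = annual_growth(3.5)   # AI compute demand, doubling every 3.5 months
moores_law = annual_growth(18)   # Moore's Law, assumed 18-month doubling

print(f"AI compute demand: ~{ai_demand:.1f}x per year")
print(f"Moore's Law:       ~{moores_law:.1f}x per year")
```

Compounded over a year, demand doubling every 3.5 months grows roughly 11x, while an 18-month doubling period yields only about 1.6x — which is why supply built on hardware improvement alone falls further behind every quarter.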

The kicker is, a massive amount of latent compute capacity already exists. Even the most optimistic estimates predict that the major cloud providers represent under half of the total global compute capacity. Before the rising costs squeeze the life out of AI innovation, someone had to find a way to harness, aggregate and redistribute all that latent capacity and make it available to AI innovators. So we did.

Tatau is the Sharing Economy Solution for Compute

Significant capital has been deployed over the last few years to build GPU-based data centers for activities such as crypto-mining and gaming. That capacity is often underutilized or unprofitable from a commercial perspective. To harness that capacity, Tatau has built a global supercomputer which sources compute from this existing GPU hardware and in doing so can provide compute at a fraction of the cost of incumbent cloud-based providers.

Unlike centralized data centers, which must generate a return to justify the cost of purchasing new hardware, Tatau’s distributed platform utilizes hardware its suppliers already own. For those suppliers, it offers an alternative revenue stream and a hedge against market fluctuations, while often outperforming returns from other sources.

With AI predicted to add as much as $15.7 trillion to the global economy by 2030, it’s time to take a new look at how we supply the AI industry’s critical inputs. We can’t solve tomorrow’s problems using yesterday’s solutions and I genuinely believe it just makes sense to use latent resources that already exist rather than burning more capital and resources, not to mention thousands of gigawatts of additional electricity, to build and operate new ones. Centralized data centers aren’t going to keep up with demand on their own. But redeploying and aggregating the global supply of GPUs to provide cost-effective, reliable and, most importantly, scalable compute will go a long way to meeting the increasing requirements of the AI industry, unlocking innovation and allowing us all to benefit from the health, lifestyle and business benefits that AI will undoubtedly provide.


Andrew Fraser
CEO & Co-Founder of Tatau, a distributed supercomputing platform
