Andrew Fraser
Jan 30 · 3 min read

Artificial Intelligence, or AI, is changing everything: from improving the diagnosis and treatment of medical conditions to shaping the information we see in our Facebook feeds. According to my old friends at McKinsey, it is predicted to add $13 trillion to global economic output by 2030. Governments and start-ups alike are scrambling to position themselves to enjoy the economic benefits AI will bring. Despite the obvious potential, however, one big bottleneck is slowing everything down: the supply of computational power required to develop and run AI products and solutions. The current leading cloud-based providers of computing resources simply aren't going to cut it in the compute-hungry AI age.

Centralised data centres won’t keep up with demand

AI relies on data, and lots of it. As a result, the computational power required to drive AI is immense and growing. Until now, the only alternative to selling the family home to buy your own hardware has been the $247 billion cloud computing industry, which funnels everything through massive centralised data centres operated primarily by just four corporations: Amazon, Microsoft, Google, and IBM.

That model has worked incredibly well for functions such as web search, social networking, and media streaming, but AI requires significantly more computational power. In an attempt to meet that demand, data centres are growing at a rate of knots, with investment expected to rise 12–14% annually over the next five years. However, with market demand for AI computing doubling every 3.5 months, supply simply isn't keeping up, and the resulting bottleneck will continue to slow progress.
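To get a feel for the scale of that mismatch, here is a back-of-the-envelope sketch comparing the two growth rates quoted above. It treats investment growth as a rough proxy for capacity growth, which is an assumption on my part:

```python
# Rough comparison of AI compute demand growth vs. data centre supply
# growth, using the figures quoted above (illustrative arithmetic only).

DOUBLING_PERIOD_MONTHS = 3.5   # demand for AI computing doubles every 3.5 months
SUPPLY_GROWTH_LOW = 0.12       # annual data centre investment growth, low estimate
SUPPLY_GROWTH_HIGH = 0.14      # high estimate

# Demand multiplier over one year: 2 raised to the number of doublings.
doublings_per_year = 12 / DOUBLING_PERIOD_MONTHS
demand_multiplier = 2 ** doublings_per_year

print(f"Demand grows ~{demand_multiplier:.1f}x per year")
print(f"Supply grows ~{1 + SUPPLY_GROWTH_LOW:.2f}x to {1 + SUPPLY_GROWTH_HIGH:.2f}x per year")
```

On those numbers, demand would grow roughly 10x per year while supply grows barely more than 1.1x, which is why the bottleneck keeps widening rather than closing.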

The cost model is prohibitively expensive

One lever centralised data centres are using to stem demand is cost, and those costs are rising every quarter. Because AI services require massive computational power before they can go to market, the rising cost of computing is stifling innovation.

I was recently talking to an AI start-up in Australia that typified this situation. Their revenues had grown 6x over the last year, but their computing costs had increased 10x, putting real pressure on their ability to grow further. What should have been a success story was far from it, as the start-up was having to trade customer needs off against compute costs. We're now looking to bring them on board as our first Australian customer to help them unleash innovation. Watch this space!

Data centres are unkind to the environment

The only way to meet demand is to build more data centres. However, more data centres mean more electricity-hungry machines. The data centre industry is reported to account for 2% of all CO2 emissions globally, more than the airline industry.

The US Department of Energy has reported that data centres account for around 2% of the country's overall energy consumption. While operators are investigating green energy alternatives, the fact remains that more data centres will mean higher energy consumption.

One point of service equals one point of failure

Amazon famously brought down a number of large websites last year when an employee accidentally took more servers offline than intended, sparking a domino effect that was felt globally. Single points of failure naturally raise the risk that any one event will have an outsized impact, and with cloud capacity provided primarily by just four companies, that risk is ever-present. Distributed computing dramatically reduces it.

The Uber of Compute

A better alternative is already here. While some commentators have predicted a wait of five to ten years for quantum computing to cope with demand, Tatau has devised a scalable solution to the compute supply gap today. Unlike current centralised approaches to the provision of computing power, Tatau's decentralised network taps into the estimated 39% of computing capacity that sits outside centralised data centres, and the decentralised model itself mitigates the single-point-of-failure risk described above. In doing so, Tatau can provide more computing capacity at a fraction of the price of the competition, and without adding significantly to the industry's environmental footprint.


Written by Andrew Fraser, CEO & Co-Founder of Tatau, a distributed supercomputing platform.
