Global demand for computing is racing ahead of supply, especially in the artificial intelligence arena. Tatau.io is building a distributed computing platform that allows any company easy access to the world’s decentralised computing resources. Here’s why.
A few years back, it became obvious that software had stopped eating hardware — for everyday, general-purpose computing, that is. Run a modern operating system and apps, and your computers won't break a sweat even if they're not the latest and greatest models.
This was good news for harassed sysadmins and owners of computing resources, and bad news for hardware vendors.
Then artificial intelligence, machine learning, augmented reality/virtual reality and autonomous driving took off (amongst other cool things) and now the shoe’s on the other foot: the world can’t get enough computing power.
We're starting to get a handle on just how much, though. As an example, look at not-for-profit research organisation OpenAI, which is backed by tech luminaries such as Elon Musk and Peter Thiel, and counts Microsoft and Amazon Web Services among its sponsors. OpenAI analysed the compute used in the largest AI training runs and found it growing at an extraordinary rate.
That demand far outstrips the observation made by Intel co-founder Gordon Moore, who noted that the number of transistors in integrated circuits — a rough measure of how powerful they are — would double every 18 months.
So, while Moore's Law, as it became known, would have delivered roughly a twelve-fold increase in capacity since 2012, AI compute demand grew 300,000-fold.
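A back-of-envelope sketch makes the contrast concrete. Assuming a roughly six-year window from 2012 (an assumption for illustration; the article doesn't state the exact period), a 300,000-fold increase implies a doubling period of only about four months, versus Moore's 18:

```python
import math

# Assumed window: ~6 years (2012 onwards), per the article's "since 2012".
months = 6 * 12

# Observed growth in compute used by the largest AI training runs.
growth = 300_000

# Number of doublings needed to reach that growth, and the implied
# doubling period in months.
doublings = math.log2(growth)                 # ~18.2 doublings
implied_doubling_months = months / doublings  # ~4 months

print(f"AI compute doubling period: ~{implied_doubling_months:.1f} months")
print("Moore's Law doubling period: 18 months")
```

In other words, AI compute demand has been doubling more than four times faster than transistor counts.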
This is due to algorithmic improvements and optimisations over the years, of course, but also driven by specialised hardware that’s massively parallel and specific to purpose.
Video card vendors such as NVIDIA have been quick to grasp the opportunity, with graphics processing units (GPUs) that feature thousands of cores, and even AI-specific silicon tuned for training frameworks such as Google's TensorFlow.
With the right workloads, GPU-based compute resources offer significant performance advantages over traditional central processing unit (CPU) offerings.
A practical example was outlined last year by Azeem Azhar. He noted that the Advanced Driver Assistance System (ADAS) in Tesla cars had switched from a Mobileye EyeQ3 processor to an NVIDIA Drive PX 2.
The switch from the smaller Mobileye processor to a larger NVIDIA chip — with many more transistors, packed two and a half times more densely than before — led to a staggering 90-fold performance improvement.
Tesla needed that performance bump because the self-driving ADAS processes vast amounts of data from sensors in the car — and requires enormous amounts of compute capacity to figure out what to do with it.
Estimates are that self-driving cars will need around 200 tera floating-point operations per second (TFLOPS) of processing power. That's for a single car — and the world produces around 100 million light vehicles annually.
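Multiplying those two figures out gives a sense of scale. This is a rough, illustrative calculation using only the numbers above (it assumes every light vehicle eventually carries self-driving hardware, which is of course an upper bound):

```python
# Figures from the text.
tflops_per_car = 200                 # ~200 TFLOPS per self-driving car
cars_per_year = 100_000_000          # ~100 million light vehicles built annually

# Aggregate peak compute shipped per model year, in FLOPS.
fleet_tflops = tflops_per_car * cars_per_year   # 2e10 TFLOPS
fleet_flops = fleet_tflops * 1e12               # 1 TFLOPS = 1e12 FLOPS

print(f"~{fleet_flops:.0e} FLOPS per model year "
      f"(~{fleet_flops / 1e21:.0f} zettaFLOPS)")
```

That's on the order of 20 zettaFLOPS of new compute capacity rolling off production lines every year — for one application.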
Having the foresight that AI would outpace Moore's Law has paid off for NVIDIA, whose share price has skyrocketed.
It doesn't stop at GPUs, though. Intel, the world's largest CPU vendor, is betting on field-programmable gate array (FPGA) devices becoming the next big thing for AI. FPGAs essentially let you cast your software into fast, optimised hardware to accelerate application-specific tasks.
Microsoft has already hopped aboard that bandwagon and is previewing FPGA compute resources in its Azure cloud.
The world's compute demand isn't going to slow down any time soon, and Tatau.io's vision is to harness idle resources wherever they sit and make them available to those who need them.
We believe our decentralised, distributed compute platform, built on the blockchain, is the way to meet the burgeoning demand for resources that AI has brought. It provides a function-as-a-service approach that spares customers the complexity of setting up or renting servers.
Have spare compute capacity? Need compute capacity? Join Tatau.io as we race to keep up with the giant demand, and make AI and associated technologies truly work for us.