Alphabet Inc.’s Google revealed new information on Tuesday about the supercomputers it uses to train its artificial intelligence models.
According to the company, the systems are both faster and more power-efficient than comparable systems from Nvidia Corp.
Google designed its own custom chip, known as the Tensor Processing Unit, or TPU.
It uses those chips for more than 90% of its artificial intelligence training, the process of feeding data through models to make them useful for tasks such as responding to queries with human-like text or generating images.
The Google TPU is now in its fourth generation. Google published a scientific paper on Tuesday detailing how it strung together more than 4,000 of the chips into a supercomputer, using its own custom-developed optical switches to link individual machines.
Because the so-called large language models that power technologies like Google’s Bard and OpenAI’s ChatGPT have exploded in size, making them far too large to store on a single chip, improving these connections has become a key point of competition among companies that build AI supercomputers.
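To see why those connections matter, consider a toy sketch (not Google’s actual software stack, and with made-up sizes): a weight matrix that is too large for one chip is split column-wise across several simulated chips, each of which computes only its own partial result. The partial outputs must then be gathered back together, and it is that gathering step that depends on fast chip-to-chip links.

```python
import numpy as np

# Toy illustration only: simulate sharding one "full" weight matrix W
# across several chips. Each chip holds just its slice of W and computes
# its slice of the output independently; the slices are then concatenated,
# which in a real system happens over the chip-to-chip interconnect.
rng = np.random.default_rng(0)
n_chips = 4
x = rng.standard_normal((2, 8))         # a small input batch
W = rng.standard_normal((8, 16))        # the "full" weight matrix

shards = np.split(W, n_chips, axis=1)   # one column-slice per chip
partials = [x @ shard for shard in shards]    # computed in parallel
y_sharded = np.concatenate(partials, axis=1)  # gathered over the network

y_full = x @ W                          # single-chip reference result
print(np.allclose(y_sharded, y_full))   # → True
```

The sharded computation reproduces the single-chip result exactly; at the scale of real language models, the cost of the gather step is dominated by the network, which is why switch and interconnect design is a competitive battleground.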
Google says its supercomputers make it easy to reconfigure connections between chips on the fly, helping engineers route around failed components and tune the system for better performance.
According to the paper, Google’s chips are up to 1.7 times faster and 1.9 times more power-efficient than a system built on Nvidia’s A100 chip, which was on the market at the same time as the fourth-generation TPU.
Google stated that it did not compare its fourth-generation chip to Nvidia’s current flagship H100 chip because the H100 was introduced after Google’s chip and uses newer technology.