Seeking to close the gap with competitors in generative AI, Meta is committing billions of dollars to its AI efforts. A significant portion of that investment goes toward recruiting AI researchers, while another substantial sum funds hardware development — in particular, chips tailored to run Meta's AI workloads.
Today, Meta unveiled its latest chip, the "next-gen" Meta Training and Inference Accelerator (MTIA), just a day after Intel announced its newest AI accelerator hardware. Built on a 5nm process, the next-gen MTIA improves on the original MTIA in physical size, number of processing cores, power draw, internal memory, and clock speed. Meta says the chip, already running in 16 of its data center regions, delivers up to three times the performance of its predecessor across a range of models.
However, Meta's deployment plans raise eyebrows for a couple of reasons. First, the company says it is not currently using the next-gen MTIA for generative AI training, though it acknowledges it is exploring that possibility. Second, Meta concedes that the chip will not replace GPUs but will complement them.
Reading between the lines, Meta is moving slowly — perhaps more slowly than it would like. Projected to spend an estimated $18 billion on GPUs by the end of 2024, the company has strong incentive to develop in-house hardware and cut those costs. Yet while Meta's efforts advance, competitors are pulling ahead, as illustrated by Google's latest chip releases and Amazon's established families of custom AI chips.
Meta's accelerated timeline from first silicon to production models is commendable, but the company still has a long way to go before it can reduce its reliance on third-party GPUs and stay competitive in the fast-evolving AI hardware landscape.