Rumors have been circulating that Meta is considering deploying Google TPU chips as soon as 2027. The news has drawn attention across the industry, particularly in light of Tim Arcuri's analysis at UBS Equity Research. While some view the move as a game-changer for Meta, it is worth weighing what such a decision would actually mean for Google, for Meta, and for the broader AI computing landscape.

Firstly, it's worth noting that Tim Arcuri has long argued that application-specific integrated circuits (ASICs) will only capture around 30% of the accelerated compute market in the long run, leaving the majority of the market to merchant GPUs. That framing matters for Google: selling TPU capacity directly to customers who would otherwise rent it risks displacing the very workloads Google hosts today. Any effort to broaden the TPU ecosystem must therefore be structured carefully to avoid cannibalizing Google Cloud Platform (GCP) revenues.

In this context, Meta and Apple are prime candidates for internal TPU capacity. Both companies run extensive AI efforts against their own internal workloads, operate vast internal AI fleets, and rely relatively little on GCP, which limits the risk to GCP's AI cloud revenues. Furthermore, while TPUs are optimized for the JAX framework, they also support PyTorch through the PyTorch/XLA package, making integration with Meta's PyTorch-based stack relatively straightforward, as sketched below.
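To illustrate what that integration path looks like, here is a minimal sketch of running an ordinary PyTorch module on a TPU via PyTorch/XLA. It assumes the torch_xla package is installed on a TPU host; the Linear layer and tensor shapes are placeholders for illustration, not anything from Meta's actual stack:

```python
import torch
import torch_xla.core.xla_model as xm

# Resolve the attached TPU core as a standard torch device handle.
device = xm.xla_device()

# Any ordinary PyTorch module can be moved onto the XLA device;
# this Linear layer is just a stand-in workload.
model = torch.nn.Linear(128, 64).to(device)
inputs = torch.randn(8, 128, device=device)

outputs = model(inputs)

# XLA records operations lazily; mark_step() cuts the traced graph,
# compiles it, and executes it on the TPU.
xm.mark_step()

print(outputs.shape)  # torch.Size([8, 64])
```

The point of the sketch is that a CUDA-era training loop largely carries over: the main changes are the device handle and the explicit lazy-execution step, which is what makes the PyTorch/XLA path plausible for a PyTorch-centric shop like Meta.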

However, any decision to stand up a TPU deployment would need to be weighed against the resources already committed to Meta's internal ASIC (MTIA) and AMD ramps. This is no small consideration: engineering effort and capital spent standing up a third accelerator platform is effort and capital not spent on the other two, with real consequences for the company's overall AI computing strategy.