New custom silicon strategy aims to boost efficiency and strengthen cloud growth
Alphabet (GOOG) is reportedly deepening its push into custom artificial intelligence hardware through new discussions with Marvell Technology (MRVL). According to a report, the two companies are exploring the development of two advanced chips designed to improve the efficiency and performance of AI workloads.
One of the proposed chips is a memory processing unit that would integrate closely with Google’s existing tensor processing units (TPUs), helping optimize how data is handled during complex AI computations. The second chip under consideration is a next-generation TPU built specifically for running AI models more efficiently, reflecting Google’s broader strategy to tailor its infrastructure for artificial intelligence at scale.
This initiative underscores Google’s ongoing effort to position its TPUs as a credible alternative to GPUs, a market Nvidia has long dominated. As demand for AI computing continues to surge, controlling more of its hardware stack could allow Google to improve performance while reducing reliance on third-party suppliers.
Custom silicon has also become increasingly important for Google’s cloud business, where TPU adoption is emerging as a key driver of growth. By offering specialized AI hardware, the company aims to attract enterprise customers seeking both performance and cost efficiency.
While details remain limited and timelines are still being finalized, the report suggests that design work on the memory processing unit could be completed as early as next year before moving into test production. If successful, the collaboration could mark another step in reshaping the competitive landscape of AI infrastructure.