Meta Platforms announced the second generation of its Meta Training and Inference Accelerator (MTIA) AI chip as part of a plan to wean itself off more costly semiconductors from Nvidia and other vendors.

The company stated that by controlling its own stack and using domain-specific silicon, it could achieve greater efficiency than with off-the-shelf GPUs.

It stated the latest MTIA chip more than doubles the compute and memory bandwidth of its predecessor, “while maintaining our close tie-in to our workloads”.

The company noted the chip’s architecture is focused on providing “the right balance of compute, memory bandwidth and memory capacity for serving ranking and recommendation models”.

Meta Platforms deployed MTIA in its data centres and stated the chip is now serving models in production.

Results to date show the MTIA chip can handle both low- and high-complexity ranking and recommendation models, which Meta Platforms stated are key components of its products.

“We are already seeing the positive results of this programme as it’s allowing us to dedicate and invest in more compute power for our more intensive AI workloads,” it stated.

The company explained it currently has several programmes underway to expand the scope of MTIA, including support for generative AI workloads.

On the company’s Q4 2023 earnings call in February, Meta Platforms CEO Mark Zuckerberg stated the company would buy about 350,000 of Nvidia’s Hopper H100 chips by end-2024 as part of a plan to advance its compute infrastructure. Bloomberg reported the chips cost tens of thousands of dollars each.