Intel unveiled its latest chip aimed at challenging Nvidia’s dominant position in the market for semiconductors that train and deploy large AI models.

The chip company claimed its Gaudi 3 chip is projected to deliver an average 50 per cent improvement in the time taken to train Meta Platforms’ open-source Llama 2 models compared with Nvidia’s H100 processor, launched in 2023.

Intel predicted the Gaudi 3 accelerator’s inference throughput would typically outperform the H100 by up to 50 per cent on some of the Llama and Falcon models it tested, and said its chips use less power than Nvidia’s silicon.

It stated the Gaudi 3 AI accelerator will power systems with tens of thousands of accelerators connected over Ethernet, and said the chip would “deliver a significant leap in AI training and inference for global enterprises looking to deploy GenAI at scale”.

Reuters reported Intel employed Taiwan Semiconductor Manufacturing Co’s 5nm process to build the chips.

The chip will be widely available in Q3 to system builders including Dell Technologies, HPE and Supermicro.

Last month, Nvidia announced its flagship B200 Blackwell chip, which can perform certain tasks 30 times faster than the H100.

The Wall Street Journal reported Nvidia holds an estimated 80 per cent share of the AI chip market, a position that has propelled the company to new heights over the past year.