You are viewing a single comment's thread from:

RE: LeoThread 2024-10-30 08:13

AI inference chips incoming, no more fab plans
In a bid to reduce its reliance on Nvidia, OpenAI initially considered developing its own chips for both training and inference and then helping to build a dozen fabs (operated by prominent foundries such as TSMC and Samsung Foundry), but high costs and long timelines made that plan impractical. Instead, OpenAI has prioritized designing custom AI inference chips with Broadcom and having them produced at TSMC. For now, OpenAI will continue using GPUs from Nvidia and AMD for training.