The future of artificial intelligence, according to Nvidia Corp. Chief Executive Officer Jensen Huang, will involve services that can “reason,” but computing costs must first decline for that to happen.
During a podcast with Arm Holdings Plc CEO Rene Haas, Huang explained that upcoming tools will be able to address questions by analyzing hundreds or thousands of steps and reflecting on their conclusions. This ability will allow future software to reason, setting it apart from existing systems like OpenAI’s ChatGPT, which Huang mentioned he uses on a daily basis.
Huang said Nvidia plans to drive these advances by improving chip performance two- to threefold each year while holding costs and energy consumption steady. That shift would change how AI systems handle inference, the process of identifying patterns and drawing conclusions.
“We can realize significant reductions in the cost of intelligence,” he said. “We all recognize the significance of this. If we can significantly cut costs, we could enable reasoning capabilities during inference.”
Headquartered in Santa Clara, California, Nvidia holds more than 90% of the market share for accelerator chips—processors that enhance AI performance. The company has also diversified its portfolio by offering computers, software, AI models, networking, and other services to promote broader adoption of artificial intelligence among businesses.
Nevertheless, Nvidia faces competitors seeking to erode its dominance. Leading data center operators, including Amazon.com Inc.’s AWS and Microsoft Corp., are developing in-house alternatives. Advanced Micro Devices Inc., already a rival in gaming chips, is positioning itself as a contender in AI and is expected to showcase its latest artificial intelligence products at an event on Thursday.