At the conference, Jensen Huang was also asked how he sees the AI shift playing out and how Nvidia's older chip models would work alongside newer technology.
In his response, he talked about o1 and how test-time scaling could play a bigger role in Nvidia's business moving forward, calling it "one of the most exciting developments" and "a new scaling law." In other words, the approach behind o1 could give the AI industry a new path to improvement.
Even though recent reports suggest that progress in AI models has slowed, Huang said that models are still improving as more data is added to the pretraining phase.
Anthropic CEO Dario Amodei also said at the conference on Wednesday that he does not see a slowdown in AI development.
"Foundation model pretraining scaling is intact and it's continuing," said Huang on Wednesday. "As you know, this is an empirical law, not a fundamental physical law, but the evidence is that it continues to scale. What we're learning, however, is that it's not enough," as reported by TechCrunch.
That was exactly what every Nvidia investor was longing to hear at the conference, since Andreessen Horowitz and other industry figures have argued that existing methods are starting to lose effectiveness, and Nvidia's computing workloads remain concentrated around the pretraining phase.
"Our hopes and dreams are that someday, the world does a ton of inference, and that's when AI has really succeeded," said Huang. "Everybody knows that if they innovate on top of CUDA and Nvidia's architecture, they can innovate more quickly, and they know that everything should work."