A new “reasoning” model has recently been launched. What is impressive about it is that it can compete with o1, the reasoning model developed by OpenAI.
It was created and developed by the Alibaba Qwen team, and it is called QwQ-32B-Preview. The new AI model contains about 32.5 billion parameters and can handle prompts up to roughly 32,000 words in length. QwQ-32B-Preview appears to outperform o1-mini and o1-preview, the only reasoning models OpenAI has released so far.
It’s worth noting that, in general, models with more parameters tend to perform better than those with fewer. For now, however, OpenAI has declined to disclose how many parameters its reasoning models contain.
As for benchmarks, Alibaba’s QwQ-32B-Preview manages to surpass OpenAI’s o1 models on both the AIME and MATH tests. AIME uses other AI models to evaluate a model’s performance, while MATH is a collection of word problems.
These tests showed that the new Alibaba reasoning model can solve logic puzzles and answer difficult math questions thanks to its reasoning capabilities. But like other AI models, it is not quite perfect yet: it is still only available as a preview version, and it can sometimes get stuck in a loop or suddenly switch languages.
Currently, QwQ-32B-Preview is available to download from the AI development platform Hugging Face. It is “openly” accessible under the Apache 2.0 License, which allows commercial use. However, Alibaba has released only selected components of the model, in order to prevent its AI technology from being fully replicated.
Reasoning models have recently gained attention amid growing concern that “scaling laws” are hitting their limits. Reports suggest that models from companies such as OpenAI, Anthropic, and Google are no longer improving as rapidly as before.
This has spurred new AI strategies, including test-time compute (also called inference compute), which gives models extra processing time on tasks. This approach powers reasoning models such as QwQ-32B-Preview and OpenAI’s o1.
By
Adam Brown
•
November 28, 2024 9:00 AM