The rapid advancement of artificial intelligence (AI) technologies has piqued the interest of researchers worldwide, leading to in-depth exploration of large language models such as ChatGPT. In a recent development, a research team from Arizona State University made waves with a compelling paper published on the preprint platform arXiv, challenging our understanding of these AI models. The study suggests that these models do not actually think or reason in the way we might assume; rather, they are merely identifying statistical correlations.
The researchers argue that even though the AI models generate a series of seemingly logical intermediate steps before providing answers, this does not equate to reasoning. They emphasize that anthropomorphizing AI models could lead to a misunderstanding of their working mechanisms. The so-called "thinking" of large models is actually a process of computing correlations across data, not of understanding causal relationships.
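To make that distinction concrete, here is a deliberately simplified Python sketch, not taken from the paper, that generates text purely from word co-occurrence counts. The corpus, the `follows` table, and the `next_word` function are all illustrative inventions; real large language models are vastly more sophisticated, but the sketch shows how fluent-looking output can emerge from correlation statistics alone, with no causal model behind it.

```python
from collections import defaultdict, Counter

# Toy corpus: the "model" can only ever echo patterns seen here.
corpus = (
    "the sky is blue . the grass is green . "
    "the sky is blue . the ocean is blue ."
).split()

# Count which word follows which (a simple co-occurrence table).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(word: str) -> str:
    """Return the most frequent continuation of `word`.

    There is no model of *why* the sky is blue, only a record of
    which words tend to follow which in the training text.
    """
    counts = follows[word]
    return counts.most_common(1)[0][0] if counts else "."

# Generate a "statement" purely from co-occurrence statistics.
word, sentence = "the", ["the"]
for _ in range(4):
    word = next_word(word)
    sentence.append(word)
print(" ".join(sentence))  # prints: the sky is blue .
```

Running the snippet prints a correct-looking sentence produced entirely by counting word pairs, which is the kind of gap between plausible output and genuine understanding that the researchers are warning about.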
To support their argument, the researchers cited reasoning models like DeepSeek R1, which excel in certain tasks but do not necessarily demonstrate human-like thinking capabilities. The study indicates that there is no genuine reasoning process behind AI outputs. Therefore, viewing AI-generated intermediate outputs as reasoning processes could lead to misleading confidence in their problem-solving abilities.
This research serves as a cautionary tale in an era increasingly reliant on AI. As our understanding of the capabilities of large models deepens, future AI research may shift towards a more interpretable direction, helping users gain a clearer understanding of the actual workings of AI.