
Credit: Liz Zonarich/Harvard Staff
Imagine an ant wandering across the sand, tracing a path that happens to resemble Winston Churchill. Did the ant create an image of the former British prime minister? According to the late Harvard philosopher Hilary Putnam, most people would say no: to depict Churchill, the ant would have to know something about Churchill, not just lines and sand.
The thought experiment has taken on new relevance in the age of generative AI. As artificial intelligence companies release ever more capable models, touted for their ability to research, create, and analyze, the meaning behind those verbs is becoming slippery. What does it really mean to think, to understand, to know? The answers carry major implications for how we use AI, yet they remain unsettled even among the people who study intelligence.
“When we see how they can talk like humans, do so many tasks like humans, write proofs and rhymes, it’s very natural for us to think they have a mental model of the world the way humans do,” said Keyon Vafa, a postdoctoral fellow at the Harvard Data Science Initiative. “We’re taking steps, as a field, to try to understand: What does it even mean to understand something? There’s definitely no consensus.”
In human cognition, expressing a thought usually means understanding it, said Cheryl Chen, a senior lecturer on philosophy. Someone who says “it’s raining” knows what weather is, has felt rain on their skin, and has perhaps experienced the frustration of forgetting to pack an umbrella. “For genuine understanding, ChatGPT would need to be embedded in the world in ways that it isn’t,” Chen said.
Still, today’s artificial intelligence systems can be awfully convincing. Large language models and other types of machine learning are built on neural networks: computational models that pass information through layers of artificial neurons loosely modeled on the human brain.
“Within a neural network there are numbers we call weights,” said Stratos Idreos, Gordon McKay Professor of Computer Science at SEAS. “Those numbers start out random. Data is fed through the system, mathematical operations are performed based on the weights, and a result comes out.”
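As a rough illustration of that description, here is a minimal sketch in Python using NumPy. The layer sizes, random seed, and input are illustrative assumptions, not details from the article:

import numpy as np

# A tiny neural network: the weights start out random, data passes through,
# and mathematical operations on those weights produce a result.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 8))    # weights of the first layer (random to start)
W2 = rng.normal(size=(8, 1))    # weights of the second layer

def forward(x):
    # Combine the input with the weights, apply a nonlinearity, combine again.
    hidden = np.tanh(x @ W1)
    return hidden @ W2

x = rng.normal(size=(1, 4))     # an example input with 4 features
print(forward(x))               # the network's output, computed purely from weights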
He gave the example of an AI trained to identify tumors in medical imaging. The model is fed hundreds of images known to contain tumors and hundreds known not to. Based on that information, can it correctly determine whether a new image contains a tumor? When its answers are wrong, the system gets more data, the weights are adjusted, and it slowly converges on the correct outputs. Eventually it may even spot tumors that a doctor would miss.
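The training loop described here can be sketched schematically. In the following Python example, the image features and labels are random placeholders rather than anything from the article; the point is only to show weights being nudged until predictions match the labels:

import numpy as np

# Schematic supervised training: labeled examples (tumor / no tumor),
# predictions compared to the labels, weights adjusted to reduce the error.
rng = np.random.default_rng(1)
n_features = 16
X = rng.normal(size=(200, n_features))           # placeholder image features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)  # placeholder labels: 1 = tumor, 0 = none

w = rng.normal(size=n_features)                  # weights start out random
learning_rate = 0.1

for step in range(500):
    preds = 1.0 / (1.0 + np.exp(-(X @ w)))       # predicted probability of "tumor"
    grad = X.T @ (preds - y) / len(y)            # how wrong each weight made us
    w -= learning_rate * grad                    # tinker with the weights

final_preds = 1.0 / (1.0 + np.exp(-(X @ w)))
print("training accuracy:", ((final_preds > 0.5) == y).mean())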
Vafa has devoted much of his research to putting AI through its paces, probing both what a model actually understands and how we can know for sure. His yardstick is whether the model can reliably demonstrate a world model: a stable yet flexible internal representation of how things work that lets it generalize and reason even in unfamiliar situations.
Sometimes, Vafa said, the answer certainly seems to be yes.
Ask a large language model a question it has almost certainly never seen before, such as what order to stack a marble, an inflatable beach ball, a stovetop pot, and a blade of grass, and it will usually give a sensible answer. That suggests the model has an effective world model, in this case of the laws of physics.
Too often, though, Vafa argues, apparent world models fall apart under deeper testing. In earlier research, he and colleagues trained an AI model on turn-by-turn driving directions around Manhattan and asked it to find routes between different points. Ninety-nine percent of the time, the model spat out accurate directions. But when the team tried to assemble a coherent map of Manhattan from its output, they found the model had invented streets, leapt across Central Park, and cut diagonally across the city’s famously right-angled grid.
“If it turns right, you get one map of Manhattan; if it turns left, you get a completely different map of Manhattan,” he said. “Those two maps should be consistent, but the AI is basically reconstructing the map with every turn. There was no real concept of Manhattan.”
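The kind of inconsistency the team found can be illustrated with a toy example. The routes and street names below are made up for illustration and are not the researchers’ data or code; the point is only that if the same intersection and heading lead to different places on different routes, no single coherent map can explain the model’s output:

from collections import defaultdict

# Each step says: from this intersection, heading this way, you arrive there.
# If one (intersection, heading) pair leads to two different places across
# routes, the routes cannot have come from a single coherent map.
route_a = [("5th Ave & 57th St", "north", "5th Ave & 58th St"),
           ("5th Ave & 58th St", "north", "5th Ave & 59th St")]
route_b = [("5th Ave & 57th St", "north", "Madison Ave & 60th St")]  # contradicts route_a

implied_map = defaultdict(set)
for route in (route_a, route_b):
    for start, heading, end in route:
        implied_map[(start, heading)].add(end)

inconsistencies = {key: ends for key, ends in implied_map.items() if len(ends) > 1}
print("inconsistent steps:", inconsistencies)  # non-empty means no single map fits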
Rather than working from a stable understanding of reality, he argues, the AI had memorized countless piecemeal rules and applied them as best it could.
OpenAI CEO Sam Altman has said we will reach AGI, artificial general intelligence that can perform any cognitive task a human can, “relatively quickly.” Vafa is holding out for more elusive evidence: AI that definitively demonstrates a consistent world model, something closer to real understanding.
“I think one of the biggest challenges of reaching AGI is that it’s not clear how to define it,” Vafa said. “That’s why it’s important to find ways to measure how well AI systems ‘understand,’ or how good their world models are. It’s hard to imagine any notion of AGI that doesn’t include a good world model. Current LLMs’ world models fall short, but once we know how to measure their quality, we can make progress toward improving them.”
Idreos’ team at the Data Systems Laboratory is developing more efficient methods that let AI process more data and reason more rigorously. He foresees a future in which specialized, custom-built models help solve important problems, such as identifying treatments for rare diseases, even if the models don’t know what a disease is. Whether or not that counts as understanding, Idreos said, it certainly counts as useful.
Provided by Harvard University
This story is published courtesy of the Harvard Gazette, Harvard University’s official newspaper. For additional university news, visit Harvard.edu.
Citation: Does AI understand? (July 17, 2025). Retrieved July 18, 2025 from https://techxplore.com/news/2025-07-ai.html
This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without written permission. The content is provided for information purposes only.