
Color association distributions for "snow" (top) and "hatred" (center) in "normal" (left) and "negated" (right) conditions, as well as for the pseudoword "boodoma" (bottom left) and its counterpart "brodoma" (bottom right). Color associations for each group are ordered from the most common (bottom) to the least common (top), with the bar segment colors reflecting the response. Note that the survey questions used color terms rather than visual color patches. Each bar segment above 5% is labeled with the corresponding color term. Credit: Cognitive Science (2025). DOI: 10.1111/cogs.70083
ChatGPT works by analyzing huge amounts of text, identifying patterns, and synthesizing them to generate responses to user prompts. Color metaphors such as "feeling blue" and "seeing red" are common throughout English, and they therefore form part of the dataset on which ChatGPT is trained.
But while ChatGPT "reads" billions of words about what it means to feel blue or see red, it has never actually seen a blue sky or a red apple the way a human has. This raises a question: does embodied experience, the human visual system's ability to perceive color, allow people to understand color language in ways that ChatGPT cannot? Or is language alone enough, for both AI and humans, to make sense of color metaphors?
New findings published in Cognitive Science by Professor Lisa Aziz-Zadeh and a team of university and industry researchers provide some insight into these questions, and raise new ones.
"ChatGPT uses a huge amount of linguistic data to calculate probabilities and generate highly human-like responses," says Aziz-Zadeh, the study's senior author. "What we were interested in exploring is whether that is still a second-hand form of knowledge compared to human knowledge grounded in first-hand experience."
Aziz-Zadeh is the director of the Center for the Neuroscience of Embodied Cognition at USC and holds a joint appointment at the USC Dornsife Brain and Creativity Institute. Her lab uses brain imaging techniques to examine how neuroanatomy and neurocognition support higher-order skills such as language, thought, emotion, empathy, and social communication.
The interdisciplinary team behind the study included psychologists, neuroscientists, social scientists, computer scientists, and an astrophysicist from institutions including the Université de Montréal, Google, the University of California San Diego, Stanford University, and University College London.
ChatGPT understood "a very pink party" better than "a burgundy meeting"
The research team conducted a large online survey comparing four groups of participants: ChatGPT, color-seeing adults, colorblind adults, and painters who regularly work with color pigments. Each group was asked to assign colors to abstract words such as "physics." The groups were also asked to interpret familiar color metaphors ("they were on red alert") and unfamiliar ones ("it was a very pink party"), and to explain their reasoning.
The results suggest that color-seeing and colorblind humans have surprisingly similar color associations, and that, contrary to the researchers' hypothesis, visual perception is not necessarily a requirement for metaphorical understanding. Painters, however, were markedly better at correctly interpreting novel color metaphors, suggesting that hands-on experience working with color unlocks deeper conceptual representations in language.
ChatGPT also produced highly consistent color associations, and when asked to explain its reasoning, it often referred to the emotional and cultural associations of various colors. To explain the pink party metaphor, for example, ChatGPT replied that "pink is often associated with happiness, love, and kindness."
However, ChatGPT drew on embodied explanations far less often than humans did. It also stumbled more often when prompted to interpret novel metaphors ("the meeting made him burgundy") or inverted color associations ("the opposite of green").
As AI continues to evolve, research like this highlights the limitations of language-only models in capturing the full scope of human understanding. Future work may investigate whether integrating sensory inputs, such as visual or tactile data, can bring AI models closer to approximating human cognition.
"This project shows that there is still a difference between mimicking semantic patterns and the spectrum of human capacities that draw on embodied, hands-on experience in our reasoning," Aziz-Zadeh said.
More information: Ethan O. Nadler et al, Statistical or Embodied? Comparing Colorseeing, Colorblind, Painters, and Large Language Models in Their Processing of Color Metaphors, Cognitive Science (2025). DOI: 10.1111/cogs.70083
Provided by the University of Southern California
Citation: Can ChatGPT actually "see" red? (2025, July 8) retrieved from https://techxplore.com/news/2025-07-chatgpt-red-results-nuction.html
This document is subject to copyright. Apart from fair dealing for the purpose of private study or research, no part may be reproduced without written permission. The content is provided for information purposes only.
