
AI has a remarkable ability to formulate thoughts and statements for us, which weakens our judgment and our ability to think critically, says media professor Petter Bae Brandtzæg.
Just three years ago, no one had heard of ChatGPT. Today, 800 million people use the technology. The speed at which AI has been deployed has broken all records and become the new normal.
Many AI researchers, Brandtzæg among them, are skeptical. AI is a technology that interferes with our ability to think, read, and write. "We can largely avoid social media, but AI is unavoidable. AI is integrated into social media, Word, online newspapers, email programs, and more. We become AI's partners whether we want to or not," says Brandtzæg.
Brandtzæg, Professor of Media Innovation at the University of Oslo, examined how AI affects us in his recently completed project, "An AI-Powered Society."
Freedom of Expression Committee ignored AI
This project was carried out in cooperation with the research institute SINTEF. It is the first of its kind in Norway to study generative AI, i.e. AI that creates content and how it impacts both users and the public.
The project was a response to the fact that the Norwegian Freedom of Expression Committee's report, released in 2022, did not sufficiently address the impact of AI on society, particularly generative AI.
“There is research showing that AI can weaken critical thinking. AI affects the way we speak, think, understand the world, and make moral judgments,” Brandtzæg says.
A few months after the Freedom of Expression Committee report, ChatGPT was launched, making his work even more relevant.
“We wanted to understand how this kind of generative AI would impact society, and specifically how it would change social structures and human relationships.”
AI-Individualism
Since the social implications of generative AI are a relatively new field that still lacks theories and concepts, the researchers have introduced the concept of "AI individualism." It builds on "networked individualism," a framework launched in the early 2000s.
At the time, that concept captured how smartphones, the internet, and social media allow people to build and coordinate social networks that extend beyond family, friends, and neighbors.
Networked individualism has shown how technology can weaken old constraints of time and place and enable flexible, personalized networks. With AI, something new happens. As AI begins to take on roles that once belonged to humans, the lines between humans and systems will also begin to blur.
“AI can also address personal, social and emotional needs,” says Brandtzæg.
With a background in psychology, he has spent years studying the relationship between humans and AI using chatbots like Replika. ChatGPT and similar social AIs can provide instant personal support for a variety of things.
“AI strengthens individualism by enabling more autonomous behavior and reducing dependence on those around us. It can increase individual autonomy, but it can also weaken community ties. Therefore, the transition to AI individualism has the potential to reshape core social structures.”
He argues that the concept of “AI individualism” provides a new perspective for understanding and explaining how AI changes social relationships. “We use it as a partner in our relationships, a collaborative partner in our work, to make decisions,” says Brandtzæg.
Students choose chatbots
The project is based on several studies, including an open-ended survey of 166 high school students about how they use AI.
"They (ChatGPT and MyAI) give us straight-to-the-point answers to what we're asking, so we don't have to search endlessly in books or online," one high school student said about the benefits of AI.
"ChatGPT helps me solve problems. I can open up about difficult things and get comfort and good advice," said another student.
Another study, a blind online experiment, found that when people have mental health-related questions, more prefer getting answers from chatbots than from experts. More than half preferred the chatbots' answers, fewer than 20% preferred the experts' answers, and 30% rated both equally.
“This shows how powerful this technology is and that we may prefer AI-generated content over human-generated content,” Brandtzæg says.
“Model power”
"Model power" is another concept they have put forward. It builds on the theory of power relations developed by sociologist Stein Bråten 50 years ago.
According to the article "Modellmakt og styring" ("Model power and governance," in the Norwegian online newspaper Panorama), model power is the influence gained by possessing an authoritative model of reality, which others must accept because they lack an equally powerful model of their own.
In the 1970s, the theory described how the media, science, and various groups with authority influenced people through model power. Now it is AI.
Brandtzæg argues that AI-generated content no longer stands on its own. It is everywhere: public reports, news media, research, encyclopedias. Run a Google search, and the first thing you get is an AI-generated overview.
"There's a sort of AI layer covering everything. We suggest that the model power of social AI could lead to a model monopoly and have a profound impact on human beliefs and behavior."
AI models like ChatGPT are conversation-based, which is why they are called social AI. But how authentic are our interactions with machines fed vast amounts of text?
“Social AI fosters the illusion of genuine conversation and independence, a pseudo-autonomy through pseudo-dialogue,” says Brandtzæg.
Critical, but still following AI's advice
According to a survey conducted by the Norwegian Communications Authority (Nkom) in August 2025, 91% of Norwegians are concerned about the spread of disinformation from AI services such as Copilot, ChatGPT, and Gemini.
AI can hallucinate. A well-known example is a report used by the city of Tromsø as the basis for a proposal to close eight schools, which turned out to rest on sources fabricated by AI. AI can thus spread misinformation and undermine trust in AI, service providers, and public authorities alike.
Brandtzæg asked how many other small municipalities and public institutions were doing similar things and said he was concerned about this unintentional spread of misinformation.
He and his fellow researchers have reviewed a range of studies showing that we follow AI's advice even though we like to say we are critical, which underscores the model power of such AI systems.
“It’s probably not surprising that we follow the advice we get. It’s the first time in history that we’re talking to some kind of omnipotent being that we read so much about. But it gives us a terrifying example of power. We believe we’re having a dialogue, and we believe it’s a collaboration, but it’s a one-way communication.”
American monoculture
Another aspect of the power of this model is that the AI companies are based in the United States and built on vast amounts of American data.
"We estimate that Norwegian makes up only 0.1% of AI models like ChatGPT. This means we are fed American information that can influence our values, norms, and decisions."
What does this mean for diversity? The principle is “winner takes all.” AI does not consider minority interests. Brandtzæg points out that never before has the world faced such intrusive technology, which needs to be regulated and balanced with real human needs and values.
"We have to remember that AI is not a public, democratic project. It is a commercial one, with a few American companies and billionaires behind it," Brandtzæg said.
Further information: Marita Skjuve et al, Unge og helseinformasjon, Tidsskrift for velferdsforskning (2025). DOI: 10.18261/tfv.27.4.2
Petter Bae Brandtzaeg et al., AI Individualism, Oxford Intersection: AI in Society (2025). DOI: 10.1093/9780198945215.003.0099
Provided by University of Oslo
Citation: Media professor says AI's superior thought-forming ability weakens ability to think critically (November 16, 2025). Retrieved November 16, 2025 from https://techxplore.com/news/2025-11-media-professor-ai-superior-ability.html
This document is subject to copyright. No part may be reproduced without written permission, except in fair dealing for personal study or research purposes. Content is provided for informational purposes only.
