
Credit: Pixabay/CC0 Public Domain
Some video game players recently criticized the cover art of the new video game Little Droid, claiming it had been generated by artificial intelligence (AI). However, the cover art, which was also featured in the game's launch trailer on YouTube, was not concocted by AI. The developers maintained it was carefully designed by human artists.
Stung by the accusations of "AI slop," the studio Stamina Zero posted a video showing earlier versions of the artist's handiwork. While some accepted this evidence, others remained skeptical.
Additionally, several players felt that even if the Little Droid cover art was made by humans, it still resembled AI-generated work.
Some art is, however, intentionally designed with the glossy, futuristic look associated with image generators such as Midjourney, DALL-E, and Stable Diffusion.
It is becoming increasingly easy to pass off AI-made images, video, or audio as real or human-made. The twist in a case like Little Droid's is the reverse: human-made or "real" content can be mistakenly perceived as machine-produced.
Such cases highlight the delicate balance of trust and mistrust in the era of generative AI. In this new world, both undue distrust of what we encounter online and the ease of being fooled by it are genuine problems, and both can lead to harm.
False accusations
The problem extends well beyond gaming. There has been growing criticism of AI being used to generate music and publish it on platforms such as Spotify.
One result is that some indie music artists have been falsely accused of producing AI music, damaging their burgeoning careers as musicians.
In 2023, an Australian photographer was wrongly disqualified from a photography contest on the mistaken judgment that her entry had been created by artificial intelligence.
Writers, including students submitting essays, can likewise be falsely accused of using AI. The AI detection tools currently available are far from foolproof and are not entirely reliable.
Recent discussions have drawn attention to supposed telltale features of AI writing, such as the em dash, a punctuation mark that we authors often employ ourselves.
Given the distinctive features of text from systems like ChatGPT, writers face a difficult choice: should they keep writing in their own style and risk being accused of using AI, or try to write in a different way?
A delicate balance between trust and distrust
Graphic designers, voice actors, and many others are rightly worried that AI will replace them. They are also understandably concerned that tech companies will use their work to train AI models without consent, credit, or compensation.
There is a further ethical concern that AI-generated images threaten Indigenous communities by erasing cultural nuances and challenging Indigenous cultural and intellectual property rights.
At the same time, the cases above illustrate the risk of dismissing the effort and creativity of real human beings because of a false belief that their work is AI-made. This, too, is unfair. People mistakenly accused of using AI can suffer emotional, financial, and reputational harm.
On the one hand, being fooled by the realism of AI content is a problem. Consider deepfake videos and false images of politicians and celebrities. AI content passed off as authentic can be linked to scams or dangerous misinformation.
On the other hand, mistakenly distrusting real content is also a problem. For example, refusing to believe an authentic video of war crimes, or of hate speech by a politician, on the false or disingenuous belief that the content was AI-generated can lead to great harm and injustice.
Unfortunately, the proliferation of suspect content allows unscrupulous individuals to claim that video, audio, or images exposing actual wrongdoing are fake.
As mistrust grows, democracy and social cohesion can begin to fray. Given the potential consequences, we must be wary of excessive skepticism about the origins of online content.
The road ahead
AI is a cultural and social technology. It mediates and shapes our relationships with one another, and it has a potentially transformative influence on how we learn and share information.
It is not surprising, then, that AI is challenging our trust in companies, in content, and in one another. Nor are people always at fault when they are fooled by AI-made material; such outputs are becoming ever more realistic.
Furthermore, the responsibility for avoiding deception should not fall entirely on internet users or the general public. Digital platforms, AI developers, tech companies, and producers of AI material should be held accountable through regulatory and transparency requirements concerning the use of AI.
Even so, internet users need to adapt. The need to exercise balanced, fair-minded skepticism about online material is becoming more urgent. This means adopting the right levels of trust and distrust in the digital environment.
The philosopher Aristotle wrote of practical wisdom. Through experience, education, and practice, practically wise people develop the skill of making good judgments in life. They tend to avoid poor judgments, including both excessive skepticism and naive credulity, and so they can flourish and act well toward others.
We need to hold tech companies and platforms to account for the harm and deception caused by AI. We also need to educate ourselves, our communities, and the next generation to develop practical wisdom in a world awash with AI content.
Provided by The Conversation
This article is republished from The Conversation under a Creative Commons license. Read the original article.
Citation: Distrust in AI is on the rise, but along with healthy skepticism comes the risk of harm (2025, July 2), retrieved July 3, 2025 from https://techxplore.com/news/2025-07-distrust-ai-healthy-skeptisisic.html
This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without written permission. The content is provided for information purposes only.
