
Diagram outlining the team's evaluation methodology, aimed at assessing the convergence of simulated personalities toward human personalities. Credit: Bai et al.
After the advent of ChatGPT, the use of large language models (LLMs) has become increasingly popular around the world. An LLM is an artificial intelligence (AI) system trained on large collections of written text that can quickly process queries in a variety of languages and generate responses that appear to be written by a human.
As these systems become more advanced, they could potentially be used to create virtual characters that simulate human personality and behavior. In addition, several researchers are now conducting psychology and behavioral science studies involving LLMs, for example by testing the models' performance on specific tasks and comparing it to that of humans.
Researchers from Hebei University of Petroleum Technology and Beijing Institute of Technology recently carried out a study evaluating LLMs' ability to simulate human personality traits and behaviors. Their paper, published on the arXiv preprint server, introduces a new framework for assessing the coherence and realism of the characters that LLMs portray from constructed identities (i.e., personas), and reports several important findings, including the discovery of a scaling law governing persona realism.
“Using LLMs to advance social simulation is clearly a major research frontier,” Tianyu Huang, co-author of the paper, told Tech Xplore. “Compared to controlled experiments in the natural sciences, social experiments are expensive, sometimes even historically costly for humanity. Even in much smaller domains like business and public policy, the potential applications are vast.
“From the perspective of LLM research itself, these models have already shown impressive mathematical and logical abilities. Some studies even suggest that LLMs have internalized temporal and spatial concepts. Whether LLMs can go further, inferring human attributes and engaging with the humanities, is another big question.”

Diagram outlining the critical density at which simulated personalities converge towards human personalities. Credit: Bai et al.
A key challenge in using LLMs to emulate human-like characteristics and abilities is the systematic bias often found in existing models. Most previous studies have tried to address this issue on a case-by-case basis, for example by correcting identifiable biases in the training dataset or in individual outputs produced by the model. In contrast, Huang and colleagues set out to develop a general framework that addresses the root causes of LLM bias.
“First, we point out a methodological misconception in the current literature, which is that many researchers directly apply psychometric validity tests developed for humans to evaluate LLM personality simulations,” explained Yuqi Bai, co-author of the paper. “We argue that this is a category mismatch. Our approach steps back to a broader perspective and focuses on overall patterns rather than individual validity indicators.”
As part of the study, the researchers sought to determine whether the statistical characteristics of personalities simulated by LLMs converge with the patterns observed in humans. Rather than trying to pinpoint the characteristics that LLM-simulated and human personalities currently share, the team wanted to outline a path, or set of variables, that could lead to a gradual convergence of AI and human personality.
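The article does not specify which statistic the team used, but the idea of distribution-level convergence can be sketched in a few lines of Python: compare the distribution of a trait score across simulated personas against a human reference distribution using a divergence measure. The data and the choice of the Wasserstein distance below are illustrative assumptions, not the authors' method.

```python
# A minimal sketch (not the authors' code) of a population-level comparison:
# measure how closely the distribution of a Big Five trait score across
# LLM-simulated personas matches a human reference distribution.
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)

# Hypothetical human norm: neuroticism scores roughly normal on a 1-5 scale.
human_scores = np.clip(rng.normal(loc=2.9, scale=0.7, size=5000), 1.0, 5.0)

# Hypothetical LLM-simulated personas: biased toward "resume-like" low
# neuroticism, mirroring the systematic bias the researchers describe.
llm_scores = np.clip(rng.normal(loc=2.2, scale=0.4, size=5000), 1.0, 5.0)

# Wasserstein (earth mover's) distance: 0 means the two trait distributions
# coincide; larger values indicate stronger systematic bias.
gap = wasserstein_distance(human_scores, llm_scores)
print(f"Distributional gap on the trait: {gap:.3f}")
```

Under this framing, "convergence" simply means that the gap shrinks as the simulation setup changes, rather than any single persona passing a human validity test.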
“Our research went through a period of deep turmoil,” Bai said. “Using LLM-generated persona profiles initially introduced strong systematic biases, and, as others have found, the effectiveness of prompt engineering was limited. Progress stalled. Later, during team discussions, we realized that LLM-generated persona profiles often read like resumes, emphasizing positive traits and suppressing negative ones.”
Ultimately, Huang, Bai, and their colleagues decided to evaluate the characters that LLMs portray in fiction. Because literary works are often effective at capturing the complexity of human emotion and behavior, the team asked the LLMs to write their own novels.

Diagram outlining the age curve as simulated personalities converge towards human personalities. Credit: Bai et al.
“This was our third population-level experiment, and the results were surprising, as systematic bias was significantly reduced,” Bai said. “Subsequent experiments using literary characters from Wikipedia produced simulated personality distributions much closer to the human data. The conclusion was clear: detail and realism can overcome systematic bias.”
The findings collected by these researchers suggest that LLMs can partially emulate human personality traits. Furthermore, the team found that the models' ability to simulate realistic personas improves when they are given richer, more detailed descriptions of the intended “virtual characters.”
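The exact functional form behind the reported scaling law is not given in the article, but a common way to probe such a relationship is to fit a power law between persona detail and the remaining gap to human data in log-log space. All numbers below are invented purely to illustrate the fit.

```python
# A hypothetical illustration of the reported scaling behavior: fit a power
# law, gap ≈ a * detail^k (with k < 0), between persona detail and the
# remaining distributional gap to human data.
import numpy as np

# Hypothetical persona detail levels (e.g., tokens of profile text) and the
# distributional gap measured at each level, shrinking as detail grows.
detail = np.array([50, 100, 200, 400, 800, 1600], dtype=float)
gap = np.array([0.62, 0.45, 0.31, 0.22, 0.16, 0.11])

# A power law is linear in log-log space: log(gap) = log(a) + k * log(detail).
k, log_a = np.polyfit(np.log(detail), np.log(gap), deg=1)
print(f"fitted exponent: {k:.2f}, prefactor: {np.exp(log_a):.2f}")

# Extrapolate: predicted gap for a much richer, 5,000-token persona profile.
print(f"predicted gap at 5000 tokens: {np.exp(log_a) * 5000**k:.3f}")
```

If such a fit holds, the practical takeaway matches the paper's slogan: adding persona detail buys a predictable reduction in systematic bias.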
“Our main contribution is to identify the level of persona detail as a key variable determining the effectiveness of LLM-driven social simulations,” explained Kun Sun, co-author of the paper.
“From an application perspective, social platforms and LLM API providers already hold large, detail-rich user profile data, forming a strong foundation for social simulation. This presents both tremendous commercial potential and serious ethical and privacy concerns. Preventing manipulative control and protecting human autonomy is therefore a key challenge.”
In the future, this research could aid the development of conversational AI agents and virtual characters that realistically simulate specific personas. It may also stimulate research into the risks of AI-simulated personas and inform methods for limiting or detecting unethical uses of LLM-based virtual characters.
Meanwhile, the team plans to further investigate the scaling laws that govern LLM simulations of human personality. For example, the researchers may train models on richer persona datasets or adopt more sophisticated data curation tools.
“We also plan to investigate whether similar scaling phenomena appear for other human-like traits, such as values,” Sun and Yuting Chen added. “Using a linear regression-based probing technique, we will examine whether LLMs internalize prior distributions over human attributes within their latent representations. Understanding this implicit world model may reveal the underlying mechanisms behind human-trait simulation.”
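As a rough illustration of what such a linear probe might look like (the team's actual setup is not described in the article), one can regress a human-attribute score onto a model's hidden states and treat held-out predictive accuracy as evidence that the attribute is linearly encoded. All features and labels below are synthetic stand-ins.

```python
# A minimal sketch of the linear-probing idea: fit a linear regression from
# hypothetical model hidden states to a human-attribute score, and read
# held-out accuracy as evidence that the attribute is linearly encoded.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-ins for hidden states of persona descriptions (n prompts x d dims)
# and a per-persona trait score (e.g., agreeableness) they might encode.
n, d = 1000, 256
hidden_states = rng.normal(size=(n, d))
true_direction = rng.normal(size=d)  # planted linear structure for the demo
trait_scores = hidden_states @ true_direction + rng.normal(scale=0.5, size=n)

X_train, X_test, y_train, y_test = train_test_split(
    hidden_states, trait_scores, test_size=0.2, random_state=0
)

probe = LinearRegression().fit(X_train, y_train)
# High held-out R^2 suggests the trait is linearly readable from the states.
print(f"held-out R^2: {probe.score(X_test, y_test):.3f}")
```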
Written by Ingrid Fadeli, edited by Gabby Clark, and fact-checked and reviewed by Robert Egan.
More information: Yuqi Bai et al., LLM Simulated Personality Scaling Method: All You Need Is a More Detailed and Realistic Persona Profile, arXiv (2025). DOI: 10.48550/arxiv.2510.11734
Journal information: arXiv
© 2025 Science X Network
Citation: Literary character approach helps LLMs simulate more human-like personalities (October 29, 2025) Retrieved October 29, 2025 from https://techxplore.com/news/2025-10-literary-character-approach-llms-simulate.html
This document is subject to copyright. No part may be reproduced without written permission, except in fair dealing for personal study or research purposes. Content is provided for informational purposes only.
