OpenAI has published the GPT-4o System Card, a report detailing the results of its safety research on its latest generative artificial intelligence model. The report draws concerning conclusions about GPT-4o's potential impact in several areas, particularly the risks associated with its persuasive capabilities and its interactions with users.
Risks
OpenAI's team of engineers evaluated GPT-4o's possible risks across several areas, including cybersecurity, the creation of biological threats, and model autonomy. In most of these areas the risk was categorized as "low." In the area of persuasion, however, the risk was rated as "medium."
During the tests, both the model's synthesized voice and its generated text were evaluated. Although the voice did not present significant dangers, the text produced by GPT-4o did "marginally" cross into the medium-risk category. Specifically, on political topics the model's text could be more persuasive than text written by humans, raising serious concerns about its ability to influence opinions and decisions.
Human Interaction with AI
Another significant aspect of the report is the evaluation of GPT-4o's anthropomorphic interface, which mimics the human voice with remarkable precision, including pauses, intonations, and emotions. This capability, though impressive, is not without risks. A few months ago, the original version of the voice was withdrawn because of its unsettling resemblance to the voice of actress Scarlett Johansson, who voiced the AI assistant in the movie 'Her.' The new version, though modified, still raises concerns.
OpenAI's report warns that the "exceptional quality" of the synthesized voice can make interactions with the model feel more human, which could lead to unintended consequences. On the one hand, it could be a useful tool for people who feel lonely; on the other, it could promote social isolation and foster emotional dependency on the machine. The company also acknowledges that socializing with an AI could reshape social norms around deference and interruption: interrupting the model mid-sentence, for instance, is acceptable when talking to an AI but would be considered rude in human relationships.
Growing Concerns
While OpenAI has taken a first step in identifying these risks, experts suggest that much more needs to be evaluated regarding the impact of these technologies on our social relationships. An analysis published by Google DeepMind in April had already addressed this concern, highlighting that a chatbot's ability to communicate creates an "impression of genuine intimacy," which can lead to complex emotional entanglements. The Replika platform has experienced cases where users developed romantic feelings toward their chatbots, a phenomenon that underscores the need for greater scrutiny and regulation in the development of these technologies.