In the world of social networks, a disruptive new proposal has emerged: SocialAI, a platform where each human user interacts exclusively with artificial intelligence bots. With an interface reminiscent of X (formerly Twitter), the network aims to offer a controlled, personalized experience by eliminating all interaction with real people.

Completely artificial interactions

SocialAI allows users to create their own artificial social environment. When registering, users choose which types of bots will "follow" them, as well as the personalities and kinds of responses those bots will produce. The platform's creator, Michael Sayman, presents it as a "safe space" where people can express themselves without fear of being judged or facing the toxicity typical of human interactions on social networks.

Every time a user posts something on SocialAI, bots generate instant responses, creating the illusion of an active, relevant social network. All interaction, however, is entirely artificial and under the user's control. Unlike other networks, where bots proliferate unchecked, on SocialAI the bots are the only other participants, and their behavior is shaped entirely by the user.
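The article does not describe how SocialAI is actually implemented, but the flow it outlines (the user picks bot personas, and every post immediately fans out to those bots for replies) can be illustrated with a minimal, hypothetical sketch. Names such as `BotPersona`, `generate_reply`, and `publish` are illustrative assumptions, not the platform's real API; a real service would call a language-model backend where the stub below returns a canned string.

```python
from dataclasses import dataclass

@dataclass
class BotPersona:
    """An AI follower with a fixed personality and response style."""
    name: str
    personality: str  # e.g. "supportive", "skeptical", "sarcastic"

def generate_reply(persona: BotPersona, post: str) -> str:
    """Stand-in for a call to a language model conditioned on the persona.

    A real implementation would send a prompt such as
    'Reply to this post as a {personality} follower: {post}'
    to whatever model backend the platform uses.
    """
    return f"[{persona.name} / {persona.personality}] Interesting take on: {post!r}"

def publish(post: str, followers: list[BotPersona]) -> list[str]:
    """Fan a new post out to every configured bot and collect their replies."""
    return [generate_reply(bot, post) for bot in followers]

# The user chooses which personalities "follow" them at sign-up.
my_followers = [
    BotPersona("Ada", "supportive"),
    BotPersona("Hal", "skeptical"),
]

for reply in publish("Trying out SocialAI today.", my_followers):
    print(reply)
```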

A different approach: safety or bubble?

Sayman, a former Facebook employee, describes SocialAI's purpose as liberating, since it removes the social pressure of interacting with other humans. Some critics, however, see a troubling side: the network lets users reinforce their own "ideological bubbles" by designing an environment with no possibility of confronting opposing viewpoints or dealing with real interactions.

Although presented as a "safe" social network, SocialAI could also become a tool that deepens social disconnection by letting people engage only with responses that match their expectations and desires, free of any challenge or uncertainty.

Criticism or experiment?

The launch of SocialAI can be interpreted in different ways: as an ironic critique of today's social networks, as an experiment probing our emotional needs online, or even as a therapeutic tool for practicing social interaction without fear of judgment. Either way, it represents one more step towards normalizing the relationship between humans and artificial intelligence.