The startup Nomi AI, although smaller and self-funded, is building technology similar to what OpenAI is developing with its o1 model. Unlike ChatGPT's broad generalist approach, which ranges from mathematical problems to historical research, Nomi focuses exclusively on creating virtual companions. Its chatbots not only respond to messages but also recall past interactions and spend more time formulating responses, making them more nuanced and personal. "For us, it's the same principles, but applied to what our users really value: memory and emotional intelligence," explained Alex Cardinell, CEO of Nomi AI, in an interview with TechCrunch. While OpenAI focuses on a "chain of thought," Nomi embraces a "chain of introspection and memory."
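To make the contrast concrete, here is a minimal sketch of what such a multi-step pipeline might look like. This is purely illustrative: the function names (generate, chain_of_introspection_reply) are invented for this example and do not correspond to Nomi's or OpenAI's actual systems.

```python
def generate(prompt: str) -> str:
    """Placeholder for a call to any text-generation model."""
    return f"<model output for: {prompt[:60]}>"

def chain_of_thought_reply(message: str) -> str:
    # o1-style systems spend extra inference-time compute reasoning
    # step by step inside a single request.
    return generate(f"Think step by step, then answer: {message}")

def chain_of_introspection_reply(message: str, memories: list[str]) -> str:
    # Step 1: recall relevant past interactions (retrieval elided here).
    context = "; ".join(memories)
    # Step 2: introspect on what the recalled context implies emotionally.
    reflection = generate(
        f"Memories: {context}. What matters emotionally in: {message}"
    )
    # Step 3: compose the final reply conditioned on that reflection.
    return generate(f"Reflection: {reflection}. Reply empathetically to: {message}")

print(chain_of_introspection_reply(
    "I had a rough day at work",
    ["past tension with a colleague", "work deadlines cause anxiety"],
))
```

The point of the extra steps is the same in both paradigms: trading more computation at response time for a better answer, except here the intermediate steps are about memory and emotion rather than logical reasoning.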
How Nomi AI Personalizes Interaction
Nomi has developed its own large language model (LLM) in-house, training it specifically for companionship. Unlike generalist models that break complex tasks into smaller steps, Nomi's chatbots focus on remembering past experiences and applying them to new interactions. For example, if a user mentions having had a bad day at work, the chatbot can recall earlier conversations, connect the frustration to a specific colleague, and offer personalized advice. One of the most challenging problems for the Nomi team is deciding which user memories to surface in a given conversation. As Cardinell puts it, "Nomis remember everything, but a big part of artificial intelligence is deciding which memories to use in each context."
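The selection problem Cardinell describes can be sketched as a relevance-ranking step over stored memories. The toy version below scores memories by cosine similarity over bag-of-words vectors; a production system would presumably use learned embeddings plus recency and importance weighting. Everything here is a hypothetical illustration, not Nomi's implementation.

```python
import math
import re
from collections import Counter

def bow_vector(text: str) -> Counter:
    # Naive bag-of-words tokenization; stands in for a real embedding model.
    return Counter(re.findall(r"[a-z']+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def select_memories(message: str, memories: list[str], k: int = 2) -> list[str]:
    """Return the k stored memories most similar to the current message."""
    query = bow_vector(message)
    ranked = sorted(memories, key=lambda m: cosine(query, bow_vector(m)),
                    reverse=True)
    return ranked[:k]

memories = [
    "User mentioned tension with their colleague Dana at work",
    "User adopted a cat named Miso last spring",
    "User said deadlines at work make them anxious",
]
# The two work-related memories outrank the unrelated one.
print(select_memories("I had a really bad day at work today", memories))
```

The hard part, as the quote suggests, is not storing everything but deciding the cutoff: too few memories and the reply feels generic, too many and the context window fills with noise.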
Emotional Impact and Ethical Care
Although Nomi AI is not intended to replace professional psychological care, it does aim to offer emotional support at critical moments. Cardinell recounts how some users have avoided self-harm or sought professional help after talking with their Nomi. He acknowledges the inherent risks, however: "I'm creating virtual personas that users develop real relationships with." Despite these concerns, Cardinell is confident in his user-centered approach. By not relying on venture capital, Nomi AI has greater freedom to prioritize its relationship with users and avoid abrupt changes that could damage trust, as happened with Replika, whose decision to remove certain interaction features deeply affected its users. Nomi's chatbots, while lacking real emotions, have proven useful as a sympathetic ear, providing support that some users do not find elsewhere. As this type of technology advances, questions arise about the long-term effects of relying on a virtual companion for emotional support.