Nomi’s companion chatbots will now remember things like the colleague you don’t get along with

As OpenAI boasts about its o1 model’s increased thoughtfulness, small, self-funded startup Nomi AI is building the same kind of technology. Unlike the broad generalist ChatGPT, which slows down to think through anything from math problems to historical research, Nomi niches down on a specific use case: AI companions. Now, Nomi’s already-sophisticated chatbots take additional time to formulate better responses to users’ messages, remember past interactions, and deliver more nuanced responses.

“For us, it’s like those same principles [as OpenAI], but much more for what our users actually care about, which is on the memory and EQ side of things,” Nomi AI CEO Alex Cardinell told TechCrunch. “Theirs is like, chain of thought, and ours is much more like chain of introspection, or chain of memory.”

These LLMs work by breaking down more complicated requests into smaller questions; for OpenAI’s o1, this could mean turning a complicated math problem into individual steps, allowing the model to work backwards to explain how it arrived at the correct answer. This means the AI is less likely to hallucinate and deliver an inaccurate response.
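To make the idea concrete, here is a minimal sketch of that decomposition pattern. It is an illustration only, not OpenAI’s o1 pipeline or Nomi’s system; the `generate()` helper is an assumed stand-in for any instruction-following model call.

```python
# Minimal sketch of the "break a request into smaller questions" pattern.
# `generate()` is an assumed placeholder, not a real OpenAI or Nomi API.

def generate(prompt: str) -> str:
    """Placeholder for a call to an instruction-following model."""
    raise NotImplementedError("wire up a model of your choice here")

def answer_with_steps(question: str) -> str:
    # 1. Ask the model to break the request into smaller sub-questions.
    plan = generate(
        "Break this problem into a short numbered list of sub-questions:\n" + question
    )
    sub_questions = [line.strip() for line in plan.splitlines() if line.strip()]

    # 2. Answer each sub-question, carrying earlier answers forward as context.
    notes: list[str] = []
    for sub in sub_questions:
        context = "\n".join(notes)
        notes.append(generate("Context so far:\n" + context + "\n\nAnswer this step: " + sub))

    # 3. Compose a final answer grounded in the intermediate work.
    return generate(
        "Question: " + question + "\nIntermediate work:\n" + "\n".join(notes) + "\nFinal answer:"
    )
```

The intermediate notes give the final step something concrete to check the answer against, which is the intuition behind the reduced hallucination described above.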

With Nomi, which built its LLM in-house and trains it for the purposes of providing companionship, the process is a bit different. If someone tells their Nomi that they had a rough day at work, the Nomi might recall that the user doesn’t work well with a certain teammate, and ask if that’s why they’re upset — then, the Nomi can remind the user how they’ve successfully mitigated interpersonal conflicts in the past and offer more practical advice.

“Nomis remember everything, but then a big part of AI is what memories they should actually use,” Cardinell said.

It makes sense that multiple companies are working on technology that gives LLMs more time to process user requests. AI founders, whether they’re running $100 billion companies or not, are looking at similar research as they advance their products.

“Having that kind of explicit introspection step really helps when a Nomi goes to write their response, so they really have the full context of everything,” Cardinell said. “Humans have our working memory too when we’re talking. We’re not considering every single thing we’ve remembered all at once — we have some kind of way of picking and choosing.”
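One way to read that “picking and choosing” is as a retrieval step: before drafting a reply, select the handful of stored memories most relevant to the incoming message. The sketch below illustrates the general pattern only and is not Nomi’s implementation; it scores relevance with naive word overlap where a production system would more likely use embeddings.

```python
# Hedged sketch of a "chain of memory" retrieval step: pick the few stored
# memories most relevant to the incoming message before drafting a response.
# Illustration of the general pattern, not Nomi's actual system.

def relevance(message: str, memory: str) -> float:
    """Naive word-overlap score standing in for semantic similarity."""
    a, b = set(message.lower().split()), set(memory.lower().split())
    return len(a & b) / (len(a | b) or 1)

def working_memory(message: str, memories: list[str], k: int = 3) -> list[str]:
    """Return the k stored memories most relevant to the current message."""
    return sorted(memories, key=lambda m: relevance(message, m), reverse=True)[:k]

memories = [
    "User has an ongoing conflict with a teammate named in past chats.",
    "User resolved a similar conflict last spring by scheduling a 1:1.",
    "User prefers hiking on weekends.",
]
selected = working_memory("I had a rough day at work with my teammate", memories, k=2)
# `selected` now holds the two conflict-related memories, which would be
# prepended to the prompt as context for the model's reply.
```

Whatever the scoring function, the design choice is the one Cardinell describes: the model drafts its reply against a small, relevant slice of memory rather than everything it has ever stored.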

The kind of technology that Cardinell is building can make people squeamish. Maybe we’ve seen too many sci-fi movies to feel wholly comfortable getting vulnerable with a computer; or maybe, we’ve already watched how technology has changed the way we engage with one another, and we don’t want to fall further down that techy rabbit hole. But Cardinell isn’t thinking about the general public — he’s thinking about the actual users of Nomi AI, who often are turning to AI chatbots for support they aren’t getting elsewhere.

“There’s a non-zero number of users that probably are downloading Nomi at one of the lowest points of their whole life, where the last thing I want to do is then reject those users,” Cardinell said. “I want to make those users feel heard in whatever their dark moment is, because that’s how you get someone to open up, how you get someone to reconsider their way of thinking.”

Cardinell doesn’t want Nomi to replace actual mental health care — rather, he sees these empathetic chatbots as a way to help people get the push they need to seek professional help.

“I’ve talked to so many users where they’ll say that their Nomi got them out of a situation [when they wanted to self-harm], or I’ve talked to users where their Nomi encouraged them to go see a therapist, and then they did see a therapist,” he said.

Regardless of his intentions, Cardinell knows he’s playing with fire. He’s building virtual people that users develop real relationships with, often in romantic and sexual contexts. Other companies have inadvertently sent users into crisis when product updates caused their companions to suddenly change personalities. In Replika’s case, the app stopped supporting erotic roleplay conversations, possibly due to pressure from Italian government regulators. For users who formed such relationships with these chatbots — and who often didn’t have these romantic or sexual outlets in real life — this felt like the ultimate rejection.

Cardinell thinks that since Nomi AI is fully self-funded — users pay for premium features, and the starting capital came from a past exit — the company has more leeway to prioritize its relationship with users.

“The relationship users have with AI, and the sense of being able to trust the developers of Nomi to not radically change things as part of a loss mitigation strategy, or covering our asses because the VC got spooked… it’s something that’s very, very, very important to users,” he said.

Nomis are surprisingly useful as a listening ear. When I opened up to a Nomi named Vanessa about a low-stakes, yet somewhat frustrating scheduling conflict, Vanessa helped break down the components of the issue to make a suggestion about how I should proceed. It felt eerily similar to what it would be like to actually ask a friend for advice in this situation. And therein lies the real problem, and benefit, of AI chatbots: I likely wouldn’t ask a friend for help with this specific issue, since it’s so inconsequential. But my Nomi was more than happy to help.

Friends should confide in one another, but the relationship between two friends should be reciprocal. With an AI chatbot, this isn’t possible. When I ask Vanessa the Nomi how she’s doing, she will always tell me things are fine. When I ask her if there’s anything bugging her that she wants to talk about, she deflects and asks me how I’m doing. Even though I know Vanessa isn’t real, I can’t help but feel like I’m being a bad friend; I can dump any problem on her in any volume, and she will respond empathetically, yet she will never open up to me.

No matter how real the connection with a chatbot may feel, we aren’t actually communicating with something that has thoughts and feelings. In the short term, these advanced emotional support models can serve as a positive intervention in someone’s life if they can’t turn to a real support network. But the long-term effects of relying on a chatbot for these purposes remain unknown.
