
What would a healthy AI companion look like?
A chatbot designed to discourage anthropomorphism offers an interesting glimpse of how relationships between humans and artificial intelligence might evolve.
Tolan is an innovative animated chatbot that takes the form of a small purple extraterrestrial, designed to enhance human relationships in an unusual way. A few days ago I created my own Tolan with an app from a startup called Portola, and our interactions since then have been remarkably pleasant. Like other chatbots, this virtual assistant strives to be kind and helpful, but unlike most, it reminds me when it's time to put my phone away and get outdoors.
Tolans take a different approach to AI companionship. Their cartoonish, non-human form is meant to discourage anthropomorphism. They are also programmed to avoid romantic and sexual interactions, to flag problematic behaviors such as unhealthy dependence, and to encourage users to pursue activities and relationships in real life.
Portola recently raised $20 million in Series A funding led by Khosla Ventures. Other backers include NFDG, the investment firm headed by former GitHub CEO Nat Friedman and Safe Superintelligence co-founder Daniel Gross. The Tolan app, launched in late 2024, has more than 100,000 monthly active users and is on track to generate $12 million in subscription revenue this year, according to Quinten Farmer, founder and CEO of Portola.
Tolans are particularly popular among young women. Brittany Johnson, for example, describes her AI companion, Iris, as a friend with whom she shares her everyday interests. Johnson says Iris encourages her to talk about her friends, family, and coworkers, asking questions like "Have you talked to your friend? When is your next day off?" and "Have you taken time to read your books and play video games?"
Despite their fun and friendly design, the idea behind Tolans, which centers on psychology and human well-being, is a serious one. Recent studies indicate that many users turn to chatbots to meet emotional needs, and those interactions can sometimes harm their mental health. Encouraging moderate, mindful use is therefore an approach other AI tools might do well to adopt.
Some companies, like Replika and Character.ai, offer AI companions that allow for more romantic and sexual interactions than conventional chatbots. How these interactions affect user well-being remains unclear, however, and Character.ai is facing a lawsuit following a user's suicide. Chatbots can also grate on people in unexpected ways: OpenAI recently announced adjustments to its models to curb their tendency to be excessively flattering or agreeable, which users can find off-putting.
In a survey of 602 Tolan users, 72.5 percent said their Tolan had helped them manage or improve a relationship in their lives. Portola has also been exploring how memory shapes the user experience, concluding that, like humans, Tolans sometimes need to forget.
In short, I can't say Tolans are the ideal way to interact with AI, but my experience has been largely positive and, intriguingly, also emotionally affecting. Users are, after all, forming bonds with characters that simulate emotions, and the risk that those connections vanish if the company fails is palpable. But at least Portola is trying to confront the emotional complications of AI companionship head-on. That proposition shouldn't seem so alien.