As AI technology advances, the idea that it will become more humanised in order to create deeper connections with users is becoming more prevalent. Maybe you’ve noticed the rise in AI chatbots with human names and features, and sometimes even faces.
Businesses see this as a golden opportunity to boost engagement and, ultimately, increase profits. However, this path is not without ethical considerations.
In recent years, we've seen a surge in AI companions – chatbots and virtual assistants explicitly designed to simulate human-like interactions. Companies are leveraging these technologies to offer services ranging from the commonplace, such as fashion advice, to the sensitive, such as emotional support.
The idea is simple: the more human-like the AI, the stronger the bond it can create with the user. This bond can lead to increased engagement and loyalty, which in turn can drive revenue. At least, that's how the theory goes.
Ethical considerations
This approach raises significant ethical questions. When users develop deep emotional connections with AI, they become susceptible to real emotional harm and distress, just as they do in relationships with other humans. So, what might happen to the end user if their beloved AI service is altered, rebranded, or even discontinued?
The ethical issue lies in the responsibility companies bear when their AI services foster emotional connections. The question is: should businesses be allowed to pursue these connections without oversight, or should there be regulations in place to protect users?
It’s my view that humanisation shouldn’t be totally off the table – after all, tapping into emotions has been part of advertising and marketing campaigns for decades. Why shouldn’t AI and technology take the same approach?
But how does humanisation occur in the first place? In most cases, there are two routes: conscious or unconscious. Conscious humanisation occurs when businesses deliberately design AI to build deep, personal connections. This is common in services like virtual coaching or therapy bots, where the AI learns about the user and maintains consistent interaction.
Unconscious humanisation, on the other hand, happens when users themselves attribute human characteristics to AI. Even simple chatbots can end up with names and personalities as users project their emotions and thoughts onto them. This can raise ethical concerns, as users form attachments that the developers never intended.
Balancing business
It is paramount that every company building AI-based solutions considers the ethical implications of what it produces. This means releasing solutions responsibly, tracking their usage and effects, and iterating accordingly. As our understanding grows, we will become better equipped to build guidelines and rules for AI solutions that generate value for businesses responsibly and ethically.
For businesses right now, the challenge is to balance the desire for increased engagement with the ethical implications of their AI designs. This means being transparent about what your AI service is designed to do. Whether it's providing fashion advice or emotional support, make sure users understand the scope and limitations of the AI and clearly express this wherever possible.
It’s also important to avoid creating overly generalised AI that users might rely on for a wide range of personal issues. AI with a narrowly focused purpose can mitigate the risk of users forming inappropriate attachments.
Companies must advocate for and adhere to regulations that protect users. The European Union's AI Act, for example, requires AI systems to disclose that users are interacting with a machine, which can help manage expectations and prevent undue emotional attachments. As mentioned above, there is no universal guidance on this yet, and whatever emerges will be subject to near-constant evolution. Human oversight of your AI products will always be necessary.
Lastly, companies should ensure that users have control over their data and their interactions with AI. This not only enhances trust but also aligns with ethical best practices.
All too human?
We are at a critical juncture as we watch this amazing technology take flight. It’s not unreasonable to imagine a future in which everyone has their own AI assistant, not dissimilar to the chatbots we see today: attuned to us, representing our interests, helping us navigate today’s complex world, and offering protection from malicious behaviour, including from other AIs.
As AI technology evolves, the lines between human and machine interactions will continue to blur. The goal should be to harness the power of AI to create value for users and safeguard their emotional well-being. Along the way, companies can collect valuable learnings that will inform the blueprint for future AI tech.
That future requires a deep understanding of what a benevolent, effective AI is, humanised or otherwise. But we cannot achieve this understanding by pondering a void. We need to get solutions out to market. Yes, it is important to be careful and deliberate, but not to shy away from risk.
This article originally appeared on The Drum.