
When I die, please do not create an AI version of me.
"Let me go."
The recent case of an Arizona man killed in a road rage incident has prompted deep reflection on the use of AI-generated digital avatars. An avatar of the victim, Christopher Pelkey, appeared to deliver a poignant statement before the judge sentencing the man responsible for his death. Although the digital likeness captured certain emotional qualities, its limitations were evident: the voice sounded robotic and the animation lacked fluidity. The words the avatar spoke were not Pelkey's own; they were written by his sister.
This sad episode raises moral questions about digitally recreating the dead. However touching the gesture may seem, it is doubtful that such a recreation truly reflects the genuine intentions and feelings of the deceased. If similar technologies are adopted in future legal and personal proceedings, the situations could become far more fraught. We are already seeing hints of this, with families seeking to digitally revive loved ones for purposes ranging from class-action lawsuits to divorces.
Recently, the family of Jim Fagan, a respected NBA commentator, approved the AI recreation of his voice for future league events. As these practices become normalized, questions arise about who controls these digital representations and about the inherent risks of reviving the likenesses of people who are no longer with us.
I understand the impulse to "revive" a loved one, but a digital avatar will never be more than a superficial replica. Even as the technology enables more convincing interactions, the impression that someone has been brought back to life is misleading. As it improves, individuals should state publicly how their image and voice may be used after their death, because the material we leave online makes us vulnerable to unauthorized use.
This is a slippery slope toward a disconcerting reality in which our digital identities are exploited without our consent.