AI-powered mourning is not without its problems
![](https://enterprise.press/wp-content/uploads/2019/09/Robot-creativity-1600px.jpg)
Love, death + robots: A new batch of AI startups are hoping to immortalize the essence of the deceased, in what is now being dubbed “grief tech.” These startups — some already operational and others still under development — are using a number of machine learning approaches to reach a single end: Letting people interact with simulated versions of their loved ones long after they’ve gone.
Conversations with ghosts: San Francisco-based HereAfter AI is among the companies at the forefront of the grief tech industry, helping living people build avatars of themselves that their loved ones can interact with after they’re gone. Users build profiles by answering a series of prompts designed to draw out stories, memories, and personal quirks. Once a user dies, their avatar can respond to loved ones’ questions by drawing on those interview answers, in some cases supplemented with voice recordings and photos.
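For a sense of how such an avatar might work under the hood, here is a minimal, purely illustrative sketch of a retrieval-style chatbot that matches a question to the closest recorded interview answer. HereAfter AI has not published its implementation, so the interview data and the TF-IDF matching approach below are assumptions made for the example.

```python
# Illustrative sketch only: a toy retrieval-style "avatar" that answers a
# question by returning the closest recorded interview answer. The data and
# matching method are assumptions, not any vendor's actual system.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical interview prompts and the user's recorded answers
interview = {
    "How did you meet your spouse?": "We met at a jazz club in 1978...",
    "What was your first job?": "I sold newspapers every summer as a kid...",
    "What advice would you give your grandchildren?": "Never let pride keep you from saying sorry.",
}

prompts = list(interview.keys())
vectorizer = TfidfVectorizer().fit(prompts)
prompt_vectors = vectorizer.transform(prompts)

def avatar_reply(question: str) -> str:
    """Return the stored answer whose prompt is most similar to the question."""
    scores = cosine_similarity(vectorizer.transform([question]), prompt_vectors)[0]
    return interview[prompts[scores.argmax()]]

print(avatar_reply("Tell me about your first job"))
```

Because replies in a design like this come only from a fixed bank of pre-recorded answers, it also illustrates why responses can skew repetitive, a limitation covered further below.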
Some companies are taking it a step further: StoryFile and You, Only Virtual are trying to bring more lifelike representations of the deceased to market. StoryFile, one of the oldest companies in the sector, launched in 2017 and creates interactive videos of dead relatives. You, Only Virtual is pooling an even larger set of data, including text messages, emails, and voice conversations, to build deeper, more sophisticated reconstructions of the dead, with a launch planned for this year.
Voice AI: More advanced technologies like voice cloning, which combine text and audio samples to produce entirely new soundbites, are already here and on track to improve in the near future. Amazon announced earlier this year that it is working on a feature for its Alexa product line that would let the virtual assistant read stories in an imitation of a real person’s voice. More unsettling still: Alexa needs only about a minute of sample audio to convincingly mimic a voice.
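Open-source tools already expose similar capabilities. As a hedged illustration (this is not the technology Amazon described), the sketch below uses the Coqui TTS library’s XTTS model for zero-shot voice cloning from a short reference clip; the model name and file paths are assumptions for the example.

```python
# Illustrative only: zero-shot voice cloning with the open-source Coqui TTS
# library, which synthesizes new speech from text plus a short reference clip.
# Model name and file paths are assumptions for this sketch.
from TTS.api import TTS

# XTTS v2 is a multilingual model that clones a voice from a short audio sample
tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2")

tts.tts_to_file(
    text="Once upon a time, there was a little girl who loved the sea.",
    speaker_wav="grandmother_sample.wav",  # hypothetical ~1 minute reference recording
    language="en",
    file_path="cloned_story.wav",
)
```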
The scope is limited for now: Most of these products remain fairly generic, with a limited ability to capture someone’s personality. Avatars are better at rehashing old memories and stories than at generating new responses based on what they know of a person, so answers tend to skew repetitive.
But grief tech is only going to get better with age: Technologies built on deep learning language models are expected to keep improving at capturing a person’s syntax and tone, meaning these avatars and chatbots are likely to become a whole lot more accurate in the near future.
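One way this could work, shown purely as an illustrative sketch rather than any vendor’s confirmed method, is to condition a general-purpose language model on samples of a person’s past messages so that replies mimic their tone; the model name and sample texts below are assumptions.

```python
# Illustrative only: conditioning a general-purpose LLM on a person's past
# messages so generated replies mimic their tone. No grief tech vendor has
# confirmed this exact approach; model name and sample texts are assumptions.
from openai import OpenAI

past_messages = [
    "Don't forget your jacket, the weather turns after sunset.",
    "Work is work, but family is the whole point. Call your mother.",
    "I made too much soup again. Come take some before it goes bad.",
]

system_prompt = (
    "You are simulating the writing voice of a specific person. "
    "Match their tone, vocabulary, and favorite phrases based on these samples:\n"
    + "\n".join(f"- {m}" for m in past_messages)
)

client = OpenAI()  # reads OPENAI_API_KEY from the environment
response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": "Any advice for my first day at the new job?"},
    ],
)
print(response.choices[0].message.content)
```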
Those behind these technologies claim that they’ll help ease the grieving process. “While AI can’t eliminate that pain of loss, it can definitely make their memories last,” Rohit Prasad, senior vice president and head scientist for Amazon Alexa, tells the Washington Post. Sherman Lee, associate psychology professor at Christopher Newport University and director of the Pandemic Grief Project, agrees: “It’s something that’s very fundamental to humans, to keep a connection to something they loved.”
But there are still unanswered questions about the impact: It’s not yet clear how these kinds of technologies might alter our grieving processes, with some worrying that they could have damaging psychological consequences. “By giving somebody the ability to see their loved one again, is that going to give them some solace, or is it going to become like an addiction?” clinical psychologist Albert “Skip” Rizzo, a research professor at the University of Southern California, tells WaPo. Lucy Selman, associate professor in end-of-life care at the University of Bristol, tells the Financial Times that “before [grief tech] is introduced more widely, a lot more research is needed into its ethical dimensions and how and when it might be useful, or indeed harmful, in serious illness and in bereavement.” While the virtual continuation of severed relationships can be comforting to some, it can put others at risk of prolonged distress, Selman says.
Data privacy and consent are also salient issues. If it’s unethical to use people’s data without their approval while they’re alive, how should we treat it once they’re gone? While companies like HereAfter AI get users’ explicit consent to use their data before they die, there are no failsafe ways to protect the image or voice of a person, living or otherwise, in the deepfake age. In one high-profile example, filmmakers used AI to read quotes in the voice of the late, great chef Anthony Bourdain for a 2021 documentary on his life, angering Bourdain’s ex-wife.