Is AI technology appropriate for creating digital 'ghosts' of deceased individuals?

16 May 2024

If you're longing for a deceased loved one, you may find comfort in photographs or old voicemails. But advances in artificial intelligence (AI) now make it possible to converse with a virtual bot modelled after your loved one, one that mimics their appearance and voice.

Silicon Intelligence and Super Brain are two companies offering such services. They use generative AI similar to the technology behind ChatGPT to create digital "ghosts", analyzing snippets of text, photos, audio recordings, video and other data related to the deceased person.
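Neither company has published the details of its pipeline, but the general technique is simple to sketch. Below is a minimal, hypothetical Python example of persona prompting: a few writing samples from the deceased are folded into a system prompt for a general-purpose chat model. The OpenAI API is used purely for illustration; the sample texts, prompt wording and model choice are all assumptions, not anything these services have disclosed.

```python
# Illustrative sketch only: how a "griefbot" persona might be assembled
# from a person's past messages and fed to a general-purpose chat model.
# The prompt format and sample data below are hypothetical.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# In a real service this corpus might be text messages, emails,
# transcribed voicemails, etc., supplied by the family.
sample_messages = [
    "Don't forget your umbrella, it always rains on Tuesdays!",
    "I made too much soup again. Come by if you're hungry.",
]

persona_prompt = (
    "You are simulating the conversational style of a specific person, "
    "based only on the writing samples below. Mirror their tone and "
    "phrasing.\n\nWriting samples:\n"
    + "\n".join(f"- {m}" for m in sample_messages)
)

reply = client.chat.completions.create(
    model="gpt-4o-mini",  # any capable chat model would do
    messages=[
        {"role": "system", "content": persona_prompt},
        {"role": "user", "content": "Hi Grandma, how are you today?"},
    ],
)
print(reply.choices[0].message.content)
```

Commercial offerings presumably go further, fine-tuning on larger corpora and adding voice and video synthesis, but the core idea is the same: a statistical model conditioned on personal data, not a conscious entity.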

These digital replicas, dubbed griefbots, deadbots or re-creation services, offer the illusion of interacting with the deceased, as though they were still alive, says Katarzyna Nowaczyk-Basińska, a researcher at the Leverhulme Centre for the Future of Intelligence at the University of Cambridge who studies how technology shapes death, loss and grief.

On May 9, Nowaczyk-Basińska and her colleague Tomasz Hollanek, a technology ethicist at the same university, published a paper in Philosophy & Technology on the ethical implications of "digital immortality" technology, asking whether it compromises human dignity.

SN: The TV show Black Mirror aired a gripping episode in 2013 about a woman given a robot that imitates her deceased boyfriend. How plausible is this scenario?

Nowaczyk-Basińska: We're essentially there already, minus the physical embodiment. Digital resurrection based on huge amounts of personal data is now a reality.

Over my academic career, I've watched digital immortality shift from a fringe concept to a recognized "digital afterlife industry." As a researcher I find that intriguing; as a person, quite disconcerting.

Our research uses speculative scenarios and design fiction as a method, focusing on what is currently technologically and socially plausible.

SN: Your paper outlines three hypothetical problems associated with deadbots. Which scenario do you perceive as most dystopian?

Nowaczyk-Basińska: We describe a case in which a terminally ill woman leaves a griefbot for her eight-year-old son to help him cope with her loss. Exposing children to these technologies could be risky.

Arguably, these technologies could be used to shield children from the reality of a close relative's death. And we currently know very little about their potential impact on children.

We argue for strong measures to mitigate potential harm, with special care for the most vulnerable. That could mean age-restricting access to these technologies.

SN: What additional safeguards do you propose?

Nowaczyk-Basińska: Users should be clearly informed that they're interacting with an AI. The technology simulates linguistic patterns and personality traits by processing vast amounts of personal data, but it is not a conscious entity. It is also vital to develop procedures for retiring or deleting these deadbots. And we stress the importance of consent.

SN: Can you describe the consent scenario you've proposed for bot users?

Nowaczyk-Basińska: We propose a scenario in which an elderly person quietly buys a 20-year subscription to a personal deadbot, expecting it to comfort their adult children after they die. Now imagine those children receiving a slew of emails and notifications from the re-creation service, inviting them to interact with a bot modelled after their deceased father.

People's choices about how they want to grieve must be respected. For some, such a bot could be a source of comfort; for others, it won't be.

SN: You also emphasize protecting human dignity even after death. In your hypothetical scenario, an adult woman uses a free service to create a deadbot of her late grandmother. What is the outcome in this case?

Nowaczyk-Basińska: She asks the deadbot for the recipe for the spaghetti carbonara she loved cooking with her grandmother. Instead of the recipe, she gets a recommendation to order the dish from a popular delivery service. Our concern is that griefbots could become a new venue for sneaky product placement, encroaching on the dignity of the deceased and disrespecting their memory.

SN: Different cultures have very different ways of handling death. How can safeguards take this into account?

Nowaczyk-Basińska: We are very much aware that there is no universal ethical framework that could be developed here. The topics of death, grief and immortality are hugely culturally sensitive. And solutions that might be enthusiastically adopted in one cultural context could be completely dismissed in another. This year, I started a new research project: I’m aiming to explore different perceptions of AI-enabled simulation of death in three different eastern nations, including Poland, India and probably China.

SN: Why is now the time to act on this?

Nowaczyk-Basińska: When we started working on this paper a year ago, we were a bit concerned whether it’s too [much like] science fiction. Now [it’s] 2024. And with the advent of large language models, especially ChatGPT, these technologies are more accessible. That’s why we so desperately need regulations and safeguards.

Re-creation service providers today are making totally arbitrary decisions about what is acceptable or not. And it's risky to let commercial entities decide how our digital death and digital immortality should be shaped. People who turn to digital technologies in end-of-life situations are already at a very, very difficult point in their lives. We shouldn't make it harder for them through irresponsible technology design.

