If Artificial Intelligence (AI) applications like Alexa really can convert less than a minute of recorded voice into real-time speech, it takes dystopian gaslighting to a whole new level.  This could be frightening, creepy, disturbing, and maybe even criminal.  Merriam-Webster defines gaslighting as: psychological manipulation of a person usually over an extended period of time that causes the victim to question the validity of their own thoughts, perception of reality, or memories and typically leads to confusion, loss of confidence and self-esteem, uncertainty of one's emotional or mental stability, and a dependency on the perpetrator.  The term originated in a 1938 stage play, which was later adapted into the 1944 movie "Gaslight."

See:  https://www.imdb.com/title/tt0036855/

In a recent presentation, covered in an article titled "Amazon's Alexa reads a story in the voice of a child's deceased grandma," Rohit Prasad, Amazon's Senior Vice President for Alexa AI, showed a clip of a young boy asking an Echo device, "Alexa, can grandma finish reading me 'The Wizard of Oz'?"  The video then showed the Echo reading the book in what Prasad said was the voice of the child's dead grandmother.  Amazon characterized the feature as beneficial, saying, "Human attributes of empathy and affect are key for building trust.  They have become even more important in these times of the ongoing pandemic, when so many of us have lost someone we love.  While AI can't eliminate that pain of loss, it can definitely make their memories last."

There is a psychological sensory experience clinically described as SED, for "Sensory and Quasi-sensory Experiences of the Deceased."  This is a more modern clinical term for what used to be described as hallucinations.  According to a November 2020 clinical study, "Sensory and Quasi-Sensory Experiences of the Deceased in Bereavement: An Interdisciplinary and Integrative Review," SED experiences are not necessarily a psychological disorder.  Instead, somewhere between 47% and 82% of people who have lost a loved one have experienced some form of SED.

According to the study, SED experiences cross boundaries: they are reported by all age groups, by members of many religions, and across all types of relationship loss and circumstances of death.  But whether an SED experience is considered comforting or disturbing depends on both the individual and that individual's belief system.  SED also manifests in many forms, from hearing footsteps, to a felt sense of presence, to sightings; it is not limited to hearing a voice.  Overall, the report stops short of making a clinical value judgment about whether SED experiences are psychologically beneficial or detrimental, stating that further study is needed.

It is odd that Amazon chose to demonstrate voice replication of a deceased relative rather than, say, a live and healthy grandmother who could record her voice for her cherished grandchild.  But if Amazon's researchers wanted to go for the macabre, who are we to judge?  That brings us to voice replication overall.  Beyond a few limited constructive applications, researchers are not sure that releasing voice-replication AI technology into the wild is a good idea.  Amazon says it can take a short sample and construct an entire dialog from it.  Something about this seems terribly, horribly wrong.

Gaslighting entered the digital age in 2018, when The New York Times ran an article describing how digital thermostats, locks, and lights were becoming tools of domestic abuse.  The NYT described how these devices are "being used as a means for harassment, monitoring, revenge and control."  Examples included turning thermostats to 100 degrees or suddenly blasting music.

The American Public University Edge also discusses digital gaslighting.  The article explains, "This type of activity allows an abuser to easily demonstrate control over the victim, no matter where the abuser may be.  It is another method that the abuser uses to slowly chip away at a victim's self-esteem and further exacerbate the victim's stress."  How easy would it be to push someone over the edge if they kept hearing the voice of their dead father or mother?  If an abuser can convince victims that they are being haunted, or that they are losing the ability to discern reality, that abuser can then substitute in a malevolent subjective reality.

The whole idea sounds like bad fiction, but gaslighting is so prevalent in domestic abuse that the National Domestic Violence Hotline has an entire page dedicated to the gaslighting techniques an abusive partner might use.  If you find yourself in this situation, you can reach the hotline at 1-800-799-7233.

Now add stalkers to the mix.  What if you get a call from your mother or another loved one?  It appears to be your mother's number on caller ID.  You answer, and it sounds like your mother.  She says she has been in an accident, or is in some kind of trouble, and she begs you to come help her.  You drop everything and go, because it is your mother in need, and of course you know what she sounds like.  But it is not your mother.  There are methods for spoofing caller ID, and with AI voice replication, the potential for luring a victim increases considerably.  Pair that with the ability to purchase personally identifiable information (PII) in shocking detail from shady online purveyors, and you have a frightening scenario.

The CDC reports that 1 in 6 women and 1 in 19 men have been stalked in their lifetime.  The US Justice Department reports that "81 percent of women who were stalked by a current or former husband or cohabiting partner were also physically assaulted by that partner and 31 percent were also sexually assaulted by that partner."  More than a million women and about 370,000 men are stalked annually.   As a civilized society, do we really want to put the power of perfectly simulating a voice in the hands of stalkers and abusers?

What if a father receives a call at work from his daughter in college?  She has had an emergency.  Can he please send her a few thousand dollars, right now?

In combination with deepfake video technology, accurate voice replication makes convincing fake footage of real individuals far easier to produce.  Whether that video is used by teenagers to bully a schoolmate, or by a disinformation campaign to convince a populace that a leader is up to no good, the prospect of deepfakes with accurate voice reproduction is deeply troubling.

It is only fair to say that voice replication has some positive, revenue-generating applications in the entertainment industry.  Entertainment AI software 'Respeecher' needs one to two hours of voice samples to recreate a voice.  Amazon's new technology requires less than a minute of recording, which opens the door to far more source material, including messages captured from voicemail and even commands given to Alexa and Siri.

Another possible application might be in smart assistants for dementia sufferers.  While there may be a very fine line between gaslighting someone with diminished mental capacity and helping them cope, voice recreation might have positive applications under proper psychiatric care.

Given the prevalence of PII, and even medical information, for sale on the Dark Web, it is logical to expect that hackers will also traffic in short voice recordings of potential victims, especially when those recordings need only be a minute or so long.  Most of us use our own voice as the greeting on our voicemail accounts.  That means that if Amazon does release its "dead grandma" skill on the Alexa platform, the raw material will be accessible to a broad audience.  It is even possible to use Alexa and Alexa skills on home-grown, non-Alexa devices, as this article shows.  So even if this technology is limited to Alexa, it has the potential to be very problematic.

Amazon is not going to be the only company exploring voice replication. Fortune Business Insights predicts the global speech and voice recognition market will reach $28.3 billion by 2026 at an annual growth rate of almost 20%.  With those kinds of numbers, you can be sure there will be other participants in this arena.

Protecting users from digital gaslighting, stalking, and scams will get progressively more difficult, and voice replication only makes it worse.  Writing in the Lawfare Blog, the deputy director of cyber strategy and execution at the MITRE Corporation and a visiting fellow at the Hoover Institution describes this situation as "PsyOps in the home."  He states that although there are anti-stalking laws on the books, "Many of these measures cannot be applied directly to cyber gaslighting because, unlike the stalkerware situation, abusers are not adding software to home-based smart devices in order to harass their victims.  Instead, they are using the devices as they were intended to be used."  He also notes that legal challenges are harder to mount because the executives of the companies producing these technologies are somewhat untouchable: "The technologies in question have legitimate and positive uses, and one cannot realistically target the executives of smart device companies just because their equipment has been used to cause someone harm."

Clearly, this is an issue that needs more consideration.  Companies like Amazon need to evaluate carefully whether the features they are adding do more harm than good.  Cybersecurity experts need to continue to harden IoT devices against outside hacking.  Therapists and psychologists need to increase their awareness of digital gaslighting and other 21st-century threats.

Red Sky Alliance is a Cyber Threat Analysis and Intelligence Service organization.  For questions, comments, or assistance, please contact the office directly at 1-844-492-7225 or feedback@wapacklabs.com.

Source: https://www.zdnet.com/article/just-say-no-the-potential-horrors-of-ai-voice-replication/

Weekly Cyber Intelligence Briefings:
REDSHORTS - Weekly Cyber Intelligence Briefings
