Amazon uses kid’s dead grandma in morbid demo of Alexa audio deepfake - Ars Technica
Jun 23, 2022

Amazon is figuring out how to make its Alexa voice assistant deepfake the voice of anyone, dead or alive, with just a short recording.
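For readers curious what "deepfaking" a voice from a short recording looks like in practice, here is a minimal sketch using the open-source Coqui TTS package and its pretrained YourTTS voice-cloning model. This is purely illustrative: Amazon has not published how its Alexa feature works, and the file names below are hypothetical placeholders.

# Illustrative sketch only: not Amazon's implementation, which is unpublished.
# Assumes the open-source Coqui TTS package ("pip install TTS") and its
# pretrained multilingual YourTTS model, which can mimic a voice from a short
# reference recording of the target speaker.
from TTS.api import TTS

# Load the pretrained voice-cloning model (downloaded on first use).
tts = TTS(model_name="tts_models/multilingual/multi-dataset/your_tts")

# "reference_voice.wav" is a hypothetical clip, roughly a minute long, of the
# speaker whose voice is being cloned.
tts.tts_to_file(
    text="Hello, this sentence is spoken in a cloned voice.",
    speaker_wav="reference_voice.wav",
    language="en",
    file_path="cloned_output.wav",
)

Off-the-shelf models like this can produce a recognizable imitation from only a brief sample, which is part of why the consent concerns discussed below extend well beyond Amazon.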

The company demoed the feature at its re:Mars conference in Las Vegas on Wednesday, leaning on the emotional trauma of pandemic-era grief to drum up interest.

During the second-day keynote, Rohit Prasad, senior vice president and head scientist of Alexa AI at Amazon, showed off the voice-mimicking feature being developed for Alexa. In the demo, a child asks, "Alexa, can Grandma finish reading me The Wizard of Oz?" and the assistant reads the story in the deceased grandmother's synthesized voice.

Prasad said only that Amazon is "working on" the capability; he didn't specify what work remains or when, if ever, it will be available.

Other companies have put voice-cloning tech into products, but those notably differ from Amazon's demo in that the owner of the voice decides to provide their vocals, rather than the product using the voice of someone likely unable to give their permission.

Besides worries about deepfakes being used for scams, rip-offs, and other nefarious activity, there are already some troubling things about how Amazon is framing the feature, which doesn't even have a release date yet.

It's not hard to believe that there are good intentions behind this developing feature and that hearing the voice of someone you miss can be a great comfort.

However, Amazon's AI voice assistant isn't the place to satisfy those human needs.

And as we've discussed above, there are other companies leveraging deepfake tech in ways that are similar to what Amazon demoed.

But framing a developing Alexa capability as a way to revive a connection to late family members is a giant, unrealistic, problematic leap.
