
Artificial intelligence can replicate any voice, including the emotions and tone of a speaker with just 3 seconds of training - Innovation Toronto

What you need to know

Microsoft recently released an artificial intelligence tool known as VALL-E that can replicate people’s voices (via AITopics). The tool was trained on 60,000 hours of English speech data and uses 3-second clips of specific voices to generate content. Unlike many AI tools, VALL-E can replicate the emotions and tone of a speaker, even when creating a recording of words that the original speaker never said.
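VALL-E is not publicly available and exposes no API, so the workflow can only be illustrated schematically. The Python sketch below is a purely hypothetical stand-in (all names — `SpeakerPrompt`, `synthesize` — are invented for illustration) showing the zero-shot pattern the article describes: condition on a roughly 3-second clip of a target voice, then generate speech for text that speaker never said.

```python
from dataclasses import dataclass

# Hypothetical sketch only: VALL-E has no public API.
# Every name here is an invented stand-in for illustration.

@dataclass
class SpeakerPrompt:
    audio_path: str   # ~3-second enrollment clip of the target voice
    transcript: str   # what is said in that clip

def synthesize(prompt: SpeakerPrompt, text: str) -> bytes:
    """Stand-in for the model call: a real zero-shot TTS system would
    encode the prompt into acoustic tokens, then autoregressively
    generate tokens for `text` in the prompted voice (carrying over
    tone and emotion). Here we return a placeholder so the sketch runs."""
    return f"[audio of {text!r} in the voice from {prompt.audio_path}]".encode()

prompt = SpeakerPrompt("enroll_3s.wav", "Hello, this is a short sample.")
audio = synthesize(prompt, "Words the original speaker never said.")
```

The key design point is that the 3-second clip acts as a prompt rather than training data: no fine-tuning on the target speaker is needed, which is what makes the barrier to misuse so low.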

A paper posted to Cornell University's arXiv preprint server used VALL-E to synthesize several voices. Some examples of the work are available on GitHub.

The voice samples shared by Microsoft range in quality. While some of them sound natural, others are clearly machine-generated and sound robotic. Of course, AI tends to improve over time, so future generated recordings will likely be more convincing. Additionally, VALL-E only uses 3-second recordings as a prompt. If the technology were used with a larger sample set, it could undoubtedly create more realistic samples.

At the moment, VALL-E is not generally available, which may be a good thing, as AI-generated replications of people's voices could be used in dangerous ways by threat actors and others with malicious intent.

Windows Central take: Impressive but scary

While VALL-E is undoubtedly impressive, it raises several ethical concerns. As artificial intelligence becomes more powerful, the voices generated by VALL-E and similar technologies will become more convincing. That would open the door to realistic spam calls replicating the voices of real people that a potential victim knows.

Politicians and other public figures could also be impersonated. Given how quickly content travels on social media and how polarized political discussion has become, it's unlikely that many would stop to ask whether a scandalous recording was genuine, as long as it sounded at least somewhat authentic.

Security concerns also come to mind. My bank uses my voice as a password when I call. There are measures in place to detect voice recordings, and I'd assume the technology could detect whether a VALL-E-generated voice was being used. That being said, it still makes me uneasy. There's a good chance the arms race between AI-generated content and AI-detection software will escalate.

While not a security concern, some have pointed out that voice actors may lose work to VALL-E and competing tech. While it's unfortunate to see people lose work, I don't see a way around it. If VALL-E reaches a point where it can replace voice actors for audiobooks or other content, companies are going to use it. That's just the reality of advancing technology. In fact, Apple recently announced a feature that uses AI to read audiobooks.

Like any technology, VALL-E will be used for good, evil, and everything in between. Microsoft has an ethics statement on the use of VALL-E, but the future of its usage is still murky. Microsoft President Brad Smith has discussed regulating AI in the past (via GeekWire). We’ll have to see what measures Microsoft puts in place to regulate the use of VALL-E.
