
The new white paper discusses how the rise of AI technology is blurring the line between the authentic and the artificial. As people struggle to tell what is real from what is not, Voices' guide explores how an understanding of AI ethics and risk management can help them navigate the serious security and privacy risks created by advanced voice synthesis technology.
More details can be found at https://www.voices.com/navigating-ai-voice-fraud
Voices highlights how difficult it is for humans to tell deepfake and real voices apart: a study by University College London found that listeners correctly identified AI-generated voices only 73% of the time. Criminals are capitalizing on this, deploying ever more sophisticated voice-cloning software to target all areas of society.
“We’ve seen convincing financial and biometric-access fraud, misinformation campaigns, social engineering, impersonation scams targeting the vulnerable, and even ‘swatting’ incidents, where hoax calls are weaponized to bring the authorities down on someone,” explains the paper.
However, Voices suggests that solutions exist in the form of biometric authentication systems, which can identify deepfakes by spotting telltale signs: patterns common in synthetic voices, unnatural breathing, unusual fluctuations in background noise, and data artifacts left over from the voice generation process.
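To make the idea concrete, here is a minimal, purely illustrative sketch of one weak signal such detectors can draw on: spectral flatness, which characterizes the texture of an audio frame's noise floor. This is not the system the paper describes; the function names, frame length, and thresholds are hypothetical, and a real detector would combine many features with a trained model.

```python
import numpy as np

def spectral_flatness(frame: np.ndarray) -> float:
    """Ratio of geometric to arithmetic mean of the power spectrum (0..1).

    Tonal content (e.g. a steady synthetic hum) scores near 0;
    noise-like content scores closer to 1.
    """
    power = np.abs(np.fft.rfft(frame)) ** 2 + 1e-12  # floor avoids log(0)
    return float(np.exp(np.mean(np.log(power))) / np.mean(power))

def mean_spectral_flatness(samples: np.ndarray, frame_len: int = 1024) -> float:
    """Average spectral flatness over non-overlapping frames of audio."""
    frames = [samples[i:i + frame_len]
              for i in range(0, len(samples) - frame_len + 1, frame_len)]
    return float(np.mean([spectral_flatness(f) for f in frames]))
```

In practice a detector would track how this and similar features fluctuate frame to frame, since generated audio often exhibits unnaturally stable background noise compared with a live recording.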
The paper discusses the need for clear ethical guidelines around the use of AI in voice synthesis, such as those being developed by the Open Voice TrustMark Initiative, Respeecher, EthicalAI, and the Partnership on AI.
Voices is committed to helping build greater trust between voice talent providers and end users, ensuring that AI use is transparent and follows robust ethical guidelines.
A spokesperson says, “At Voices, we talk a lot about our Three Cs: consent, compensation, and control. Paired with transparency and accountability, this basic ethical framework ensures AI voices and datasets are collected, maintained, and used ethically and fairly.”
Amid the challenges posed by deepfakes, Voices points out, it is easy to lose sight of the many positive and transformative aspects of AI voice technology. For example, people with speech impairments can use AI voices to reclaim their voice identity rather than having to rely on generic text-to-speech programs.
There are upsides, too, for content creation across language barriers and for the repair and restoration of damaged historical voice recordings.
For more information, go to https://www.voices.com/navigating-ai-voice-fraud
Contact Info:
Name: Patrice Aldave
Email: Send Email
Organization: Voices
Address: 100 Dundas St Suite 700, London, Ontario N6A 5B6, Canada
Website: https://www.voices.com/
Release ID: 89159903