
Shhhh, they’re listening – inside the coming voice-profiling revolution

You decide to call a store that sells some hiking boots you’re thinking of buying. As you dial in, the computer of an artificial intelligence company hired by the store is activated. It retrieves its analysis of the speaking style you used when you phoned other companies the software firm services. The computer has concluded you are “friendly and talkative.” Using predictive routing, it connects you to a customer service agent who company research has identified as being especially good at getting friendly and talkative customers to buy more expensive versions of the goods they’re considering.
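To make the mechanics of that scenario concrete, here is a minimal sketch of how such “predictive routing” could work, written in Python. Everything in it – the agent names, the style labels and the route_call function – is invented for illustration; it is not taken from any real call-center product.

```python
# A minimal sketch of "predictive routing": matching a caller's inferred
# speaking-style label to an agent pool. All names and labels here are
# hypothetical illustrations, not any vendor's actual API.

from dataclasses import dataclass

@dataclass
class Agent:
    name: str
    strengths: set[str]   # style labels this agent converts well with

AGENTS = [
    Agent("A. Rivera", {"friendly and talkative"}),
    Agent("B. Chen", {"terse and price-sensitive"}),
]

def route_call(caller_profile: dict) -> Agent:
    """Pick the first agent whose strengths match the caller's inferred style."""
    style = caller_profile.get("speaking_style", "unknown")
    for agent in AGENTS:
        if style in agent.strengths:
            return agent
    return AGENTS[0]  # fall back to a default agent

print(route_call({"speaking_style": "friendly and talkative"}).name)
```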

This hypothetical situation may sound as if it’s from some distant future. But automated voice-guided marketing activities like this are happening all the time.

If you hear “This call is being recorded for training and quality control,” it isn’t just the customer service representative they’re monitoring.

It can be you, too.

When conducting research for my forthcoming book, “The Voice Catchers: How Marketers Listen In to Exploit Your Feelings, Your Privacy, and Your Wallet,” I went through over 1,000 trade magazine and news articles on the companies connected to various forms of voice profiling. I examined hundreds of pages of U.S. and EU laws applying to biometric surveillance. I analyzed dozens of patents. And because so much about this industry is evolving, I spoke to 43 people who are working to shape it.

It soon became clear to me that we’re in the early stages of a voice-profiling revolution that companies see as integral to the future of marketing.

Thanks to the public’s embrace of smart speakers, intelligent car displays and voice-responsive phones – along with the rise of voice intelligence in call centers – marketers say they are on the verge of being able to use AI-assisted vocal analysis technology to achieve unprecedented insights into shoppers’ identities and inclinations. In doing so, they believe they’ll be able to circumvent the errors and fraud associated with traditional targeted advertising.

Not only can people be profiled by their speech patterns, but they can also be assessed by the sound of their voices – which, according to some researchers, is unique and can reveal their feelings, personalities and even their physical characteristics.

Flaws in targeted advertising

Top marketing executives I interviewed said that they expect their customer interactions to include voice profiling within a decade or so.

Part of what attracts them to this new technology is a belief that the current digital system of creating unique customer profiles – and then targeting them with personalized messages, offers and ads – has major drawbacks.

A simmering worry among internet advertisers, one that burst into the open during the 2010s, is that customer data often isn’t up to date, profiles may be based on multiple users of a device, names can be confused and people lie.

Advertisers are also uneasy about ad blocking and click fraud, which happens when a site or app uses bots or low-paid workers to click on ads placed there so that the advertisers have to pay up.

These are all barriers to understanding individual shoppers.

Voice analysis, on the other hand, is seen as a solution that makes it nearly impossible for people to hide their feelings or evade their identities.

Building out the infrastructure

Most of the activity in voice profiling is happening in customer support centers, which are largely out of the public eye.

But there are also hundreds of millions of Amazon Echoes, Google Nests and other smart speakers out there. Smartphones also contain such technology.

All of them are listening to and capturing people’s individual voices. They respond to your requests – but the assistants are also tied to advanced machine-learning and deep-neural-network programs that analyze not just what you say, but how you say it.
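As a rough illustration of what “analyzing how you say it” can mean in practice, the sketch below summarizes a recording with simple spectral features and trains a toy classifier on labeled voices. The file names, labels and pipeline are hypothetical assumptions, not any company’s actual system.

```python
# A toy sketch, not any vendor's real pipeline: one way to turn "how you
# say it" into a prediction is to extract spectral features from a
# recording and feed them to a classifier trained on labeled voices.
# Assumes librosa and scikit-learn are installed; file paths and labels
# are hypothetical.

import numpy as np
import librosa
from sklearn.linear_model import LogisticRegression

def voice_features(path: str) -> np.ndarray:
    """Summarize a recording as the mean of its MFCCs (a common timbre feature)."""
    y, sr = librosa.load(path, sr=16000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    return mfcc.mean(axis=1)

# Hypothetical training data: recordings already labeled by human annotators.
X = np.stack([voice_features(p) for p in ["caller_001.wav", "caller_002.wav"]])
y = np.array([1, 0])  # 1 = "friendly and talkative", 0 = otherwise

clf = LogisticRegression().fit(X, y)
print(clf.predict([voice_features("new_caller.wav")]))
```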

Amazon and Google – the leading purveyors of smart speakers outside China – appear to be doing little voice analysis on those devices beyond recognizing and responding to individual owners. Perhaps they fear that pushing the technology too far will, at this point, lead to bad publicity.

Nevertheless, the user agreements of Amazon and Google – as well as Pandora, Bank of America and other companies that people access routinely via phone apps – give those companies the right to use their digital assistants to understand you by the way you sound. Amazon’s most public application of voice profiling so far is its Halo wristband, which claims to know the emotions you’re conveying when you talk to relatives, friends and employers.

The company assures customers it doesn’t use Halo data for its own purposes. But it’s clearly a proof of concept – and a nod toward the future.

Patents point to the future

The patents from these tech companies offer a vision of what’s coming.

In one Amazon patent, a device with the Alexa assistant detects speech irregularities that suggest a woman has a cold, using “an analysis of pitch, pulse, voicing, jittering, and/or harmonicity of a user’s voice, as determined from processing the voice data.” Based on that conclusion, Alexa asks whether she wants a recipe for chicken soup. When she says no, it offers to sell her cough drops with one-hour delivery.
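For a sense of what measures like “jittering” are, here is a rough sketch that computes pitch and relative jitter from a recording using the open-source librosa library. The recording name and the idea of flagging a cold from these two numbers are illustrative assumptions; a real system would be far more involved.

```python
# A rough sketch of two of the acoustic measures the patent names (pitch
# and "jitter"), assuming the librosa library. The file name is
# hypothetical, and this is not the patent's actual method.

import numpy as np
import librosa

y, sr = librosa.load("utterance.wav", sr=16000)   # hypothetical recording

# Frame-by-frame fundamental frequency (pitch); NaN marks unvoiced frames.
f0, voiced, _ = librosa.pyin(y, fmin=65.0, fmax=400.0, sr=sr)
f0 = f0[~np.isnan(f0)]

periods = 1.0 / f0                                 # pitch periods, in seconds
jitter = np.mean(np.abs(np.diff(periods))) / np.mean(periods)

print(f"mean pitch: {f0.mean():.1f} Hz, relative jitter: {jitter:.3%}")
# A system might treat unusually high jitter or a lowered pitch as a sign of
# hoarseness, then trigger a follow-up like the cough-drop offer above.
```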

A page from an Amazon patent depicts a woman interacting with a home assistant.