
Voice Profiling and Freedom

Joseph Turow

Voice profiling is a gateway drug to a new era of hyper-personalized targeting. Since Apple introduced Siri for the phone in 2011 and Amazon debuted Alexa in 2014—with Google Assistant catching up and Chinese players moving forward since then—speaking to a device and expecting a relevant response has become an ordinary experience for billions of people. Few of us realize that we are turning over biometric data to companies when we give voice commands to our smartphones and smart speakers, or when we call contact center representatives, but that’s exactly what is happening. In each of these situations, our voice impulses are converted into binary code, and although nothing in civilization or the laws of economics requires that those zeroes and ones be exploited, a robust and growing industry has developed to do just that. From the home and the car to the store, the hotel, and beyond, companies devoted to personalized marketing are gearing up to add what individuals’ bodies say about them to more traditional demographic, psychographic, and behavioral tags.

Consider what would happen if only a bit more of what is taking place in contact centers began happening in the home and in cars, stores, hotels, and schools. Our worlds would increasingly be filled by offers—not necessarily explicit ads—based on our putative emotions and sentiments. Need help while in the car? If so, how do you know that the intelligent agent is not responding to you based on your voice profiles? Will companies want to deal with you when you sound irascible? Will they answer your questions quickly or linger to offer you discounts? Will the sound of your voice open or close the door to certain deals? Will firms link physical characteristics that they inferred from your voice last week to what you buy today—and combine that data with other information they have collected—to draw conclusions about your health, and so about the benefits and risks in striking up a long-term relationship with you?

The possibilities are endless, and these examples are likely just the tip of a huge discriminatory iceberg. We’re already subject to differential offers and opportunities based on various facts about us—such as our income, where we live, our race and sex, and other attributes. Voice profiling adds an especially insidious means of labeling us. We could be denied loans, refused insurance or have to pay much more for it, or turned away from jobs, all on the basis of physiological characteristics and linguistic patterns that we typically don’t change and whose existence is certified by a science that may not actually be good at predicting behavior. What if voice profiling tells a prospective employer that you’re a bad risk for a job that you covet—or desperately need? What if it tells a bank that you’re a bad risk for a loan? What if a restaurant decides it won’t take your reservation because you sound low-class (read Black or Hispanic, though the algorithm supposedly corrected for that), or too demanding, or somehow not cool enough for its image? What if a public advocacy organization won’t take your donation because its algorithms profile you as gay? Discrimination through voice profiling can be extremely subtle and hard to detect—and thus hard to fight. The problem may be compounded when digital thieves enter the picture. They may steal corporate profiles based on voice—and in some cases voiceprints themselves—and use them for malicious purposes that could range from trying to steal your identity, to spreading unfavorable ideas that companies have about you, to extortion.1

Even if shoppers are only dimly aware that these activities are widespread across several industries, they may start to worry about their position in the marketplace and to suspect that the system of commerce is stacked against them. They may also begin to worry that opening their mouths anywhere in public may result in unwanted inferences about them, because microphones are everywhere. As Echo and Google Home were first gaining popularity, reviewers suggested that users push the off button when discussing topics they didn’t want the voice assistant to overhear. Turning the device off, however, takes away the spontaneity that is at the heart of the assistants’ seductiveness; when on, the assistant is always open to a question or command from across the room. So people leave it on, opening themselves to voice surveillance that statements and patents from Google, Amazon, and others indicate will lead to discriminatory treatment in the American public sphere, where the self and shopping are defined together.


  1. Thanks to William Frucht for help in formulating these examples.

From The Voice Catchers: How Marketers Listen In to Exploit Your Feelings, Your Privacy, and Your Wallet by Joseph Turow. Published by Yale University Press in 2021. Reproduced with permission.


Joseph Turow is the Robert Lewis Shayon Professor of Communication at the University of Pennsylvania’s Annenberg School for Communication. He is the author of numerous books, including The Aisles Have Eyes.
