Google’s Ethical Blindspot: Making AI That Imitates Us

AI masquerading as Sundar Pichai.


At Google I/O 2025, Sundar Pichai took the stage to introduce a new feature for Gmail: personalized smart replies to messages in your inbox. The pitch was slick, showing how Google’s AI rifled through Pichai’s e-mails and documents to craft a seemingly thoughtful reply that even aped his writing style, down to his annoying habit of omitting “I am” (“Excited to see you” rather than “I am excited to see you”), a tic that makes statements sound committed without his actually having to stand behind them. But beneath the surface of convenience lies a deeper concern: Google’s ethical blindspot, one the company has more than once failed to acknowledge.

In 2018, Google introduced Duplex, an AI assistant that made phone calls sounding uncannily human, complete with filler words like “um” and “uh” to enhance believability. It could book a hair appointment or reserve a table at a restaurant—all while the person on the other end believed they were speaking with a real human.

Both features are marvels of engineering. But they also reveal a company increasingly willing to trespass on ethical boundaries in the name of innovation.

Impersonation Without Transparency

The core problem with Duplex, and now with Gmail’s smart replies, is not that they work—it’s that they are designed to pretend to be human. There’s a profound difference between an AI assistant that helps and one that deceives. Google has entered the territory of impersonation, deploying AI systems that interact without clear disclosure that a machine is speaking or writing.

This is the crux of Google’s ethical blindspot: an AI system masquerading as a person erodes trust. If you’re on the receiving end of a call or e-mail, you should know whether you’re engaging with a person or an algorithm. That’s not just good design—it’s an ethical necessity.

Convenience Over Conscience

Google’s AI products often prioritize user convenience over human agency. Auto-generated e-mails may save time, but they also risk reducing communication to a synthetic echo of ourselves. The subtle transformation of our digital voices into AI-generated scripts creates a chilling effect on authenticity. We begin to outsource not just labor, but what it means to be human.

This is part of a larger pattern. Google’s internal decisions—like continuous layoffs while reporting record profits—betray a short-term mindset, a failure to invest in long-term human capital. In the same way, its AI development reflects a drive to build fast and dazzle, without pausing to ask what the long-term consequences might be—or whether they should be building it at all. The failure here is not technical. It’s moral.

Reimagining Human-Centered AI

AI doesn’t have to be deceptive. It can be transparent, collaborative, and respectful of the human beings it serves. But that requires leadership with a moral compass. It requires a company like Google to see its users (and its employees) not as inputs to its machine, but as people.

Until that shift happens, we should view every AI demo from Google not just with wonder—but with caution.

Jayson Adams is a technology entrepreneur, artist, and the award-winning and best-selling author of two science fiction thrillers, Ares and Infernum. You can see more at www.jaysonadams.com.