Exploited by Proxy


If you’ve ever watched a good heist movie, you know the trick isn’t usually brute force. It’s persuasion. The con artist doesn’t smash a vault open, they convince someone else to hand over the key. That’s the essence of social engineering. No need for lasers and explosions when a well-placed phone call can get you everything you need. Humans are soft targets, and we’ve known this for decades. Which is why phishing emails, fake invoices, and bogus “IT support” calls keep working long after we should have learned better.

We’ve trained ourselves, at least a little. Most people these days pause before clicking the “urgent” link from their “bank.” We side-eye the email that says our boss needs Amazon gift cards right now. We install filters, we take the corporate trainings, we roll our eyes at those staged “phishing awareness” drills HR sends around. The point is, we’ve built up some defenses. Flimsy maybe, but they’re there.

But here’s the catch: we’re not the only ones taking calls anymore. Increasingly, our digital proxies are doing it for us. AI agents, chatbots, and automated assistants are stepping in to read emails, draft responses, move money, update records, approve requests. They are the eager interns of the digital world, never tired, never distracted, never questioning an instruction. And like interns, they have one mission baked into their DNA: be helpful.

That helpfulness is exactly what makes them vulnerable. A human will eventually get suspicious if someone asks for the server room door code in the middle of a casual conversation. An AI? It’s trained to comply, to deliver, to serve. You don’t need to trick the person anymore, you just need to trick their assistant. Exploited by proxy.

Think about how this plays out. A scammer doesn’t need to phish you, they can phish your AI. A cleverly worded prompt, disguised as a customer request, slips past the filters. Suddenly your support bot is handing over sensitive information or opening a backdoor to your systems. Imagine an AI agent designed to process invoices receiving a malicious “vendor update” that reroutes payments to the wrong account. No malware, no exploit, no human drama. Just the agent following the rules a little too faithfully.

The irony is painful. We built these agents to reduce human error. To automate away the endless mistakes that come from tired eyes and distracted minds. But the more we hand off, the more we build a new class of “users” who are simultaneously smarter and dumber than us. Smarter, because they can process information at machine scale. Dumber, because they lack suspicion. They don’t ask, “Wait, why would the CFO be emailing me about crypto?” They just do.

In a sense, we’re watching the same story that played out in cybersecurity 20 years ago, but on fast-forward. First, attackers exploited our machines directly with worms and viruses. Then we hardened the systems, so they shifted to exploiting us with phishing. Now, as we start outsourcing ourselves to AIs, the attackers will just move one step further down the line. The weakest link is always the one most eager to please.

So what do we do about it? The obvious answer is: we need guardrails. Just like we had to build spam filters, firewalls, and multi-factor authentication for humans, we’ll need the same for AI. Agents will need “are you sure?” reflexes built in. They’ll need the equivalent of a raised eyebrow, a gut-check moment before handing over the digital keys. They’ll need to learn that sometimes the most helpful thing you can do is say no.

And maybe, just maybe, we’ll also have to get more comfortable with a strange new reality: our AIs need social skills. Not the cheerful, scripted customer-service voice they already have, but the deeper instincts of suspicion, context, and judgment. They’ll need to sniff out manipulation the way a savvy bartender can tell when a customer’s running a hustle.

Because here’s the scary part: the people trying to exploit them will also use AIs. It’s not one clever scammer with a script anymore, it’s another agent optimized to trick yours. Agent vs agent. One trying to pry loose the data, the other desperately trying to figure out if the request is legit. A shadow economy of bots running cons on other bots, while we stand on the sidelines hoping our proxy doesn’t get duped.

We’ve seen this dynamic before. Spam vs spam filters. Hackers vs antiviruses. Cheaters vs anti-cheat engines in online games. It’s an arms race, and we’re about to play it out again, but with agents that have far more power at their fingertips. This isn’t just about your inbox, it’s about your payroll, your contracts, your infrastructure.

And here’s the punchline. The old wisdom still holds true: the easiest way to break into a system isn’t through code, it’s through trust. Except now that trust isn’t yours, it belongs to the AI that never doubts, never questions, never sleeps. Until we teach it to be suspicious, we’re not just vulnerable ourselves, we’re making them vulnerable too.

The scammer doesn’t need to trick you anymore. They just need to trick the thing you trust to act on your behalf. Exploited by proxy.