
Every week I talk to organizational leaders who want to restrict ChatGPT, block Copilot, or build an AI acceptable use policy from scratch. That instinct isn't wrong. But here's what nobody wants to say out loud:
Your email is the breach that already happened.
AI is new, visible, and scary. Email is familiar, invisible, and catastrophic. That gap between perception and reality is costing organizations real money and real data — right now — while leadership committees debate AI risk frameworks.
Let's put some numbers on the table.
The FBI's Internet Crime Complaint Center logged $2.9 billion in reported losses from Business Email Compromise in 2023. Not email breaches broadly, just one category of email fraud. Ransomware, overwhelmingly delivered via phishing email, added billions more in recovery costs, downtime, and payments that nobody talks about at the board meeting.
Now ask yourself: how much documented financial loss has your organization experienced from an AI tool?
I'm not dismissing AI risk. I'm asking you to be honest about proportionality.
Here's what's happening in virtually every organization I've worked with, tribal governments and professional services firms alike.
This isn't hypothetical. This is Tuesday.
Your email system — unless you've deliberately configured Data Loss Prevention, sender authentication (SPF, DKIM, DMARC), encryption in transit, and mailbox retention policies — is an open exfiltration channel. And it doesn't require any technical sophistication to abuse.
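You don't need a vendor engagement to see whether the basics exist. SPF and DMARC records are public DNS entries anyone can look up. Here's a rough sketch of that check in Python, assuming the third-party dnspython package is installed and using example.com as a placeholder domain; it only confirms the records exist, not that the policies behind them are strict enough.

```python
# Quick look-up: does a domain publish SPF and DMARC records?
# Assumes the third-party dnspython package: pip install dnspython
import dns.resolver


def txt_records(name: str) -> list[str]:
    """Return all TXT records for a DNS name, or an empty list if none exist."""
    try:
        answers = dns.resolver.resolve(name, "TXT")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer, dns.resolver.NoNameservers):
        return []
    return [b"".join(rdata.strings).decode() for rdata in answers]


def check_email_auth(domain: str) -> None:
    # SPF lives in a TXT record on the domain itself; DMARC on the _dmarc subdomain.
    spf = [r for r in txt_records(domain) if r.lower().startswith("v=spf1")]
    dmarc = [r for r in txt_records(f"_dmarc.{domain}") if r.lower().startswith("v=dmarc1")]

    print(f"{domain} SPF:   {'present' if spf else 'MISSING'}")
    print(f"{domain} DMARC: {'present' if dmarc else 'MISSING'}")
    if dmarc and "p=none" in dmarc[0].replace(" ", "").lower():
        print("DMARC policy is p=none: monitoring only, spoofed mail is not being rejected.")


if __name__ == "__main__":
    check_email_auth("example.com")  # placeholder; substitute your own domain
```

DKIM is left out of the sketch because verifying it requires knowing the selector your mail provider uses.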
Here's the part that surprises people.
When an employee uses a properly governed AI tool — Microsoft Copilot inside an M365 tenant, or Claude on an enterprise plan with SSO and audit logging — every interaction has a record. Access is tied to identity. Data boundaries are enforced at the tenant level. There are signed data processing agreements that specify exactly what happens to your inputs.
When an employee forwards your client's financial records to their personal Hotmail account? Nothing. No log you control. No recovery path. No audit trail. Just your data, living somewhere you'll never see again.
The AI tool, paradoxically, is more governable than the tool you've been trusting for three decades.
New technology triggers visible anxiety. We can see the AI input box. We imagine the prompt. We picture our data "going somewhere."
Email is invisible infrastructure. It feels like sending a letter. It doesn't feel like opening a port to the internet — but that's exactly what it is.
Security professionals call this familiarity bias, and it kills organizations. The known risk gets normalized. The new risk gets all the governance attention.
I've watched organizations spend months building ChatGPT policies while running mail servers with no DMARC record, no email archiving, and default Outlook configurations from 2018.
That's not an AI problem. That's a leadership problem.
Here's the part that makes the "should we allow AI?" debate irrelevant.
You already have AI. It's been in your tools for years. You just didn't call it that.
When your employee opens Gmail and sees a suggested reply — that's AI reading their email and generating a response. When someone drafts a memo in Microsoft Word and Copilot offers to rewrite it — that's AI processing your confidential document. When your team finishes a Zoom call and gets an auto-generated summary with action items — that's AI that just listened to your entire conversation.
And those are just a few of the AI features already active in the tools most organizations use every day.
Nobody opted into most of these. They shipped with software updates or arrived enabled by default. Some, like Microsoft Copilot, require admin activation; others, like Gmail's Smart Compose or Zoom's AI Companion, were simply turned on quietly. And most organizations have never audited which features are active across their stack.
So when an employee types a client's Social Security number into an email and Smart Compose finishes the sentence — AI just processed that PII. When a meeting about a pending lawsuit gets auto-summarized by Zoom — AI just created a record of your legal strategy. When someone pastes financials into a Word doc and Copilot offers to "help" — your confidential data is being processed by a model.
The question was never "should we use AI?"
The question is: Do you know which of your tools are already using AI, what data they're touching, and under whose terms?
Most organizations I work with can't answer that. They've spent months drafting a ChatGPT policy while Gmail's AI has been reading every email in the building.
That's not a technology gap. That's an awareness gap. And you can't secure what you don't know is happening.
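If the honest answer is no, the cheapest starting point is a written inventory: one line per AI feature, recording what it touches and under whose terms. Here's a minimal sketch of what that record could look like in Python; the entries are illustrative placeholders drawn from the tools named above, not a statement of how any vendor's defaults are actually set, so treat every value as something to verify against your own tenant.

```python
# A bare-bones AI-feature inventory: one record per feature, so the question
# "which tools already use AI, touching what data, under whose terms?" has a
# written answer. The entries are illustrative placeholders, not an audit.
from dataclasses import dataclass


@dataclass
class AIFeature:
    tool: str             # where the feature lives
    feature: str          # what the AI does
    data_touched: str     # what it can see
    activation: str       # how it got turned on
    governing_terms: str  # the agreement that covers the processing


inventory = [
    AIFeature("Gmail", "Smart Compose / suggested replies",
              "email bodies and drafts", "on by default (verify per tenant)",
              "Google Workspace agreement"),
    AIFeature("Zoom", "AI Companion meeting summaries",
              "meeting audio and transcripts", "account setting (verify per tenant)",
              "Zoom terms of service"),
    AIFeature("Microsoft 365", "Copilot in Word and Outlook",
              "documents and mail in the tenant", "requires admin activation",
              "Microsoft data processing agreement"),
]

for f in inventory:
    print(f"{f.tool}: {f.feature} | sees {f.data_touched} | {f.activation}")
```

A spreadsheet does the same job; the point is that the answer exists somewhere other than in people's heads.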
This isn't either/or. You need both. But sequence matters — and most organizations have it backwards.
First, lock down email: enforce sender authentication with SPF, DKIM, and DMARC, turn on Data Loss Prevention for outbound mail, require encryption in transit, and set mailbox archiving and retention policies.
Then, govern the AI you already have, and the AI you choose next: inventory which AI features are active across your stack, move sanctioned tools onto enterprise plans with SSO and audit logging, keep data inside tenant boundaries, and put data processing agreements in writing.
If you don't have email security controls in place, you don't have a data security posture. You have a posture-shaped gap with a phishing target painted on it.
AI deserves governance. But it doesn't deserve the front of the line while your email infrastructure runs on assumptions and good intentions.
The breach that defines your organization probably won't come from a language model. It'll come from a message with a convincing subject line and a link someone clicked at 4:47 on a Friday afternoon.
That's the risk worth losing sleep over.
DeSoto Consulting helps tribal governments and professional services firms build security postures that match real threat landscapes — not headline fears. If your email security hasn't been reviewed in the last 12 months, that's where we start.