
The Breach Already Happened. You Just Called It Email.

Sergio
April 2, 2026
6 min read
Email has been your biggest security risk for two decades. AI just made you notice you never fixed it.

Every week I talk to organizational leaders who want to restrict ChatGPT, block Copilot, or build an AI acceptable use policy from scratch. That instinct isn't wrong. But here's what nobody wants to say out loud:

Your email is the breach that already happened.

AI is new, visible, and scary. Email is familiar, invisible, and catastrophic. That gap between perception and reality is costing organizations real money and real data — right now — while leadership committees debate AI risk frameworks.

Let's put some numbers on the table.

$2.9 Billion. One Threat Category. Zero AI Involved.

The FBI's Internet Crime Complaint Center reported that Business Email Compromise alone cost U.S. organizations $2.9 billion in 2023. Not email breaches broadly — just one category of email fraud. Ransomware, overwhelmingly delivered via phishing email, added billions more in recovery costs, downtime, and payments that nobody talks about at the board meeting.

Now ask yourself: how much documented financial loss has your organization experienced from an AI tool?

I'm not dismissing AI risk. I'm asking you to be honest about proportionality.

Your Inbox Is an Open Door. You Just Decorated It Like a Wall.

Here's what's happening in virtually every organization I've worked with — tribal governments and professional services firms alike:

  • Employees forward sensitive files to personal Gmail accounts "to work from home"
  • Vendors receive confidential attachments over unencrypted SMTP connections
  • Finance staff email payroll data, member records, or client files with zero DLP controls
  • Password reset links, MFA codes, and credential data flow through inboxes with no retention policy
  • A single successful phishing click hands an attacker your entire mailbox — contacts, history, attachments, everything

This isn't hypothetical. This is Tuesday.

Your email system — unless you've deliberately configured Data Loss Prevention, sender authentication (SPF, DKIM, DMARC), encryption in transit, and mailbox retention policies — is an open exfiltration channel. And it doesn't require any technical sophistication to abuse.
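
You don't need a vendor assessment to get a first read on where you stand. Here's a minimal sketch in Python, assuming the dnspython package, with example.com standing in for your own domain. DKIM is left out because verifying it requires knowing your selector name:

```python
# A quick first-pass check, not a full audit: does the domain publish SPF
# and DMARC records at all? Assumes dnspython (pip install dnspython);
# "example.com" is a placeholder for your own domain.
import dns.resolver

def txt_records(name: str) -> list[str]:
    try:
        answers = dns.resolver.resolve(name, "TXT")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return []
    return [b"".join(rdata.strings).decode() for rdata in answers]

domain = "example.com"
spf = [r for r in txt_records(domain) if r.startswith("v=spf1")]
dmarc = [r for r in txt_records(f"_dmarc.{domain}") if r.startswith("v=DMARC1")]

print("SPF:  ", spf or "missing -- anyone can spoof this domain")
print("DMARC:", dmarc or "missing -- receivers have no policy to enforce")
# A DMARC policy of p=none is monitor-only; the goal is p=quarantine or p=reject.
```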

The AI Tool Has a Log. Your Forwarded Email Doesn't.

Here's the part that surprises people.

When an employee uses a properly governed AI tool — Microsoft Copilot inside an M365 tenant, or Claude on an enterprise plan with SSO and audit logging — every interaction has a record. Access is tied to identity. Data boundaries are enforced at the tenant level. There are signed data processing agreements that specify exactly what happens to your inputs.
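
To make "every interaction has a record" concrete, here's a hypothetical sketch of pulling those identity-bound records from the Microsoft Graph sign-in audit log. It assumes you already hold an OAuth token with AuditLog.Read.All in your own tenant; the token is a placeholder, and this illustrates the principle rather than serving as a monitoring solution:

```python
# A hypothetical sketch of what "governable" means in practice: identity-bound
# access records you can actually query. Assumes a Microsoft Graph OAuth token
# with AuditLog.Read.All already in hand (token acquisition omitted).
import requests

TOKEN = "..."  # placeholder: issued against your own tenant's app registration

resp = requests.get(
    "https://graph.microsoft.com/v1.0/auditLogs/signIns",
    headers={"Authorization": f"Bearer {TOKEN}"},
    params={"$top": 25},  # most recent 25 sign-in events
)
resp.raise_for_status()

for event in resp.json()["value"]:
    # Every access names a user, an application, and a timestamp.
    print(event["createdDateTime"], event["userPrincipalName"], event["appDisplayName"])
```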

When an employee forwards your client's financial records to their personal Hotmail account? Nothing. No log you control. No recovery path. No audit trail. Just your data, living somewhere you'll never see again.

The AI tool, paradoxically, is more governable than the tool you've been trusting for decades.

You're Not Afraid of AI. You're Comfortable with Email.

New technology triggers visible anxiety. We can see the AI input box. We imagine the prompt. We picture our data "going somewhere."

Email is invisible infrastructure. It feels like sending a letter. It doesn't feel like opening a port to the internet — but that's exactly what it is.

Security professionals call this familiarity bias, and it kills organizations. The known risk gets normalized. The new risk gets all the governance attention.

I've watched organizations spend months building ChatGPT policies while running mail servers with no DMARC record, no email archiving, and default Outlook configurations from 2018.

That's not an AI problem. That's a leadership problem.

AI Didn't Ask Permission. It Came with the Update.

Here's the part that makes the "should we allow AI?" debate irrelevant.

You already have AI. It's been in your tools for years. You just didn't call it that.

When your employee opens Gmail and sees a suggested reply — that's AI reading their email and generating a response. When someone drafts a memo in Microsoft Word and Copilot offers to rewrite it — that's AI processing your confidential document. When your team finishes a Zoom call and gets an auto-generated summary with action items — that's AI that just listened to your entire conversation.

Here's a partial list of AI features already active in the tools most organizations use every day:

  • Gmail: Smart Compose, Smart Reply, Nudges, Priority Inbox — all AI reading and analyzing your email content
  • Apple Mail: Smart Reply, mail summaries, Writing Tools — Apple Intelligence processing your messages
  • Microsoft Word and Outlook: Copilot, Editor, suggested replies — AI reading your documents and conversation threads
  • Google Docs: Gemini, Smart Compose, "Help me write" — AI with access to your document content
  • Zoom: AI Companion, meeting summaries — AI processing every word said in your meetings
  • Slack: AI search, channel recaps — AI reading your internal communications

Nobody opted into most of these. They shipped with software updates or were enabled by default. Some — like Microsoft Copilot — require admin activation. But others, like Gmail's Smart Compose or Zoom's AI Companion, were turned on quietly. And most organizations never audited which features are active across their stack.

So when an employee types a client's Social Security number into an email and Smart Compose finishes the sentence — AI just processed that PII. When a meeting about a pending lawsuit gets auto-summarized by Zoom — AI just created a record of your legal strategy. When someone pastes financials into a Word doc and Copilot offers to "help" — your confidential data is being processed by a model.

The question was never "should we use AI?"

The question is: Do you know which of your tools are already using AI, what data they're touching, and under whose terms?

Most organizations I work with can't answer that. They've spent months drafting a ChatGPT policy while Gmail's AI has been reading every email in the building.

That's not a technology gap. That's an awareness gap. And you can't secure what you don't know is happening.

Fix What's Bleeding Before You Bandage What Might Bruise

This isn't either/or. You need both. But sequence matters — and most organizations have it backwards.

First, lock down email:

  • Enforce DMARC, DKIM, and SPF — today, not next quarter. If you don't know your current DMARC policy, that's your answer.
  • Enable Data Loss Prevention rules for sensitive data categories: PII, financial records, member information, anything with a regulatory obligation attached
  • Configure mailbox retention and archiving — you need to know what left your organization and when, especially if you're ever subject to litigation hold or a federal audit
  • Deploy phishing-resistant MFA on every mail account. That means FIDO2 hardware keys or passkeys — not SMS codes, not email OTP, and not plain push approvals. Attackers intercept the first two and fatigue their way past the third.
  • Audit external forwarding rules right now. You may be surprised what you find. Auto-forward rules set by employees — or set by attackers who already got in — are one of the most common and most silent exfiltration methods in email. (One way to run that audit is sketched just after this list.)
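
On that last point, the audit doesn't have to wait for a tooling project. Below is a hedged sketch of one way to sweep for external forwarding rules using Microsoft Graph's messageRules endpoint. The token, user list, and domain are placeholders, and it covers inbox rules only, not mailbox-level forwarding settings:

```python
# A hedged sketch, not a turnkey tool: sweep inboxes for rules that forward
# or redirect mail outside your domain, via Microsoft Graph's messageRules
# endpoint. Assumes an app token with MailboxSettings.Read permission.
import requests

TOKEN = "..."  # placeholder
USERS = ["alice@example.com", "bob@example.com"]  # placeholder mailboxes
OWN_DOMAIN = "@example.com"  # placeholder: your own domain

def external_forwards(upn: str) -> list[str]:
    resp = requests.get(
        f"https://graph.microsoft.com/v1.0/users/{upn}/mailFolders/inbox/messageRules",
        headers={"Authorization": f"Bearer {TOKEN}"},
    )
    resp.raise_for_status()
    findings = []
    for rule in resp.json()["value"]:
        actions = rule.get("actions") or {}
        targets = (actions.get("forwardTo") or []) + (actions.get("redirectTo") or [])
        for target in targets:
            address = target["emailAddress"]["address"]
            if not address.lower().endswith(OWN_DOMAIN):
                findings.append(f"rule '{rule.get('displayName')}' -> {address}")
    return findings

for upn in USERS:
    for finding in external_forwards(upn):
        print(f"[EXTERNAL FORWARD] {upn}: {finding}")
```

Anything this prints deserves a conversation the same day.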

Then, govern the AI you already have — and the AI you choose next:

  • Audit the AI features already active in your stack. Gmail, Outlook, Word, Zoom, Slack — check what's on, what data it accesses, and whether it's processing under your tenant or the vendor's general terms. Most organizations have never done this.
  • Identify any additional AI tools employees are using. Shadow AI is real, and banning tools doesn't stop people — it just stops visibility.
  • Build an acceptable use policy grounded in your actual data classification, not a downloaded template — and make sure it covers the AI inside your existing tools, not just the chatbots
  • For new AI tools, choose enterprise-grade options with tenant isolation, audit logging, and a signed DPA
  • Train users on what not to input — into any tool with AI features, not just the ones with "AI" in the name. (A toy example of the kind of pre-input check that training can pair with follows below.)
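
And on that training point, one tiny illustration. This is a toy, not production DLP: a pre-input screen showing the kind of pattern users and tools should both be watching for before text lands in an AI feature. The patterns are deliberately simple and will miss plenty; real enforcement belongs at the platform layer:

```python
# A toy pre-input screen, not production DLP: flag the obvious PII patterns
# before text goes into any AI-assisted tool.
import re

PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def flag_pii(text: str) -> list[str]:
    return [label for label, pattern in PATTERNS.items() if pattern.search(text)]

draft = "Client SSN is 123-45-6789 -- please run payroll today."
hits = flag_pii(draft)
if hits:
    print("Stop: this text appears to contain", ", ".join(hits))
```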

The Breach Won't Come from a Chatbot

If you don't have email security controls in place, you don't have a data security posture. You have a posture-shaped gap with a phishing target painted on it.

AI deserves governance. But it doesn't deserve the front of the line while your email infrastructure runs on assumptions and good intentions.

The breach that defines your organization probably won't come from a language model. It'll come from a message with a convincing subject line and a link someone clicked at 4:47 on a Friday afternoon.

That's the risk worth losing sleep over.

DeSoto Consulting helps tribal governments and professional services firms build security postures that match real threat landscapes — not headline fears. If your email security hasn't been reviewed in the last 12 months, that's where we start.
