
Beware of AI Poisoning: Caution for Law Firms in 2026

Team STS

What is AI Poisoning?

AI poisoning is a phenomenon in which bad actors feed false information into AI systems in order to influence their recommendations, recall, and fact-based outputs.

Bad actors use a variety of methods (ironically, often powered by AI tools) to “poison” or negatively influence AI models to attack businesses, create phishing opportunities, and more.

Here’s how AI poisoning can affect your law firm in 2026 as cases rise and impact grows.

Poisoned search results can get ugly for your law firm’s reputation.

Disgruntled former employees, competitors, dissatisfied clients, and the average hacker all have something in common: a potential motive to damage your law firm.

While corporate sabotage and revenge plots are slightly less common threats, hackers employ similar tactics to extort your law firm into paying them to stop. AI poisoning has made it easier than ever for anyone, regardless of motive or technical experience, to extort or defame your law firm.

A bad actor can use AI to quickly generate numerous articles or blog posts and publish them on the web, accompanied by AI-powered "review bombs": clusters of negative and/or false reviews across platforms like Google and Yelp.

Because AI pulls from web content and reviews to make recommendations and search summaries (both in the AI app itself and on major search platforms like Google and Bing), these clusters of recent, negative, and false information can influence both your law firm's rank against competitors and the recommendations AI gives your potential clients, partners, employees, or investors.

False reviews or defamatory statements can be synthesized into the AI’s learning and output, leading it to caution against your law firm or even steer clients away (and potentially toward a competitor).

Clients (and potential clients) can be scammed by AI poisoning

We’ve covered common cybersecurity scams run against law firms before, including impersonation schemes in which hackers pretend to be your law firm and contact clients to intercept payment, or trick clients into granting hackers access to private networks and tools.

AI poisoning makes it even easier to do so. In fact, hackers can have your clients calling them directly: all it takes is an indirect prompt injection, in which malicious instructions or falsified details are planted in web content that an AI assistant reads and repeats.

A client or prospect may Google your firm's phone number, contact form, or email address and be met with an AI-generated response containing falsified contact details that connect them directly with hackers impersonating your firm and team.

Often, clients and prospects won't think to check twice; they trust the result given to them by the AI agent or AI-generated search summary. Because they believe they're corresponding with your team, your law firm can get dragged through the mud even if you've never spoken with the victim of this scam before.

Are you AI-Ready?

Assess your law firm's AI readiness with a consultation from our experts. 


AI poisoners can phish your employees—causing a firm-wide data breach.

The contact-based scam outlined above also works to fool employees of your firm. Hackers often target large, frequently used business vendors (ADP, Microsoft, and Google, for example) and poison AI tools to return falsified contact information for these companies.

A well-meaning employee of your law firm may search for the phone number of your business software or vendors and accidentally dial a scammer. Given the use of AI voice tools and sophisticated phishing techniques, it can be extremely difficult to tell the difference between a real support center, payment portal, or customer service email and a fake one created to steal your information.

All it takes is one simple mistake by a single employee to endanger the rest of your staff, clients, business, and reputation. Once a hacker finds a way into your network, they can easily breach your sensitive client data and confidential business information, including banking, accounting, and staff members' personal information.

Some legal research AI tools can be poisoned—and so can your casework.

If your AI-powered legal research tools pull information from the internet at large, they’re bound to turn up errors from time to time. In the age of AI poisoning, these errors are magnified.

Our colleague Spencer X Smith of Ampliphi recently published a briefing on AI poisoning, which highlights the issue this may pose for tools such as legal research AI assistants, saying:

“The problem isn’t just wrong answers. The larger problem is corrupted judgment.

[...] Think about how your team uses AI today: Drafting emails, summarizing documents, or evaluating options. If the information feeding those tools has been tampered with, the output looks normal but the judgment behind it is compromised.”

Without critical AI training and careful evaluation of results, your attorneys and other employees risk carrying unchecked biases, judgments, and falsehoods over into casework from poisoned legal AI tools.

How to Protect Your Law Firm

The number one defense against AI poisoning is a strong security foundation. This includes:

  • Mandatory AI training for all employees
  • Strong governance and cybersecurity policy enforcement
  • A culture of skepticism: make sure employees double-check the output of generative AI tools.

It’s not enough to address only AI-related security concerns. Your law firm’s cybersecurity policy should be robust and holistic, with AI as just one part of a larger system of defense.


Prevent AI Poisoning from Spreading – Start Today

If you’re concerned about your law firm’s security posture in the age of AI poisoning, help is here.

Start today with a Free Security Vulnerability Scan to pinpoint areas of concern and begin strengthening your cybersecurity posture.

As a premier legal IT provider and consultant, our team can lend their expertise to your law firm’s AI-readiness and cybersecurity. We’re happy to provide a free discovery session to discuss your AI planning and usage—fill out the form below or send us an email to get the guidance you need for free.
