Reading the Tide: When AI Is Enough—and When It Isn’t for Legal Questions
A practical guide for individuals, businesses, and public entities navigating AI use in legal decision-making
Artificial intelligence has become the first stop for a lot of legal questions. I see that firsthand in my practice almost every day.
Many clients come to us after they’ve already run an issue through AI. Some have gone further—submitting comments to an administrative agency or even filing things in court based on AI-generated content without involving an attorney or other appropriate expert. Even after we’re retained, we sometimes recognize the hallmarks of AI in client emails responding to draft work we’ve shared. It’s a fair assumption that those drafts are being run through an AI tool to generate comments or revisions before coming back to us.
None of that is unusual. And in many cases, it starts from a good instinct—trying to understand the issue and engage more effectively. But it also reflects something I see regularly: people are getting information without always having a clear sense of what to trust or what to do with it.
How do you know when you’ve gotten what you reasonably can from AI—and when it’s time to take the next step? A few questions can help:
· Am I trying to understand something, or am I about to act on it?
· If I’m off base, what are the real-world consequences?
· How would I feel if this exchange showed up in a dispute or a file review?
You don’t need perfect answers. The exercise itself usually clarifies where you are.
Where AI really helps
For getting oriented, AI is a useful tool. It can translate dense language, highlight issues you might not have considered, and give you enough context to have a more productive conversation if you decide to bring in a lawyer.
Used this way, it helps you get your footing. It gives you a sense of the landscape before you decide how far you need to go.
And candidly, lawyers are using it for many of those same purposes.
Where things get less predictable
At some point, the question shifts from understanding to action. That’s where the limitations become more important—and where we’ve seen consistent patterns.
AI is not just occasionally off in minor ways. It can be completely wrong in ways that are not obvious unless you already know the subject matter.
We’ve seen it reverse the roles of parties in a contract structure, suggest provisions that don’t fit the transaction, and cite cases that don’t actually support the point being made. In some instances, the authority looks close enough on first read to seem usable, but falls apart once you look at the facts more carefully.
Even within legal-specific research platforms, the output still needs to be checked. The system can point you in a useful direction, but it doesn’t replace the step of confirming whether the authority actually supports the conclusion in your situation.
What makes this difficult is that the answer usually sounds right. It’s clear, organized, and confident. Without experience—or a habit of verification—it’s easy to move forward without realizing anything is off.
The part that catches people off guard
Another issue tends to surface later, when it’s harder to unwind. When you communicate with a lawyer to seek legal advice, those communications are generally protected. When you interact with an AI platform, you are working with a third-party tool. That difference can matter if a dispute develops.
Courts are already addressing this. In United States v. Heppner (S.D.N.Y. 2026), a federal court considered whether materials generated through an AI platform—and later shared with a lawyer—were protected. The court said they were not. The original exchange was with the AI platform, not with counsel, and sharing it afterward did not change that.

That principle shows up in a more practical way as well. If you take draft work your attorney has provided—whether a memo, contract, or litigation document—and run it through an unrestricted AI tool to generate comments or revisions, you may be stepping outside the protections that would otherwise apply. The material is no longer confined to the attorney–client relationship, and depending on how and where it is used, privilege may be affected.
Most clients don’t intend that result. They are trying to engage thoughtfully with the work. But how that engagement happens can change the footing in ways that aren’t immediately visible.
A word on criminal exposure
There is one area where it’s worth being more direct. If there is any realistic possibility of criminal liability—whether due to past conduct, a regulatory investigation, or potential prosecution—it is better to speak with a criminal defense attorney first, whether private counsel or a public defender.
Digital records can become evidence. Explorations that feel hypothetical in the moment may be viewed differently later. Some situations require a different level of caution from the outset.
Why this lands differently for organizations
Everything above applies to individuals. Within a business, nonprofit, or public entity, the same behavior carries additional implications. Information that feels routine may involve employee matters, donor data, internal strategy, or procurement decisions. There may be obligations around how decisions are made and documented. In a public setting, some of what gets created may be subject to disclosure.
In those settings, the consequences tend to travel further than the original decision, which is why organizations need clear AI policies.
How lawyers are approaching it
It’s also worth being clear about how this is playing out on the other side. Lawyers are not avoiding AI. Used carefully, it improves efficiency. It helps organize issues, speeds up research, and streamlines drafting.
The difference lies in how it’s used—and what happens next. There is attention to the platform itself. Not all AI tools handle data the same way. Some enterprise and professional tools offer stronger protections around confidentiality and limit how user inputs are stored or used. Others are designed to meet specific regulatory standards. Many publicly available tools, especially in default settings, may retain or use inputs in ways that are not appropriate for sensitive information.
At Watershed Legal Counsel, we use reputable legal-specific research tools and closed general-purpose AI platforms that do not train on user data and limit the retention of prompt information. That allows us to benefit from efficiency while maintaining appropriate safeguards.
There is also judgment about what information should be entered at all. Even with stronger protections, certain facts and strategy discussions are handled carefully.
And there is always verification. When AI surfaces a case, statute, or regulatory interpretation, the underlying source is still pulled and read. The question is whether it actually supports the point being made in this context. Experience is what allows you to recognize when something needs a second look—and to know how to check it.
So where does that leave you? For most people, AI is a very good place to start. It helps you understand what you’re looking at. It helps you prepare. It gives you a foundation to move forward. At some point, though, the question shifts from understanding to judgment.
If you find yourself wondering whether you’re missing something, or what the downside looks like if things don’t go as planned, that’s usually the moment to bring another perspective into the conversation.
That doesn’t necessarily require a large engagement. Sometimes it’s just a short discussion to make sure you’re on the right track.
A final thought
AI can take you a good distance. But it won’t tell you when you’ve reached the point where experience matters more than speed. Recognizing that point—knowing when to pause and take a closer look—is the real skill.
And if you’re not sure, that’s a perfectly reasonable time to pick up the phone. When you reach your turning point, we’ll be there.
Watershed Legal Counsel advises private clients and government instrumentalities in environmental and natural resources matters, serves as outside general counsel for mission-driven enterprises in the environmental sector, and provides strategic legal services that help organizations manage change. Founder Jennifer Wazenski is a Maryland attorney who has practiced environmental and natural resources law since 1991. She served as Principal Counsel to the Maryland Department of Natural Resources from 2010 through 2021, and, prior to that, Deputy Counsel to the Maryland Department of the Environment.
Disclaimer: Attorney advertising. The information provided at this site is for general purposes only. It is not, nor is it intended to be, legal advice.
© 2026 Watershed Legal Counsel. All rights reserved.
