The use of AI systems such as ChatGPT in legal contexts increasingly blurs the line between general legal information and regulated legal advice. Large Language Models generate outputs that may closely resemble legal counsel in both form and substance, raising fundamental questions of permissibility, responsibility, and liability. In legal systems such as Germany and the European Union, where the provision of legal advice is a protected professional activity, this distinction carries direct legal significance.
This article examines the conditions under which AI-generated content may qualify as legal services within the meaning of the German Legal Services Act (Rechtsdienstleistungsgesetz, RDG) and explores parallels to international concepts such as the Unauthorized Practice of Law (UPL). It demonstrates that the primary legal risk lies not in the technology itself, but in the context of its use: specifically, whether AI is employed as an internal support tool under professional legal supervision or offered as an autonomous, publicly accessible advisory service. By identifying key risk factors, including case-specific application, legal interpretation, and the absence of human oversight, the article illustrates how AI-driven applications may inadvertently cross regulatory boundaries. The conclusion is clear: while AI can effectively support legal work, responsibility, authorization, and liability remain strictly human obligations.
AI systems like ChatGPT are increasingly used in legal contexts, from contract drafting suggestions to quick legal explanations. This raises a crucial legal question: are the answers from large language models (LLMs) considered legal advice? And if so, who is allowed to give it?
Legal Advice: A Protected Activity
In many jurisdictions, especially in Germany and across the EU, the provision of legal advice is a regulated activity. The German Legal Services Act (Rechtsdienstleistungsgesetz, RDG) defines legal services as follows:
“A legal service is any activity in concrete, third-party matters, as soon as it requires a legal examination of the individual case.”
(§ 2 Abs. 1 RDG)
In other words: whenever someone (or something) offers guidance on how the law applies to a specific, individual situation, that activity constitutes a legal service and requires authorization, typically through qualification as a lawyer.
What About AI?
LLMs, such as GPT-based systems, can generate responses that look like legal advice. But does such output qualify as legal advice in the legal sense? It depends on several factors (a simple pre-check sketch follows this list):
- Concrete reference: Is the answer tied to a specific, individual case?
- Legal analysis: Does the output contain an evaluation of legal rules?
- Target audience: Does the answer support internal legal work — or is it publicly provided to individuals seeking legal advice?
- Context: Is the AI answer used in a legal advisory setting or as general information?
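These factors can be read as a rough decision procedure. The following is a minimal, hypothetical sketch of how a provider might encode them as an internal pre-release check; the field names and the combination rule are illustrative assumptions, not a legal test, and cannot substitute for a qualified legal assessment.

```python
# Hypothetical pre-release check encoding the four factors above.
# All names and the combination rule are illustrative assumptions,
# not a legal test under § 2 RDG.
from dataclasses import dataclass

@dataclass
class OutputContext:
    concrete_case: bool      # tied to a specific, individual matter?
    legal_analysis: bool     # evaluates or interprets legal rules?
    public_audience: bool    # served to the public rather than internal legal staff?
    advisory_setting: bool   # presented as advice rather than general information?

def likely_legal_service(ctx: OutputContext) -> bool:
    """Flag output that combines a concrete case with legal analysis
    and reaches an external audience in an advisory setting."""
    return (ctx.concrete_case and ctx.legal_analysis
            and (ctx.public_audience or ctx.advisory_setting))

# Example: a public chatbot answering "Can my landlord evict me?" for one user.
risky = OutputContext(concrete_case=True, legal_analysis=True,
                      public_audience=True, advisory_setting=True)
assert likely_legal_service(risky)
```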
Legal Evaluation in Germany
According to legal scholars and recent debate in Germany, AI-generated output can qualify as a legal service under § 2 RDG, especially if it:
- Provides personalized guidance (“You should…”),
- Interprets legal provisions for a specific factual scenario,
- Is used without human legal review.
If a company or individual offers such AI-based legal services to others without authorization under the RDG, this constitutes an unauthorized legal service, which is prohibited (§ 3 RDG) and may result in fines.
AI as a Tool vs. AI as a Service
The key legal distinction is:
- AI as a tool: Used by lawyers or legally authorized professionals internally. ✅ Legal.
- AI as a service: Provided to the public without legal review. ❌ Possibly illegal.
This means: a lawyer using GPT to draft a first version of a contract and then reviewing it is perfectly acceptable. But an AI app giving legal recommendations to consumers without human oversight is potentially a violation of the RDG.
International Perspectives
In the United States, the situation varies by state, but Unauthorized Practice of Law (UPL) statutes apply similarly: AI systems giving legal advice may violate UPL rules unless operated under a lawyer's supervision. The American Bar Association (ABA) and several state bars have issued cautionary opinions.
Best Practice: Use with Transparency and Caution
Anyone using or offering AI in legal contexts should do the following (a minimal guardrail sketch appears after the list):
- Ensure disclaimers make clear that output is not legal advice.
- Limit AI output to general legal information, not case-specific advice.
- Have qualified legal review before clients act on AI-generated answers.
- Clarify responsibilities — AI is not liable, but you might be.
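As a concrete illustration of these practices, the sketch below wraps an arbitrary text-generation call with two of the guardrails above: a standing disclaimer and an escalation path for case-specific queries. The keyword heuristic, the function names, and the disclaimer wording are assumptions made for illustration only, not a compliance-grade implementation.

```python
# Hypothetical guardrail wrapper: every answer carries a disclaimer, and
# queries that look case-specific are escalated to a qualified human
# reviewer instead of being answered autonomously. The marker list is a
# deliberately crude illustrative heuristic.
DISCLAIMER = ("This output is general legal information, not legal advice. "
              "Consult a qualified lawyer before acting on it.")

CASE_SPECIFIC_MARKERS = ("my contract", "my employer", "should i", "in my case")

def answer_with_guardrails(query: str, generate) -> str:
    """Wrap a text-generation callable with disclaimer and review guardrails."""
    if any(marker in query.lower() for marker in CASE_SPECIFIC_MARKERS):
        # Case-specific request: route to human review rather than advise.
        return ("Your question concerns an individual case and has been "
                "queued for review by a qualified lawyer.\n\n" + DISCLAIMER)
    return generate(query) + "\n\n" + DISCLAIMER

# Usage with any LLM client wrapped as a callable:
print(answer_with_guardrails(
    "What does the RDG regulate in general?",
    lambda q: "The RDG governs out-of-court legal services in Germany."))
```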
Conclusion
Yes: LLMs can produce content that qualifies as legal advice under the law. But they cannot be held accountable, and they are not authorized to give such advice. That puts responsibility squarely on the humans who use them, or who profit from them.
The safest legal position: AI can support legal work, but it must never replace a licensed professional where binding guidance is given. When in doubt, treat AI output as informational, not advisory.
As the line between information and advice continues to blur, law firms and providers face a new strategic choice: regulate themselves — or wait to be regulated.
References
- Schirmbacher, M. Artificial Intelligence and Legal Services in Germany. Härting Rechtsanwälte / International Bar Association. URL: https://www.ibanet.org/MediaHandler?id=df7ac29b-7cc9-43e1-94df-799ea4b00017
- American Bar Association (2024). Formal Opinion 512: Ethics Guidance on Lawyers' Use of Generative AI. URL: https://www.americanbar.org/news/abanews/aba-news-archives/2024/07/aba-issues-first-ethics-guidance-ai-tools/
- Merken, S. (2024). Lawyers Using AI Must Heed Ethics Rules, ABA Says in First Formal Guidance. Reuters. URL: https://www.reuters.com/legal/legalindustry/lawyers-using-ai-must-heed-ethics-rules-aba-says-first-formal-guidance-2024-07-29/
- Krieger, M. M.; Cohen, D. R. (2024). Navigating the Seven Cs of Ethical AI Use by Lawyers. Reuters. URL: https://www.reuters.com/legal/legalindustry/navigating-seven-cs-ethical-use-ai-by-lawyers-2024-12-20/