
Navigating the Bureaucratic Jungle with Artificial Intelligence? – Björn Steinrötter on Legal AI Tools for Public Authorities

Professor of Law Björn Steinrötter knows how “legal tech” can speed up lengthy planning and approval procedures. Together with Honorary Professor Christian Czychowski and others, he is assisting the Federal Ministry of the Interior (BMI) with the introduction of AI-based legal reviews. Potsdam Honorary Professor Christoph Wagner is also advising the BMI on the same project with another team. 

Prof. Steinrötter, how widespread are AI tools in the legal system today?
Theoretical considerations on how to map legal processes in computer science have existed since the 1960s. At some point in the 1990s, people started talking about the “death of legal informatics”, mainly because the technical tools to implement this sometimes quite substantial theoretical research simply did not exist at the time. Today, the situation is completely different: legal service providers known as “legal tech”, which are revitalizing the legal market with automation solutions, are now widespread. And now that generative AI is making major advances, government administrations and the judiciary are stepping up and saying: using conventional methods, we can hardly cope with the number and scope of legal proceedings in a reasonable period of time.

German administrative institutions are often criticized for their lengthy application and review procedures. How can artificial intelligence speed up a complex legal review?
Planning and approval procedures are a relatively clearly demarcated legal area. These can include procedures for the approval of wind turbines or hydrogen pipelines. Some of these procedures follow clear rules, while others require assessments or discretion regarding legal consequences. In law, there is very often no single right or wrong decision, but rather a corridor of acceptability and a margin of judgment. That is why the BMI – and I consider this to be correct because it is realistic – is not concerned with completely autonomous systems, but rather with AI-supported preparation of legal decisions by humans.

What does AI need to be capable of to be used for planning and approval procedures?
Ultimately, all relevant laws, associated case law, and administrative practice must be fed into the system. The BMI has already given us an impressive preview of its plans. Basically, learning-based and rule-based AI systems are to be combined for this purpose. The latter achieve their results through pure if-then programming. Learning-based AI systems, on the other hand, use large amounts of data and identify patterns. These AI systems can also handle ambiguity and vagueness, which exist in the legal context both at the factual level and within the norms themselves. In any case, sufficient data quality is required, which means, for example, that changes in case law must be entered immediately, as must submissions from parties affected by construction projects. The weighting of judgments must also be reflected so that it is clear, for example, that a new ruling by the highest court outweighs many previous rulings by lower courts.
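To make this combination more concrete, here is a minimal sketch of how pure if-then checks could sit alongside a case-law signal weighted by court level and recency. The court weights, the distance rule, and all names are invented for illustration; this is not the BMI’s actual architecture.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical illustration of combining a rule-based check with a
# weighted case-law signal. Court levels, weights, and the distance
# rule are invented for this sketch, not taken from the BMI project.

COURT_WEIGHT = {"lower": 1.0, "appellate": 3.0, "highest": 10.0}

@dataclass
class Ruling:
    court_level: str        # "lower", "appellate", or "highest"
    decided: date
    supports_approval: bool

def rule_based_checks(distance_to_housing_m: float) -> list[str]:
    """Pure if-then programming: return a list of hard violations."""
    violations = []
    if distance_to_housing_m < 1000:  # invented minimum-distance rule
        violations.append("minimum distance to housing not met")
    return violations

def case_law_signal(rulings: list[Ruling], today: date) -> float:
    """Weight rulings by court level and recency, so that one new
    highest-court ruling can outweigh many old lower-court rulings."""
    score = 0.0
    for r in rulings:
        age_years = (today - r.decided).days / 365.25
        weight = COURT_WEIGHT[r.court_level] / (1.0 + age_years)
        score += weight if r.supports_approval else -weight
    return score  # > 0 leans toward approval; a human still decides
```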

Can we expect such AI systems to be available soon?
We are practically on the verge of implementing such systems in public administration. At least, that is the clear goal of the BMI in the aforementioned project. The legal publisher C.H. Beck has also recently introduced an AI-supported research tool that draws on the publisher’s databases and is already showing quite decent results. At the BMI, our contribution so far has mainly been theoretical preparatory work. However, the BMI has set itself the ambitious goal of developing a test version by the end of this year. The planned platform will initially map planning and approval procedures for the hydrogen core network and will be expanded later.

How much faster could construction projects then be approved?
To determine whether, for example, the construction of a wind farm is lawful, a great deal of information has to be reviewed and evaluated by humans over a very long period of time. An AI system can do this in a matter of seconds and highlight the areas where the user needs to take action. A process that previously took several months, perhaps even years, could thus be ready for decision much more quickly.

What would work in governmental legal departments look like if AI took over most of the due diligence work?
The initial phase would probably involve significant change for everyone concerned. But I expect it will become normal quite quickly, as these applications work very intuitively. Human legal review would then be limited, in a sense, to checking whether the result is acceptable. It must also be possible to understand why the system has come to a particular result. The people involved must therefore be no less competent than before, quite the contrary. We still need skilled lawyers who do not simply approve AI-generated results without scrutiny, as this would also be highly problematic from a constitutional perspective.

Would procedural errors be eliminated with AI support?
I wouldn’t say so. In a large number of “normal” cases, we would probably see fewer errors. However, instead of human errors, other errors would occur. AI derives its results from databases, i.e., from the majority of comparable past cases. It simply looks at how decisions were made previously. This can become a problem when the system encounters an atypical case. It may recognize the atypical circumstances, but it lacks both the ability to evaluate them and the innate legal creativity to deal with them. The fact that such systems formulate legal texts with great conviction, sometimes even “invent” judgments, and that we tend to trust them uncritically is ultimately also a serious risk. In an administrative context, the consequences could extend to official liability, meaning that the state could be held liable for damages.
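As a purely hypothetical sketch of this failure mode: a system that takes the majority outcome of the most similar past cases will still return a confident answer for an atypical case, because a similarity search always finds something, however distant. The function below is invented for illustration only.

```python
# Hypothetical sketch: majority vote over the k most similar past cases.
# Note what is missing: no check whether the "nearest" cases are in fact
# close at all, so an atypical case still gets a confident answer.

def decide_like_past(case_features: list[float],
                     past: list[tuple[list[float], bool]],
                     k: int = 5) -> bool:
    """Return the majority outcome of the k most similar past cases."""
    def dist(a: list[float], b: list[float]) -> float:
        return sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = sorted(past, key=lambda rec: dist(rec[0], case_features))[:k]
    votes = sum(1 if outcome else -1 for _, outcome in nearest)
    return votes > 0
```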

Are there also legal reservations regarding the use of “legal tech”?
If AI systems were to make sovereign decisions themselves, there would be clear constitutional objections. But even if AI is “only” used to prepare decisions, there are still issues relating to data protection, copyright, and IT security. The first of these has been investigated by Janko Geßner in our team, the latter two by Christian Czychowski. Furthermore, when I, as a human being, apply the law, it is a creative thought process that always involves evaluations based on prior legal knowledge and judgment, as well as a kind of “world knowledge”. Various hermeneutic processes take place when applying the law. This begins with the mere determination of the facts, which must first be constructed from a legal perspective. To ensure that this process is not arbitrary, legal methodology defines criteria that the interpretation of the law must follow. These include the wording, the systematic context, the legislative history, and the meaning and purpose of a law. The question I specifically addressed in the BMI project was whether this traditional methodology can be applied to AI-based case review.

What conclusions did you reach in your expert opinion for the BMI?
We need an entirely new methodology for AI-based legal reviews. Compared to human case review, artificial intelligence surprisingly often arrives at similar results that fall within the corridor of acceptability. However, the path to those results is different. After all, learning-based AI relies on stochastic patterns and does not make judgments in the human sense. The hermeneutic steps that conventional legal methodology seeks to capture are simply omitted. AI-based case review thus differs from human application of the law in the process leading to the result. Therefore, we also need a different methodology, one that takes the specific workings of AI systems into account. I have outlined some initial possible cornerstones of such a methodology: the completeness of the input data, data quality requirements including bias control, the traceability or explainability of results, the need for human supervision and final decision-making, and many other aspects.
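As a minimal, hypothetical sketch of how two of these cornerstones – the traceability of results and human final decision-making – might be enforced in software (the class and function names are assumptions for illustration, not part of the expert opinion):

```python
from dataclasses import dataclass

@dataclass
class DraftDecision:
    outcome: str             # e.g. "approve with conditions"
    explanation: list[str]   # reasoning steps the system can show
    sources: list[str]       # statutes and rulings relied on
    human_approved: bool = False

def sign_off(draft: DraftDecision, reviewer: str) -> DraftDecision:
    """A draft without a traceable explanation and cited sources
    cannot even reach human sign-off."""
    if not draft.explanation or not draft.sources:
        raise ValueError("not traceable: explanation or sources missing")
    print(f"{reviewer} reviewed and accepted: {draft.outcome}")
    draft.human_approved = True  # the human makes the final decision
    return draft
```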


Björn Steinrötter is Professor of Civil Law, IT Law, and Media Law at the University of Potsdam. He is chair of the board of trustees of the German Foundation for Law and Informatics, a board member of the Robotics & Artificial Intelligence Law Society e.V., and a member of the GRUR expert committee on data law.

 

This text was published in German in the university magazine Portal – Zwei 2025 „Demokratie“.

You can find all articles in English at a glance here: https://www.uni-potsdam.de/en/explore-the-up/up-to-date/university-magazine/portal-two-2025-democracy

Online editorial: Nele Reimann
Translation: Susanne Voigt