Here is a prediction: the adoption of AI tooling will lead to government regulation of the software industry.

Tell me I am wrong. But hear me out first.

In my view, there are two major ways in which the recent advent of AI contributes to software products:

  1. Agentic AI as part of a larger software system. LLMs, SLMs and other AI models are integrated into software.
  2. AI-assisted development. Tools such as Copilot, Cursor, and Claude Code generate the code for us and speed up software development.

AI-assisted development promises significant efficiency gains: software can be developed much more quickly. We see this happening all over the place; the consequences thus far are mainly layoffs (or so the big companies would have us believe, at least).

Companies must keep pace with these efficiency gains, and the only choice is to adopt AI for engineering. The best among them will use these tools without sacrificing quality (in fact, they will use them to improve quality). This requires that you know what you are doing; you need good software engineering practices in place to integrate AI into your process. It also requires money, as these tools come at a cost that easily balloons.

Many organizations lack one or the other, but most of all they lack best practices. And so it becomes a race to the bottom, where quality is sacrificed for speed and lower costs.

Another trend we will see is Citizen Developers - a movement that started with Low-Code / No-Code - becoming a real thing. Anybody and everybody can create software and put it into production. Without a background in software engineering, applications of abysmal quality will be released into the wild, and people will use them. Once software is out, it is nearly impossible to remove.

The challenge with Agentic AI is its non-deterministic behavior. This must be harnessed, which means we need to build failsafe systems for when AIs breach the boundary of what they are intended to do (the happy path). This requires a good understanding of what these AI systems must (and mustn't) do. In other words, their behavior must be clearly specified and modelled, and controls must be implemented.
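To make that concrete, here is a minimal sketch in Python of what such a control could look like. It assumes a hypothetical agent that proposes tool calls; the names (Action, ALLOWED_TOOLS, execute_safely) are illustrative, not any particular framework's API. The point is the shape: the permitted behavior is specified explicitly, every proposed action is validated against that specification, and anything outside it fails safe.

```python
from dataclasses import dataclass

@dataclass
class Action:
    tool: str      # which tool the agent wants to invoke
    argument: str  # the payload it wants to pass

# The specification: an explicit allowlist of what the agent may do.
# These names are hypothetical; a real system would derive them from
# its own behavioral model of the agent.
ALLOWED_TOOLS = {"search_catalog", "create_draft_order"}
MAX_ARGUMENT_LENGTH = 500

def validate(action: Action) -> None:
    """Reject any action outside the specified boundary (the happy path)."""
    if action.tool not in ALLOWED_TOOLS:
        raise PermissionError(f"tool {action.tool!r} is outside the specification")
    if len(action.argument) > MAX_ARGUMENT_LENGTH:
        raise ValueError("argument exceeds the specified size limit")

def execute_safely(action: Action) -> str:
    """Fail safe: validate first, escalate to a human on any breach."""
    try:
        validate(action)
    except (PermissionError, ValueError) as err:
        return f"escalated to human review: {err}"
    return f"executing {action.tool}({action.argument!r})"

# An in-spec action executes; an out-of-spec one is caught, not run.
print(execute_safely(Action("search_catalog", "winter boots")))
print(execute_safely(Action("delete_all_orders", "")))
```

Note the fail-closed design: the agent never executes anything that was not explicitly specified, which is the kind of control a regulator would plausibly ask to see.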

I believe many companies are not up to this task. And so we will have Agentic AI systems in place that behave outside their control parameters. They will appear buggy and display unexpected behavior.

These systems will cause damage. When the damage becomes pervasive enough to impact society - and that is only a matter of time - governments will have to act. Their response will be regulation. I expect it will look similar to how physical construction (buildings and the like) is overseen.

Governments already do this to a certain extent for safety-critical systems (healthcare, banking, critical infrastructure, etc.). This oversight will be extended and applied to the rest of the industry.

It will take some time before this happens. Maybe 5 to 10 years. But the regulation is long overdue.