AI in Software Development Is No Longer Optional

Why using AI occasionally as a chatbot is no longer enough, why companies need AI-assisted software delivery to stay competitive, and why that makes architectural thinking more important.

Let us start with the elephant in the room: in 2026, software should already be built with AI.

Yes, I know. For a first real article after the blog introduction, I could have started with a cleaner architecture topic, a pattern, a platform design choice, or something more timeless.

I promise the next articles will do that. But before going deeper into system architectures, cloud, data, integration patterns, or specific engineering decisions, it is worth addressing the shift that now sits underneath all of them: AI is changing how technical work gets done.

And many people are still underusing it badly.

Of course, some people reading this are already using AI very deeply, perhaps more deeply than what I criticize later in this article. Good. This is a provocation mainly for the many people and teams who still are not, because that gap already matters and will matter even more soon.

Some still treat AI as an occasional chatbot. Some ask it for snippets, auto-completion, or summaries and think that counts as adoption. Some teams have “allowed” AI, but have not actually changed how they build software.

That is not enough anymore.

At this point, using AI seriously is not just a personal productivity trick. It is increasingly a requirement for teams and companies that want to keep pace over the next few years (or even months).

The question is no longer whether AI belongs in software development.

The real question is whether we are using it deeply enough to matter.

This is not optional anymore

The reason is simple: AI changes delivery speed.

Used the right way, AI allows many people to operate at a completely different level.

Teams that use coding models well can explore more options, produce more implementation output, refactor more aggressively, document faster, test faster, and move from idea to working software much more quickly than teams that still operate mostly in the old way.

And this is not about becoming 5% faster.

In many cases, the difference is closer to an order-of-magnitude shift. Not a marginal optimization, but a real 10x or more change in how fast a person or a team can analyze, plan, implement, refine, and ship meaningful software work.

That does not mean every AI-generated result is good. It does not mean the technology is mature in every dimension. And it definitely does not mean judgment stops mattering.

The baseline has changed.

When one team can design and ship in days what another team still needs weeks just to structure, the difference is no longer cosmetic. It becomes strategic. It affects roadmap speed, internal leverage, delivery cost, and how quickly a company can respond when the market changes.

At that point, failure to adopt is not just inefficiency. It becomes a serious competitive risk.

If this kind of order-of-magnitude speed change becomes normal in the market, companies that do not absorb it deeply enough will not just look a bit slower. They risk being overtaken by teams that can iterate faster, learn faster, deliver faster, and adapt faster.

That is why I do not see AI in development as a nice extra anymore.

I see it as part of the operating model.

Most people are still underusing it

This is the uncomfortable part.

A lot of technical people are still using AI in a very limited way:

  • asking occasional questions in chat
  • requesting small isolated code snippets
  • using it like a smarter search engine
  • stopping at the first answer instead of refining
  • avoiding larger asks because they do not trust it enough
  • keeping the model far away from real project scope

That is not where the leverage is.

The real change starts when the conversation becomes much broader.

Not:

write this function

But:

let us discuss this business problem, the constraints, and the possible ways to solve it with software. Then give me a detailed plan for the best option, challenge the weak points, and let us implement it.

That is a completely different use of AI.

It moves the model from local helper to force multiplier.

This article is mainly about AI coding

There is also a broader world of agentic automations, autonomous workflows, and AI-operated systems.

That matters, and it deserves its own discussion.

But the more immediate shift is already here: coding itself has changed.

Tools like Codex, Claude Code, and the broader class of coding models are already changing how engineers, developers, platform teams, and technical leads can approach software work. That is the focus here.

The point is not whether autonomous agents will eventually run entire delivery pipelines on their own.

The point is that even before that future becomes normal, teams should already be using AI much more deeply in the software development process than many of them are today.

The unit of work has changed

The old unit of work was often small and local.

  • implement this ticket
  • write this function
  • fix this bug
  • add this endpoint

That is still part of the job, but it is no longer the most important level at which good technical people should operate.

The new unit of work is broader:

  • understand the business problem
  • surface the constraints
  • analyze task and project complexity
  • map the available solution space
  • evaluate trade-offs in complexity, cost, and effort
  • choose an approach
  • evaluate whether large parts of legacy code should be refactored to accelerate future development
  • simplify and automate more processes where the old way no longer makes sense
  • define the plan
  • let the model help execute it
  • review and verify that it works

The new unit of work is no longer just the function or the ticket. It is the problem, the constraints, the solution space, the architecture, and then the implementation.

This is one of the biggest mindset shifts.

If you use AI well, you stop interacting with software work only at the level of local implementation. You start working more directly on the relationship between the problem, the constraints, the solution options, the architecture, and the execution.

That is why tools like Codex and Claude Code matter. Not because they can write another helper method, but because they make it much easier to stay in that broader loop and then go vertical into implementation.

How AI should actually be used

For me, the practical workflow now looks much more like this:

  1. start from the business problem, not from the function
  2. explain the constraints clearly
  3. ask for multiple solution options
  4. ask the model to criticize its own proposals
  5. choose an approach
  6. ask for a detailed implementation plan
  7. let the model implement all the work
  8. review and refine at both code level and system level

That is a much more serious use of AI than “generate me a snippet.”

You should be asking bigger questions.

You should be pushing for broader plans.

You should be asking for alternatives, counterarguments, trade-offs, and refinements.

You should be using the model to do as much heavy lifting as possible under strong human direction, up to designing and implementing whole projects.

And yes, that means trusting it with bigger chunks of work than many people are still comfortable with.

Not blindly, but decisively.

This is why architectural thinking matters more

When implementation becomes cheaper, the value moves upward.

That is the real shift.

What becomes more important is not the ability to produce syntax faster by hand. It is the ability to:

  • frame the right problem
  • understand business context
  • choose boundaries and abstractions
  • evaluate trade-offs
  • structure a system coherently
  • define interfaces
  • challenge unnecessary complexity
  • decide what should not be built

This is why I think many technical roles will increasingly drift toward architecting solutions.

Developers, cloud engineers, data people, platform teams, and technical leads will all still need to understand implementation. But the differentiator moves more and more toward system thinking, business knowledge, solution design, and judgment.

That is not a side effect.

That is one of the central consequences of AI in development.

“But AI still makes mistakes” is no longer a serious objection

When I read online discussions or articles on this topic, I still often see some variation of:

  • Claude made this stupid mistake
  • Codex got stuck in a weird loop
  • the model hallucinated in this one specific case

Yes, that happens.

Hallucinations, strange loops, and obvious errors were a much bigger topic a few years ago, and they have not disappeared completely.

But models have improved very quickly, and the practical importance of those failures keeps shrinking.

At this point, in a large share of real software work, the issue is usually not that the model is fundamentally unusable. The issue is that it was given poor context, a weak description of the problem, unclear constraints, no real testing loop, or too little iteration.

In many cases, the failure is no longer mainly a model problem.

It is a context problem, a prompting problem, a review problem, or a workflow problem.

If you give a model partial context, vague instructions, no architecture, no constraints, and no feedback from tests, then yes, it will make more mistakes. That is not surprising.

And yes, it is occasionally funny to share a screenshot of a model doing something ridiculous.

But that is a reason for jokes and memes, not a reason to underuse the technology.

The point is not to challenge the model like it is a rival in an argument, or to keep proving that it is not perfect.

The point is to use it to build things quickly and well.

Benchmarkers, evaluators, and model builders should absolutely care about the edge cases and failure modes in great detail. That is part of their job.

For most end users, the practical question is much simpler:

Does this tool already provide enormous leverage, even if it still makes occasional specific mistakes?

The answer is clearly yes.

It is already fast enough, precise enough, and capable enough to change how software work should be done.

So treating an occasional failure as if it invalidates the whole tool is just a bad way to think about it.

The right response is not to dismiss the model.

The right response is to give it better context, better constraints, better tests, and better iteration.
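One concrete shape of “better tests and better iteration” is a loop that feeds test failures back into the next prompt instead of stopping at the first answer. This is a minimal sketch under stated assumptions: ask_model and run_tests are placeholders for a real model call and your actual test runner.

```python
# Minimal sketch of a test-driven iteration loop with a model.
# `ask_model` and `run_tests` are placeholders (assumptions), standing
# in for a real model API and a real test runner respectively.

def ask_model(prompt: str) -> str:
    """Stand-in for a real model call."""
    return "def add(a, b):\n    return a + b"

def run_tests(code: str) -> tuple[bool, str]:
    """Stand-in for a real test runner; returns (passed, report)."""
    return ("return a + b" in code, "expected add(2, 2) == 4")

def refine_until_green(task: str, max_rounds: int = 3) -> str:
    """Ask for code, then iterate: each round feeds the failing
    test report back into the prompt as extra context."""
    code = ask_model(task)
    for _ in range(max_rounds):
        passed, report = run_tests(code)
        if passed:
            return code
        code = ask_model(f"{task}\nYour previous attempt failed:\n{report}\nFix it.")
    return code  # best effort after max_rounds

result = refine_until_green("Implement add(a, b) with tests.")
```

The loop is the point, not the specific functions: most “the model made a mistake” stories involve a single shot with no feedback, which is exactly the workflow problem described above.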

“It is not safe to give my business data to public LLMs”

This is one of the most common enterprise objections, and in some cases it is perfectly valid.

But it is usually framed much too broadly.

First, this is really true only for specific categories of sensitive company data, not for the majority of what many normal companies actually handle day to day.

If you have PII, highly regulated data, especially sensitive business information, or some narrow set of genuinely strategic internal assets, then yes, privacy and confidentiality matter a lot and must be handled carefully.

But that is not the same thing as saying that AI should not be used in development at all.

Most of the time, the immediate value comes much earlier and in a much safer way: using AI to reason about problems, structure work, and write or refactor code.

The idea is not to feed an entire production database to public agents and hope for the best. That is a separate topic, much closer to governed enterprise access, MCP-style integration patterns, controlled retrieval, and internal data exposure.

This article is mainly about AI coding.

And code, in most companies, is not the deepest secret in the organization. The real sensitivity usually lives in data, and even there the genuinely critical part is often a relatively small subset rather than the whole estate.

The same pattern holds for code itself. Most code is much closer to an implementation commodity than people like to admit. The truly unique part is often smaller: some proprietary algorithms, a few differentiating workflows, or specific internal logic.

So I think people often overestimate the secrecy of both their code and their data.

That does not mean “nothing is sensitive.”

It means the sensitive part should be identified precisely instead of being used as a vague reason to avoid the whole shift.

And if privacy really is a hard constraint, the answer is not automatically to avoid AI. The answer can also be private and segregated deployment: Amazon Bedrock, private model hosting, controlled enterprise inference environments, or similar approaches that fit the real risk level.

If the data is sensitive, solve the deployment and governance problem. Do not use it as a generic excuse to keep working in the old way.

Using AI properly does not make you think less

One of the most common fears around AI is that it will shut down the brain of the people using it.

That can happen if someone uses it lazily, like a machine for low-effort answers or a way to avoid understanding anything.

But serious use does the opposite.

Working heavily with AI does not reduce the need for thought. It raises the level at which thought needs to happen.

It forces you to define problems more clearly, evaluate more options, compare more architectures faster, review more critically, and manage a wider surface of technical context than before.

You are not delegating your brain. You are doing the opposite.

You are pushing your brain toward higher-level judgment while letting the model absorb more of the repetitive implementation effort.

That is why I do not buy the lazy idea that serious AI use makes professionals weaker by default.

If anything, used well, it can make people more stimulated, more knowledgeable, and more capable because they can work across more topics, more systems, and more projects at the same time instead of operating only at the level of local execution.

The real risk is overload

The real risk, in my view, is different.

It is not intellectual passivity.

It is overload.

AI increases pace. It increases context switching. It increases the number of options you can explore, the number of threads you can follow, the number of decisions you need to make well at the same time, and the amount of responsibility you carry.

That can become cognitively heavy very quickly.

You can end up bombarded by concepts, jumping between domains, reviewing much more output than you are used to, and operating in a much higher-speed loop than older workflows demanded.

You can be tempted to stretch yourself too far, work too long, and stay in a constant high-speed loop.

That is something to be aware of, but it is not a reason to step back.

It is a reason to adapt your way of working so you can stay coherent, healthy, and effective at a much higher speed.

Stay on the wave, not under it

This is no longer a theoretical discussion.

AI has already changed the baseline for how software can be built.

That does not make architecture less important. It makes it more important.

That does not make good technical people obsolete. It changes where their value sits.

And it does not remove the need for thought. It raises the level at which thought has to happen.

The people and companies that adapt early will work differently, move faster, and operate with more leverage.

The ones that keep using AI as an occasional side tool, or avoid it because it feels uncomfortable, will simply end up slower than the environment around them, with clear consequences.

You should stay on the wave, not under it.