What AI can't replace in software development


For all the hype and excitement surrounding Artificial Intelligence (AI), there is more than a little trepidation about what a future dominated by smart machines could look like. The whole point of AI is to make computer programmes capable of thinking and behaving as humans do. And if they can think and act like us, they can do our jobs – at speeds, scales and levels of accuracy no human being could ever match, and without ever needing to rest.

Back in March 2023, global investment bank Goldman Sachs caused a stir by releasing a report claiming that up to 300 million jobs could be made obsolete by AI. The headline may have been intended to generate clicks rather than stimulate serious debate, but it nonetheless tapped into genuine concerns people have about the impact of AI on work and jobs.

What is clear is that AI will have a huge impact on how businesses function, and that will certainly affect how people work. That transformation is already well underway. The IMF makes the more measured assessment that up to 40% of jobs could be affected by AI, both for good and bad. As well as potentially replacing some things people do, AI also has the power to support and augment human labour, taking what we are capable of to new heights.

For many analysts, this augmenting role is likely to be much more significant than AI’s potential to take over jobs and make people redundant.

AI in Software Development

We’re already seeing this play out in the world of software development. Programming is uniquely suited to the application of AI – it boils down to using code to write code, after all. Developers have been using automation tools and rules-based logic to make production cycles quicker and more efficient for years. When Machine Learning (ML) came along, it revolutionised code checking, testing and bug fixing. ML algorithms could then spot weaknesses in code, and suggest fixes, based on similar patterns they had ‘learnt’ from previous code examples, without people having to set the parameters of what to look for.
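
To make the pattern-learning idea concrete, here is a minimal sketch: a toy classifier built with the off-the-shelf scikit-learn library that learns to flag code snippets resembling previously labelled bugs. The snippets, labels and model choice are invented purely for illustration – real code-analysis tools train on far larger datasets with far more sophisticated models.

```python
# Toy sketch: learn 'buggy-looking' patterns from labelled example snippets,
# then score new code. Data and labels below are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

snippets = [
    "if (user = admin) { grantAccess(); }",       # assignment instead of comparison
    "for (i = 0; i <= len; i++) { a[i] = 0; }",   # off-by-one over the array bounds
    "if (user == admin) { grantAccess(); }",      # clean
    "for (i = 0; i < len; i++) { a[i] = 0; }",    # clean
]
labels = [1, 1, 0, 0]  # 1 = buggy pattern, 0 = clean

# Character n-grams give the model simple textual patterns to 'learn' from.
model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LogisticRegression(),
)
model.fit(snippets, labels)

# A new snippet is scored by its similarity to previously seen buggy patterns.
suspect = "if (role = superuser) { grantAccess(); }"
print(model.predict_proba([suspect])[0][1])  # probability it matches a buggy pattern
```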

But the real game-changer was the arrival of Generative AI. ChatGPT and its copycats have ushered in a new age of AI capabilities based on the ability to understand, use and create human language. Computer code is just another type of language. Built on large language models (LLMs) trained on enormous code repositories, code agents and code assistants like GitHub Copilot, GitLab, Google’s Gemini and Amazon’s Q Developer have in effect ‘learnt’ the intricacies of coding to the point where human developers can give them a set of instructions about what they want to build, and the AI does much of the technical heavy lifting.
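
In practice, the ‘instructions in, code out’ workflow those assistants enable looks roughly like the sketch below. It uses the OpenAI Python client purely as one illustrative backend; the model name, prompt and instruction are placeholders, and other providers’ assistants follow the same basic pattern.

```python
# Minimal sketch of prompting an LLM to generate code from plain-language
# instructions. Model name and prompts are placeholders for illustration.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

instruction = "Write a Python function that validates an email address with a regex."

response = client.chat.completions.create(
    model="gpt-4o-mini",  # any code-capable chat model would do
    messages=[
        {"role": "system", "content": "You are a coding assistant. Return only code."},
        {"role": "user", "content": instruction},
    ],
)

generated_code = response.choices[0].message.content
print(generated_code)  # the developer still reviews, tests and integrates the result
```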

As is usually the case with AI, the biggest benefits of this next-gen approach to ‘smart’ coding are vast improvements in speed, efficiency, productivity and accuracy. Development teams making use of AI tools can complete production cycles up to twice as fast. It’s little surprise, therefore, that an estimated 90% of software development teams are already incorporating AI into their workflows.

One stat that stands out is the prediction that, as soon as 2027, up to 15% of applications could be built without human involvement. The next step from AI being able to take instructions from people and turn them into code is having combinations of ML-enabled generative algorithms that can monitor tech stacks, identify where changes to code (or new code) would make a difference, and then build, test and run it. Completely automated, completely independent.
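
Reduced to its bare bones, that fully automated cycle might look something like the sketch below. Every helper here is a stub invented for illustration – in a real system they would be backed by monitoring tools, a code-generation model and a CI/CD pipeline – but the shape of the loop (monitor, propose, test, ship) is the point.

```python
# Schematic sketch of an autonomous build-test-deploy cycle. All helpers are
# illustrative stubs, not real tool APIs.
from dataclasses import dataclass

@dataclass
class Finding:
    file: str
    description: str

def scan_stack() -> list[Finding]:
    # Stub: a real system would analyse logs, metrics and code health here.
    return [Finding("billing.py", "slow query in invoice lookup")]

def propose_patch(finding: Finding) -> str:
    # Stub: a real system would ask a code-generation model for a candidate fix.
    return f"# candidate patch for {finding.file}: {finding.description}"

def tests_pass(patch: str) -> bool:
    # Stub: a real system would build the project and run its full test suite.
    return True

def deploy(patch: str) -> None:
    # Stub: a real system would open a merge request or push via CI/CD.
    print("deploying:", patch)

def autonomous_cycle() -> None:
    for finding in scan_stack():         # monitor the tech stack
        patch = propose_patch(finding)   # generate a code change
        if tests_pass(patch):            # build and test it
            deploy(patch)                # run it, with no human in the loop

autonomous_cycle()
```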

The question then is, if AI can take over 15% of all coding, how long until it is 50%, 75% and then, eventually, 100%? And in that case, how long before software developers have to worry about their careers? Or, coming at it from another angle, how soon before businesses that want apps built can do it themselves using ‘no code’ AI tools, rather than hiring a developer?

These are good questions. But there are equally good reasons to believe that the role of the expert software developer and engineer will never become obsolete, no matter how far AI evolves.

Algorithm Limitations

The biggest reason lies in how AI works. For all their complexity and sophistication, AI algorithms are still just algorithms – that is, mathematical or logical processes which interpret input data to produce a desired output based on a specified set of rules.

By the very nature of how they work, algorithms have three main limiting factors:

  1. Outputs are dependent on the quality (and, as is increasingly apparent in AI, the quantity) of the input data.
  2. Algorithms ‘see’ only the input data itself, without any understanding of context or relevance.
  3. Outputs are also dependent on the quality of the rules built into the algorithm. The ability of AI algorithms to ‘learn’ and change their own rules based on the evaluation of their outputs is a partial solution, but the limitation is still apparent in issues such as algorithmic bias.

Even when AI is performing some of its most impressive feats – predicting future outcomes with astonishing accuracy, generating text and code all on its own, drawing actionable insights from data at previously unimaginable speed and scale – it is still operating within the above limitations. For example, the ‘smartest’ AI systems have to be trained on notoriously massive volumes of data – even ChatGPT’s early iterations required 300 billion-plus words to function adequately.

Even then – as brilliant as cutting-edge neural networks are at extrapolating patterns from vast data sets, applying them to create desired outputs and tweaking their own rules as they go based on the quality of those outputs – they still make mistakes. And that’s because, even though neural networks are designed to work like a human brain, they cannot see the context of data the way a human brain can. Text-generating AI cannot always tell fact from fiction when processing source material. In software development, AI tools can generate workable applications based on past examples, but they cannot judge whether people actually liked using those applications.

The Case for Human-AI Collaboration

For completely automated AI-driven software development to be possible – and to ensure that the applications produced were user-friendly, of good quality, financially viable in the market and so on – you would need to keep piling on input data and rules to cover every nuance, every context, every variable, every novel situation. Not only is this probably impossible. It’s unnecessary and undesirable when you already have a resource that can add that level of contextual, nuanced interpretation: people.

AI can certainly do many, many things in development better than humans can, from speeding up and streamlining repetitive tasks to improving accuracy and eliminating errors. But the case for the ‘human touch’ is compelling. Rather than having to build a whole new training repository filled with user feedback on software in order to judge UX quality, for example, a skilled developer can make judgements about what a good interface looks and ‘feels’ like simply by drawing on their own experience.

People are also much better at thinking creatively, whether that’s solving a particularly tricky or complex problem or developing applications for entirely new use cases. AI is limited to regurgitating patterns from the data it is provided with. The human brain is brilliant at applying knowledge and experience across domains to achieve new ends.

Finally, people know people in a way that AI algorithms cannot. Think about why the piece of writing you ask an AI text generator to produce never quite ‘feels’ right. It might be grammatically and, indeed, factually correct. But it often comes across as leaden, formulaic, uninspiring. Why? Because Gen AI tools don’t understand how writing plays on human emotions, or how the interplay of words triggers reactions in hugely complex and changeable ways. Skilled writers (and indeed all artists and other creatives, including marketers) make their work impactful because they understand intuitively how to orchestrate emotions through their chosen medium.

All of this means that the best possible future for software development – including getting the most out of AI itself – rests in effective human-machine collaboration. There’s an illustrative story here from the world of chess. Chess software had reached such a level of sophistication that an IBM programme famously beat Grandmaster Garry Kasparov as early as 1997. Kasparov’s response was to develop a new game that pitted chess champions collaborating with machines against one another – and these collaborative partnerships routinely beat even the best chess programmes. Why? Because the software could calculate the merits of every possible move step by step. But the chess champion could see the big picture and work to a plan.

As the demands on software development – in terms of both the scale of production and the sophistication of programmes – continue to accelerate, neither the astonishingly powerful calculating prowess of AI nor the comparatively snail-paced but creative, nuanced and contextually grounded insights of people are enough on their own. AI may well ultimately change what it means to be a software developer, requiring a shift both in core competencies and in what a developer’s day-to-day tasks involve. However, the two sets of capabilities are more powerful when paired together. And a future where machines run machines without any human involvement still sounds like the stuff of science fiction.

If you want to find out more about custom software development for your business, come have a chat with our human experts – and find out exactly where AI can have the biggest impact for you, too.
