
AI-Generated Code Requires a ‘Trust and Verify’ Approach



AI is the new frontier, and developers understand that now is the time to embrace it if they haven't already. It is changing the landscape of software development as we previously knew it, and adoption is now table stakes for any organization looking to meet the growing need for software in business.

As that need continues to increase, so does the demand for software developers to write more code on which the software is based. Keeping up with this demand is imperative, but rising talent shortages in IT-related fields have the potential to make that difficult. Luckily, the advent of AI coding assistants has changed the game and enabled organizations to meet growing demands with the developer teams they already have in place.

However, it hasn’t been totally smooth sailing for developers as they integrate AI coding assistants into their workflows. AI code-generation tools can write and test code, but who is held responsible for errors in that code? In traditional development environments, we link code changes to developers with clear ownership and authorship tracking, because even when a developer modifies a line of existing code, they understand it end to end.

When AI generates code using the context of the codebase, developers modify it far less and may accept code that is flawed without fully understanding it. With the increasing use of AI in the software development life cycle (SDLC), traditional trust models are breaking down. AI disrupts the established clarity of code authorship and ownership and introduces a new challenge: a code accountability crisis.

If left unaddressed, this crisis can increase organizations’ risk of costly technical debt. In the United States alone, the cost of poor-quality code is estimated to be $2.4 trillion. Allowing that cost to grow in 2025 is untenable. To avoid these expensive consequences, organizations must act now. Teams entrusted to use AI to write code must establish clear ownership of that code to maintain high code quality and security standards. Only in doing so can they reap the benefits of this technology while avoiding its risks.

Establishing Ownership of AI-Generated Code

AI coding assistants provide indisputable benefits, and organizations not taking advantage of these tools are already behind. These tools allow developers to focus on priority work that has more impact on the business at a time when overall workloads are expanding. AI coding assistants can handle mundane, repetitive code-writing tasks so developers can put their brainpower toward projects requiring more creative and critical thinking.

That said, developers can rely too heavily on AI. Combined with a lack of proper checks and balances to ensure code quality is up to snuff, this overreliance allows bugs and issues to slip through the cracks. AI-generated code is not inherently perfect nor foolproof. Like any developer-written code, it must be reviewed before being pushed into production. Research shows that more than a third of the code generated by GitHub Copilot contains serious weaknesses.

Additionally, a study showed that about a third (30%) of AI-generated code required correction, while 23% was partially incorrect. Releasing software without the proper quality guardrails can only increase numbers like these, threatening an organization’s reputation and bottom line.

AI can help ease developer workloads during talent shortages, particularly when most (56%) business leaders recognize this problem and consider it a significant concern. However, it’s important to remember that humans ultimately bear responsibility for software quality. AI is a powerful tool, but it cannot replace software developers.

Humans still need to ensure the code AI generates is accurate, reliable and of high quality so that the software adds value. Organizations need to implement the proper guardrails in the SDLC as early as possible so developers can check the quality of AI-generated code. This extra step ensures that the software a business releases does not pose a security risk to it. Putting these safeguards in place starts with a “trust and verify” approach to development, as the sketch below illustrates.
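As a concrete illustration, here is a minimal sketch of what such a guardrail might look like as a pre-merge check. It assumes a Python project and uses flake8 and pytest as stand-in quality tools; the script, the "src/" path and the choice of checks are hypothetical, not a prescribed implementation.

```python
#!/usr/bin/env python3
"""Hypothetical pre-merge guardrail: verify AI-assisted changes before they ship.

Assumes a Python project with flake8 and pytest available; swap in whatever
static analysis and test tooling your team already trusts.
"""
import subprocess
import sys


def run(description: str, command: list[str]) -> bool:
    """Run one verification step and report whether it passed."""
    print(f"==> {description}: {' '.join(command)}")
    return subprocess.run(command).returncode == 0


def main() -> int:
    checks = [
        ("Static analysis on the codebase", ["flake8", "src/"]),
        ("Unit tests", ["pytest", "--quiet"]),
    ]
    failed = [description for description, command in checks if not run(description, command)]
    if failed:
        print("Verification failed:", ", ".join(failed))
        print("Do not merge until these checks pass, regardless of who (or what) wrote the code.")
        return 1
    print("All checks passed: the change is verified, not just trusted.")
    return 0


if __name__ == "__main__":
    sys.exit(main())
```

In practice a check like this would run in CI on every pull request, so AI-generated and human-written code pass through the same gate.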

Ensuring Software Quality While Promoting Accountability

Organizations must entrust their developers to ensure that every piece of code that makes it through to production, including AI-generated code, is of high quality. The golden rule for adopting AI in the SDLC must be “trust and verify.” We should trust the capabilities of AI tools while maintaining due diligence by verifying the output. This dual approach is crucial to mitigate risks and ensure code quality.

Automated code review tools help streamline the code examination process for developers. Automated tooling can also help boost accountability while relieving workplace pressure. AI is a boon for developers, but introducing these coding assistants has also created fear of job loss: 74% of developers worry that AI will eliminate their roles entirely, and 40% think it will happen soon.

In reality, providing the right tools for code review and error remediation helps developers embrace AI coding assistants as a helping hand, not a hindrance or a replacement. These tools give developers the opportunity not only to increase an organization’s coding output but also to focus on the work that interests them and puts their skill sets to use. One lightweight way to make that accountability explicit is sketched below.
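One lightweight way a team might restore authorship clarity is to label AI-assisted commits explicitly and require a named human reviewer for them. The sketch below is purely illustrative: the `AI-Assisted` and `Reviewed-by` trailers are an assumed team convention, not a standard, and the hook could run locally or as a CI step.

```python
"""Hypothetical commit-message check: keep a human accountable for AI-assisted code.

Assumes a team convention of two trailers in each commit message:
  AI-Assisted: yes|no
  Reviewed-by: <human reviewer>   (required when AI-Assisted is "yes")
Intended as a git commit-msg hook; the convention itself is an assumption.
"""
import re
import sys


def check_message(message: str) -> list[str]:
    """Return a list of problems found in the commit message."""
    problems = []
    ai_match = re.search(r"^AI-Assisted:\s*(yes|no)\s*$", message, re.IGNORECASE | re.MULTILINE)
    if not ai_match:
        problems.append("missing 'AI-Assisted: yes|no' trailer")
    elif ai_match.group(1).lower() == "yes" and not re.search(r"^Reviewed-by:\s*\S+", message, re.MULTILINE):
        problems.append("AI-assisted change has no 'Reviewed-by:' trailer naming a human owner")
    return problems


if __name__ == "__main__":
    # Git passes the path of the commit message file as the first argument.
    with open(sys.argv[1], encoding="utf-8") as f:
        issues = check_message(f.read())
    if issues:
        print("Commit rejected:")
        for issue in issues:
            print(f"  - {issue}")
        sys.exit(1)
```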

Quality Code Ensures Strong Software Output

Demand for software is at an all-time high in every industry, and that isn’t going to change. It will only grow as AI becomes more integrated into the SDLC. The U.S. Bureau of Labor Statistics projects that employment for developers, quality assurance analysts and testers will grow 25% from 2021 to 2031.

This expansion increases the need for organizations and developers to maintain trust in their codebase using the right strategies and tools. It isn’t enough to simply adopt LLMs. Organizations must carefully select, customize, and monitor them to ensure they reap the productivity benefits while staying mindful of code quality and accountability.

AI tools aren’t going anywhere, and developers will need to take advantage of them appropriately while maintaining responsibility for all code, regardless of whether they wrote it. Businesses can’t afford to let potentially bad code slip through to deployment; it’s too costly.

A “trust and verify” approach is the best way to prevent that. By using automated, trusted tools, such as SonarQube’s AI Code Assurance, from the start of the development process, developers can take accountability for the quality of AI-generated code and ensure the quality of the corresponding software. This will help developers embrace AI as a productivity boost rather than fear it as a replacement, and it will help organizations reach their goals.

