The assumption is that AI will handle it all: automated testing, guaranteed reliability, systems that keep running smoothly. This article explains why it will not, and why the consequences of that oversight are already beginning to show.
Author: Ravikiran Karanjkar, Engineering Manager (Quality), Amazon, https://www.linkedin.com/in/ravikiran-karanjkar
For the past three years, the tech industry has been on a full-speed sprint toward AI. AI-driven initiatives dominate boardrooms, investment pitches, and product roadmaps. Whether it is embedding AI in consumer products, automating business operations, or rolling out AI-powered tools, the message is clear: AI is the future.
But in the rush to push forward, something crucial is being left behind: software quality. Testing teams are shrinking, reliability engineering is underfunded, and core infrastructure is neglected. The assumption is that AI will handle it all by automating testing, ensuring reliability, and keeping systems running smoothly.
It will not. And the consequences of this oversight are already beginning to show.
The Cracks Are Already Visible
Across industries, the evidence is mounting:
- Cloud outages are now taking down entire ecosystems, including banks, trading platforms, logistics systems, and consumer services, because modern AI-heavy infrastructure is deeply interconnected and brittle under load.
- In healthcare and medical devices, software defects are now cited as the leading cause of safety-critical recalls.
- Consumer products, from cars to connected home devices, are shipping with unstable software, only to be recalled in record numbers.
These failures are not random. They are the inevitable outcome of cutting corners on quality to fund AI initiatives that depend on the very reliability they are undermining.
AI Is Powerful, But Not a Replacement for Quality
There is no denying the promise of AI. It can automate repetitive testing, analyze logs, and find bugs faster than any human could. But AI cannot:
- Reason about customer journeys and the real-world implications of a failure
- Understand complex regulatory or business risks
- Participate in root-cause analysis and argue that a product launch should be delayed
- Provide independent oversight free from the incentives of the product team
AI can speed up certain aspects of quality engineering, but it cannot replace the need for human oversight, risk management, or governance.
Startups and Enterprises Are Repeating an Old Pattern
This is not the first time the tech industry has fallen into this pattern. It is a cycle:
- Phase 1: Quality teams slow things down in pursuit of perfection.
- Phase 2: Executives demand faster releases, citing the need to move fast and break things.
- Phase 3: Quality teams are told to “partner with the business,” which often means they are under pressure to rubber-stamp releases.
- Phase 4: Today, with AI as the magic bullet, quality is treated as a nonessential function.
This is not innovation; it is a mistake.

Investors Are Starting to Notice the Hidden Cost
The market loves AI. Investors are pouring capital into AI-driven productivity, and companies tout AI-powered solutions as the next big thing. But the same investors are also punishing companies that experience avoidable outages or high-profile failures, especially when these are linked to “rushing AI deployment.”
As AI becomes more deeply embedded in mission-critical workflows, whether in factories, healthcare diagnostics, or autonomous vehicles, investors and customers will demand greater operational resilience. AI increases operational risk; it does not reduce it.
The Correction Is Coming
Every hype cycle eventually hits a wall, and AI is no exception. That wall is reliability.
Here is what the next phase will look like:
- Quality Will Return as an Independent Function. Quality will no longer be a gatekeeping bottleneck, but a strategic risk-management discipline with autonomy and deep expertise in AI systems.
- Hybrid Teams Will Become the Norm. AI will handle repetitive testing, while human engineers will focus on complex scenario design, exploratory testing, and failure analysis. Quality will become more about intelligence and less about mechanical execution.
- Infrastructure Funding Will Flow Back. The budgets that were quietly redirected from observability and test environments into experimental AI projects will return. Treating reliability as an afterthought will no longer be acceptable.
- Boards Will Shift Their Questions. Instead of asking, “How many AI projects do we have?” boards will start asking, “What controls are in place to prevent AI-driven failures from becoming brand-damaging incidents?”
The companies that emerge strongest from this correction will be the ones that embrace AI innovation while maintaining a strong foundation of reliability and quality.
Leaders Should Move Now, Not Later
Rebuilding quality capabilities after a crisis is always more expensive. The time to act is now.
Companies that successfully pair AI investment with strong quality practices won’t be seen as cautious; they will be seen as prescient. A decade from now, the winners won’t be the ones that sprinted hardest toward AI at the cost of reliability. They will be the ones that understood the balance needed to scale AI safely.
The pendulum always swings back. Smart leaders will act before it hits them on the return.
About the Author
Ravikiran Karanjkar is an Engineering Manager (Quality) at Amazon with over 18 years of experience in the software industry. He has served as a judge on several technology initiatives and hackathons, and has a deep interest in the intersection of AI, quality assurance, and engineering leadership.