We try to stay on top of coming trends for our readers. But after reviewing last year’s data research and many 2025 predictions, we still had some unanswered questions.
To get a better reading of what’s ahead, The New Stack’s editors identified 14 questions that needed answering. We then asked more than 120 industry experts to help us learn about the future of open source, developers’ use of AI, and IT infrastructure.
More than 30 experts responded, providing on average four in-depth predictions on the topics they know best. Some of those answers have already been reported in The New Stack. In this post, we start with the top takeaways, then highlight the specific questions and answers that elaborate on these conclusions.
Open Source and Competitiveness
- Security and maintainers’ time are top threats to open source.
- Incremental progress is expected on unmaintained components.
- Open source can be the most effective way to counter the power of centralized control of developer platforms.
- There is no consensus about whether big platform companies will win the battle for AI stacks.
AI Agents
- Despite long-term optimism, there is significant concern that many internal AI agent projects will be abandoned in 2025.
AI’s Impact on Developer Processes
- AI-assisted development will challenge developers as they review the quality of the code generated and integrate tools into their workflows.
- Developers will spend roughly the same amount of time manually testing software in 2025 as they did in ’24.
Cloud and Data Infrastructure
- Experts do not think migration of workloads to private and on-premises clouds will outpace the movement to public clouds.
- There is no consensus about which barriers will be overcome as organizations move from batch to stream processing.
Open Source and Competitiveness
For the average enterprise application, by what percentage will the proportion of components that are unmaintained or out-of-date open source increase or decrease in 2025?
Incremental progress is expected on unmaintained components. Five of the six experts who answered this question think the proportion of components that are open source and unmaintained will decline in 2025, with two expecting a decline of about 10%. AI and the continued adoption of DevOps tools are cited as reasons for the improvement. This is what some of them said:
- “This decline will be driven by increased awareness of software supply chain security, spurred by incidents like Log4Shell, and the widespread adoption of tools like Snyk, and others, which automate dependency updates. Regulatory requirements for software bill of materials (SBOMs) and improved ecosystem governance in platforms like npm are also playing a crucial role in identifying and addressing unsupported components…. Overall, proactive efforts to maintain dependency hygiene are likely to yield substantial gains in reducing outdated components within enterprise systems.” — Eran Kinsbruner, head of product marketing at Lightrun
- “Hopefully, increased adoption of automation for dependency management and updates, as well as improving DevOps practices like continuous integration and deployment, can help enterprise applications decrease out-of-date components by, on average, 5-10% in 2025. This reduction will be determined, though, by the amount of technical debt and resource constraints at play, as well as individual enterprise priorities.” — Kat Gaines, senior manager of developer relations and community at PagerDuty
What do you think will be the most successful efforts to counter centralized control of developer platforms in 2025?
Open source can be the most effective way to counter the power of centralized control of developer platforms. Three of the eight experts who answered this question believe that open source is the best way to fight the centralization of developer platforms. Others see signs that open source and AI-assisted coding tools are already competing effectively with the big platform companies.
- “The open source approach should automatically reduce centralized control over developer platforms, because there is the ability to fork projects or build what you want yourself. In reality, though, there is a lot of reluctance to fork because it is a time-consuming and potentially wasteful process compared to keeping communities together. Communities don’t want to fork if they don’t have to.” — Kate Obiidykhat, solutions marketing manager, cloud native products and services, at Percona
- “I think we have already seen a decentralization of control of developer platforms with the rise of Cursor AI (moved users away from GitHub Copilot), Surfer/Codium IDE, Bolt.new for full stack apps and even new features in desktop ChatGPT applications that allows coding based on sharing an application like Xcode to write Swift code.” — Madhukar Kumar, chief marketing officer at SingleStore
Will developers migrate to AI stacks built by big platform companies in 2025?
There is no consensus about whether big platform companies will win the battle for AI stacks. Several experts see a future where specialized companies can compete with the big platform companies. In the long run, the need to have access to a wide range of data sources may give big platform companies an advantage.
- “Data is the lifeblood of AI. In order to build meaningful AI-powered apps, developers must have direct access to relevant enterprise data that powers them. When developers migrate to platform companies where their AI stacks and data coexist in the same place, they can unlock the full potential of their data, prevent unnecessary governance risks associated with moving data between platforms, and accelerate time-to-value for their apps.” — Jeff Hollen, director of product for Snowpark, ecosystem and developer platform at Snowflake
- “I see developers gravitating towards tools that will help them build applications and solutions faster and are intuitive to use. Organizations want AI stacks that couple productivity and efficiency with best-in-class security and data privacy. Tools that will gain developer trust will be the ones that suggest high-quality code and editors that fit their code development flow … Developers right now are evaluating how these AI technologies will bring value to them. Few big platform companies have had the market gain and reach through early rollouts. I see smaller companies gaining traction through driving targeted value for developers.” — Bhawna Singh, CTO, customer identity at Okta
- “I think AI enables so many different niche use cases that we will see developers both using capabilities provided by big platforms in addition to tools that solve very specific problems. For example, a dev team may utilize GitHub Copilot to help with daily coding tasks, but use a different AI stack for things like reviewing code, generating test, or managing the [Infrastructure as Code]. I am currently building such a tool, Blackbird, that uses AI to generate and edit OpenAPI specs. The big platforms will cover most of the common use cases, and some may stretch into niche areas. But speaking as a developer, I think we will start to piece together a handful of alternative AI stacks that will be more effective at solving niche problems at a fraction of the price.” — Matt Voget, director of technology at Ambassador Labs
AI Agents
Will at least half of all 2024 efforts to internally build AI agents be abandoned by the end of 2025?
Despite long-term optimism, there is significant concern that many internal AI agent projects will be abandoned in 2025, our survey participants said. Of the 14 experts who answered this question, five expect that at least half of the 2024 efforts will be abandoned, and two disagreed with the premise. The remaining seven admitted to being uncertain about how this will play out.
- “While many organizations have been focusing on internally building AI agents, we anticipate a significant shift by the end of 2025. The market is expected to see a surge in commercially available AI agents entering the mainstream between 2025 and 2026. At Kobiton, we recognize that developing AI agents in-house remains viable during the early stages of their evolution. However, as more sophisticated and refined AI solutions become accessible commercially, the cost and complexity of maintaining internal development efforts are likely to outweigh the benefits. Consequently, we expect that a substantial portion of the 2024 initiatives to build AI agents internally will be phased out in favor of adopting these advanced, ready-to-deploy solutions. This transition reflects a broader industry trend towards leveraging specialized, market-driven AI tools that offer superior performance and scalability compared to bespoke, in-house developments.” — Frank Moyer, CTO at Kobiton
- “The very nature of AI often makes earlier efforts obsolete in a relatively short time, leading to many efforts being abandoned. We have seen this in large data-model projects, and realigning resources to move forward can be costly.” — Josep Prat, engineering director, streaming services at Aiven
- “Smaller efforts will die because companies will realize two things: their data is much dirtier than they thought and requires massive effort to clean, and when GPT-5 and Claude 4 come out [in 2025], their out-of-box performance will make internal projects redundant for anyone smaller than an S&P 500 company.” — Yang Li, COO, Cosine
- “Emerging technologies take time to mature, and the use of AI agents will undergo a similar maturity curve. Organizations that will be successful are the ones that take measured steps and incrementally incorporate AI agents and adapt as they learn. Enterprises that expect quick results may get jaded and abandon their efforts.” — Krishna Subramanian, co-founder and COO at Komprise
- “It’s hard to tell at this point. Success in building AI agents will depend on organizations’ ability to adapt their API strategies to support machine-to-machine interactions. Companies that fail to address challenges such as security vulnerabilities, scalability, and [return on investment] may abandon these efforts. However, those that invest in designing modular, reusable, externalize-able APIs and implementing robust compliance frameworks will be better positioned to achieve sustainable AI integrations and avoid project abandonment.” — Abhinav Asthana, CEO at Postman
AI’s Impact on Developer Processes
In 2025, what will be the biggest challenges for developers that rely on AI-assisted software development? How will these challenges impact GitHub Copilot users?
AI-assisted development will challenge developers as they review the quality of the code generated and integrate tools into their workflows. Many of the 17 experts who answered this question expressed concern that an increase in AI-generated code actually increases developers’ workload because it requires manual review. Several experts worry that inexperienced developers won’t be able to identify when AI spits out bad code. Security and trust were other challenges cited.
- “For developers that rely on it, the question to ask is, if it was taken away, would they be extremely disappointed, somewhat disappointed, or not disappointed? An informal survey around our engineers earlier this year showed that only two out of 28 respondents (our active experimenters in AI tech) would be extremely disappointed if it went away. My takeaway from this is that these tools are somewhat useful, but haven’t proven that they are invaluable to the point that engineers would be very vocal if they were cut for budget reasons.” — Colin Bowern, senior vice president at Octopus Deploy
- “Reduction in code quality may become an issue due to the inability to identify defects in the AI code. Currently, experienced developers can gain significant benefits from AI-assisted software development since they have the experience to recognize when AI code is incorrect. As AI-assisted coding continues, the skills to find AI errors will diminish at a rate that may exceed AI’s improvement in generating error-free code.” — Scott Wheeler, cloud practice lead at Asperitas
- “While AI coding assistants like GitHub Copilot are marketed as universal productivity tools, they disproportionately amplify the capabilities of experienced developers while potentially hindering the learning process for newcomers. Senior developers, with their deep understanding of system design, architectural patterns, and technical tradeoffs, can effectively ‘speak the language’ of AI assistants. They know exactly what to ask for, can quickly validate generated code, and understand the broader implications of implementing suggested solutions…. Juniors with little experience are left to rely on AI suggestions. While this may boost their need to think language-agnostically, it now seems rather Utopian, given the concern about eliminating failures from the learning process.” — Artem Barmin, co-founder and board member, Freshcode
Will the number of hours developers spend manually testing software decline substantially because of increased automation and adoption of AI technologies?
Developers will spend roughly the same amount of time manually testing software in 2025 as they did in ’24. Of the 15 experts who answered this question, seven answered “no,” three answered “yes,” and the rest provided a more nuanced answer.
- “There is no point in having some of your developers point to ‘10X’ improvements in their productivity when the overall result is a tiny marginal gain due to slower QA and testing to work out problems.” — Amanda Brock, CEO, OpenUK
- “I think it’ll increase. Developers will spend even more time running custom tests and workflows to verify code they didn’t write and can’t easily explain.” — Yang Li, Cosine
- “Automation typically yields the highest returns when applied to stable products with minimal changes. However, at Kobiton, we’ve observed an unexpected trend: manual testing hours have increased by 22% month-over-month, surpassing the 17% growth in automated testing. This shift is primarily driven by the accelerated pace of new application development facilitated by AI-assisted tools. As these applications evolve more rapidly, the frequency and extent of changes make comprehensive automation less feasible. Consequently, the demand for flexible, manual testing has risen to ensure that new features and updates maintain quality standards. This dynamic underscores the need for a balanced approach, where automation and manual testing complement each other to adapt to the fast-paced development environment fostered by AI technologies.” — Frank Moyer, Kobiton
- “Yes, teams will be able to automate the creation of unit tests and intent-based functional tests. This will not eliminate manual testing such as UAT, but will help convert those manual tests to automation for regression testing.” — David Brooks, senior vice president of evangelism at Copado
- “AI is being used to make some tasks easier, such as generating test cases, providing quick and small gains. While these initial improvements can enhance efficiency and reduce manual effort, the more transformative impact of AI on testing infrastructure will take longer to realize. Significant effort will be required to build and adopt AI-driven solutions that can replace or substantially alter existing testing frameworks. While the number of hours dedicated to testing may not decline substantially in 2025, a more noticeable reduction is expected in 2026 as these technologies mature and become more integrated into standard practices.” — Supriya Lal, group tech lead, Yelp
- “Yes and no. Developers will be able to accomplish larger coding tasks with AI tooling, which means they will spend less time re-running their local development loops (part of which includes manual testing). They will be able to quickly write automation and unit tests too, with the purpose of preventing regressions and catching bugs earlier. But I don’t believe this will necessarily decrease the amount of time a developer spends manually testing. If anything, AI tooling allows developers to spend more time ensuring their code is actually solving business problems because a lot of the boilerplate work is automated away. That could mean less time is spent coding, and more time is spent manually testing.” — Matt Voget, Ambassador Labs
Cloud and Data Infrastructure
Will the average enterprise migrate more workloads to private and on-premises clouds than the number of workloads it moves to public cloud environments?
Experts do not think migration of workloads to private and on-premises clouds will outpace the movement to public clouds. Of the 15 experts who answered this question, six answered “no,” three answered “yes,” while the remainder described a mixed picture in which different types of companies and workloads will be more likely to gravitate to one environment over another.
- “Yes, for two key reasons. The first is growing pressure to optimize IT spending. Businesses that have lifted-and-shifted workloads into the cloud are finding that the public cloud is not always as cost-effective as on-prem or private clouds. In addition, some CFOs miss the stability and predictability of CapEx spending models.
The second factor is changes to the vendor landscape and product portfolios, such as those that followed Broadcom’s acquisition of VMware. These shifts have pushed some companies to reassess their platform strategies. In some cases this may lead to migration of appropriate workloads into public clouds, but in others we see organizations opting for alternative models, like managed infrastructure in private data centers, that provide the control of on-prem models with the flexibility and ease-of-maintenance of public cloud.” — Justin Giardina, CTO at 11:11 Systems
- “Enterprises are likely to migrate more workloads to private and on-premises clouds than to public cloud environments. This trend is driven by several factors, most notably the need for AI infrastructure build-outs, which require full control over data assets and maximum performance from network infrastructure, avoiding shared or virtual networks.” — Ugur Tigli, CTO at MinIO
- “In 2025, enterprises will increasingly prioritize private and on-premises cloud environments as part of hybrid solutions that balance flexibility with control. Regulatory compliance and cost optimization will drive this shift, as companies seek to better manage data sovereignty and meet the demands of mission-critical workloads while still leveraging the agility of public cloud services. This shift reflects an increasing preference for solutions that offer both agility and control, especially as data and AI become foundational to business innovation.” — Aislinn Wright, vice president of product management at EDB
- “The ‘average’ enterprise (which would include midsize and the long tail) will continue to ramp up workload migrations to public clouds in 2025. For large Global 2000 enterprises, the outlook is more mixed. Most of the action will be migration to public clouds, but there will be significant uptake with hybrid/private clouds owing to the confluence of several factors:
- The maturity of [Kubernetes] platforms, especially Red Hat OpenShift, which is seeing critical mass adoption. There is now a viable alternative to hyperscaler-native platforms.
- They have gotten through many of the “low-hanging fruit” workloads, such as born-in-the-cloud customer engagement systems, that have already gone to public clouds.
- In the next stage, large enterprises are starting to turn their attention to mission-critical back-end systems, many if not most of which may have restrictions on what can flow into a public cloud. A case in point is the looming sunset of SAP ECC, which is prodding many enterprises to take second looks at their ERP systems. Many of these will wind up in a private cloud. As noted above, while most of the action will still be in public cloud migrations, a rising proportion of Global 2000 workloads will go to private or hybrid cloud.” — Tony Baer, principal at dbInsight LLC
- “No. The converse is true: CIOs and CTOs are fed up with on-prem solutions. Broadcom’s acquisition of VMware has resulted in many orgs looking to expedite their move from on-prem to cloud. The continuum of compute between edge and cloud has made it possible for organizations to squeeze every last bit of performance out of their cloud/edge services — something nearly impossible for an organization to do on its own. And now that Kubernetes is perceived as the universal control plane, fears about vendor lock-in are abating. A few loud (but small) organizations have made bold claims about their ‘repatriation,’ but those companies are far from representative of the concerns of mainstream organizations. Just because 37Signals, a company with a few dozen employees, can save money does not mean the average enterprise will experience anything good from a move back on-prem.” — Matt Butcher, co-founder and CEO at Fermyon Technologies
- “It depends on the enterprise’s priorities — specifically, the tradeoff between speed and cost alignment. Divisions or organizations that prioritize speed will continue migrating more workloads to public cloud environments. However, for stable, resource-intensive business processes, enterprises may increasingly consider moving these workloads to private or on-premises clouds to optimize costs. Regardless of the destination, enterprises must focus on developing the ability to move and manage workloads across environments seamlessly.” — Rohit Choudhary, CEO and co-founder at Acceldata
Data Architectures
What barriers to moving from batch to stream processing will be overcome in 2025?
There is no consensus about which barriers will be overcome as organizations move from batch to stream processing. Seven experts answered this question, but they spent more time explaining the challenges than describing how they will be overcome. That said, improvements in 5G network slicing, standardization of real-time APIs and protocols, decreased cost and complexity of vector databases, and increased demand from AI use cases are all reasons experts expect growth in the adoption of stream processing technologies.
- “Improvements in 5G network slicing will allow dedicated bandwidth for critical streaming applications, making stream processing more reliable and predictable for enterprise use cases.” — Artem Barmin, Freshcode
- “Increased cost and infrastructure complexity problems were common in vector databases that didn’t support efficient stream insertion. Some vector databases don’t support it at all. In order to deal with these challenges, developers had to build and implement additional systems to handle real-time data often resulting in architectural complexity in maintaining multiple systems … As more enterprises move to overcome these challenges, we expect to see an increase in Milvus adoption since it is designed to handle historical data while also providing a new path for fresh data so that insertion and search on the new data is efficient.” — James Luan, vice president of engineering, Zilliz
- “We will be able to finally look at both static data lakes and warehouse data with real-time streaming data to make batch and internal facing applications become real-time and also external facing. A prime example is the ability to run a real-time multimodal database like SingleStore as a container within Snowflake.” — Madhukar Kumar, SingleStore
- “A significant barrier we encounter is manipulating data from disparate sources in real time due to varying formats, models, API, and latencies. However, today, we have better tools to model data in motion, and these tools will only improve in 2025. We’re seeing enhancements from Apache Flink and the data platforms that are using them. Some improvements revolve around better standardization of real-time APIs and protocols, and overall, we’re seeing widespread adoption of data mesh and event-driven architectures that better support decentralized stream processing. But there is still, and always will be, work to be done on this issue.” — Josep Prat, Aiven
- “Data aggregation is more challenging with stream processing as data arrives piecemeal. Emerging frameworks like Apache Flink and Apache Spark Streaming are improving stateful stream processing capabilities, enabling more efficient handling of aggregations and joins in real-time environments.” — Scott Wheeler, Asperitas
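Several of these answers hinge on stateful stream processing: aggregations and joins computed incrementally as events arrive, rather than over a completed batch. As a rough illustration of the windowed aggregation Wheeler describes, here is a minimal Spark Structured Streaming sketch in Python. It uses Spark’s built-in rate source as a stand-in for a real event stream such as Kafka, and the window and watermark durations are purely illustrative.

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import avg, col, window

spark = SparkSession.builder.appName("streaming-aggregation-sketch").getOrCreate()

# The built-in "rate" source emits rows with a `timestamp` and an increasing `value`,
# standing in here for a real event stream (Kafka, Kinesis, etc.).
events = (
    spark.readStream.format("rate")
    .option("rowsPerSecond", 10)
    .load()
)

# Stateful aggregation: an average per 30-second event-time window. The watermark
# tells Spark when it may discard state for windows too old to receive late events.
windowed_avg = (
    events.withWatermark("timestamp", "1 minute")
    .groupBy(window(col("timestamp"), "30 seconds"))
    .agg(avg("value").alias("avg_value"))
)

# Emit incremental updates to the console; a production pipeline would write to a
# durable sink such as a table, a topic or object storage instead.
query = (
    windowed_avg.writeStream
    .outputMode("update")
    .format("console")
    .start()
)
query.awaitTermination()
```

The same pattern is available in Flink’s DataStream and Table APIs; the point is that the framework, not the application, manages the windowing state that makes these real-time aggregations practical.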
The New Stack’s 2025 Predictions
- Developer Productivity in 2025: More AI, but Mixed Results
- Developer Tools: What’s Ahead in 2025?
- eBPF in 2025: Bigger Than the CrowdStrike Outage
- GitLab’s Field CTO Predicts: When DevSecOps Meets AI
- Observability in 2025: OpenTelemetry and AI to Fill In Gaps
- Open Source in 2025: Strap In, Disruption Straight Ahead
- Serverless Computing In 2024: GenAI Influence, Security, 5G
- See What WebAssembly Can Do in 2025
The New Stack’s 2024 Wrap-Ups
- Big Moments in Rust 2024
- Database Trends: A 2024 Review and a Look Ahead
- Infrastructure as Code in 2024: Why It’s Still So Terrible
- Kubernetes, Rust, Linux and DOS? The Year in Open Source
- Language Wars 2024: Python Leads, Java Maintains, Rust Rises
- Web Development Trends in 2024: A Shift Back to Simplicity
- What 2024’s Data Told Us About How Developers Work Now
- The Year in JavaScript: Top JS News Stories of 2024
- Top 5 AI Engineering Trends of 2024