What 100 CIOs Are Really Doing With Gen AI in 2025

By Zakariae BEN ALLAL · Mon Oct 06 2025

[Image: CIOs strategizing enterprise generative AI plans for 2025 in a boardroom]

Generative AI has shifted from flashy pilots to integral components in enterprise strategies. With increasing budgets and a more rigorous purchasing process, the tools, models, and architectures that CIOs select now will influence value creation for years to come. This article synthesizes insights from 100 enterprise CIOs on their approaches to building and buying Generative AI in 2025, supplemented by external evidence and practical advice for non-experts to navigate this evolving landscape confidently.

TL;DR – Top Changes Since Last Year

A recent survey conducted by a16z across 100 CIOs from 15 different sectors revealed some significant trends for 2025:
– Budget Increases: CIOs anticipate roughly a 75% rise in AI spending over the next year, with some executives noting that what they spent in all of 2023 now amounts to a weekly run rate.
– From Pilots to Line Items: Once, innovation funds covered 25% of large language model expenses. Now, that figure has plummeted to low single digits as spending transitions into core IT and business unit budgets.
– Model Variety: OpenAI, Google, and Anthropic dominate in overall enterprise usage, while larger on-premises deployments increasingly utilize open-source models like Llama and Mistral.
– Prompting vs Fine-tuning: Enterprises are finding a better return on investment (ROI) through context-oriented prompts and retrieval systems rather than resorting to fine-tuning, except in very specialized scenarios.
– Early Adoption of Reasoning Models: While interest is high, robust production use remains nascent. OpenAI’s o3 leads enterprise adoption in this category, while DeepSeek has gained traction among startups.
– Procurement Evolution: The buying process now mirrors traditional enterprise procurement, with heightened attention to security, cost, and thorough evaluations. Workflow complexity has increased switching costs.
– Off-the-Shelf Solutions Rising: Pre-packaged AI applications are rapidly winning out over custom builds across many categories.

These trends will be explored in-depth in the following sections, linking them to broader market data, research findings, and actionable takeaways for teams creating practical systems in 2025.

1) Budgets Are Up – And Now Feel Permanent

Enterprise spending on Generative AI has become a central focus, no longer regarded as a secondary strategy. According to the a16z survey, budgets for large language models (LLMs) are exceeding expectations and are now established as permanent line items in both centralized IT and business unit budgets. This clear shift indicates that Generative AI has evolved from a hype-focused interest into core strategy.

According to broader market insights, Gartner predicts global spending on Generative AI will hit approximately $644 billion by 2025, representing a 76% year-over-year increase. Across all AI categories—hardware, software, and services—total spending is anticipated to approach nearly $1.5 trillion, powered by significant hardware investments and rapid AI integration into enterprise offerings.

However, larger budgets don’t automatically guarantee returns. McKinsey’s recent analyses reflect a varied ROI landscape; while more leaders are reporting a revenue boost from AI than earlier in 2024, many companies still experience limited enterprise-wide benefits, especially when projects remain as pilots or lack integration into core workflows. The key takeaway is straightforward: invest wisely, monitor rigorously, and align use cases with actual profit and loss results.

2) Multi-Model Strategy Becomes the Norm

Enterprise strategies are increasingly diversified, no longer reliant on a single AI model. Organizations are distributing tasks among multiple models to balance capability, latency, cost, and hosting requirements. Academic research supports this approach, indicating that strategic routing improves output quality and reduces costs by matching the most suitable model to each specific task.

According to the a16z data, OpenAI, Google, and Anthropic hold the largest market share among enterprises, while larger firms favor open-source models for on-premises or private-cloud deployments that require added control. As new providers emerge, many teams are standardizing on a small selection of models, typically including:
– A flagship model designed for high-stakes reasoning.
– A cost-effective mid-tier model for standard tasks.
– An open-source option for scenarios needing tight data controls or low latency.
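A portfolio like this implies a routing layer that sends each request to the cheapest tier expected to handle it well. Below is a minimal, vendor-neutral sketch of that idea; the model names, prices, and routing rules are illustrative assumptions, not recommendations from the survey.

```python
# Minimal sketch of portfolio routing: map each request to the cheapest
# model tier expected to handle it well. Names and prices are illustrative.
from dataclasses import dataclass

@dataclass
class ModelTier:
    name: str
    cost_per_1k_tokens: float  # illustrative list price, not a real quote

TIERS = {
    "flagship": ModelTier("flagship-reasoner", 0.015),
    "mid": ModelTier("mid-tier-general", 0.002),
    "open": ModelTier("open-weights-local", 0.0005),
}

def route(task_type: str, data_sensitivity: str) -> ModelTier:
    """Pick a tier from task complexity and data-control requirements."""
    if data_sensitivity == "restricted":
        return TIERS["open"]        # keep regulated data on controlled infra
    if task_type in {"multi-step-reasoning", "high-stakes-analysis"}:
        return TIERS["flagship"]    # hard tasks justify the premium tier
    return TIERS["mid"]             # default: routine work goes to the cheap tier

print(route("summarize-ticket", "internal").name)       # mid-tier-general
print(route("multi-step-reasoning", "internal").name)   # flagship-reasoner
print(route("summarize-ticket", "restricted").name)     # open-weights-local
```

In production, the routing decision is usually learned or benchmark-driven rather than rule-based, but the shape stays the same: classify the request, then dispatch to a tier.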

Rethinking Fine-Tuning

Compared to last year, fewer companies are opting for fine-tuning as their default strategy. Many find they achieve comparable or superior results by incorporating context into prompts and retrieval systems (RAG). Fine-tuning continues to be useful for highly specific tasks, yet it is no longer the first choice for many enterprises.

Pricing and Performance Considerations

Enterprises now compare models on performance per dollar, rather than absolute accuracy alone. Pricing for similar performance levels can differ substantially, so buyers are optimizing the total cost of a model portfolio rather than individual model prices. Treat public price listings as rough guidelines and run your own benchmarks under realistic workload conditions.
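The performance-per-dollar comparison can be as simple as dividing an internal eval score by token price. The sketch below uses made-up scores and prices purely to show the mechanics; substitute your own benchmark results and negotiated rates.

```python
# Sketch: rank candidate models by quality-per-dollar rather than raw
# accuracy. All scores and prices are made-up placeholders.
candidates = [
    {"model": "model-a", "eval_score": 0.91, "usd_per_1m_tokens": 15.0},
    {"model": "model-b", "eval_score": 0.88, "usd_per_1m_tokens": 3.0},
    {"model": "model-c", "eval_score": 0.80, "usd_per_1m_tokens": 0.6},
]

for c in candidates:
    c["score_per_dollar"] = c["eval_score"] / c["usd_per_1m_tokens"]

ranked = sorted(candidates, key=lambda c: c["score_per_dollar"], reverse=True)
for c in ranked:
    print(f'{c["model"]}: {c["score_per_dollar"]:.3f} eval points per $/1M tokens')
```

Note how the cheapest model wins on this metric despite the lowest raw score; whether that trade-off is acceptable depends on the task's quality floor.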

3) Reasoning Models: Promising but Still Early in Production

Reasoning-capable models open new avenues for automation and enhance reliability in task chaining. While most organizations are still in the pilot phase for these models, the level of interest remains high. The a16z survey indicates that OpenAI’s o3 family has the most enterprise adoption so far in the reasoning category, while DeepSeek gains traction in startup circles. Expect a year filled with rapid advancements and disciplined A/B testing to validate their effectiveness before broader implementation.

4) RAG Becomes the Default Enterprise Pattern

Retrieval-Augmented Generation (RAG) has emerged as the go-to method for safely incorporating organizational knowledge into LLMs without the need for retraining. RAG often provides better cost efficiency and quicker adaptability compared to fine-tuning while minimizing vendor lock-in, as the data layer remains portable. Analyst reports suggest rapid adoption is underway, with vector functionalities being integrated into both traditional databases and specialized vector storage solutions. It is expected that vector search will be commonplace in most organizations by 2026.
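The core RAG loop is: retrieve relevant snippets, then assemble a grounded prompt, with no retraining involved. Here is a toy sketch of that loop; the keyword-overlap scorer stands in for a real embedding/vector search, and the knowledge base is invented for illustration.

```python
# Toy RAG sketch: rank documents against a query, then build a grounded
# prompt. A real system would use embeddings and a vector store instead
# of keyword overlap; everything here is illustrative.
KNOWLEDGE_BASE = [
    "Refunds are processed within 5 business days.",
    "Enterprise plans include SSO and audit logs.",
    "Support hours are 9am-6pm CET on weekdays.",
]

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank docs by naive keyword overlap with the query."""
    q_terms = set(query.lower().split())
    scored = sorted(docs,
                    key=lambda d: len(q_terms & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def build_prompt(query: str) -> str:
    """Assemble a prompt that instructs the model to stay grounded."""
    context = "\n".join(f"- {doc}" for doc in retrieve(query, KNOWLEDGE_BASE))
    return (f"Answer using only the context below.\n"
            f"Context:\n{context}\n\nQuestion: {query}")

print(build_prompt("How fast are refunds processed?"))
```

Because the data layer stays outside the model, swapping providers only means re-pointing the generation step, which is exactly the portability benefit described above.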

RAG itself is evolving. Research on GraphRAG and multimodal RAG points to enterprises exploring advanced retrieval mechanisms that go beyond simple text and include complex graphs, images, audio, and video data. If your projects involve intricate cross-document reasoning or manage varied formats like screenshots and PDFs, consider exploring these innovative variants.

Practical Guidance for 2025:

  • Initiate with a clean knowledge base and thorough retrieval evaluations before implementing LLMs.
  • Assess multi-vector strategies only if necessary; a single vector system with keyword fallbacks suffices for many teams.
  • Introduce caching and routing early on, as not all queries will demand expensive models.
  • Evaluate groundedness and factual accuracy based on actual user inquiries, not merely synthetic metrics.
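On the caching point above: even a simple normalized-key cache keeps repeated queries from ever reaching an expensive model. This sketch uses a naive lowercase normalization and a fixed TTL, both simplified placeholders.

```python
# Sketch: cache answers for repeated queries so duplicates never trigger
# a paid model call. Normalization and TTL policy are simplified.
import hashlib
import time

_cache: dict[str, tuple[float, str]] = {}
TTL_SECONDS = 3600

def cache_key(query: str) -> str:
    return hashlib.sha256(query.strip().lower().encode()).hexdigest()

def answer(query: str, call_model) -> str:
    key = cache_key(query)
    hit = _cache.get(key)
    if hit and time.time() - hit[0] < TTL_SECONDS:
        return hit[1]                      # cache hit: no model call
    result = call_model(query)             # cache miss: pay for one call
    _cache[key] = (time.time(), result)
    return result

calls = []
def fake_model(q):
    calls.append(q)
    return f"answer to: {q}"

print(answer("What is RAG?", fake_model))
print(answer("  what is rag?", fake_model))  # normalized: served from cache
print(len(calls))  # 1 -- only the first query hit the model
```

Semantic caching (matching paraphrases, not just normalized strings) is the natural next step, but the cost mechanics are the same.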

5) Procurement Mirrors Classic Enterprise Software Buying

CIOs and CISOs are integrating Generative AI into standard procurement processes. The purchasing of models and applications now includes security assessments, data residency checks, cost benchmarks, and contractual requirements. Gartner anticipates that in 2025, CIOs will increasingly favor commercial off-the-shelf solutions rather than custom builds due to their promise of more predictable advantages and expedited implementation. Treat Generative AI like any other critical software: demand service level agreements (SLAs), audit trails, and exit strategies.

Regulatory considerations are also crucial, particularly for global enterprises. The EU’s AI Act will gradually introduce obligations for general AI and high-risk systems starting in 2025 and 2026, alongside an EU code of practice to aid compliance. Even for US-based companies, it’s essential for multinational procurement teams to align with these regulations and evolving governance frameworks like ISO/IEC 42001 for AI management systems.

Key Elements to Include in Your 2025 RFPs and MSAs:

  • Data Handling: Ensure details on encryption, retention, deletion, regional hosting, and model training protocols.
  • Evaluation: Include reproducible benchmarks for task performance with clear target thresholds.
  • Controls: Specify role-based access, human oversight checkpoints, usage logs, and evidence of audits.
  • Exit Strategy: Address the portability of prompts, embeddings, and fine-tuned artifacts, with support for migrating to alternative solutions.
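The "reproducible benchmarks with clear target thresholds" item can be made concrete as an evaluation gate: a fixed test set plus a pass/fail threshold that both sides can rerun. The cases, threshold, and stub model below are illustrative placeholders, not a real vendor harness.

```python
# Sketch of a reproducible evaluation gate for an RFP: run a fixed test
# set and fail if accuracy drops below the contracted threshold.
TARGET_ACCURACY = 0.9  # illustrative contractual threshold

test_cases = [
    {"input": "2+2", "expected": "4"},
    {"input": "capital of France", "expected": "Paris"},
    {"input": "HTTP status for Not Found", "expected": "404"},
]

def evaluate(predict) -> float:
    """Fraction of test cases the candidate answers exactly right."""
    correct = sum(1 for c in test_cases if predict(c["input"]) == c["expected"])
    return correct / len(test_cases)

def passes_gate(predict) -> bool:
    return evaluate(predict) >= TARGET_ACCURACY

# A stub "vendor model" that answers from a lookup table, for demonstration.
answers = {"2+2": "4", "capital of France": "Paris",
           "HTTP status for Not Found": "404"}
print(passes_gate(lambda q: answers.get(q, "")))  # True
```

Versioning the test set alongside the contract keeps the benchmark reproducible when the vendor ships model updates.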

6) Off-the-Shelf AI Applications Are Rapidly Advancing

AI-native third-party applications are increasingly beating in-house builds on speed to value. The a16z survey shows enterprises are relying more on off-the-shelf options for customer support, sales enablement, coding assistance, marketing operations, and knowledge search, particularly when vendors provide domain-specific features and analytics out of the box. The recommended approach: buy wherever feasible and build only when necessary.

Gartner’s 2025 spending projections confirm this trend: as foundational models advance, and vendors continuously enhance their features, CIOs are shifting away from isolated proofs of concept in favor of bundled solutions that integrate with their existing tools. This transition minimizes integration risks and accelerates the path to measurable results.

7) Autonomous Agents: Exciting but Caution is Needed

The discussion around autonomous agents has intensified. However, independent reports indicate that many agent-focused projects could be discontinued by 2027 due to unclear ROI. Gartner warns of “agent washing,” where conventional workflows are misrepresented as autonomous agents. Take a focused approach to agent development, concentrating on specific use cases with measurable KPIs and ensuring human oversight is always available before scaling.

8) ROI: A Reality Check

While CIOs remain hopeful about Generative AI’s potential, disciplined execution is crucial. McKinsey’s longitudinal surveys indicate that although some areas, such as service operations and software engineering, are seeing increasing revenue impacts, enterprise-wide benefits remain inconsistent. Numerous analyses highlight that a significant proportion of Generative AI initiatives have yet to influence profit and loss figures because they remain limited to pilot projects or lack integration into critical workflows. The pattern is clear: successful enterprises align their use cases with business processes, measure outcomes effectively, and continuously iterate.

Suggestions to Prevent Stalled Projects:

  • Focus on automating back-office functions and customer service, where returns and savings can be easily quantified.
  • Appoint a dedicated product owner instead of relying on a committee, and set monthly KPIs that must be met.
  • Allocate resources for change management and training—don’t just deliver a tool.
  • Approach evaluation as an ongoing activity rather than a one-time event.

9) A Practical Reference Architecture for 2025

Here’s a vendor-neutral framework for your tech stack, drawn from diverse real-world applications across various industries:
– Interaction Layer: Incorporate channels and user experiences including web, mobile, IDE plug-ins, and CRM interfaces.
– Orchestration: Develop prompt templates, establish guardrails, and maintain API connectors for functionality.
– Reasoning and Generation: Implement a multi-model portfolio with effective routing and fallback mechanisms.
– Retrieval Systems: Utilize vector stores alongside keyword or graph-based backups; implement content chunking and ranking features.
– Data Infrastructure: Manage documents, tickets, logs, and product data while ensuring handling of personal data and sensitive information.
– Observability: Create dashboards for cost, latency, quality, and safety along with evaluation suites and feedback mechanisms.
– Governance: Align practices with ISO/IEC 42001 controls, maintain audit trails, and develop red team protocols and incident response strategies.
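The observability layer above reduces, at its simplest, to wrapping every model call so cost, latency, and token volume are recorded per request. The pricing rate and token proxy in this sketch are placeholders for illustration.

```python
# Sketch of the observability layer: wrap model calls to record latency,
# token volume, and estimated cost per request. Rate is a placeholder.
import time

metrics: list[dict] = []
USD_PER_1K_TOKENS = 0.002  # illustrative rate, not a real price

def observed_call(model_fn, prompt: str) -> str:
    start = time.perf_counter()
    response = model_fn(prompt)
    latency = time.perf_counter() - start
    tokens = len(prompt.split()) + len(response.split())  # rough word-count proxy
    metrics.append({
        "latency_s": latency,
        "tokens": tokens,
        "est_cost_usd": tokens / 1000 * USD_PER_1K_TOKENS,
    })
    return response

out = observed_call(lambda p: "short grounded answer", "summarize this ticket")
print(metrics[0]["tokens"])  # prompt 3 words + response 3 words = 6
```

Dashboards, quality scoring, and safety checks then consume this per-request record rather than instrumenting each application separately.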

10) Insights for Founders Targeting Enterprise Clients

The purchasing habits observed in the survey yield valuable market entry strategies:
– Emphasize security, compliance, and total cost of ownership from the outset.
– Publish realistic benchmarks based on customer data, focusing on performance-per-dollar metrics.
– Facilitate migrations and offer no-regrets trials to mitigate perceived risks associated with switching.
– Develop packaged solutions that address specific workflows—not just isolated models.

Conclusion

The Generative AI market now operates similarly to traditional enterprise software, albeit at a faster pace with more dynamic components. Budget allocations are solidified, multi-model approaches are standard, RAG is the norm, and procurement has grown more stringent. While agents hold potential, they remain in early stages, and off-the-shelf applications are gaining significant ground.

By maintaining a modular architecture, ensuring clean data practices, conducting ongoing evaluations, and aligning KPIs with real workflows, organizations can successfully harness the evolving Generative AI ecosystem in 2025.

FAQs

1) Should we fine-tune a model or start with RAG?

Opt for RAG for most knowledge-centric applications. It’s more cost-effective for iteration and limits vendor lock-in. Reserve fine-tuning for highly specific constraints or specialized datasets that are hard to retrieve during inference.

2) How many models should we support?

Most enterprises manage a compact portfolio: one leading model for complex tasks, one cost-efficient model for routine activities, and one open-source or private-hosted option for critical tasks needing privacy or speed. Incorporate routing and caching to stabilize costs.

3) Where can we realistically see ROI in 3 to 6 months?

Areas such as customer support management, software development productivity (like code search and test generation), internal document searches, and targeted back-office automations typically yield the fastest returns when effectively implemented. McKinsey’s survey results highlight service operations and software engineering among the earliest sectors to see revenue increases.

4) Are agents ready for mainstream adoption?

In narrow, well-scoped cases, yes, but with caution. Keep agent implementations focused and supervised, with strict guidelines and rollback paths. Forecasters indicate many agent-based projects may be shelved by 2027 due to unclear economic value, so scale only after thorough validation.

5) What governance framework should we follow?

The ISO/IEC 42001 standard serves as an essential backbone for creating policies, defining roles, and conducting audits. If you operate within the EU, remain vigilant about the phased requirements set forth by the AI Act for 2025 and 2026, alongside the evolving code of practice for general-purpose AI.

Sources and Notes

This article synthesizes insights from a16z’s 2025 enterprise AI survey and interviews, along with recent market research and academic work. Always validate pricing and performance against your own use cases before making purchasing decisions.

Thank You for Reading this Blog and See You Soon! 🙏 👋
