Developers Are Embracing AI Daily: Insights from Google’s DORA Research on Software Delivery

By Zakariae BEN ALLAL · Created on Mon Sep 29 2025
[Image: a developer using an AI coding assistant on a laptop during a code review session]


AI has evolved from a mere assistant into an essential collaborator in software development. Google’s recent DORA research sheds light on how deeply AI is now embedded in modern development practice, the impact it is having, and where strong engineering principles remain crucial for success.

Why This Matters Now

Software teams are under pressure to deliver features quickly, improve reliability, and manage costs. AI coding assistants, copilots, and chat tools offer attractive shortcuts. The key question is no longer whether developers are using AI, but how it’s affecting delivery speed, quality, security, and the developer experience.

The DORA program from Google Cloud has tracked the factors that drive elite software performance for over a decade. Its latest insights indicate a growing dependency on AI, while also emphasizing that organizational culture, architecture, and disciplined delivery methods are what truly set high performers apart.

DORA 101

DORA is well-known for its Accelerate State of DevOps research, which connects engineering methodologies to quantifiable software delivery results. The program popularized four key performance metrics, often referred to as the DORA metrics:

  • Deployment frequency
  • Lead time for changes
  • Change failure rate
  • Time to restore service

In recent years, the research has also highlighted reliability as a primary goal. Annual reports analyze thousands of teams to reveal what distinguishes elite, high, medium, and low performers. You can find the report series and methodology at dora.dev, and Google Cloud’s summaries of recent findings in its 2023 report highlights.
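
To make the four metrics concrete, here is a minimal Python sketch that derives them from simple deployment records. The record shape, field names, and 30-day window are illustrative assumptions on my part, not DORA's survey methodology.

```python
# Minimal sketch: the four DORA metrics from simple delivery records.
# Record fields and the reporting window are illustrative assumptions.
from dataclasses import dataclass
from datetime import datetime, timedelta
from statistics import median


@dataclass
class Deployment:
    commit_at: datetime                   # when the change was committed
    deployed_at: datetime                 # when it reached production
    caused_failure: bool                  # did this change degrade service?
    restored_at: datetime | None = None   # when service was restored, if it failed


def dora_metrics(deploys: list[Deployment], window_days: int = 30) -> dict:
    if not deploys:
        raise ValueError("no deployments in window")
    lead_times = [d.deployed_at - d.commit_at for d in deploys]
    failures = [d for d in deploys if d.caused_failure]
    restores = [d.restored_at - d.deployed_at for d in failures if d.restored_at]
    return {
        "deployment_frequency_per_day": len(deploys) / window_days,
        "median_lead_time_hours": median(lt.total_seconds() / 3600 for lt in lead_times),
        "change_failure_rate": len(failures) / len(deploys),
        "median_time_to_restore_hours": (
            median(r.total_seconds() / 3600 for r in restores) if restores else None
        ),
    }


# Example usage with two synthetic deployments
now = datetime(2025, 9, 1, 12, 0)
history = [
    Deployment(now - timedelta(hours=30), now, caused_failure=False),
    Deployment(now - timedelta(hours=4), now + timedelta(days=1),
               caused_failure=True, restored_at=now + timedelta(days=1, hours=2)),
]
print(dora_metrics(history))
```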

AI as a Daily Development Tool

Today, developers are leveraging AI for routine tasks: writing boilerplate code, refactoring, generating tests, summarizing pull requests, drafting documentation, and troubleshooting build or runtime issues. The DORA research contextualizes this shift, highlighting that AI is now a standard component of the modern software development toolkit.

Supporting surveys further reinforce this trend:

  • Stack Overflow’s 2024 Developer Survey indicates that most developers are currently using—or planning to use—AI tools, especially for code generation, debugging, and learning about new APIs.
  • GitHub’s Octoverse research highlights the adoption of AI coding assistants and correlates AI use with enhanced workflows throughout the software lifecycle.
  • JetBrains’ State of Developer Ecosystem 2024 shows an increasing use of AI assistants across various languages and IDEs for code suggestions, documentation, and reviews.

This shift means AI is enabling quicker feedback loops for individual tasks. Developers can request a model to draft a function, generate test cases, explain a trace, or suggest a migration path, and then rapidly iterate using real code and tests.
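
As a toy illustration of that loop, the snippet below shows the kind of function a developer might ask an assistant to draft, plus the boundary-case checks they might ask it to generate. Both are hypothetical; the point is how quickly a suggestion can be verified against real code before it is committed.

```python
# Illustrative only: an assistant-drafted helper and the boundary cases
# a developer might ask the assistant to generate, then run locally.
import re


def slugify(text: str) -> str:
    """Lowercase the input and replace runs of non-alphanumerics with '-'."""
    text = re.sub(r"[^a-z0-9]+", "-", text.lower())
    return text.strip("-")


# Assistant-suggested boundary cases, verified before committing
assert slugify("Hello, World!") == "hello-world"
assert slugify("  --spaces--  ") == "spaces"   # leading/trailing separators
assert slugify("") == ""                        # empty input
assert slugify("ALREADY-OK") == "already-ok"    # already close to a slug
print("all boundary cases pass")
```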

Does AI Alone Drive High Performance?

The answer is no. DORA’s core message remains clear: culture, architecture, and disciplined delivery practices are what drive performance. While AI can enhance effective practices, it cannot replace them.

What Still Matters Most

  • Small, frequent changes and trunk-based development
  • Automated testing and continuous delivery
  • Clear, up-to-date documentation
  • Well-structured, loosely coupled architectures
  • Platform engineering that reduces friction in developer workflows
  • Supportive team cultures that foster psychological safety

When these fundamentals are in place, AI tools tend to boost speed and reduce toil. Without them, AI can amplify chaos: more code churn, configuration drift, and defects reaching production.

The DORA program has consistently shown a strong connection between these practices and elite performance across the four key metrics. Recent summaries are available on the Google Cloud blog.

Where Developers Are Most Dependent on AI

Based on DORA’s framework and industry surveys, here are the main use cases that are shaping daily work:

  • Code generation and transformation: drafting functions, converting patterns, translating between languages, and suggesting refactors.
  • Testing: generating unit tests, fuzzing inputs, creating property tests, and proposing boundary cases.
  • Code review and documentation: summarizing diffs, flagging risky changes, and drafting or enhancing documentation.
  • Troubleshooting: explaining stack traces, proposing fixes, and suggesting observability instrumentation (see the sketch below).
  • Learning and prototyping: quickly exploring SDKs, frameworks, and integration examples.

Teams are also using AI to enhance developer portals and internal documentation, turning scattered knowledge into searchable, contextual help.
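
To ground the troubleshooting use case, here is a small sketch that captures a live traceback with Python's standard traceback module and packages it into a prompt. The prompt format is an assumption; substitute whatever your assistant or tooling expects.

```python
# Sketch of the troubleshooting workflow: capture a real traceback and
# package it, with minimal context, into a prompt for an AI assistant.
# The prompt wording below is an assumption, not any tool's required format.
import traceback


def build_debug_prompt(exc: Exception, context: str) -> str:
    # Python 3.10+ single-argument form of format_exception
    trace = "".join(traceback.format_exception(exc))
    return (
        "Explain the root cause of this error and propose a fix.\n"
        f"Context: {context}\n"
        f"Traceback:\n{trace}"
    )


try:
    {}["missing"]  # deliberately trigger a KeyError
except KeyError as exc:
    prompt = build_debug_prompt(exc, "lookup in a config dict during startup")
    print(prompt)  # in practice, send this to your assistant of choice
```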

What High Performers Do Differently with AI

High-performing teams don’t simply add AI to faulty workflows; they integrate it with robust engineering systems:

  • Establish guardrails: implement policies around acceptable use, privacy, and attribution; enable secure IDE features by default.
  • Incorporate human oversight: require code reviews, tests, and security checks for AI-assisted changes just like any other changes.
  • Ensure observability: track AI-assisted actions (accepted suggestions, merged changes, identified issues) and correlate them with delivery metrics.
  • Foster a documentation culture: invest in documentation and knowledge bases to provide high-quality source material for AI and reliable references for developers.
  • Focus on platform engineering: centralize scaffolding, CI/CD templates, golden paths, and reusable components to align AI suggestions with standards.

This approach transforms AI into a force multiplier for existing effective practices rather than a workaround.

Addressing Risks as AI Usage Grows

AI-assisted development alters risk profiles. Leading organizations are proactively tackling the following areas:

Code Quality and Correctness

  • Hallucinations and subtle bugs: mitigate these with unit, integration, and property-based testing, as well as static analysis (a property-based test sketch follows this list).
  • Overreliance: avoid accepting suggestions without understanding; require reviewers to validate logic and performance implications.
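
As one concrete mitigation, here is a minimal property-based test written with the open-source hypothesis library (assumed installed; run with pytest). The merge function is a hypothetical stand-in for any AI-drafted helper that deserves scrutiny beyond a handful of hand-picked cases.

```python
# A minimal property-based test guarding an AI-drafted function against
# subtle bugs. Requires the `hypothesis` library; run with pytest.
from hypothesis import given, strategies as st


def merge_sorted(a: list[int], b: list[int]) -> list[int]:
    """Hypothetical AI-drafted helper: merge two sorted lists into one."""
    out, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        if a[i] <= b[j]:
            out.append(a[i])
            i += 1
        else:
            out.append(b[j])
            j += 1
    return out + a[i:] + b[j:]


@given(st.lists(st.integers()), st.lists(st.integers()))
def test_merge_keeps_all_elements_sorted(xs, ys):
    result = merge_sorted(sorted(xs), sorted(ys))
    # The property: output contains exactly the input elements, in order
    assert result == sorted(xs + ys)
```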

Security and Supply Chain

  • Insecure patterns: enforce security linters, SAST/DAST, dependency scanning, and policy-as-code pre-merge checks (the CI gate sketch after this list shows one way to wire these in).
  • Provenance and tampering: adopt supply-chain hardening such as SLSA and signed attestations to track what built and deployed your software.
  • AI-specific risks: consult frameworks like OWASP's Top 10 for LLM Applications for guidance on prompt injection, data leakage, and potential model misuse.
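
As a sketch of such a pre-merge gate, the script below shells out to two widely used open-source scanners: bandit (SAST for Python) and pip-audit (dependency scanning). The paths and tool choices are assumptions; most teams run these as separate CI steps rather than a single script.

```python
# Minimal pre-merge security gate, assuming the open-source `bandit` and
# `pip-audit` CLIs are installed and the repo layout matches the paths below.
import subprocess
import sys

CHECKS = [
    ["bandit", "-r", "src", "-q"],            # static analysis of src/
    ["pip-audit", "-r", "requirements.txt"],  # known-vulnerable dependencies
]


def main() -> int:
    failed = False
    for cmd in CHECKS:
        result = subprocess.run(cmd)
        if result.returncode != 0:  # both tools exit non-zero on findings
            print(f"FAILED: {' '.join(cmd)}", file=sys.stderr)
            failed = True
    return 1 if failed else 0


if __name__ == "__main__":
    sys.exit(main())
```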

Privacy, IP, and Compliance

  • Training data and licensing: ensure compliant use and proper attribution for generated code patterns when necessary.
  • Data minimization: exclude sensitive data from prompts and logs; utilize enterprise features for retention controls and access management.
  • Risk management: adhere to guidelines like NIST's AI Risk Management Framework.

Measuring Impact: What to Focus On

To distinguish genuine value from hype, monitor both delivery outcomes and developer experience, linking AI adoption to quantifiable improvements.

DORA Metrics to Track

  • Deployment frequency: Are we able to ship smaller changes more frequently?
  • Lead time for changes: Are we seeing improvements in PR cycle times and change lead times?
  • Change failure rate: Are we observing fewer incidents per change?
  • Time to restore: Are we recovering more quickly when issues occur?

Developer Experience (DevEx) Indicators

  • Time to the first meaningful commit on a new service
  • PR approval time and the frequency of re-reviews
  • Time spent on tedious tasks versus creative work
  • Onboarding time and new-hire autonomy
  • Overall satisfaction and flow (via lightweight pulse surveys)

Correlate these measures with AI usage telemetry (e.g., suggestions accepted per PR) to assess genuine impact. GitHub, IDEs, and internal developer portals can supply the necessary data. For further detail, see DORA's methodology and the research behind the book Accelerate.
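
As a sketch of that correlation step, the snippet below computes Pearson's r between AI suggestions accepted per PR and PR cycle time using only the standard library. The records are synthetic and the field layout is hypothetical; in practice they would come from your own telemetry export.

```python
# Sketch: correlate AI suggestions accepted per PR with PR cycle time.
# Synthetic data; real records would come from your telemetry export.
from statistics import correlation  # Pearson's r, Python 3.10+

prs = [  # (ai_suggestions_accepted, cycle_time_hours) per merged PR
    (0, 30.0), (2, 22.5), (5, 14.0), (1, 26.0), (7, 12.5), (3, 18.0),
]

accepted = [a for a, _ in prs]
cycle_hours = [c for _, c in prs]

r = correlation(accepted, cycle_hours)
print(f"Pearson r between AI acceptance and cycle time: {r:.2f}")
# A negative r hints that AI-heavy PRs move faster, but correlation is not
# causation: control for PR size, author seniority, and change type.
```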

A Practical 90-Day Plan for Responsible AI Adoption

Days 0-30: Establish Baselines and Guardrails

  • Select 2-3 pilot teams with strong test coverage and CI/CD maturity.
  • Outline acceptable use, data handling, and review protocols.
  • Enable enterprise-grade AI assistants in IDEs with logging capabilities.
  • Instrument DORA metrics and establish DevEx baselines; set target deltas.

Days 31-60: Integrate AI into Workflows

  • Incorporate AI into golden paths: repository templates, scaffolding, test generation, and PR checklists.
  • Automate security controls including SAST, dependency scanning, and IaC policy checks.
  • Utilize AI to enhance documentation: provide code examples, architectural decision records (ADRs), and runbooks.
  • Conduct weekly reviews of AI-assisted PRs and reflect on lessons learned from incidents.

Days 61-90: Scale Successful Practices

  • Expand AI integration to additional teams and programming languages where results are promising.
  • Publish internal guidelines and brief training sessions based on pilot experiences.
  • Update developer portal content and templates based on validated practices.
  • Reassess targets and refine guardrails based on observed risks and outcomes.

Questions Leaders Should Consider Now

  • Where does AI alleviate developer toil most effectively in our stack, and how will we gauge its impact?
  • Are there platform guardrails ensuring AI suggestions adhere to our standards?
  • Are we enhancing DORA metrics while maintaining or improving security?
  • Is our documentation sufficiently robust for both AI and humans to locate reliable answers?
  • What training and change management resources do our teams require for successful AI adoption?

The Bottom Line

Developers are leaning on AI more every day, and the trend shows no signs of slowing. Google’s DORA research confirms that AI can speed up individual tasks and help teams move faster. However, elite outcomes still hinge on the fundamentals DORA has emphasized for years: small batches, rapid feedback, robust automation, clear documentation, strong platforms, and supportive cultures.

Treat AI as a tool that amplifies disciplined engineering, not a replacement for it. Measure what matters, implement guardrails, and make the right choices the easy default for developers. That is how AI translates into sustainable gains in speed, stability, and satisfaction.

FAQs

Does AI coding assistance improve DORA metrics by itself?

Not reliably. While AI can speed up coding and testing tasks, improvements in metrics such as deployment frequency, lead time, failure rate, and time to restore generally stem from established DevOps practices. AI tends to enhance these practices when they are already in place.

What is the safest way to begin using AI in development?

Start with a small pilot involving teams that have good test coverage and CI/CD maturity. Implement basic guardrails such as acceptable use protocols, data management rules, and code review policies. Initially focus AI use on reducing mundane tasks like scaffolding, documentation, test generation, and pull request summaries.

How do we mitigate AI-related security risks?

Utilize policy-as-code frameworks, conduct automated security testing, and enforce supply chain protections like SLSA attestations. Keep sensitive information out of prompts and prefer enterprise-grade tools that facilitate logging and retention controls. Consult OWASP resources for guidance on LLM-specific risks.

What additional metrics should we track beyond DORA metrics?

Monitor developer experience signals such as PR cycle time, rework frequency, onboarding speed, and workflow satisfaction. Correlate them with AI usage and incident data to surface trade-offs and genuine value.

Will AI replace developers?

No. AI functions best as an assistant. Developers are still responsible for system design, decision-making, constraint reasoning, and quality assurance. AI helps reduce monotonous tasks and speeds up routine functions so developers can concentrate on more complex work.

Sources

  1. DORA – DevOps Research and Assessment
  2. Google Cloud – DORA 2023 Accelerate State of DevOps Report Highlights
  3. Google Cloud Blog – DevOps and SRE
  4. Stack Overflow – 2024 Developer Survey
  5. GitHub – Octoverse 2023
  6. JetBrains – State of Developer Ecosystem 2024
  7. OWASP – Top 10 for LLM Applications
  8. SLSA – Supply-chain Levels for Software Artifacts
  9. NIST – AI Risk Management Framework

Thank You for Reading this Blog and See You Soon! 🙏 👋
