What AI in SDLC Looks Like in Practice: Inside Wiseboard’s New Program

By Alex Sharko
AI Practice Lead • Head of AI at Clearly

The biggest fear companies have about AI in software development is complexity. 
And I get it. There are no industry standards yet, no plug-and-play frameworks. At best, you hear a few success stories from individual teams. That’s what a paradigm shift looks like: the same thing happened with DevOps, which is now common practice. Before long, AI in SDLC will become the standard, too.
If your leads are already asking “How do you use AI in development?” you know the shift has begun. The good news is, Wiseboard is already helping IT companies adapt. In this article, I’ll show you how we approach AI in SDLC through the program we’ve built and run with Wiseboard’s clients.
P.S. Integrating AI in SDLC is one of three AI rollout paths I described in another Wiseboard article.

Key topics we’ll cover:
→ What AI in SDLC is and is not
→ How to get impact from AI-powered software development team-wide
→ Wiseboard’s 6-step approach that helps companies roll out AI in SDLC
→ Metrics tech teams measure (basic + advanced)

But first, why is now the right moment to experiment with AI in your SDLC?

Why now is the best time to start with AI in SDLC

AI is reshaping how software gets built, and there are no industry standards yet. That’s why teams that start with AI early have an advantage: they can experiment, learn, and set the rules before everyone else catches up.
Early AI adopters already see results. SoftServe reports a 45% boost in team productivity and a 30% reduction in project timelines. 
Here’s what structured AI adoption can look like across the software development cycle. There are clear roles for AI at every stage of delivery.

Illustration

AI helps every step of SDLC, from requirements gathering to deployment. Source: Dr. Priyanka Nair

To get impact from AI-powered software development team-wide, you need an established AI in SDLC framework. Here’s what that means.  

What AI in SDLC actually means


  • AI in SDLC is not about individual developers using Copilot → it’s a documented policy with rules and practices for using AI, shared with the whole team.

Say your client asks: “How do you use AI in development?”
Chances are, some of your developers already experiment with tools like GitHub Copilot, Cursor, Claude Code, or Windsurf. But that’s individual use, like everyone using ChatGPT in their own way. Without a company-wide AI SDLC policy, backed by metrics, you don’t yet have an established practice.
That’s when you need a documented AI in SDLC policy that spells out what you do with AI, what you don’t, and how you keep the process safe and repeatable.
In practice, it usually starts as a short one- or two-page document that defines:

    which tools are approved (assistants, agents, or LLM bots across the SDLC)

    which tasks AI should or shouldn’t handle

    how security and code review are enforced

    how the policy evolves as tools improve

The policy might include points like “Never build sensitive modules (like finance) purely with Copilot” or “Always use AI for high-efficiency tasks such as API integrations and test coverage.”
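To make that concrete, here’s a minimal sketch of how such a policy could be captured in machine-readable form, so CI checks and onboarding docs read from a single source of truth. The tool names and rules below are illustrative, not a Wiseboard template:

```python
# Hypothetical AI-in-SDLC policy captured as data, so CI checks and
# onboarding docs can read from one source of truth.
AI_SDLC_POLICY = {
    "version": "0.1",
    "approved_tools": ["GitHub Copilot", "Cursor", "Claude Code"],
    "use_ai_for": [
        "API integrations",
        "test coverage",
        "documentation drafts",
    ],
    "never_ai_only": [
        "finance and payment modules",
        "authentication and secrets handling",
    ],
    "review_rules": {
        "human_review_required_for_ai_code": True,
        "security_scan_required": True,
    },
    "review_cadence_months": 3,  # how often the policy itself is revisited
}
```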
Note that an AI in SDLC policy is a living document. For example, the first version may only cover developers. Later, it can extend to QA (AI-generated test cases), presales (AI-driven proposals), or DevOps (AI monitoring scripts). It grows with your team and processes.

✔ The difference

AI in SDLC isn’t an individual productivity trick. It’s a collective shift in how your entire team develops software.

Wiseboard’s approach to rolling out AI in SDLC

Integrating AI in SDLC can be messy if you don’t break it into manageable steps. 
Here’s how we guide companies at Wiseboard. (For more context, Arthur Fedorenko and I discussed AI implementation strategies in a recent Wiseboard Talks episode.)

Step 1. Audit delivery

✔ The outcome

A clear map of how your team delivers today: tools, workflows, and security gaps.

We start with how you work today. Which tools are in use? How do you handle code reviews, tickets, QA, and DevOps? We look at security gaps and check what small DevOps changes are needed, like adding scanners or review rules for AI-generated code.

    Smaller companies usually begin with coding assistants like Copilot or Cursor, and later extend to coding agents (like Codex) that integrate into an IDE. 

    Bigger companies often combine coding assistants with LLM bots across the entire SDLC because parallel adoption is more feasible for them.

Case in point: we helped a 500-person IT company map their SDLC and found they could integrate LLM bots at every stage, from presale to client support.

Illustration

Use cases for LLM bots across the SDLC
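Part of the audit is checking which small DevOps changes are needed, such as review rules for AI-generated code. Here’s a minimal sketch of such a rule as a pre-merge check. It assumes your team marks AI-assisted commits with an “AI-Assisted: true” message trailer, which is a convention you’d introduce, not a built-in Git feature:

```python
import subprocess

def ai_assisted_commits(base: str, head: str) -> list[str]:
    """List commits between base and head whose message contains our
    'AI-Assisted: true' trailer (a team convention, not a Git feature)."""
    out = subprocess.run(
        ["git", "log", "--format=%H", "--grep=AI-Assisted: true",
         f"{base}..{head}"],
        capture_output=True, text=True, check=True,
    )
    return out.stdout.split()

if __name__ == "__main__":
    flagged = ai_assisted_commits("origin/main", "HEAD")
    if flagged:
        # In CI, this is where you'd require an extra human review
        # and a security scan before merging.
        print(f"{len(flagged)} AI-assisted commit(s) need human review")
```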

Step 2. Define policy

✔ The outcome

First version of your AI SDLC policy that sets boundaries, rules, and approved tools.

Together with a CTO or tech lead, we document:

    where AI is allowed and where it isn’t

    what security checks are required

    which tools are approved

This is a several-page documented policy that’s presented to the first pilot teams.

Step 3. Run pilot teams

✔ The outcome

Evidence of AI’s impact on real projects, plus feedback from early users.

We test the framework with 1–3 pilot teams, ideally of different sizes and project types. Before starting, we capture baseline metrics (e.g. security, velocity, earned value) so results are measurable. I describe which metrics to measure later in the article.
We also collect feedback from the teams: what worked, what didn’t, and what slowed them down.
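One way to keep the baseline honest is to snapshot it as data before the pilot starts, so the later comparison is mechanical rather than anecdotal. A minimal sketch; the metric names and values are illustrative, and the real numbers would come from your issue tracker, CI, and scanners:

```python
import json
from datetime import date

# Illustrative baseline snapshot taken before the pilot; the values
# would come from your issue tracker, CI system, and security scanners.
baseline = {
    "captured_on": date.today().isoformat(),
    "team": "pilot-team-1",
    "velocity_story_points_per_sprint": 34,
    "open_security_findings": 12,
    "avg_pr_lead_time_hours": 26.5,
}

with open("baseline_pilot-team-1.json", "w") as f:
    json.dump(baseline, f, indent=2)
```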


  • Note: It takes time for teams to get used to a new workflow. In the first weeks, efficiency may even drop. But once they adapt, the benefits spread across the whole organization: in code quality, delivery speed, and developer experience.

Step 4. Build an ambassador community

✔ The outcome

A small group of AI champions who promote AI adoption across other teams.

Every company has people who are curious by nature. We turn them into ambassadors of the AI SDLC policy, who can share both successes and failures and help other teams adopt the same practices.
This can be as simple as:

    a shared channel for posting prompts or cases,

    a board where people share AI use cases,

    a monthly digest summarizing what worked.

Step 5. Scale with evidence

✔ The outcome

AI in SDLC becomes part of everyday work, supported by proof and internal culture.

Once pilot teams show clear results and ambassadors are active, the policy is rolled out company-wide. At this stage AI in SDLC is no longer an experiment, but is embedded into operations:

    Proven by metrics from pilots (faster onboarding, fewer bugs, quicker integrations).

    Supported by culture: teams resist less because they see peers succeeding.

    Integrated into processes: policy becomes part of onboarding docs, DevOps pipelines, and code review checklists.

Step 6. Create a client-facing asset

✔ The outcome

A company-wide AI SDLC policy you can show to clients as a sales advantage.

Once your teams practice the policy, you finally have something concrete to share with the world. Instead of vague answers about “trying Copilot,” you will be able to say:
→ “Here’s our AI SDLC policy, the framework we follow on every project.”
→ “Here are the tools we’ve chosen, and the results we’ve measured.”
→ “We’ve created a framework that works for us internally, and we can apply the same approach, with the same guardrails and proven tools, on your project too.”
At this point, your AI in SDLC initiative is no longer a behind-the-scenes experiment. Clients see structure, maturity, and transparency where many of your competitors still look ad hoc.

Metrics to measure before and during your AI in SDLC initiative

A lack of clear metrics is the biggest AI challenge for 60% of engineering leaders, according to LeadDev’s 2025 AI Impact Report. But that doesn’t mean we need brand-new KPIs. AI doesn’t change what “good” software looks like. Quality, maintainability, and speed still matter most. The real question is whether AI helps you improve on those fundamentals.
Many of the metrics tech teams measure aren’t new (like pull request throughput). What’s new are AI-usage signals layered on top: time savings, AI spend, AI tool usage. 

Illustration

How 9 top companies measure AI impact. Source: The Pragmatic Engineer


We recommend you start tracking the basic set of metrics, and then expand to advanced ones.

Basic set of metrics to measure 

    Developer contribution → commits with and without AI-generated code, PRs with and without AI contributions, frequency of AI tool usage (active users, tasks assisted, suggestions accepted).

    Bug counts and defect types → are issues going up or down post-adoption?

    Security vulnerabilities → what scanners find in the pipeline, and whether risk is shrinking or spiking.

    Onboarding speed → AI tools help explain code, summarize documentation, and speed up how fast new developers get productive.
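As a sketch of how the contribution and defect numbers can be compared, assuming your team labels AI-assisted pull requests (the data shape below is hypothetical, not any tool’s API):

```python
from dataclasses import dataclass

@dataclass
class PullRequest:
    number: int
    ai_assisted: bool   # e.g. set from a label your team applies to PRs
    defects: int        # bugs later traced back to this PR

def ai_adoption_summary(prs: list[PullRequest]) -> dict:
    """Share of AI-assisted PRs, plus defect rate per group."""
    ai = [p for p in prs if p.ai_assisted]
    manual = [p for p in prs if not p.ai_assisted]

    def defect_rate(group: list[PullRequest]) -> float:
        return sum(p.defects for p in group) / len(group) if group else 0.0

    return {
        "ai_share": len(ai) / len(prs) if prs else 0.0,
        "defects_per_ai_pr": defect_rate(ai),
        "defects_per_manual_pr": defect_rate(manual),
    }

print(ai_adoption_summary([
    PullRequest(101, True, 0),
    PullRequest(102, False, 2),
    PullRequest(103, True, 1),
]))
```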

Advanced set of metrics

Because the minimal set is easy to track but doesn’t prove quality, we also recommend measuring outcomes that tie directly to effectiveness and business value.
DORA metrics (DevOps and delivery health):

    Deployment frequency → how often code reaches production.

    Lead time for changes → how fast a commit goes live.

    Change failure rate → how many releases cause issues.

    Mean time to recovery (MTTR) → how quickly the team restores service after failure.
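A minimal sketch of how these four numbers fall out of deployment records, assuming your CI/CD system can export when each deploy happened, when its code was committed, and whether it failed (the records below are made up):

```python
from datetime import datetime
from statistics import mean

# Made-up deployment records; in practice, export these from CI/CD
# and your incident tracker.
deployments = [
    {"deployed_at": datetime(2025, 6, 2, 14), "committed_at": datetime(2025, 6, 1, 10),
     "failed": False, "restored_at": None},
    {"deployed_at": datetime(2025, 6, 9, 9), "committed_at": datetime(2025, 6, 6, 16),
     "failed": True, "restored_at": datetime(2025, 6, 9, 12)},
]

def hours(delta):
    return delta.total_seconds() / 3600

window_days = 30
frequency = len(deployments) / window_days  # deploys per day
lead_time = mean(hours(d["deployed_at"] - d["committed_at"]) for d in deployments)
failures = [d for d in deployments if d["failed"]]
change_failure_rate = len(failures) / len(deployments)
mttr = mean(hours(d["restored_at"] - d["deployed_at"]) for d in failures)

print(f"{frequency:.2f} deploys/day, lead time {lead_time:.0f}h, "
      f"CFR {change_failure_rate:.0%}, MTTR {mttr:.0f}h")
```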

Earned value per sprint (business outcomes):

    Services companies → billable features delivered, or backlog items linked to client invoices.

    Product companies → new capabilities for end users, performance gains that improve retention, or reduced churn.

Developer experience frameworks (DevEx / SPACE):

    Satisfaction and well-being → how developers feel about their tools, workload, and the quality of their work.

    Efficiency and flow → how often developers can make progress without interruptions or blockers.

    Communication and collaboration → how easily knowledge, reviews, and decisions move across the team.

These frameworks measure both developers’ output and well-being.
Case in point: My team ran a pilot for one of the UK’s top four banks. In it, two teams adopted GitLab Duo and GitHub Copilot. With security metrics and value tracking, onboarding time dropped from 58 days to 30. For an organization with 10,000 engineers, those 28 saved days translated into £7M of annual value. 
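The arithmetic behind that figure is worth making explicit. Here’s a rough reconstruction; the hire count and day rate are my illustrative assumptions, not the bank’s actual numbers:

```python
# Back-of-envelope value of faster onboarding; every input marked
# "assumption" is illustrative, not the bank's actual figure.
days_saved_per_hire = 58 - 30          # onboarding dropped from 58 to 30 days
new_hires_per_year = 500               # assumption for a 10,000-engineer org
loaded_cost_per_day_gbp = 500          # assumption: fully loaded daily cost

annual_value = days_saved_per_hire * new_hires_per_year * loaded_cost_per_day_gbp
print(f"~£{annual_value / 1e6:.0f}M per year")  # ~£7M
```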
Of course, not every company has 10,000 developers. For smaller teams, the impact looks different but still significant: some double their release speed, others slash onboarding time for new hires.

Bottom line

Right now, the market doesn’t require companies to adopt AI across the SDLC. But that won’t last. Over time, those who integrate AI systematically will get a clear edge in productivity and efficiency.
For teams of 50+ people, the time to act is now: build a strategy, launch pilot projects, choose tools, and assess risks, so you don’t fall behind in a few years.
If you need support, Wiseboard’s AI in SDLC program was built to help IT companies do exactly that. 
