
Business Transformation: Accelerate AI

Practical Implementation for MSP Operations

State of the Industry Fireside Chat with Chris Massey and Cameron Rohn

Opening & Introduction

60-minute session for MSP CEOs & Executives

What's the "State of AI" in the MSP industry right now — is it hype, reality, or both?

You've been at the intersection of MSP operations and cutting-edge AI development. The "State of AI" in the MSP industry is both hype and reality — understanding the distinction is critical for success.

  • Hype vs breakthrough distinction matters
  • MSP AI adoption curve positioning varies widely
  • Early wins vs long-term transformation balance
  • Underestimating AI opportunity cost is common
  • Practical vs theoretical applications differ greatly
  • Predictive vs reactive service models shift paradigms

What is the mindset shift needed to treat AI as a strategic advantage rather than a threat?

For MSP executives watching AI headlines and feeling a mix of FOMO and fear, a fundamental mindset shift is required.

Key Insight: AI is operational leverage, not replacement. Focus on reallocation vs reduction mindset.
  • AI as operational leverage, not employee replacement
  • Reallocation vs reduction mindset for resources
  • Competitive positioning window is closing
  • Strategic investment vs cost center view
  • Employee time to learn and experiment is critical
  • Crawl-Walk-Run framework for implementation

Business Transformation Attendees

This fireside chat brings together technical leaders and security professionals driving AI implementation within their MSPs.

Technical Leadership & Security

Business Transformation Technical Attendees

| Attendee | Organization & Role | Technical Focus & AI Involvement |
|---|---|---|
| Christian Kelly, CISSP | Xantrion - CTO (22+ years) | IT/cybersecurity expert, developed Managed Security program, AI security integration |
| Sean Ernst | centrexIT - VP Technology (18+ years) | US Navy veteran, digital workspace focus, AI-enhanced service delivery |
| Rick Mutzel | Omega Systems - Manager of Technology | Since 2013; Smart Guard MDR operations, AI-augmented security ops |
| Bryan Lukralle | Charter Technology - CSO (15+ years) | Cybersecurity program builder, AI threat detection implementation |
| Andrew Neumann | Omega Systems - VP Technical Services | Leads managed services operations, AI service automation initiatives |
| David Cieslak, PhD | Aunalytics - Chief Data Scientist | Leads Innovation Lab, data science research, AI/ML model development |

N-able AI & Security Leadership

N-able Technical & AI Strategy Team

| N-able Leader | Role | Technical AI Initiative Focus |
|---|---|---|
| Nicole Reineke | VP AI Strategy & Vision (PhD Stanford AI/ML) | Owns N-able AI Roadmap 2023-2026, enterprise-scale AI deployment architecture |
| Will Ledesma | Senior Director MDR Cybersecurity (15 years SOC) | AI-augmented MDR, MCP-based SOC playbooks, automated threat response |
| Laura DuBois | Chief Product & Customer Officer (18 years) | Building self-service AI-tool catalog, partner AI enablement platform |
| Rob Wilburn | VP Partner Growth (20 years channel experience) | Piloting Partner-First AI Marketplace, MSP AI GTM strategy |

Topic 1: Internal Operations & Efficiency

What's a practical first AI use case to reduce ticket noise without breaking existing workflows?

Quick Win: Companies report 40-60% of L1 tickets resolved automatically with AI-driven ticket triage & L1 automation.

Practical first AI use cases for reducing ticket noise:

  • L1 support automation for common requests
  • Conversational AI for password resets
  • Account unlock automation
  • Intelligent ticket routing and categorization
  • After-hours coverage without adding headcount
  • First-contact resolution improvements
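As a rough illustration of what a first triage agent can look like, here is a minimal Python sketch. The queue names, keyword lists, and the classification helper are hypothetical; in practice the classification step would usually call an LLM or the PSA's built-in AI rather than keyword matching.

```python
# Minimal ticket-triage sketch: categorize incoming tickets and route them to a
# queue, auto-resolving the simplest requests. Keywords, queues, and categories
# are illustrative placeholders, not a vendor API.

AUTO_RESOLVABLE = {"password_reset", "account_unlock"}

CATEGORY_KEYWORDS = {
    "password_reset": ["password", "reset my password", "forgot password"],
    "account_unlock": ["locked out", "account locked", "unlock"],
    "hardware": ["laptop", "monitor", "printer", "keyboard"],
    "network": ["vpn", "wifi", "cannot connect", "slow internet"],
}

def classify_ticket(subject: str, body: str) -> str:
    """Very rough keyword classifier; a real deployment would use an LLM here."""
    text = f"{subject} {body}".lower()
    for category, keywords in CATEGORY_KEYWORDS.items():
        if any(k in text for k in keywords):
            return category
    return "general"

def route_ticket(ticket: dict) -> dict:
    category = classify_ticket(ticket["subject"], ticket["body"])
    if category in AUTO_RESOLVABLE:
        action, queue = "auto_resolve", "automation"   # hand off to the self-service bot
    else:
        action, queue = "assign_to_tech", f"l1_{category}"
    return {**ticket, "category": category, "queue": queue, "action": action}

if __name__ == "__main__":
    demo = {"id": 101, "subject": "Forgot password", "body": "Can't log in to email"}
    print(route_ticket(demo))
```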
  • 40-60% of L1 tickets resolved automatically with AI
  • 45% ticket volume drop within 3 months
  • 75% time saved: average resolution time cut from 4 hours to 1 hour

4-Step Ticket Management Model

A systematic approach to transforming your help desk:

4-Step Model for Ticket Management

| Step | Action | Outcome |
|---|---|---|
| 1 - Eliminate | Disable/raise thresholds on system-generated alerts; route info-only tickets to a separate queue | ≥ 30% ticket volume drop |
| 2 - Improve | Pull user & asset data into tickets; standardize onboarding; auto-assign by client/skill | Time-to-First-Touch ≤ 5 min |
| 3 - Automate | Deploy chatbot for routine requests; add RPA scripts for provisioning tasks | FCR ≥ 70%; avg resolution ≤ 30 min |
| 4 - Monitor | Track metrics: volume, FCR, CSAT, automation success; review weekly, adjust thresholds | Continuous improvement loop |
Target Metrics: Ticket volume ≤ 20/day, % eliminated ≥ 30%, avg. Time-to-First-Touch ≤ 5 min, FCR ≥ 70%, CSAT ≥ 4.5/5, automation success ≥ 85%
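A hedged sketch of Step 1 (Eliminate): suppressing low-severity, system-generated alerts and diverting informational ones to a separate queue before they ever become technician tickets. The severity threshold, alert types, and field names are illustrative and not tied to a specific RMM.

```python
# Step 1 "Eliminate" sketch: drop or divert noisy system alerts before they
# become tickets. Severity levels, alert types, and field names are illustrative.

INFO_ONLY_TYPES = {"backup_completed", "patch_installed", "reboot_notice"}
SEVERITY_THRESHOLD = 3   # 1 = info ... 5 = critical; only 3+ becomes a ticket

def triage_alert(alert: dict) -> str:
    """Return the destination for an alert: 'drop', 'info_queue', or 'ticket'."""
    if alert.get("type") in INFO_ONLY_TYPES:
        return "info_queue"           # visible for audit, never paged
    if alert.get("severity", 0) < SEVERITY_THRESHOLD:
        return "drop"                 # below threshold: suppress entirely
    return "ticket"                   # real incident: open a ticket

if __name__ == "__main__":
    alerts = [
        {"id": 1, "type": "backup_completed", "severity": 1},
        {"id": 2, "type": "disk_failure_predicted", "severity": 4},
        {"id": 3, "type": "cpu_spike", "severity": 2},
    ]
    for a in alerts:
        print(a["id"], "->", triage_alert(a))
```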

How can MSPs make sure their AI solutions understand client environment data — not just give generic chatbot answers?

Context is everything in support. MSPs need to ensure their AI solutions understand specific client environments.

MCP: Model Context Protocol provides secure context management across systems. Learn more at cat-mip.org

Context Layer Integration

| Context Layer | How AI Uses It |
|---|---|
| MCP for context management | Pulls device inventory, OS version, recent changes |
| Knowledge-base integration | Retrieves up-to-date SOPs; ranks answers by relevance |
| PSA/RMM data | Auto-fills ticket fields, suggests remediation steps |
| Continuous learning | Analyzes resolved tickets to refine intent models |
  • 30% higher FCR than generic chatbots when context integration is in place
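To make the context-layer idea concrete, here is a minimal sketch of enriching a ticket with device and user data before the AI ever sees it. The lookup functions stand in for real RMM/PSA or MCP calls and are purely illustrative.

```python
# Context-enrichment sketch: attach client-specific device and user data to a
# ticket before passing it to an AI assistant, so answers are grounded in the
# actual environment rather than generic. Lookup functions are stand-ins for
# real RMM/PSA or MCP tool calls.

def get_device_record(hostname: str) -> dict:
    # Placeholder for an RMM/MCP lookup (inventory, OS version, recent changes)
    return {"hostname": hostname, "os": "Windows 11 23H2", "last_patch": "2025-01-14"}

def get_user_record(email: str) -> dict:
    # Placeholder for an AD/SSO lookup (department, licenses, MFA status)
    return {"email": email, "department": "Finance", "mfa_enrolled": True}

def enrich_ticket(ticket: dict) -> dict:
    """Merge environment context into the ticket so the model can use it."""
    context = {
        "device": get_device_record(ticket["hostname"]),
        "user": get_user_record(ticket["requester"]),
    }
    return {**ticket, "context": context}

if __name__ == "__main__":
    ticket = {"id": 202, "requester": "jane@example.com",
              "hostname": "FIN-LT-042", "subject": "Excel keeps crashing"}
    print(enrich_ticket(ticket))
```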

Quick-Start Implementation Checklist

  • Export last 90 days of tickets to CSV
  • Tag tickets: Eliminate / Improve / Automate
  • Disable low-value alerts, create "Info-only" queue
  • Standardize onboarding builds per department
  • Publish "Birthright Access" list with auto-grant via API
  • Connect AD/SSO to ticket view for context
  • Build first bot: Password reset automation
  • Build RPA script: Auto-close hardware-order tickets
  • Pilot on 1 client → measure FCR & CSAT
  • Add "Talk to an agent" button in bot flow
  • Deploy to all clients → monitor metrics weekly
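As a sketch of the first bot on the checklist, below is a hypothetical password-reset flow with the "Talk to an agent" escape hatch built in. The function names and identity-verification step are assumptions; a real build would sit behind your SSO/identity provider and MFA.

```python
# Password-reset bot sketch with a human-escalation path. Identity verification
# and the reset call are placeholders; in production these would be backed by
# your identity provider (AD / Entra ID / Okta) and proper MFA.

def verify_identity(user_id: str, mfa_code: str) -> bool:
    # Placeholder: check an MFA code against the identity provider
    return mfa_code == "123456"

def reset_password(user_id: str) -> str:
    # Placeholder: trigger a reset through the identity provider's API
    return f"Temporary password issued for {user_id}; change it on next login."

def handle_request(user_id: str, message: str, mfa_code: str | None = None) -> str:
    text = message.lower()
    if "agent" in text or "human" in text:
        return "Escalating to a technician. A human will reply shortly."   # fallback always available
    if "password" in text:
        if mfa_code and verify_identity(user_id, mfa_code):
            return reset_password(user_id)
        return "Please confirm the MFA code from your authenticator app, or type 'agent' to reach a human."
    return "I can help with password resets. Type 'agent' to reach a technician."

if __name__ == "__main__":
    print(handle_request("jdoe", "I forgot my password"))
    print(handle_request("jdoe", "I forgot my password", mfa_code="123456"))
    print(handle_request("jdoe", "talk to an agent"))
```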

How do you see AI not just writing SOPs, but continuously keeping them current as systems and processes change?

MSP documentation is notoriously out-of-date. AI provides solutions for both generating and maintaining current documentation.

  • Document understanding vs document intelligence
  • Automated SOP generation from observed actions
  • Change detection and update triggers
  • Version control and drift monitoring
  • Procedure validation and testing
  • Living documentation concept
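One simple way to approach the "living documentation" idea is to fingerprint the configuration a SOP was written against and flag the SOP whenever that fingerprint changes. The sketch below is a generic illustration; the config snapshot and SOP store would come from your RMM and documentation platform.

```python
# SOP drift-detection sketch: store a hash of the configuration state each SOP
# was written against, and flag the SOP for review/regeneration when the live
# state no longer matches. Data sources are placeholders.

import hashlib
import json

def fingerprint(config: dict) -> str:
    """Stable hash of a configuration snapshot."""
    return hashlib.sha256(json.dumps(config, sort_keys=True).encode()).hexdigest()

def sop_has_drifted(sop: dict, live_config: dict) -> bool:
    """Return True when the environment has changed since the SOP was written."""
    return sop["config_fingerprint"] != fingerprint(live_config)

if __name__ == "__main__":
    baseline = {"m365_tenant": "contoso", "mfa_policy": "required", "vpn": "GlobalProtect 6.2"}
    sop = {"title": "New-hire onboarding", "config_fingerprint": fingerprint(baseline)}

    live = dict(baseline, vpn="GlobalProtect 6.3")   # VPN client was upgraded
    if sop_has_drifted(sop, live):
        print(f"SOP '{sop['title']}' is stale -> queue for AI regeneration and human review")
```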
  • 25% fewer errors with AI-generated SOPs
  • 20% productivity boost for new technicians during ramp-up
  • 35% onboarding time reduction: 4-6 weeks down to 2 weeks

What's the smartest way to use AI to onboard new techs and flatten that steep learning curve?

New technicians face a steep learning curve across dozens of client environments. AI can dramatically accelerate onboarding:

  • Interactive training agents simulate environments
  • Client environment summaries auto-generated
  • Context-aware guidance systems
  • Progressive knowledge delivery
  • Simulated troubleshooting scenarios

How should MSPs deploy AI in monitoring so they get fewer, better alerts — and maybe even proactive resolutions?

AI can generate insights, but it can also generate alert fatigue. The key is strategic deployment for signal vs noise.

Alert Fatigue Solution: AI-enabled clustering and correlation reduces alerts by 50-70% while improving accuracy.
  • Signal vs noise filtering
  • Pattern recognition for true incidents
  • Predictive maintenance timing
  • Automated remediation workflows
  • Alert correlation and root cause analysis
  • Proactive vs reactive service positioning
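A minimal sketch of the correlation idea: group alerts for the same device and alert type within a short time window so technicians see one incident instead of a flood of duplicates. The 15-minute window and field names are illustrative choices.

```python
# Alert-correlation sketch: collapse duplicate alerts for the same device and
# alert type that arrive within a time window into a single incident.
# Field names and the 15-minute window are illustrative.

from datetime import datetime, timedelta

WINDOW = timedelta(minutes=15)

def correlate(alerts: list[dict]) -> list[dict]:
    incidents = []
    open_by_key: dict[tuple, dict] = {}
    for alert in sorted(alerts, key=lambda a: a["time"]):
        key = (alert["device"], alert["type"])
        incident = open_by_key.get(key)
        if incident and alert["time"] - incident["last_seen"] <= WINDOW:
            incident["count"] += 1                 # duplicate: fold into open incident
            incident["last_seen"] = alert["time"]
        else:
            incident = {"device": alert["device"], "type": alert["type"],
                        "first_seen": alert["time"], "last_seen": alert["time"], "count": 1}
            open_by_key[key] = incident
            incidents.append(incident)
    return incidents

if __name__ == "__main__":
    now = datetime(2025, 1, 20, 9, 0)
    raw = [{"device": "SRV01", "type": "disk_latency", "time": now + timedelta(minutes=i)}
           for i in range(10)]                     # ten duplicates in ten minutes
    raw.append({"device": "SRV02", "type": "cpu_spike", "time": now})
    print(f"{len(raw)} raw alerts -> {len(correlate(raw))} incidents")
```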
  • 50-70% alert reduction: fewer alerts reaching technicians
  • 85% prediction accuracy for hardware failure forecasting
  • 30% of incidents auto-resolved without human touch

Topic 2: Implementation Metrics & Real ROI

Beyond the hype, here are the real metrics MSPs are achieving with AI implementation. These numbers come from MSP-focused case studies and operational data:

  • 20-35% OPEX reduction: labor cost reduction across operations
  • 40-60% L1 auto-resolution: tickets resolved without human intervention
  • 30% faster processing: overall ticket handling speed improvement

L1 Support Automation Performance

Conversational AI and automated L1 support deliver measurable improvements across multiple dimensions:

L1 Automation Impact Metrics

| Metric | Improvement | Implementation Note |
|---|---|---|
| AI-Handled Tickets | ≥45% automation rate | Target for full production deployment after pilot |
| L1 Labor Cost | 40% reduction | Cost per L1 ticket drops significantly with automation |
| Customer Satisfaction | Maintain ≥4.5/5 CSAT | AI must maintain or improve CSAT vs human-only support |
| Resolution Time | 4 hours → 1 hour (75% reduction) | Average time to resolve common L1 issues drops dramatically |
| Ticket Volume Drop | 45% reduction in 3 months | Combination of elimination and automation effects |
Cost Structure: Initial fine-tuning $5-10k, ongoing $0.02-$0.05 per request. Pilot Q1 2025, full rollout Q2-Q3 2025.
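Using the cost figures above, a back-of-the-envelope check of ongoing AI cost; the monthly ticket volume and the one-request-per-ticket assumption are illustrative only.

```python
# Back-of-the-envelope L1 automation cost check using the figures quoted above.
# Monthly ticket volume and "one model request per ticket" are assumptions.

fine_tuning_one_time = 7_500        # midpoint of the $5-10k range above
cost_per_request = 0.035            # midpoint of $0.02-$0.05 per request
tickets_per_month = 2_000           # assumed MSP-wide L1 ticket volume
automation_rate = 0.45              # >=45% target from the table above

ai_handled = tickets_per_month * automation_rate
monthly_ai_cost = ai_handled * cost_per_request   # assumes one request per ticket
print(f"AI-handled tickets/month: {ai_handled:.0f}")
print(f"Ongoing AI cost/month:    ${monthly_ai_cost:,.2f}")
print(f"First-year AI cost:       ${fine_tuning_one_time + 12 * monthly_ai_cost:,.2f}")
```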

Process Automation & Efficiency Gains

Process automation across onboarding, ticketing, and reporting delivers consistent efficiency improvements:

Process Automation Performance Metrics

| Process Area | Improvement Achieved | Timeline |
|---|---|---|
| Ticket Handling Speed | ↓30% processing time | Q2-Q3 2025 |
| First Contact Resolution (FCR) | ↑20% improvement | Q2-Q3 2025 |
| Labor Cost Per Ticket | ↓25% cost reduction | Q3-Q4 2025 |
| Overall Labor OPEX | 20-35% reduction | Q1-Q4 2025 phased rollout |
| Ticket Volume | ≤20 tickets/day target | After full implementation |
| Time-to-First-Touch | ≤5 minutes average | Q2 2025 |
| Automation Success Rate | ≥85% successful execution | Target for production workflows |
Investment: Initial setup $10-20k, ongoing $1-2k/month maintenance. Q1-Q4 2025 rollout timeline.

Onboarding & Documentation Improvements

AI-generated SOPs and interactive training accelerate new technician productivity:

  • 35% faster onboarding: 4-6 weeks reduced to 2 weeks
  • 25% fewer errors with AI-generated SOPs
  • 20% productivity boost in new-tech ramp-up speed

  • Interactive training agents simulate client environments
  • Auto-generated environment summaries for each client
  • Context-aware guidance during actual troubleshooting
  • Progressive knowledge delivery based on tech experience
  • Real-time SOP updates as processes change

Comprehensive ROI Analysis

Putting it all together: the complete financial picture for mid-size MSP AI implementation:

Complete AI Implementation ROI (Mid-Size MSP)

| Financial Metric | Value | Notes |
|---|---|---|
| First-Year Investment | $50,000 - $150,000 | Platform setup, agent development, integrations, pilots |
| Annual Maintenance | $70,000/year | Licensing, API usage, continuous improvement |
| First-Year ROI | >300% | Return on initial investment within 12 months |
| Payback Period | ≤6 months | Time to recover initial investment from savings |
| Net Profit Growth | ↑20% YoY | Year-over-year profit increase from efficiency gains |
| Ticket Resolution Time | ↓30% average | Faster resolution across all ticket types |
Bottom Line: MSPs investing $50-150k see >300% ROI in Year 1 with ≤6 month payback, achieving 20-35% labor cost reduction and 20% profit growth.
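A hedged worked example of the payback math, plugging in the midpoint of the investment range above; the monthly gross savings figure is an assumption chosen to be consistent with the stated ≤6-month payback, not an independently sourced number.

```python
# Payback / ROI sketch using midpoints from the table above. Monthly gross
# savings is an assumption; substitute your own labor and efficiency numbers.

first_year_investment = 100_000     # midpoint of the $50k-$150k range
annual_maintenance = 70_000         # from the table above
monthly_gross_savings = 40_000      # assumption: labor + efficiency savings

payback_months = first_year_investment / monthly_gross_savings
annual_savings = 12 * monthly_gross_savings
net_first_year = annual_savings - first_year_investment - annual_maintenance
roi_pct = 100 * net_first_year / first_year_investment

print(f"Payback period:  {payback_months:.1f} months")
print(f"First-year net:  ${net_first_year:,.0f}")
print(f"First-year ROI:  {roi_pct:.0f}%")
```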

Topic 3: Practical Implementation & MCP Integration

If you had to recommend one AI agent every MSP should deploy this year, what would it be — and why?

Getting started with AI agents requires choosing the right first project for maximum ROI and learning value.

Recommended First Agent: Ticket triage and routing agent - highest ROI, fastest implementation, immediate impact.
  • Ticket triage and routing agent (RECOMMENDED)
  • Documentation assistant for SOPs
  • Monitoring analysis agent for alerts
  • Client communication assistant

What new opportunities do MCP servers unlock for MSPs that weren't possible before?

You've been a big proponent of MCP servers for orchestrating AI across systems. Model Context Protocol unlocks significant new capabilities:

  • Context aggregation across multiple systems
  • Modular and reusable integrations
  • Custom controls and security boundaries
  • Private data leverage without exposure
  • Multi-agent coordination and orchestration
  • Alternative to converting every API
  • Fine-grained access control (RBAC)
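For orientation, here is roughly what a small MSP-focused MCP server can look like using the FastMCP helper from the official Python MCP SDK (package `mcp`). The tool names and inventory data are hypothetical, and the exact decorator/API surface should be verified against the current SDK documentation.

```python
# Minimal MCP server sketch exposing MSP context as tools, based on the
# FastMCP helper in the official Python MCP SDK ("pip install mcp").
# Tool names and data are hypothetical; verify the API against current docs.

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("msp-context")

# Stand-in for a real RMM/PSA lookup; in production this would query your stack.
_INVENTORY = {
    "acme-corp": [{"hostname": "FIN-LT-042", "os": "Windows 11 23H2", "last_patch": "2025-01-14"}],
}

@mcp.tool()
def get_device_inventory(client_id: str) -> list[dict]:
    """Return device inventory for a client, including OS version and patch state."""
    return _INVENTORY.get(client_id, [])

@mcp.tool()
def get_recent_changes(client_id: str, days: int = 7) -> list[str]:
    """Return recent change-log entries for a client environment."""
    return [f"{client_id}: VPN client upgraded to 6.3", f"{client_id}: MFA policy enforced"]

if __name__ == "__main__":
    mcp.run()   # stdio transport by default; remote transports are also supported
```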

MCP Quick-Start Checklist

MCP Implementation Verification

| Item | Verification |
|---|---|
| MCP server reachable | JSON list returned from /resource://knowledgebases |
| Model can invoke tool | Prompt triggers QueryKnowledgeBases with valid response |
| IAM role & SigV4 auth | aws sts get-caller-identity shows expected role |
| TLS & logging | HTTPS endpoint returns spec; CloudWatch logs show session_id |
| Metrics visible | Alarms set for mcp.requests, mcp.errors, latency |

MCP Best Practices & Implementation Guide

Model Context Protocol (MCP) represents a fundamental shift in how AI systems integrate with enterprise tools and data. Understanding best practices is critical for successful implementation.

MCP Core Value: Context is king. MCP provides modular, reusable integrations with custom controls and security, enabling AI to leverage private data without exposure.

Understanding MCP Value & Reputation

MCP has evolved significantly, and it's important to understand both its history and current value:

MCP Evolution: Reputation vs Reality

| Aspect | The Challenges | The Real Value |
|---|---|---|
| Early Reputation | Unstable early releases, especially local servers; basic specification raised questions about necessity | Latest evolution: remote, cloud-hosted MCP servers provide stability and scale |
| Context Management | Generic APIs lack environmental awareness and client-specific context | Context is king: aggregates data across multiple systems with client awareness |
| Integration Approach | Point-to-point integrations create maintenance burden and duplication | Modularity: reusable, plug-and-play integrations reduce development time |
| Security & Control | Direct API access lacks fine-grained permissions and audit trails | Custom controls: security boundaries, RBAC, and private data leverage without exposure |

Critical MCP Implementation Guidelines

Key MCP Don'ts: Avoid these common mistakes that reduce MCP effectiveness and create technical debt.

MCP Implementation: Do's and Don'ts

| Don't Do This | Why It Fails | Do This Instead |
|---|---|---|
| Convert every API to an MCP server | Creates an unnecessary abstraction layer, adds latency, increases complexity | Use MCP for context aggregation and orchestration, not simple API wrappers |
| Overload MCP servers with too many tools | Context bloat reduces model performance, increases costs, degrades response quality | Create focused MCP servers with 5-10 related tools max; use multiple servers for different domains |
| Skip RBAC and access controls | Security vulnerabilities, compliance failures, unauthorized data access | Implement fine-grained RBAC from day one; use IAM roles and security boundaries |
| Deploy without monitoring/logging | No visibility into usage, errors, or performance; difficult to debug issues | Enable CloudWatch logs with session_id tracking; set up alarms for errors and latency |

Agent Infrastructure Stack Components

Building a production-ready AI agent infrastructure requires multiple layers of technology:

  • AI Gateways and Proxies: Route requests, manage quotas, provide fallbacks
  • MCP Gateways and Portals: Centralized access to MCP servers with authentication
  • Observability: Real-time monitoring, usage analytics, performance tracking
  • Guardrails: Content filtering, prompt injection detection, output validation
  • Logging and Analytics: Session tracking, audit trails, compliance reporting
  • Containers and Sandboxes: Isolated execution environments for agent actions
  • Actions and Workflows: Orchestration layer for multi-step agent operations
Architecture Tip: Start with AI Gateway + MCP + Observability, then add Guardrails and Sandboxes as you scale.
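As a hedged illustration of the first two layers (gateway routing with fallback, plus basic observability), here is a minimal sketch. The model names and the `call_model` stub are placeholders; a real gateway would front your actual LLM providers and emit metrics to your monitoring stack.

```python
# AI-gateway sketch: route requests to a primary model, fall back on failure,
# and log latency per request. call_model() is a placeholder for real provider
# SDK calls; model names are illustrative.

import logging
import time

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")
log = logging.getLogger("ai-gateway")

MODEL_CHAIN = ["primary-large-model", "fallback-small-model"]

def call_model(model: str, prompt: str) -> str:
    # Placeholder for a real provider call; raise to simulate an outage.
    if model == "primary-large-model":
        raise TimeoutError("primary provider timed out")
    return f"[{model}] answer to: {prompt}"

def gateway(prompt: str, session_id: str) -> str:
    for model in MODEL_CHAIN:
        start = time.perf_counter()
        try:
            answer = call_model(model, prompt)
            log.info("session=%s model=%s latency_ms=%.0f status=ok",
                     session_id, model, 1000 * (time.perf_counter() - start))
            return answer
        except Exception as exc:
            log.warning("session=%s model=%s status=error err=%s", session_id, model, exc)
    raise RuntimeError("all models in the chain failed")

if __name__ == "__main__":
    print(gateway("Summarize open P1 tickets", session_id="demo-001"))
```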

Modes of Agent Use in MSP Operations

AI agents can be deployed in multiple modes depending on use case and integration requirements:

Agent Deployment Modes

| Mode | Use Case | Example |
|---|---|---|
| Chatbot | Interactive Q&A, support automation | L1 support bot for password resets and common requests |
| Computer Use | GUI automation, legacy system interaction | Automating tasks in older management consoles without APIs |
| Browser Use | Web-based workflows, data extraction | Automated client portal updates, web-based reporting |
| CLI (Command Line) | DevOps automation, infrastructure tasks | Claude Code, Gemini CLI for automated deployments |
| Agents as MCP | Agent-to-agent communication | Specialized agents exposing capabilities via the MCP protocol |
| Vibe Coding Platforms | Low-code/no-code AI development | Rapid prototyping of custom AI workflows for specific clients |

MCP Cost & Operations Considerations

Managing costs and operations effectively is critical for sustainable AI deployment:

  • Time-based usage billing: Track API calls and context usage per client
  • Controlling context: Limit context window size to reduce token costs
  • Scheduled jobs and batching: Run non-urgent tasks during off-peak hours
  • Caching strategies: Cache common responses to reduce API calls
  • Model selection: Use appropriate model sizes for different tasks
  • Request throttling: Implement rate limits to prevent cost spikes
Cost Management: Monitor token usage per client, implement caching, use scheduled batching for non-urgent tasks, and select appropriate model tiers.
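A minimal illustration of two of the levers above, response caching and per-client request throttling. The TTL, rate limit, and the `ask_model` stub are assumptions, not a specific platform's API.

```python
# Cost-control sketch: cache identical requests and throttle per-client call
# rates. TTL, rate limit, and the model call are illustrative placeholders.

import time

CACHE_TTL = 3600            # seconds to reuse an identical answer
RATE_LIMIT_PER_MIN = 30     # max model calls per client per minute

_cache: dict[str, tuple[float, str]] = {}
_calls: dict[str, list[float]] = {}

def ask_model(prompt: str) -> str:
    return f"model answer to: {prompt}"     # placeholder for a real API call

def cached_throttled_ask(client_id: str, prompt: str) -> str:
    now = time.time()
    # Throttle: keep only the last 60 seconds of calls, then enforce the cap.
    window = [t for t in _calls.get(client_id, []) if now - t < 60]
    if len(window) >= RATE_LIMIT_PER_MIN:
        raise RuntimeError(f"rate limit reached for {client_id}")
    # Cache: reuse a recent identical answer instead of paying for a new call.
    key = f"{client_id}:{prompt}"
    if key in _cache and now - _cache[key][0] < CACHE_TTL:
        return _cache[key][1]
    answer = ask_model(prompt)
    _cache[key] = (now, answer)
    _calls[client_id] = window + [now]
    return answer

if __name__ == "__main__":
    print(cached_throttled_ask("acme-corp", "How do I map a network drive?"))
    print(cached_throttled_ask("acme-corp", "How do I map a network drive?"))  # served from cache
```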

Avoiding Common Pitfalls

What's the most common mistake MSPs make on their first AI project — and how do they avoid "pilot purgatory"?

Pilot Purgatory: Projects without clear success metrics and deployment timelines often stall indefinitely.
  • Scope creep and perfectionism delays
  • Lack of clear success metrics
  • Over-complicated first projects
  • Insufficient stakeholder buy-in
  • No concrete deployment timeline
  • Converting APIs to MCP unnecessarily
  • Overloading context with too many tools

How should MSPs think about guardrails and governance when deploying AI across multiple client environments?

Security and compliance are top concerns. Here's how to implement proper guardrails:

  • Data residency and privacy requirements mapped
  • Client data segregation enforced
  • Access control frameworks implemented
  • Audit logging and monitoring enabled
  • Compliance mapping (GDPR, CCPA) completed
  • Hallucination detection mechanisms in place
  • Escalation triggers and human oversight defined
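As one concrete example of a guardrail, the sketch below redacts obvious PII before a prompt leaves the client boundary and forces human escalation on sensitive topics or low-confidence answers. The regex patterns, topic list, and confidence threshold are illustrative and would not constitute a compliance control on their own.

```python
# Guardrail sketch: redact obvious PII before prompts leave the client boundary
# and force human escalation for sensitive topics or low-confidence answers.
# Patterns and thresholds are illustrative, not a compliance control.

import re

PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}
SENSITIVE_TOPICS = ("terminate employee", "legal hold", "breach notification")
CONFIDENCE_FLOOR = 0.7

def redact(text: str) -> str:
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label} redacted]", text)
    return text

def needs_human(prompt: str, model_confidence: float) -> bool:
    lowered = prompt.lower()
    return model_confidence < CONFIDENCE_FLOOR or any(t in lowered for t in SENSITIVE_TOPICS)

if __name__ == "__main__":
    prompt = "Reset MFA for jane.doe@example.com, SSN 123-45-6789"
    print(redact(prompt))
    print("escalate:", needs_human(prompt, model_confidence=0.62))
```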

TL;DR - Key Takeaways

Implementation Summary: Follow the structured approach below to evolve from reactive to proactive AI-augmented operations.
  • Start with noise elimination → raise alert thresholds, create info-only queues
  • Improve processes → standard onboarding, birthright access, AD context injection
  • Automate only after validation → confirm the task is repetitive, requires no judgment, is stable, and is data-ready
  • Use MCP as secure translator → between LLMs and internal data/tools, scale with containers and IAM
  • Always provide human fallback → bots without escalation paths hurt CSAT/NPS
Next Steps: Follow the checklists and metric targets to evolve your service desk into a proactive, AI-augmented operation that resolves most tickets automatically and frees analysts for high-value work.