Agency Performance Review Framework: AI-Powered Evaluation & ROI Assessment
Audience: Marketing Leadership · Level: Intermediate
Claude 3.5 Sonnet recommended. It excels at structured analysis, can handle complex frameworks with multiple sections, and produces clear, actionable recommendations. GPT-4o is a strong alternative if you need faster processing or prefer OpenAI's interface, but Claude's reasoning is superior for this type of strategic assessment.
When to Use This Prompt
Use this prompt when you need to evaluate whether an agency partnership is delivering value and where AI can reduce friction without disrupting the relationship. It's especially useful when operational debt is slowing campaign velocity or when you're considering bringing work in-house vs. optimizing the agency workflow.
The Prompt
You are a senior marketing operations strategist helping a CMO evaluate agency performance and identify where AI can unlock ROI. Use this framework to conduct a comprehensive review.
## Agency Performance Review Framework
### Part 1: Current State Assessment
Review the agency relationship across these dimensions:
- **Workflow Efficiency**: Where is time leaking? Identify coordination overhead, approval cycles, and rework loops.
- **Output Quality**: Are deliverables meeting brand standards? How much internal rework is required?
- **Strategic Alignment**: Does the agency understand your business goals and revenue drivers?
- **Tool & Process Maturity**: What systems are in place? Where is manual work happening that could be automated?
### Part 2: Operational Debt Audit
For each major workflow (e.g., campaign creation, asset production, reporting), assess:
1. How many handoffs occur between agency and internal team?
2. What approvals or reviews slow down delivery?
3. How much time is spent on coordination vs. strategy?
4. Where do errors or rework happen most frequently?
### Part 3: AI Opportunity Mapping
For the top 3 high-friction workflows identified above, evaluate:
- **Time Impact**: How many hours per month leak here?
- **Revenue Connection**: Does this workflow directly affect pipeline, conversion, or retention?
- **AI Readiness**: Can this workflow be partially or fully automated with current AI tools?
- **Implementation Complexity**: What governance, data, or brand risks exist?
### Part 4: ROI Projection & Recommendation
For each opportunity, provide:
- **Baseline Metrics**: Current cycle time, cost, quality baseline
- **AI Intervention**: Specific tool/approach (e.g., generative brief creation, asset variation, performance analysis)
- **Expected Lift**: Time savings (%), quality improvement (%), cost reduction (%)
- **Implementation Path**: Quick win vs. systemic change; timeline; resource needs
- **Success Criteria**: How you'll measure ROI (speed, cost, output quality, pipeline impact)
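The ROI projection in Part 4 reduces to simple arithmetic once baseline metrics are in hand. A minimal sketch of that calculation follows; the function name, figures, and the flat blended-rate assumption are all illustrative, not part of the framework:

```python
# Illustrative ROI projection for a single workflow (hypothetical figures).
def project_roi(hours_leaked_per_month, blended_hourly_rate,
                expected_time_savings_pct, implementation_cost):
    """Return (monthly savings in dollars, months to break even)."""
    monthly_savings = (hours_leaked_per_month * blended_hourly_rate
                       * expected_time_savings_pct)
    payback_months = (implementation_cost / monthly_savings
                      if monthly_savings else float("inf"))
    return monthly_savings, payback_months

# Example: 40 hrs/month leaking, $150/hr blended rate, 50% expected
# time savings, $6,000 one-time implementation cost.
savings, payback = project_roi(40, 150, 0.50, 6000)
print(f"${savings:,.0f}/month saved, break-even in {payback:.1f} months")
# prints "$3,000/month saved, break-even in 2.0 months"
```

Swapping in your own cycle-time and cost inputs (per tip 2 below) turns the prompt's "Expected Lift" percentages into a concrete payback estimate.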
## Your Input
Agency Name: [AGENCY NAME]
Services: [SERVICES PROVIDED: e.g., content, design, paid media, strategy]
Annual Spend: [ANNUAL SPEND]
Key Pain Points: [LIST 3-5 CURRENT FRUSTRATIONS]
Business Priority: [PRIMARY GOAL: e.g., pipeline growth, brand awareness, customer retention]
Provide a structured review with clear recommendations on where to embed AI to reduce operational debt and prove ROI within 90 days.
Get the Full AI Marketing Learning Path
Courses, workshops, frameworks, daily intelligence, and 6 proprietary tools — built for marketing leaders adopting AI.
Trusted by 10,000+ Directors and CMOs.
Tips for Best Results
1. Customize the 'Key Pain Points' section with specific examples (e.g., 'Asset revisions take 5+ rounds' or 'Weekly reporting consumes 8 hours'). Concrete details produce more targeted recommendations.
2. Include actual cycle-time data and cost metrics in your input. The prompt uses these to calculate ROI projections; vague inputs yield vague outputs.
3. Run this prompt quarterly to track whether AI interventions are actually reducing operational debt. Use previous reviews as a baseline to show progress.
4. After getting recommendations, ask the follow-up: 'Which of these 3 opportunities should we pilot first based on implementation risk vs. ROI?' to prioritize execution.
Example Output
## Agency Performance Review: [Agency Name]
### Current State Summary
The agency is strong on creative execution, but operational overhead is high. Approval cycles average 5-7 days per asset, and 30% of deliverables require rework due to brief misalignment. Coordination between agency and internal team consumes ~120 hours/month.
### Top 3 High-Friction Workflows
**1. Campaign Brief to First Draft (Highest Impact)**
- Current cycle time: 10-12 days
- Handoffs: 4 (brief → creative → internal review → revisions → approval)
- Time leaking: 40 hours/month in coordination and rework
- Revenue connection: Direct—delays push campaigns 1-2 weeks, impacting pipeline
**2. Asset Variation & Localization**
- Current process: Manual requests, 3-5 day turnaround per variation
- Volume: 15-20 variations/month across channels
- Time leaking: 35 hours/month
- Revenue connection: Medium—faster variations improve campaign performance but not critical path
**3. Campaign Performance Reporting**
- Current process: Agency pulls data, internal team reformats for stakeholders
- Frequency: Weekly, takes 8 hours/week
- Time leaking: 32 hours/month
- Revenue connection: Low—reporting is hygiene, not strategic
### AI Opportunity Ranking
**Priority 1: Automated Brief Generation & Variation (90-Day Quick Win)**
- Intervention: AI-powered brief template that auto-generates creative direction, messaging pillars, and asset specs from campaign objective
- Expected lift: 50% reduction in brief cycle time (5 days → 2-3 days); 25% fewer revisions
- Implementation: Low complexity; uses agency's existing brief data
- ROI: 20 hours/month saved; faster campaign launch = 2-3 additional campaigns/quarter
**Priority 2: Intelligent Asset Variation Engine**
- Intervention: AI generates copy variations and design mockups from approved master; agency reviews and refines
- Expected lift: 60% faster turnaround on variations; 3x volume capacity
- Implementation: Medium complexity; requires brand asset library and approval workflow
- ROI: 20 hours/month saved; enables more A/B testing without additional cost
**Priority 3: Automated Reporting Dashboard**
- Intervention: AI pulls data, structures narrative, flags anomalies
- Expected lift: 90% automation of routine reporting
- Implementation: Low complexity; data integration only
- ROI: 30 hours/month saved; frees team for strategic analysis
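Totaling the projected savings across the three priorities gives a quick sanity check against the coordination baseline from the current-state summary. A small sketch, using the hypothetical figures from this example output:

```python
# Hypothetical monthly savings from the three ranked priorities above.
hours_saved = {
    "brief_generation": 20,   # Priority 1
    "asset_variation": 20,    # Priority 2
    "reporting": 30,          # Priority 3
}
coordination_baseline = 120  # hrs/month, from the current-state summary

total = sum(hours_saved.values())
pct = total / coordination_baseline * 100
print(f"{total} hrs/month reclaimed ({pct:.0f}% of coordination overhead)")
# prints "70 hrs/month reclaimed (58% of coordination overhead)"
```

If the reclaimed total is a small fraction of the baseline, the ranking is probably missing a higher-friction workflow and is worth revisiting.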
### 90-Day Implementation Roadmap
- **Weeks 1-2**: Pilot AI brief generator with 2 campaigns; measure cycle time and revision count
- **Weeks 3-6**: Roll out to all campaigns; refine prompts based on learnings
- **Weeks 7-12**: Launch asset variation tool; integrate into agency workflow
- **Success metric**: Reduce campaign cycle time by 40%; maintain or improve quality scores
