As a Scrum Master and Agile Coach working with multiple teams across different Agile Release Trains, I used to spend 90 minutes per team wrestling with JIRA sprint data—copy-pasting into Excel, calculating commitment percentages, and formatting retrospective decks. Now? 15 minutes. The secret is a structured AI prompt that does the heavy lifting while I focus on coaching.

The Problem: JIRA Reports Don't Answer the Questions Teams Actually Ask

JIRA's sprint report gives you raw data: story points, statuses, and timestamps. But during retrospectives, teams need answers to questions like:

- Did we deliver what we committed to at the start of the sprint?
- What's our real velocity once cancelled work is stripped out?
- How much scope churn did we have mid-sprint, and did the added work actually get done?
Manual calculation is tedious, error-prone, and worst of all—inconsistent across teams. One Scrum Master counts cancelled items, another doesn't. One uses re-estimated points, another uses originals. The result? You can't compare squad performance or identify patterns across your organization.

The Solution: A 30-Second Copy-Paste Workflow

Step 1: Extract the Data

Go to your JIRA Sprint Report, select all the text under the burndown graph, and paste it into a file named after your sprint (e.g., sprint-32-data.md).
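
If copy-pasting from the UI feels brittle, the same raw data can be pulled programmatically. Here is a rough sketch against the Jira Cloud Agile REST API; the site URL, sprint ID, API token, and the `customfield_10016` story-points field ID are placeholders you'd swap for your own (the story-points field ID in particular varies by site):

```python
import base64
import json
import urllib.request

def issue_line(issue, points_field="customfield_10016"):
    """Format one Jira issue as a pipe-delimited line for the sprint data file."""
    fields = issue["fields"]
    return f"{issue['key']} | {fields['status']['name']} | {fields.get(points_field)}"

def fetch_sprint_issues(site, sprint_id, auth_b64):
    """Fetch the first page (up to 50 issues) of a sprint's issues.

    For larger sprints, paginate with the startAt query parameter.
    """
    url = f"{site}/rest/agile/1.0/sprint/{sprint_id}/issue"
    req = urllib.request.Request(url, headers={"Authorization": f"Basic {auth_b64}"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["issues"]

if __name__ == "__main__":
    # Placeholders -- substitute your own site, sprint ID, email, and API token.
    auth = base64.b64encode(b"you@example.com:API_TOKEN").decode()
    issues = fetch_sprint_issues("https://your-domain.atlassian.net", 32, auth)
    with open("sprint-32-data.md", "w") as f:
        f.write("\n".join(issue_line(i) for i in issues))
```

The copy-paste route stays the path of least resistance, but the API version is handy if you want to automate the extraction step across all your squads at once.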

Step 2: Submit to an AI Agent

Attach the file to ChatGPT, Claude, or your agent of choice. Then paste the prompt below. The AI will:

- separate the original commitment from items added after the sprint started,
- apply one consistent set of rules (original estimates, DONE-only completion, CANCELLED excluded), and
- return a formatted report you can drop straight into your retrospective deck.
The Prompt

You can view the full prompt on GitHub Gist, or read it in full below:

JIRA Sprint Analysis Prompt
SPRINT ANALYSIS REQUEST

Please analyze the attached JIRA Sprint Report and provide a comprehensive sprint performance analysis. I need you to calculate and report on three key areas:

### 1. RESPECT OF COMMITMENT OF ORIGINAL SCOPE

**Critical Note**: Look for the legend "* Issue added to sprint after start time" or similar indicators. Items marked with asterisk (*) or noted as "added after start" should NOT be counted as part of the original commitment or original scope of the sprint.

**Story Point Calculation Rules:**
- **Use ORIGINAL estimates only**: If an item shows "6 → 9" points, use 6 (the first number before the arrow)
- **CANCELLED items**: Do NOT count as completed, regardless of story points
- **Only DONE status**: Count as completed for all calculations

Calculate:
- **Total Story Points in Initial Scope**: Sum of all original story points in sprint at start
- **Total Story Points on Initial Scope that are DONE**: Sum of story points from initial scope with DONE status only (using original estimates)
- **Respect of Commitment %**: (Total Story Points on Initial Scope that are DONE / Total Story Points in Initial Scope) × 100
- Assessment: Did the team meet, exceed, or fall short of their original commitment?

### 2. VELOCITY ANALYSIS

Calculate:
- **Total Story Points in Items that are DONE**: All completed work (initial + added scope, using original estimates)
- Number of issues completed vs started (excluding CANCELLED)
- Sprint duration in working days
- Daily velocity (points per day)
- Story completion rate percentage

### 3. SCOPE VARIATION ANALYSIS

Identify and calculate:
- **Scope Additions**: Items added mid-sprint (marked with * or similar)
- **Total Story Points on Added Scope**: Using original estimates
- **Total Story Points in Added Scope that are Completed**: Only DONE status items
- Count of added issues
- **Scope Removals**: Cancelled or removed items
- Count of cancelled issues
- Story points of cancelled work (using original estimates)
- **Net Scope Change**: (Added points - Removed points)
- **Scope Stability**: (Total Story Points in Initial Scope / Final scope) × 100

### REQUIRED OUTPUT FORMAT

Please structure your response as follows:

#### EXECUTIVE SUMMARY
- Sprint name and dates
- **Total Story Points in Initial Scope**
- **Total Story Points on Initial Scope that are DONE**
- **Respect of Commitment %**
- **Total Story Points in Items that are DONE**
- Overall performance rating (A, B, C, D, F)

#### DETAILED METRICS TABLE

| Metric | Value | Status (✅/⚠️/❌) |
|--------|-------|------------------|
| Total Story Points in Initial Scope | X points | - |
| Total Story Points on Initial Scope that are DONE | X points | |
| Respect of Commitment % | X% | |
| Total Story Points on Added Scope | X points | - |
| Total Story Points in Added Scope that are Completed | X points | |
| Total Story Points in Items that are DONE | X points | |
| Story Completion Rate | X% | |
| Net Scope Change | +/- X points | |

#### SCOPE CHANGE BREAKDOWN

List all mid-sprint additions and cancellations with their **original** story points.

**Mid-Sprint Additions (marked with *)**
- List each added item with original story points and completion status
- **Total Story Points on Added Scope**: X points
- **Total Story Points in Added Scope that are Completed**: X points

**Scope Removals**
- List cancelled items with original story points
- **Total Removed**: X points

#### COMMITMENT ANALYSIS

**Initial Scope Analysis**
- **Total Story Points in Initial Scope**: X points
- **Total Story Points on Initial Scope that are DONE**: X points
- **Respect of Commitment %**: X%
- Breakdown of incomplete initial scope items

**Added Scope Performance**
- **Total Story Points on Added Scope**: X points
- **Total Story Points in Added Scope that are Completed**: X points
- **Added Scope Completion Rate**: X%

**Final Delivery Summary**
- **Total Story Points in Items that are DONE**: X points
- Total Issues Completed: X out of X (excluding CANCELLED)
- Overall Completion Rate: X%

#### KEY FINDINGS & RECOMMENDATIONS
- What went well
- Areas for improvement
- Specific recommendations for next sprint

### ANALYSIS GUIDELINES

1. **Be Precise**: Use exact numbers from the report
2. **Use Original Estimates**: Always use the first number before "→" in story point changes
3. **Exclude CANCELLED**: Never count CANCELLED items as completed
4. **Distinguish Clearly**: Separate initial scope performance from added scope performance
5. **Identify Patterns**: Look for trends in cancelled vs completed work
6. **Consider Context**: Note any special circumstances mentioned
7. **Rate Performance**: Provide an honest assessment with letter grade

Please be thorough and accurate in your analysis. Focus on helping the team understand their predictability and scope management effectiveness.
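
One nice side effect of the prompt's explicit rules is that they're mechanical enough to spot-check. Here is a minimal Python sketch of the same calculations, using made-up items so you can verify the AI's arithmetic on your own data (the dict shape is my own invention; the "6 → 9" parsing and the added-after-start convention mirror the prompt above):

```python
def original_points(estimate):
    """Use the original estimate: for '6 → 9', take the number before the arrow."""
    return float(str(estimate).split("→")[0])

def sprint_metrics(items):
    """items: dicts with 'points', 'status', and 'added_after_start' keys."""
    initial = [i for i in items if not i["added_after_start"]]
    added = [i for i in items if i["added_after_start"]]

    initial_scope = sum(original_points(i["points"]) for i in initial)
    initial_done = sum(original_points(i["points"]) for i in initial if i["status"] == "DONE")
    total_done = sum(original_points(i["points"]) for i in items if i["status"] == "DONE")
    added_pts = sum(original_points(i["points"]) for i in added)
    removed = sum(original_points(i["points"]) for i in items if i["status"] == "CANCELLED")

    return {
        "initial_scope": initial_scope,
        "respect_of_commitment_pct": 100 * initial_done / initial_scope,
        "velocity_done": total_done,
        "net_scope_change": added_pts - removed,
    }

items = [
    {"points": "6 → 9", "status": "DONE", "added_after_start": False},
    {"points": 5, "status": "CANCELLED", "added_after_start": False},
    {"points": 8, "status": "IN PROGRESS", "added_after_start": False},
    {"points": 3, "status": "DONE", "added_after_start": True},  # marked with * in the report
]
print(sprint_metrics(items))
# initial scope = 6 + 5 + 8 = 19; initial DONE = 6, so commitment ≈ 31.6%
```

Running the numbers yourself once or twice builds trust in the AI's output, and gives you a quick way to catch the occasional arithmetic slip before it lands in a retrospective deck.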

Real-World Results: From 90 Minutes to 15 Minutes Per Team

I've used this prompt across six Scrum squads and two Agile Release Trains for over a year. Here's what changed:

⏱️ Time Savings

Sprint analysis dropped from roughly 90 minutes per team to about 15. Across six squads, that's around 7.5 hours reclaimed every sprint, time that goes back into actual coaching.

📊 Consistency Across Teams

The biggest win isn't speed—it's homogeneous analysis. Every team now gets:

- the same metric definitions (original estimates only, DONE-only completion, CANCELLED excluded),
- the same report structure and letter-grade assessment, and
- commitment and scope numbers that are directly comparable across squads.

When your Product Owner asks "Which team is struggling with scope creep?", you can answer confidently because the data is consistent.

🧠 Cognitive Load Reduction

Before, comparing 6 teams meant decoding 6 different spreadsheet formats. Now, every retrospective deck looks identical. Consistency reduces complexity, making it trivial to spot patterns (e.g., "Squad B always underestimates API integration work").

Why This Works

  1. Clear Rules: The prompt defines exactly what counts as "done" and eliminates interpretation errors.
  2. Structured Output: The AI generates a formatted report ready for retrospective slides and organizational sharing, maintaining consistent historical data across all teams in the train.
  3. Easy to Evolve: Add or adjust a metric in one place, and every team's next report picks up the change automatically.

This prompt has saved me hundreds of hours and made my retrospectives significantly more data-driven. If you try it, let me know how it goes—I'd love to hear what customizations you make for your teams.

📥 Download the full prompt from GitHub Gist