Overview
When you close an issue or epic, you can click Track Impact to receive an AI-synthesized assessment like: “This feature resolved 47 support tickets about inventory sync issues and was mentioned in 12 sales calls as a competitive differentiator. Customer escalations in this area dropped 65%.”
Instead of manually searching through support tickets, sales call notes, and incident logs, Impact Analysis does this automatically, giving you the data you need for quarterly reviews, stakeholder updates, and roadmap planning.
How It Works
1. Correlation Engine
When you trigger an impact analysis, Kasava:
- Extracts keywords and context from the issue title, description, and linked PRDs
- Generates semantic embeddings using Voyage AI
- Searches connected platforms for related data within the relevant time window
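Conceptually, this step works like an embedding similarity search bounded by the analysis time window. The sketch below is a simplified illustration under assumed names (`Record`, `correlate`, the hard-coded window and threshold); it is not Kasava's actual implementation, which uses Voyage AI embeddings against the connected platforms' data.

```python
# Simplified illustration of the correlation step. All names and thresholds
# here are hypothetical; embeddings are assumed to be precomputed.
from dataclasses import dataclass
from datetime import datetime, timedelta
import math

@dataclass
class Record:
    platform: str            # e.g. "zendesk", "gong", "pagerduty"
    text: str
    created_at: datetime
    embedding: list[float]   # semantic embedding of the record's text

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm if norm else 0.0

def correlate(issue_embedding: list[float], closed_at: datetime,
              records: list[Record], min_score: float = 0.5):
    """Keep records inside the time window, ranked by similarity to the issue."""
    start = closed_at - timedelta(days=90)   # look-back before close
    end = closed_at + timedelta(days=30)     # look-ahead after close
    scored = [(r, cosine(issue_embedding, r.embedding))
              for r in records if start <= r.created_at <= end]
    return sorted((m for m in scored if m[1] >= min_score),
                  key=lambda m: m[1], reverse=True)
```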
2. Platform Analysis
Data is correlated from multiple platforms:
| Platform | What’s Analyzed |
|---|---|
| Gong | Sales calls, discovery calls, deal discussions |
| Zendesk | Support tickets, resolutions, customer feedback |
| Intercom | Customer conversations, support threads |
| PagerDuty | Production incidents, service disruptions |
3. AI Synthesis
Claude analyzes all correlations and generates:
- Summary: A human-readable one-liner for reports
- Impact Factors: Explainable reasons why this work mattered
- Metrics: Quantified impact (tickets resolved, calls influenced, etc.)
- Evidence: Direct links to source data
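The synthesized result can be pictured as a small structured payload. The keys below are illustrative only and echo the example from the Overview; they are not Kasava's actual API schema.

```python
# Illustrative shape of a synthesized impact assessment; keys and URLs are
# hypothetical, not Kasava's actual schema.
impact_assessment = {
    "summary": "Resolved 47 support tickets about inventory sync and was "
               "mentioned in 12 sales calls as a competitive differentiator.",
    "impact_factors": [
        {
            "significance": "high",
            "title": "High Support Volume",
            "explanation": "47 Zendesk tickets referenced inventory sync failures.",
            "evidence": ["https://example.zendesk.com/agent/tickets/1234"],
        },
    ],
    "metrics": {"tickets_resolved": 47, "sales_calls_influenced": 12},
    "evidence": ["https://example.zendesk.com/agent/tickets/1234"],
}
```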
Using Impact Analysis
Tracking Impact on Completed Work
Click Track Impact
The “Track Impact” button appears in the dialog header for closed items
The button is disabled for open issues. Close the issue first to track its impact.
Preview Correlations
A preview dialog shows potential correlations found across platforms:
- Platform breakdown with match counts
- Top matches with confidence scores
- Estimated credit cost
Run Full Analysis
Click “Run Full Analysis” to generate the complete impact assessment with AI synthesis
Understanding Impact Factors
Impact factors explain the business significance of your work. Each factor includes:
- Significance Level: Critical, High, Medium, or Low
- Title: What type of impact was detected
- Explanation: Why this matters with specific evidence
- Evidence Links: Direct links to source tickets, calls, or incidents
Common Impact Factor Types
| Factor | What It Means |
|---|---|
| High Support Volume | Many support tickets were related to this issue |
| Customer Escalation | Work addressed escalated customer issues |
| Sales Opportunity | Feature was mentioned in sales calls or affected deals |
| Competitor Differentiator | Addresses gaps competitors were exploiting |
| Incident Resolution | Production incidents were resolved |
| Incident Prevention | Incident frequency decreased after completion |
| Customer Retention | Work affected at-risk or churning customers |
| Revenue Impact | Direct correlation with closed deals |
| Goal Progress | Work contributed to product goal progress |
| KPI Improvement | Key product metrics improved |
| Engagement Increase | User engagement metrics improved (Amplitude/Mixpanel) |
| Revenue Growth | Direct revenue correlation (Stripe data) |
| Retention Improvement | User retention metrics improved |
Viewing Correlations
Correlations are grouped by confidence level:
| Tier | Confidence | Display |
|---|---|---|
| Primary | 90-100% | Shown prominently |
| Related | 70-90% | Listed with details |
| Possibly Related | 50-70% | Collapsed by default |
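As a rough illustration, the tiering can be thought of as simple threshold bucketing over the correlation confidence; the function below is hypothetical and just mirrors the cutoffs in the table.

```python
# Hypothetical tier bucketing that mirrors the table above.
def confidence_tier(confidence: float) -> str:
    if confidence >= 0.90:
        return "primary"            # shown prominently
    if confidence >= 0.70:
        return "related"            # listed with details
    if confidence >= 0.50:
        return "possibly_related"   # collapsed by default
    return "discarded"              # below the 50% correlation floor
```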
Preview Mode
Before running a full analysis (which uses AI credits), you can preview what data exists.
Preview mode is fast (5-10 seconds) and costs minimal credits. Use it to validate that data exists before committing to a full analysis.
Viewing Past Analyses
For issues that have already been analyzed:
- The button shows View Impact instead of “Track Impact”
- Click to view the existing analysis results
- Use Refresh Analysis to re-run with the latest platform data
Analysis results are cached. Re-running an analysis will update results with newer data but costs additional credits.
Copying Results for Reports
The impact summary is designed to be copy-paste ready for stakeholder reports.
Intelligent Investigation System
Beyond basic correlation, Impact Analysis includes an intelligent investigation system that automatically generates and executes queries to quantify business impact.
How It Works
Work Understanding
AI analyzes the completed work to understand:
- What type of work (feature, bugfix, performance, etc.)
- Which areas/components are affected
- Expected user-facing impact
- Relevant keywords and context
Data Source Discovery
Kasava discovers available data sources:
- Connected integrations (Gong, Zendesk, PagerDuty, etc.)
- Product metrics (Amplitude, Mixpanel)
- Custom database queries (PostgreSQL, Supabase)
- Revenue data (Stripe)
- Product goals and KPIs
Query Generation
AI generates investigative queries tailored to each data source:
- Hypotheses about expected impact
- Before/after comparisons around the completion date
- Metrics that should change if the work was effective
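For example, a generated PostgreSQL investigation might compare a metric in equal windows before and after the completion date. The query below is purely illustrative; the table and column names are placeholders, not a real Kasava schema.

```python
# Hypothetical before/after comparison query (psycopg-style parameters).
# "app_errors" and its columns are placeholders for your own data.
BEFORE_AFTER_SQL = """
SELECT
  CASE WHEN created_at < %(completed_at)s THEN 'before' ELSE 'after' END AS period,
  COUNT(*) AS error_count
FROM app_errors
WHERE created_at BETWEEN %(completed_at)s - INTERVAL '30 days'
                     AND %(completed_at)s + INTERVAL '30 days'
GROUP BY period;
"""
```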
Goals & Metrics Correlation
Impact Analysis automatically correlates completed work with your product goals:
| Correlation Type | Description |
|---|---|
| Directly Linked | Work was explicitly linked to this goal |
| Keyword Match | Work title/description mentions goal keywords |
| Metric Affected | Work affects metrics tracked by this goal |
| AI Inferred | AI determined work likely contributes to goal |
For each correlated goal, the analysis shows:
- Progress before and after the work completed
- Which tracked metrics changed
- Confidence score for the correlation
Investigative Queries
The system generates and executes queries specific to each data source:
| Source | Query Types | Example |
|---|---|---|
| PostgreSQL | SQL queries on your data | User activation rates, error counts |
| Amplitude | Event analysis | Feature adoption, funnel conversion |
| Mixpanel | User behavior | Engagement trends, retention |
| Stripe | Revenue metrics | MRR change, churn rate |
Each query records:
- Hypothesis: What the query is testing
- Expected Outcome: What a positive result looks like
- Finding: The actual result and its significance
- Evidence Type: Whether the finding is causal or correlational
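A single investigation result can be thought of as a record with those four fields plus its source. The dataclass below is a hypothetical shape for illustration, not Kasava's data model.

```python
# Hypothetical shape of one investigative query result; field names mirror
# the list above but are not Kasava's data model.
from dataclasses import dataclass
from typing import Literal

@dataclass
class InvestigationResult:
    source: str                  # "postgresql", "amplitude", "mixpanel", "stripe"
    hypothesis: str              # what the query is testing
    expected_outcome: str        # what a positive result looks like
    finding: str                 # the actual result and its significance
    evidence_type: Literal["causal", "correlational"]
```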
Evidence Report
The final evidence report synthesizes all collected data into a set of category scores.
Category Scores
Impact is scored across categories:
| Category | What’s Measured |
|---|---|
| Support | Ticket reduction, resolution time, CSAT |
| Sales | Deal influence, competitive positioning |
| Reliability | Incident reduction, MTTR improvement |
| Metrics | KPI changes, goal progress |
| Goals | Contribution to product objectives |
| Overall | Weighted aggregate score |
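The overall score is described as a weighted aggregate of the other categories. A minimal sketch follows, assuming made-up weights; Kasava's actual weighting is not documented here.

```python
# Minimal sketch of a weighted aggregate; the weights are invented for the
# example and are not Kasava's actual scoring model.
CATEGORY_WEIGHTS = {"support": 0.25, "sales": 0.20, "reliability": 0.20,
                    "metrics": 0.20, "goals": 0.15}

def overall_score(category_scores: dict[str, float]) -> float:
    """Weighted average over whichever categories were actually scored."""
    scored = {c: s for c, s in category_scores.items() if c in CATEGORY_WEIGHTS}
    if not scored:
        return 0.0
    total_weight = sum(CATEGORY_WEIGHTS[c] for c in scored)
    return sum(CATEGORY_WEIGHTS[c] * s for c, s in scored.items()) / total_weight

# overall_score({"support": 0.8, "reliability": 0.6, "goals": 0.5})  ->  ~0.66
```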
Bi-Directional Sync
Impact Analysis can propagate findings back to source systems, keeping your tools in sync.
Propagation Actions
| Target System | Available Actions |
|---|---|
| Zendesk | Add internal note, update tags |
| Intercom | Add note to conversation |
| Jira | Add comment, link to analysis |
| Linear | Add comment, update labels |
| GitHub | Add PR/issue comment |
Propagation History
All sync actions are tracked with a full audit trail:
- What was sent and when
- Success/failure status
- Link to the updated item
- Who or what triggered the sync (auto/manual/workflow)
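Each entry in the propagation history can be pictured as a small audit record like the one below; the keys, values, and URL are illustrative, not Kasava's actual log format.

```python
# Illustrative audit-trail entry for one propagation action; keys, values,
# and the URL are hypothetical.
propagation_entry = {
    "target_system": "zendesk",
    "action": "add_internal_note",
    "payload": "Impact summary: this release resolved 47 related tickets.",
    "sent_at": "2025-06-02T14:31:00Z",
    "status": "success",                 # or "failure"
    "target_url": "https://example.zendesk.com/agent/tickets/1234",
    "triggered_by": "manual",            # auto | manual | workflow
}
```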
Bi-directional sync is opt-in. Enable it per-integration in Settings → Integrations.
Platform Requirements
Impact Analysis requires at least one platform integration:
| Platform | Required Data | How to Connect |
|---|---|---|
| Gong | Call recordings with transcripts | Connect Gong |
| Zendesk | Support tickets | Connect Zendesk |
| Intercom | Customer conversations | Connect Intercom |
| PagerDuty | Incident history | Connect PagerDuty |
| Amplitude | Product analytics | Connect Amplitude |
| Mixpanel | User behavior data | Connect Mixpanel |
| Stripe | Revenue metrics | Connect Stripe |
| PostgreSQL | Custom database queries | Connect PostgreSQL |
Best Practices
When to Use Impact Analysis
- Quarterly reviews: Compile impact data for multiple completed epics
- Stakeholder updates: Generate data-backed summaries for leadership
- Roadmap planning: Understand which types of work had the most impact
- Team retrospectives: Review what moved the needle for customers
Getting Better Results
- Write clear issue titles: The correlation engine uses your issue title and description
- Link related PRDs: More context improves correlation accuracy
- Wait for data: Run analysis 1-2 weeks after close for best results
- Connect multiple platforms: More data sources = richer insights
Time Windows
Impact Analysis looks for correlations within a configurable time window:
- Before close: Up to 90 days before the issue was closed
- After close: Up to 30 days after close (to capture resolution effects)
Credit Usage
| Operation | Estimated Credits |
|---|---|
| Preview (no AI) | ~0.5 credits |
| Full analysis | ~4-6 credits |
| Re-analysis | ~4-6 credits |
Preview mode lets you validate data exists before committing to full analysis credits.
Troubleshooting
No Correlations Found
If Impact Analysis finds no related data:
- Check platform connections: Ensure integrations are properly configured
- Verify data exists: The relevant tickets/calls may use different terminology
- Adjust time window: The data may be outside the default window
- Proactive work: Some work is proactive (no prior complaints to correlate)
Partial Results
If some platforms succeed but others fail:
- View the available results from successful platforms
- Click Retry Failed Platforms to re-attempt only the failed ones
- Check integration settings if failures persist
Analysis Taking Too Long
Full analysis typically completes in 30-60 seconds. If it takes longer:
- Large correlation sets take more time to process
- AI synthesis may be processing a complex analysis
- Check the status indicator for progress