# UX Metrics AI Skill

Define Area · UX Metrics Block · Decision Map

---

## 1. What the Skill Does

The UX Metrics skill helps teams choose the right numbers to prove their design work is working. It sits inside the Define area of Glare's Decision Map. This is where teams decide what to measure before they start collecting data — not after.

Most teams measure too late or measure the wrong thing. They wait for analytics that arrive after the sprint is over. Or they collect numbers that look good in a slide deck but never guide a real decision. The UX Metrics skill fixes that by helping teams pick metrics with purpose.

Every metric belongs to one of three types. A good measurement plan includes all three.

| Type | What it measures | Examples |
|---|---|---|
| Attitudinal | How users feel | Trust, Satisfaction, Desirability, Sentiment |
| Behavioral | What users do | Completion, Comprehension, Effort, Engagement |
| Performance | How well the experience works | Time on Task, Error Rate, Drop-off, Retention Rate |

Use one of each. A single metric distorts the picture. High satisfaction with low completion means users like the idea but cannot finish. Strong performance with low engagement means the system works but nobody cares. Together, all three tell the real story.

**The Metric Quality Rule**

Not all metrics are useful. Teams often track numbers that feel important but never change what they build. The rule is simple: if a metric cannot tell you what to do differently, it is not worth tracking.

Before committing to any metric, run it through four questions:

- Can everyone on the team explain it in one sentence?
- Can it be compared over time or across versions?
- Is it a rate or ratio, not just a raw count?
- Does it measure what users actually do, not just what they say?

If the answer to any of these is no, replace it.

---

## 2. Business Benefit

When teams choose metrics with discipline, design earns credibility.
Decisions move faster because they are grounded in evidence, not debate.

This helps teams:

- prove that design work changed user behavior
- stop tracking numbers that look good but say nothing
- give product and leadership a shared language for decisions
- catch problems before launch, not after
- connect design outcomes to business results

Metrics chosen with care become the evidence that earns the next yes.

---

## 3. Skill Output

When used correctly, the skill produces a clear metric plan for a product or workflow. The plan shows:

- which three metrics to track (one per type)
- whether each metric is a leading or lagging indicator
- which stage each metric belongs to: predictive, proxy, or analytics
- any mismatches to watch for between metric types

The example below shows how this works for a mobile banking dashboard.

| Field | Example Output (Mobile Banking Dashboard) |
|---|---|
| Attitudinal Metric | Trust — do users feel confident the balance shown is accurate? |
| Behavioral Metric | Completion — can users locate transaction history within two taps? |
| Performance Metric | Time on Task — how long does it take to find and act on a recent transaction? |
| Leading Indicator | Comprehension score from prototype testing (collected before launch) |
| Lagging Indicator | Session abandonment rate (confirmed after launch) |
| Mismatch to Watch | High satisfaction + low completion = users feel good about the app but cannot finish the task. Fix the flow before assuming the experience is working. |
| Next Step Handoff | → glare-define-collecting to choose the right techniques and tools for collecting each metric |

The output connects directly to the other Define blocks:

- User Needs tells you what each metric should prove
- Audience tells you whose behavior you are measuring
- Collecting tells you how to gather the data

---

## 4. Prompt Strategies

The prompts below show different ways to use this skill. Each example uses a mobile banking dashboard update.
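As a reference point for the prompts, the example plan and its mismatch check from the Skill Output section can be sketched in code. This is a minimal illustration, not part of the skill itself: the dictionary field names, the stage assignments, and the numeric thresholds are assumptions made for the sketch.

```python
from typing import Optional

# A sketch of the example metric plan for the mobile banking dashboard.
# One metric per type; the "stage" tags (predictive / proxy / analytics)
# are illustrative assignments, not Glare definitions.
plan = {
    "attitudinal": {"metric": "Trust", "stage": "predictive"},
    "behavioral": {"metric": "Completion", "stage": "proxy"},
    "performance": {"metric": "Time on Task", "stage": "analytics"},
}

def flag_mismatch(satisfaction: float, completion_rate: float) -> Optional[str]:
    """Flag the high-satisfaction / low-completion mismatch.

    Assumes satisfaction on a 1-5 scale and completion_rate as the
    fraction of users who finish the task. Both thresholds are
    illustrative, not part of the skill.
    """
    if satisfaction >= 4.0 and completion_rate < 0.7:
        return "Users like the idea but cannot finish the task: fix the flow."
    return None

# A 61% completion rate alongside high satisfaction would be flagged:
print(flag_mismatch(4.2, 0.61))
```

A real measurement plan would pull these numbers from survey and analytics data rather than literals, but the shape is the same: one record per metric type, plus explicit rules for the mismatches worth watching.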
---

### Prompt 1 — Diagnostic Entry: Fix a broken metric plan

"We're updating our mobile banking dashboard and our current metrics are monthly active users and app store rating. Using the glare-define-ux-metrics skill, tell us whether these are the right metrics to track, apply the four quality principles to each one, and recommend a replacement trio — one attitudinal, one behavioral, one performance — that would actually guide our next design decision."

**Why this works:** Monthly active users and app store ratings are common vanity metrics. They count things without explaining what to do next. This prompt uses the quality filter to replace them with metrics that can change how the team builds.

**Best for:**

- auditing an existing metric plan
- sprint kickoffs where the success criteria feel vague
- any situation where teams are measuring activity instead of outcomes

---

### Prompt 2 — Timing Entry: Choose the right metric for the right stage

"We are about to run usability testing on our mobile banking dashboard before launch. Using glare-define-ux-metrics, help us identify which metrics should be collected now as leading indicators, which ones we should plan to collect post-launch as lagging indicators, and how to use each to make a decision at the right time."

**Why this works:** Teams that only track post-launch analytics are always learning too late. This prompt uses the leading vs. lagging framework to build a measurement plan that catches problems early and confirms results after.

**Best for:**

- pre-launch research planning
- setting up a test with clear success criteria
- building a measurement timeline across a product sprint

---

### Prompt 3 — Mismatch Entry: Diagnose a confusing result

"After our last round of testing on the mobile banking dashboard, satisfaction scores were high but task completion on the transaction history flow dropped to 61%. We are not sure what to do with this.
Using glare-define-ux-metrics, explain what this mismatch means, what it tells us about where the experience is breaking down, and which metric we should add to diagnose the root cause."

**Why this works:** Metric mismatches are one of the most common signs that a team is measuring the wrong thing or missing part of the picture. This prompt uses the diagnostic mismatch model to turn a confusing result into a clear next step.

**Best for:**

- making sense of conflicting data
- preparing a findings summary for a design review
- deciding which metric to add before the next round of testing

---

*Glare Framework · glare-define-ux-metrics · Define Area*

*Handoffs: glare-define-user-needs · glare-define-audience · glare-define-collecting · glare-measure*