# Comparing

AI Skill · Focus Area · Comparing Move · Decision Map

---

## 1. What the Skill Does

The Comparing skill helps teams place signals side by side so results become meaningful. It is the third move inside the Focus area of Glare's Decision Map. This is where a single score — a usability rating, a satisfaction number, a task success rate — gets context by being measured against something else.

A score by itself does not tell a team what to do. A 72% task success rate sounds okay until it is compared against the previous version at 61%, a competitor at 84%, or a new user segment at 48%. Comparison is what turns a number into a signal.

The Comparing skill gives teams a consistent way to do that — by naming what is being compared, using a shared metric, and turning the result into a finding with a clear tradeoff. Teams can compare along 12 different points depending on what the decision needs.

| Comparison Point | Use when |
|---|---|
| Iteration | The team needs to know if the new version improved |
| User Goals / Tasks | The team needs to know which task is easiest or hardest to complete |
| Competitors | The team needs to understand where the market sets user expectations |
| Feature Usage | The team needs to know which features create value and which get ignored |
| Timeline | The team needs to see whether signals are improving over time |
| Geographies | The experience needs to be checked across different markets or regions |
| Segments | Different audiences may respond differently to the same design |
| User Lifecycle | User needs change from new to habitual to dormant — the comparison shows where |
| Journeys | Friction may live at one step in the flow, not the whole experience |
| Behavioral Triggers | The team needs to understand what causes users to act or stop |
| Platforms / Devices | The experience may work on desktop but break on mobile |
| Season | Behavior may change by time of year — the team needs to know if the signal is durable |

**The Shared Metric Rule**

The most common mistake in Comparing is placing results side by side before agreeing on what is being measured. A team might compare two versions — one measured by satisfaction, another by task success — and walk away disagreeing about which one won because they were looking at different things.

The rule is simple: use the same metric for everything being compared. If the metric differs between options, name that clearly before interpreting the results. A fair comparison requires a shared measure. Without it, the team is not comparing signals — they are comparing opinions about different things.

---

## 2. Business Benefit

Comparing gives data context. Without comparison, results stay isolated and decisions drift back into preference and opinion. With comparison, the team has a reason to choose one direction over another — and can explain that reason to anyone who asks.

This helps teams:

- stop debating results that have no reference point
- explain why one version, audience, or direction deserves more investment
- catch tradeoffs before committing — not after shipping
- build trust with stakeholders by showing the difference, not just the score
- make the same data useful across product, design, marketing, and leadership

Comparison is what turns evidence into direction.

---

## 3. Skill Output

When used correctly, the skill produces a clear comparison finding for a design decision. The finding shows:

- what was compared and why
- which metric was used
- which signal was stronger
- what tradeoff appeared
- what the team should do next

The example below shows how this works for a mobile banking dashboard.

| Field | Example Output (Mobile Banking Dashboard) |
|---|---|
| Comparison Point | Iteration — comparing the current home screen against the redesigned version |
| Shared Metric | First-click success rate on balance and transaction history |
| Result | Current version: 61% first-click success. Redesigned version: 79% first-click success. |
| Strongest Signal | The redesigned version creates a significantly stronger first-click signal for habitual users |
| Tradeoff | The redesigned layout improves findability but reduces visible account actions — users in testing noted fewer quick shortcuts than the current version |
| Finding | Surfacing balance and transactions directly on the home screen improves first-click success by 18 percentage points. The tradeoff is reduced shortcut visibility, which may affect power users. |
| Next Step | → glare-focus-decisions to choose whether to implement, refine, or test another iteration based on this finding |
| Failure Mode to Watch | Asking which version won without naming the tradeoff. The strongest signal is not always the highest score — it is the best signal for the specific decision the team needs to make. |

The output connects directly to the other Focus moves:

- Initiatives provides the metric and decision that make the comparison fair
- Methods provides the frame that determines what gets placed side by side
- Decisions uses the finding and tradeoff to name the next move

---

## 4. Prompt Strategies

The prompts below show different ways to use this skill. Each example uses a mobile banking dashboard update.

---

### Prompt 1 — Diagnostic Entry: Make sense of a single result

"We ran a usability test on our mobile banking dashboard and first-click success on transaction history came back at 61%. Our stakeholders are not sure if that is good or bad. Using the glare-focus-comparing skill, help us identify what this score needs to be compared against to become meaningful, choose the right comparison point, and explain what a fair comparison using the same metric would look like."

**Why this works:** A 61% score with no reference point is just a number.
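To make that concrete, here is a minimal sketch in plain Python. The reference values are this document's own illustrative figures (the redesigned version at 79%, a competitor at 84%, a new user segment at 48%), not real benchmarks:

```python
# A raw score becomes a signal only when placed next to a reference point.
# Reference values below are illustrative figures, not real benchmarks.
def gaps(score, references):
    """Signed gap between one score and each candidate reference point."""
    return {name: score - ref for name, ref in references.items()}

score = 0.61  # first-click success on transaction history
references = {
    "redesigned version": 0.79,
    "competitor benchmark": 0.84,
    "new user segment": 0.48,
}

for name, gap in gaps(score, references).items():
    print(f"vs {name}: {gap:+.0%}")
```

Against the redesign the score trails by 18 points; against the new-user segment it looks strong. The direction of the finding depends entirely on the comparison point, which is why the skill asks teams to name it first.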
This prompt uses the shared metric rule and comparison points to give the result context — whether that means comparing against the previous version, a competitor benchmark, or a user segment — so the team can explain whether the score represents progress or a problem.

**Best for:**

- making sense of a single research result before a review
- preparing a finding that needs to hold up in a stakeholder discussion
- any situation where the team has a number but not a direction

---

### Prompt 2 — Segment Entry: Compare across audiences

"We have first-click success data from our mobile banking dashboard test: habitual users scored 79%, new users scored 44%. We are not sure what to do with the gap. Using glare-focus-comparing, apply the Segments comparison point to this data, name the tradeoff the gap reveals, and tell us what finding and next step it supports."

**Why this works:** A gap between user segments is one of the most useful signals a team can find — it shows that the same design works differently for different people. This prompt uses the segment comparison to turn the gap into a finding with a clear tradeoff, so the team can decide whether to optimize for new users, habitual users, or both.

**Best for:**

- any test where results differ significantly between user groups
- identifying which audience a design decision should prioritize
- preparing a finding that explains why one group needs a different approach

---

### Prompt 3 — Tradeoff Entry: Explain competing strengths

"We compared two versions of the mobile banking dashboard. Version A scored higher on first-click success (79% vs 71%). Version B scored higher on post-task satisfaction (4.2 vs 3.7 out of 5). Our team is split on which one to move forward with. Using glare-focus-comparing, help us name the tradeoff between these two signals, explain what each version strengthens and weakens, and identify which finding best supports the decision."
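Before weighing the tradeoff, it is also worth checking whether either gap is larger than sampling noise. A minimal sketch of a two-proportion z-test in plain Python; the 79% vs 71% rates come from the prompt above, while the 100-participants-per-version sample size is an assumption for illustration:

```python
import math

def two_proportion_z(success_a, n_a, success_b, n_b):
    """z statistic for the gap between two observed success rates."""
    p_a, p_b = success_a / n_a, success_b / n_b
    # Pooled rate under the null hypothesis that both versions are equal
    pooled = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Version A vs Version B first-click success; sample sizes are assumed
z = two_proportion_z(success_a=79, n_a=100, success_b=71, n_b=100)
print(f"z = {z:.2f}")  # |z| below 1.96: at this sample size the
# 8-point gap could still be sampling noise rather than a real win
```

If the first-click gap is not clearly real at the available sample size, the satisfaction difference and the named tradeoff carry more weight in the decision — which is exactly the move this prompt asks the skill to make.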
**Why this works:** When two versions each win on a different metric, the team needs to name the tradeoff explicitly before choosing. This prompt uses the strongest-signal step to move the team past the score debate and toward the real question: which metric matters most for the decision the initiative is trying to support.

**Best for:**

- any comparison where two versions each have a clear advantage
- preparing a finding that explains why a decision is not as simple as picking the highest score
- moving a split team toward a decision by naming what each option gives up

---

*Glare Framework · glare-focus-comparing · Focus Area*

*Handoffs: glare-focus-initiatives · glare-focus-methods · glare-focus-decisions · glare-lead*