# Findings AI Skill

Measure Area · Findings Move · Decision Map

---

## 1. What the Skill Does

The Findings skill helps teams turn raw data into something the whole team can act on. It is the final move inside the Measure area of Glare's Decision Map. This is where numbers, drop-off rates, survey scores, and task results stop being data and become direction.

Data alone tells you what happened. A finding tells you what it means. A signal tells you what to do next. The Findings skill closes that gap by connecting each piece of data to a user need, a business goal, and a design recommendation — in that order.

Without this step, teams stall. The data sits in a deck. The meeting ends without a decision. Research gets run again. The Findings skill fixes that by giving every result a chain that leads to action.

Every finding follows the same five-step chain.

| Step | What happens | Example |
|---|---|---|
| 1. Translate the data | Pair the metric with its source and describe the behavior | 48% of users abandon checkout at the payment step (checkout analytics) |
| 2. Tie to user value | Map to a specific user need | Clarity — users need payment options to be simple and error-free |
| 3. Tie to business results | Link to a metric leadership cares about | Reducing abandonment increases conversion and revenue |
| 4. Connect back to intent | Check against the original concept and hunch | Confirms the hunch that simplifying payment would reduce abandonment |
| 5. Write the signal | Name the recommendation with both a user and business metric | Simplifying payment options will increase completion rate and lower error rate |

If the team cannot complete all five steps, the finding is not ready. A result that cannot connect to a user need is noise. A result that cannot connect to a business goal is interesting but not actionable.

**The Signal Rule**

Teams often share data without completing the chain.
They report that task success dropped to 61% but do not explain what user need that threatens or what business outcome it affects. The result is a number that generates discussion without generating decisions.

The rule is simple: a finding is not finished until it has a user metric and a business metric in the same sentence. If the team cannot write both, go back and complete steps two and three before sharing anything.

---

## 2. Business Benefit

Findings that complete the chain replace debate with evidence. They give product, engineering, and leadership a clear reason to act — tied to outcomes they already care about.

This helps teams:

- stop presenting data that generates discussion but not decisions
- connect every research result to a user need and a business outcome
- build trust with stakeholders by showing what the data means, not just what it says
- close the loop between research and the next design sprint
- make signals travel further by sharing questions alongside answers

Research earns its investment when findings lead to action.

---

## 3. Skill Output

When used correctly, the skill produces a clear signal for each finding. Each signal shows:

- the raw data and its source
- the finding described as user behavior
- the user need it connects to
- the business result it affects
- the recommendation with both a user and business metric

The example below shows how this works for a mobile banking dashboard.
| Field | Example Output (Mobile Banking Dashboard) |
|---|---|
| Raw Data | Task success on finding transaction history: 61% (Helio usability test, 100 participants) |
| Finding | Nearly four in ten users could not complete the task of finding their recent transactions without help |
| User Need | Findable — users need to locate transaction history without extra navigation or confusion |
| Business Result | Session abandonment rises when users cannot find key information quickly, reducing return visit rate |
| Signal | Surfacing transaction history on the home screen will increase task success rate (user metric) and reduce session abandonment (business metric) |
| What Would Disprove It | Task success rate does not improve after the change, or users still abandon at the same rate |
| Failure Mode to Watch | Sharing the raw number without completing the chain. A 61% task success rate is not a finding — it is a starting point. The finding is what it means for users and what it costs the business. |
| Next Step Handoff | → glare-focus to compare this signal against other versions or directions and decide what moves forward |

The output connects directly to the other Measure moves:

- Concepts provides the original intent to check the finding against
- Hunches provides the hypothesis the finding confirms or disproves
- Questioning provides the research prompts that produced the data

---

## 4. Prompt Strategies

The prompts below show different ways to use this skill. Each example uses a mobile banking dashboard update.

---

### Prompt 1 — Diagnostic Entry: Turn a raw result into a finding

"We ran a usability test on our mobile banking dashboard and task success on finding transaction history was 61%. We also have a post-task satisfaction score of 3.8 out of 5. We are not sure what to do with these numbers. Using the glare-measure-findings skill, walk the five-step chain for each result and produce a signal that connects to a user need and a business outcome."
**Why this works:** Raw numbers without context do not guide decisions. This prompt uses the five-step chain to complete both findings — connecting task success to findability and a session metric, and connecting satisfaction to trust and a retention metric — so the team leaves with two actionable signals instead of two numbers.

**Best for:**

- making sense of usability test results
- preparing findings for a sprint review or stakeholder readout
- any situation where the team has data but cannot agree on what it means

---

### Prompt 2 — Mismatch Entry: Diagnose conflicting results

"Our mobile banking dashboard testing showed high satisfaction scores but low task completion on the transaction history flow. Users say they like the app but keep abandoning the session. Using glare-measure-findings, explain what this mismatch means, which user need it threatens, what it signals for the business, and what we should do next."

**Why this works:** High satisfaction with low completion is one of the most common metric mismatches in UX research. It means users feel good about the product but cannot finish. This prompt uses the findings chain to name the gap precisely and produce a signal that the team can act on in the next sprint.

**Best for:**

- any situation where positive feedback and behavioral data contradict each other
- preparing a findings summary that needs to explain a counterintuitive result
- deciding which metric to prioritize when two are pulling in different directions

---

### Prompt 3 — Handoff Entry: Prepare findings for a leadership review

"We have three findings from our mobile banking dashboard research: task success on transaction history is 61%, session abandonment is up 18% this quarter, and new users rate trust at 3.2 out of 5. We need to present these to our VP of Product next week. Using glare-measure-findings, complete the five-step chain for each finding, write a signal for each one, and organize them in order of business impact."
**Why this works:** Leadership reviews need findings that are already connected to business outcomes. This prompt uses the findings chain to complete all three results and rank them by impact — so the presentation leads with what matters most to the business, not what was most interesting to the research team.

**Best for:**

- preparing a leadership readout after a research sprint
- prioritizing which findings to act on when there are more than two or three
- connecting multiple research results into a single, coherent story

---

*Glare Framework · glare-measure-findings · Measure Area*

*Handoffs: glare-measure-concepts · glare-measure-hunches · glare-measure-questioning · glare-focus · glare-design-signals*