A library of AI skills you can drop into your work.
# Audience AI Skill

Define Area · Audience Block · Decision Map

---

## 1. What the Skill Does

The Audience skill helps teams understand who their signals come from before they start testing or building. It works inside the Define area of Glare's Decision Map. This is where teams decide whose feedback counts, how much weight to give it, and how to describe users in a way that makes testing possible.

Without a clear audience, customer data, stakeholder opinion, and personal preference all get treated equally. When that happens, metrics lose meaning and teams end up solving the wrong problem.

The skill organizes audience into four voices. Each voice plays a different role in the design process.

| Audience | Provides | Signal Weight |
|---|---|---|
| Project Team | Intent — what the team thinks will work | Light but continuous |
| Stakeholders | Direction — what matters most to the business | Medium to high |
| Customers | Proof — how the product works in real life | Highest |
| Participants | Clarity — how the design feels before release | High during testing |

Internal voices guide. External voices validate. Customers confirm what works. Participants predict what will.

**Audience-Build Rule**

Teams often jump straight to user testing without aligning internally first. That leads to misaligned results — the team interprets findings differently because they never agreed on what they were trying to learn.

The rule is simple: build your audience in order. Project Teams form intent first. Stakeholders align on business impact next. Participants validate direction during testing. Customers confirm outcomes last. Skipping a step upstream shows up as confusion downstream.

---

## 2. Business Benefit

A clear audience helps teams collect the right signals from the right people. Without it, feedback piles up without direction and decisions stall.

This helps teams:

- stop treating all feedback as equally important
- connect user signals to business goals
- test with the right people at the right time
- avoid building for users who don't exist
- make decisions that hold up in stakeholder reviews

Research becomes faster to run and easier to act on.

---

## 3. Skill Output

When used correctly, the skill creates a clear audience brief for a product or workflow.

The brief shows:

- which of the four voices matter most for this decision
- how to describe users by what they do, not just who they are
- which customer lifecycle segment to focus on
- which participant type to recruit for testing
- how to weight signals from each voice

The example below shows how this works for a mobile banking dashboard.

| Field | Example Output (Mobile Banking Dashboard) |
|---|---|
| Primary Voice | Customers — Habitual Users (log in 3+ times per week to check balance and transactions) |
| Secondary Voice | Stakeholders — Product team (focused on retention and session depth) |
| Participant Type | Adjacent Users — people who use other financial apps but not this one yet |
| Key Attributes | Behavioral (login frequency), Lifecycle (habitual vs. new paying), Contextual (mobile-only vs. cross-device) |
| Signal Weight | Customer behavior carries highest weight. Participant feedback guides direction during testing. |
| Failure Mode to Watch | Over-relying on participant feedback and skipping customer behavior data — confident in theory, fragile in practice. |
| Next Step Handoff | → glare-define-collecting to choose the right research methods for each audience voice |

The output connects directly to the other Define blocks:

- User Needs helps name what each audience is trying to accomplish
- Collecting helps choose the right methods for each voice
- UX Metrics helps pick the right numbers to track per segment
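The signal weights above are qualitative, but teams sometimes want to operationalize them when triaging a backlog of mixed feedback. The sketch below is a minimal Python illustration: the voice names come from the table in section 1, while the numeric weights, the example signals, and the `weighted_signals` helper are assumptions for illustration, not values the skill prescribes.

```python
# Hypothetical numeric weights for the four voices. The ordering
# follows the Signal Weight column above; the numbers themselves
# are illustrative assumptions only.
VOICE_WEIGHTS = {
    "project_team": 0.5,   # intent: light but continuous
    "stakeholders": 0.75,  # direction: medium to high
    "participants": 0.9,   # clarity: high during testing
    "customers": 1.0,      # proof: highest
}

def weighted_signals(signals):
    """Rank findings so customer behavior outweighs internal opinion.

    Each signal is a dict with a voice, a claim, and a raw strength
    between 0 and 1 (for example, how consistently it appeared).
    """
    ranked = [
        {**s, "weighted": s["strength"] * VOICE_WEIGHTS[s["voice"]]}
        for s in signals
    ]
    return sorted(ranked, key=lambda s: s["weighted"], reverse=True)

signals = [
    {"voice": "project_team", "claim": "Users want more data up front", "strength": 0.8},
    {"voice": "customers", "claim": "Sessions end when balance takes 2+ taps", "strength": 0.7},
]

for s in weighted_signals(signals):
    print(f"{s['weighted']:.2f}  {s['voice']:<13}  {s['claim']}")
# Customer behavior (0.70) outranks team intent (0.40), matching
# the rule that internal voices guide and external voices validate.
```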
---

## 4. Prompt Strategies

The prompts below show different ways to use this skill. Each example uses a mobile banking dashboard update.

---

### Prompt 1 — Diagnostic Entry: Start from a feedback problem

"We're updating our mobile banking dashboard and our team keeps getting conflicting feedback. Designers think users want more data on the home screen. The product team thinks users want fewer taps to reach key actions. Using the glare-define-audience skill, walk the four audience voices in order and help us figure out whose feedback to prioritize and how to weight each voice for this decision."

**Why this works:** Conflicting feedback is almost always an audience problem. This prompt uses the four-voice frame to separate internal opinion from external proof, and gives the team a way to resolve disagreement without more meetings.

**Best for:**

- resolving feedback conflicts between teams
- sprint planning where priorities are unclear
- any decision where internal opinion is being treated as user data

---

### Prompt 2 — Targeting Entry: Define who to test with

"We need to run usability testing on our mobile banking dashboard redesign. We're not sure who to recruit. Using glare-define-audience, help us identify the right participant type for this phase of testing, choose 3–5 attributes to describe them, and explain which customer lifecycle segment we should validate against once testing is complete."

**Why this works:** Most teams recruit participants too broadly or describe them by job title instead of behavior. This prompt uses the attribute framework to build a testable group and connects participant testing to the right customer segment for follow-up validation.

**Best for:**

- planning a usability study
- writing a participant screener
- connecting research to a specific customer segment

---

### Prompt 3 — Stakeholder Entry: Translate findings for a business audience

"We have usability findings from our mobile banking dashboard testing. Completion rate on the transaction history flow dropped to 61%. We need to present this to our product and finance stakeholders. Using glare-define-audience, help us translate this finding into the language each stakeholder group cares about, and identify which business workflow each signal connects to."

**Why this works:** Design findings often get dismissed because they are presented in design language. This prompt uses the stakeholder workflow map to reframe the same data in terms of retention, risk, and revenue — the metrics each audience already tracks.

**Best for:**

- preparing a leadership readout
- getting buy-in for a redesign
- translating UX data into business impact

---

*Glare Framework · glare-define-audience · Define Area*
*Handoffs: glare-define-user-needs · glare-define-collecting · glare-define-ux-metrics · glare-design-signals*
# Collecting Data AI Skill

Define Area · Collecting Block · Decision Map

---

## 1. What the Skill Does

The Collecting Data skill helps teams gather the right information from the right people at the right time. It sits inside the Define area of Glare's Decision Map. This is where teams decide how to capture signals before they get lost in opinions and guesswork.

Most teams collect too much or collect the wrong thing. They run research without a clear goal, use tools they are comfortable with instead of tools that fit the question, and share findings that nobody acts on. The Collecting skill fixes that by pairing every collection method with a specific user need, a business goal, and a metric.

The skill uses five steps to move from intent to insight.

| Step | What happens |
|---|---|
| 1. Start with Intent | Pair a user need with a business goal to define what data matters |
| 2. Choose Your Stack | Pick the right research mode: Exploratory, Evaluative, or Comparative |
| 3. Identify the Approach | Make sure the question is testable — it defines a gap, points to behavior, and can be measured |
| 4. Apply the Techniques | Match the collection method to the approach and the metric |
| 5. Connect the Data | Share findings at the right level: project, cross-team, or leadership |

Each step connects to the next. Skipping intent means picking techniques that answer the wrong question. Skipping the connect step means findings stay in a deck and never reach a decision.

**The Collection Rule**

Teams often jump to tools before they know what they are trying to learn. That leads to data that looks thorough but does not change anything.

The rule is simple: start with the question, not the tool. Write the hypothesis first. Then choose the approach. Then pick the technique. Then choose the tool. Running these in the wrong order is the most common reason research does not get used.

---

## 2. Business Benefit

Good data collection gives teams evidence that replaces opinion in decision-making. One clear signal can stop weeks of debate. Ten can build confidence. A hundred can pressure-test a strategy.

This helps teams:

- stop running research that nobody acts on
- pick the right method for the question they are actually asking
- share findings in a way that each audience understands
- connect research to metrics that matter to leadership
- move faster because decisions are grounded in evidence

Research becomes easier to trust and easier to present.

---

## 3. Skill Output

When used correctly, the skill creates a clear collection plan for a product or workflow.

The plan shows:

- which research mode fits the current stage
- which techniques to use and why
- which tools to run them in
- how to share findings at each level of the organization

The example below shows how this works for a mobile banking dashboard.

| Field | Example Output (Mobile Banking Dashboard) |
|---|---|
| Research Mode | Evaluative — the dashboard design exists and we need to know if it works |
| Technique | First Click Testing + Task Success Rate — does the user find the balance and transaction history without help? |
| Tool | Helio — collect completion and comprehension signals from 100 participants in hours |
| Feedback Pairing | See what users do (task recordings) + Hear what users say (post-task survey) |
| Metric Tied To | Completion Rate, Comprehension, Time on Task |
| Sharing Level | Cross-team — findings shared in Glare workspace by concept and metric so product and engineering can act on them |
| Failure Mode to Watch | Collecting data without a metric attached. A task success test without a defined completion criterion is just observation — it cannot guide a decision. |
| Next Step Handoff | → glare-measure to turn collected signals into findings tied to user value and business outcomes |

The output connects directly to the other Define blocks:

- User Needs tells you what question to answer
- Audience tells you who to collect data from
- UX Metrics tells you what numbers to watch for
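The example plan above reports a single completion rate from a fixed sample of 100 participants. The skill does not prescribe any statistics, but when a metric is a rate from a sample it is worth reporting an uncertainty range next to it. Here is a minimal sketch, assuming the 61-of-100 task success figure used in this library's examples and a standard Wilson score interval (our choice of interval, not the skill's):

```python
from math import sqrt

def wilson_interval(successes, n, z=1.96):
    """95% Wilson score interval for a completion/success rate."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - half, center + half

# 61 of 100 participants completed the transaction history task.
low, high = wilson_interval(61, 100)
print(f"Task success: 61% (95% CI {low:.0%}-{high:.0%})")
# Roughly 51%-70%: wide enough that a claim like "we hit our 65%
# target" cannot be settled by this sample alone.
```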
---

## 4. Prompt Strategies

The prompts below show different ways to use this skill. Each example uses a mobile banking dashboard update.

---

### Prompt 1 — Diagnostic Entry: Start from a research gap

"We are updating our mobile banking dashboard and we have not done any user research yet. We have a hypothesis that users cannot find their transaction history quickly enough. Using the glare-define-collecting skill, walk the five-step collection process and help us build a research plan — including the right mode, technique, tool, and how to share the findings with our product team."

**Why this works:** Starting from a hypothesis forces the skill to apply the five-step process in order. It stops teams from jumping to a familiar tool before they have confirmed what question they are actually trying to answer.

**Best for:**

- starting a new research sprint from scratch
- any situation where the team has a hunch but no data
- planning research at the beginning of a design phase

---

### Prompt 2 — Technique Entry: Choose the right method for the question

"We are comparing two versions of our mobile banking dashboard home screen. Version A shows balance and recent transactions up front. Version B uses a summary card that users tap to expand. We need to know which performs better. Using glare-define-collecting, identify the right research mode for this decision, recommend two techniques to use together, and explain which UX metrics each one produces."

**Why this works:** Comparison questions need a different approach than discovery questions. This prompt uses the mode framework to match the research question — which version works better — to the right technique and metric, instead of defaulting to whatever method the team used last time.

**Best for:**

- A/B or preference testing decisions
- any sprint where two design directions need a tiebreaker
- connecting a technique choice to a measurable outcome

---

### Prompt 3 — Sharing Entry: Make findings usable across teams

"We ran a first-click test on our mobile banking dashboard with 100 participants. Task success on finding transaction history was 61%. We need to share this with our product manager, our engineering lead, and our VP of product. Using glare-define-collecting, help us apply the three-tier sharing model to present this finding at the right level for each audience."

**Why this works:** Research findings often go unused because they are shared the same way to every audience. This prompt uses the three-tier sharing model to translate the same data into the right format — project detail for the team, cross-team insight for the PM, leadership rollup for the VP.
**Best for:**

- preparing a research readout for multiple stakeholders
- any situation where findings need to travel beyond the design team
- building a habit of showing sources alongside results

---

*Glare Framework · glare-define-collecting · Define Area*
*Handoffs: glare-define-user-needs · glare-define-audience · glare-define-ux-metrics · glare-measure*
# UX Metrics AI Skill

Define Area · UX Metrics Block · Decision Map

---

## 1. What the Skill Does

The UX Metrics skill helps teams choose the right numbers to prove their design work is working. It sits inside the Define area of Glare's Decision Map. This is where teams decide what to measure before they start collecting data — not after.

Most teams measure too late or measure the wrong thing. They wait for analytics that arrive after the sprint is over. Or they collect numbers that look good in a slide deck but never guide a real decision. The UX Metrics skill fixes that by helping teams pick metrics with purpose.

Every metric belongs to one of three types. A good measurement plan includes all three.

| Type | What it measures | Examples |
|---|---|---|
| Attitudinal | How users feel | Trust, Satisfaction, Desirability, Sentiment |
| Behavioral | What users do | Completion, Comprehension, Effort, Engagement |
| Performance | How well the experience works | Time on Task, Error Rate, Drop-off, Retention Rate |

Use one of each. A single metric distorts the picture. High satisfaction with low completion means users like the idea but cannot finish. Strong performance with low engagement means the system works but nobody cares. Together, all three tell the real story.

**The Metric Quality Rule**

Not all metrics are useful. Teams often track numbers that feel important but never change what they build.

The rule is simple: if a metric cannot tell you what to do differently, it is not worth tracking.

Before committing to any metric, run it through four questions:

- Can everyone on the team explain it in one sentence?
- Can it be compared over time or across versions?
- Is it a rate or ratio, not just a raw count?
- Does it measure what users actually do, not just what they say?

If the answer to any of these is no, replace it.

---

## 2. Business Benefit

When teams choose metrics with discipline, design earns credibility. Decisions move faster because they are grounded in evidence, not debate.

This helps teams:

- prove that design work changed user behavior
- stop tracking numbers that look good but say nothing
- give product and leadership a shared language for decisions
- catch problems before launch, not after
- connect design outcomes to business results

Metrics chosen with care become the evidence that earns the next yes.

---

## 3. Skill Output

When used correctly, the skill produces a clear metric plan for a product or workflow.

The plan shows:

- which three metrics to track (one per type)
- whether each metric is a leading or lagging indicator
- which stage each metric belongs to: predictive, proxy, or analytics
- any mismatches to watch for between metric types

The example below shows how this works for a mobile banking dashboard.

| Field | Example Output (Mobile Banking Dashboard) |
|---|---|
| Attitudinal Metric | Trust — do users feel confident the balance shown is accurate? |
| Behavioral Metric | Completion — can users locate transaction history within two taps? |
| Performance Metric | Time on Task — how long does it take to find and act on a recent transaction? |
| Leading Indicator | Comprehension score from prototype testing (collected before launch) |
| Lagging Indicator | Session abandonment rate (confirmed after launch) |
| Mismatch to Watch | High satisfaction + low completion = users feel good about the app but cannot finish the task. Fix the flow before assuming the experience is working. |
| Next Step Handoff | → glare-define-collecting to choose the right techniques and tools for collecting each metric |

The output connects directly to the other Define blocks:

- User Needs tells you what each metric should prove
- Audience tells you whose behavior you are measuring
- Collecting tells you how to gather the data
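Once the three metric types are tracked together, the mismatch in the table above is easy to check for automatically. The sketch below is a minimal Python illustration; the metric values and the thresholds are assumptions for the example, since the skill does not define numeric cutoffs:

```python
# Illustrative metric snapshot: one attitudinal, one behavioral,
# one performance metric, as the skill recommends. All values here
# are invented for the example.
METRICS = {
    "satisfaction": 4.4,   # attitudinal, post-task survey out of 5
    "completion": 0.61,    # behavioral, task completion rate
    "time_on_task": 48.0,  # performance, seconds to find a transaction
}

def flag_mismatches(m, sat_high=4.0, completion_low=0.7):
    """Surface the 'feels good but cannot finish' pattern.

    The thresholds are hypothetical; each team would set its own.
    """
    flags = []
    if m["satisfaction"] >= sat_high and m["completion"] < completion_low:
        flags.append(
            "High satisfaction + low completion: users like the idea "
            "but cannot finish. Fix the flow before trusting the score."
        )
    return flags

for warning in flag_mismatches(METRICS):
    print(warning)
```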
---

## 4. Prompt Strategies

The prompts below show different ways to use this skill. Each example uses a mobile banking dashboard update.

---

### Prompt 1 — Diagnostic Entry: Fix a broken metric plan

"We're updating our mobile banking dashboard and our current metrics are monthly active users and app store rating. Using the glare-define-ux-metrics skill, tell us whether these are the right metrics to track, apply the four quality principles to each one, and recommend a replacement trio — one attitudinal, one behavioral, one performance — that would actually guide our next design decision."

**Why this works:** Monthly active users and app store ratings are common vanity metrics. They count things without explaining what to do next. This prompt uses the quality filter to replace them with metrics that can change how the team builds.

**Best for:**

- auditing an existing metric plan
- sprint kickoffs where the success criteria feel vague
- any situation where teams are measuring activity instead of outcomes

---

### Prompt 2 — Timing Entry: Choose the right metric for the right stage

"We are about to run usability testing on our mobile banking dashboard before launch. Using glare-define-ux-metrics, help us identify which metrics should be collected now as leading indicators, which ones we should plan to collect post-launch as lagging indicators, and how to use each to make a decision at the right time."

**Why this works:** Teams that only track post-launch analytics are always learning too late. This prompt uses the leading vs. lagging framework to build a measurement plan that catches problems early and confirms results after.

**Best for:**

- pre-launch research planning
- setting up a test with clear success criteria
- building a measurement timeline across a product sprint

---

### Prompt 3 — Mismatch Entry: Diagnose a confusing result

"After our last round of testing on the mobile banking dashboard, satisfaction scores were high but task completion on the transaction history flow dropped to 61%. We are not sure what to do with this. Using glare-define-ux-metrics, explain what this mismatch means, what it tells us about where the experience is breaking down, and which metric we should add to diagnose the root cause."

**Why this works:** Metric mismatches are one of the most common signs that a team is measuring the wrong thing or missing part of the picture. This prompt uses the diagnostic mismatch model to turn a confusing result into a clear next step.

**Best for:**

- making sense of conflicting data
- preparing a findings summary for a design review
- deciding which metric to add before the next round of testing

---

*Glare Framework · glare-define-ux-metrics · Define Area*
*Handoffs: glare-define-user-needs · glare-define-audience · glare-define-collecting · glare-measure*
# User Needs AI Skill

Define Area · User Needs Block · Decision Map

---

## 1. What the Skill Does

The User Needs skill helps teams understand what users really need before building solutions or features. It works inside the Define area of Glare's Decision Map. This is where teams get clear on the real problem before starting research, testing, or development.

The skill breaks user needs into five simple categories. Teams move through them in order to find where the experience first breaks. Higher needs cannot fix basic problems. A product can look polished, but if users cannot find it, trust it, or use it, it still fails.

| Category | Core Question | Need Types |
|---|---|---|
| Basics | Can I use it easily? | Usable, Useful, Findable, Accessible |
| Trust | Do I believe it works? | Credible, Secure, Reliable, Intuitive |
| Personal | Does it fit my needs? | Inclusive, Adaptable, Connected, Insightful |
| Impact | Does it make a difference? | Valuable, Sustainable, Efficient, Scalable |
| Feelings | Does it inspire me? | Desirable, Delightful, Engaging, Empowering |

Across the five categories, the skill maps 20 specific need types — each with a definition, key diagnostic questions, associated signal types, example metrics, and a common failure mode. This makes every named need testable, not just descriptive.

**Core Validation Rule**

What users say is not always what they really need. People often ask for things that sound helpful, but their behavior shows a different problem.

The rule is simple: if users say they need something but do not use it in practice, it is likely a preference, not a real need. Always compare what users say with what they actually do.

**Example**

Users say they want a simpler layout. But testing shows they leave when they cannot find their balance quickly. The real problem is not the layout. The real problem is findability. If the team only changes the visuals, the main problem stays unsolved.
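The example above can be read as a simple behavioral comparison. The sketch below illustrates the validation rule in Python; every number in it is an invented stand-in for real session data, and the three-times threshold is an arbitrary illustration rather than part of the skill:

```python
# Want-vs-need validation: compare what users say against what
# they do. All values below are hypothetical.

stated = {"simpler_layout": 0.72}  # share of interviewees asking for it

observed = {
    # share of sessions abandoned when balance is 2+ taps away
    "abandon_when_balance_buried": 0.44,
    # share of sessions abandoned when balance is one tap away,
    # even on the current, denser layout
    "abandon_with_visible_balance": 0.09,
}

# If abandonment tracks findability rather than layout density,
# the stated want ("simpler layout") is masking a Findable gap.
# The 3x ratio here is an arbitrary illustrative threshold.
if observed["abandon_when_balance_buried"] > 3 * observed["abandon_with_visible_balance"]:
    print("Treat 'simpler layout' as a preference; the testable need is Findable.")
```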
---

## 2. Business Benefit

Clear user needs help teams understand what actually matters to users before building.

This helps teams:

- find the real problem faster
- focus on what users actually need
- avoid solving the wrong thing
- separate needs from preferences
- make better product decisions

User needs become easier to validate, measure, and improve over time.

---

## 3. Skill Output

When used correctly, the skill creates a clear needs brief for a product or workflow.

The brief shows:

- which user needs matter most
- whether the problem is a real need or preference
- which UX metrics measure success
- where the experience first breaks

The example below shows how this works for a mobile banking dashboard.

| Field | Example Output (Mobile Banking Dashboard) |
|---|---|
| Need Type | Findable (Basics) → Credible (Trust) → Valuable (Impact) |
| User Need Statement | Users need to locate their balance and recent transactions within one tap so they feel in control of their money — without questioning whether the figures are up to date. |
| Want vs. Need Validation | Users say they want a simpler layout. Observed behavior shows they abandon sessions when balance data is more than 2 taps away — confirming Findable is the foundational gap, not aesthetics. |
| Metric Tie | First-click success on balance → Task completion rate → Session abandonment rate |
| Failure Mode to Watch | Jumping to Feelings (delight, animation) before the Basics category (Findable/Accessible) is confirmed working. |
| Next Step Handoff | → glare-define-ux-metrics to select the measurable indicators for each named need |

The output connects directly to the other Define blocks:

- Audience helps identify who has the need most
- Collecting helps gather evidence
- UX Metrics helps measure success

It also helps guide Concepts and Hunches in the Measure area.

---

## 4. Prompt Strategies

The prompts below show different ways to use this skill. Each example uses a mobile banking dashboard update.

---

### Prompt 1 — Diagnostic Entry: Start from a symptom

"We're updating our mobile banking dashboard and our session data shows users abandon within 30 seconds without completing any action. Using the glare-define-user-needs skill, walk the five need categories in order and identify where the experience is most likely breaking down. For each category, name the specific need type that is failing and describe what observed behavior would confirm it."

**Why this works:** Starting with user behavior helps teams find the real problem first. It stops teams from fixing visuals when the real issue is usability or trust.

**Best for:**

- finding problems after launch
- sprint reviews
- redesign discussions without evidence

---

### Prompt 2 — Validation Entry: Pressure-test a stated need

"Our product team believes the mobile banking dashboard needs to feel more personalized — users have said they want the app to remember their preferences. Using glare-define-user-needs, apply the want-vs-need validation rule to this claim. Tell me whether personalization is a genuine need or a stated want, which need type it maps to if real, and what foundational needs must be confirmed before we invest in it."

**Why this works:** Teams often confuse preferences with real user needs. This helps validate whether the problem is real before building new features.

**Best for:**

- roadmap planning
- sprint planning
- reviewing feature requests without user evidence

---

### Prompt 3 — Metric Bridge Entry: Connect named needs to measurable outcomes

"For our mobile banking dashboard update, we have identified three active need types: Findable (users cannot locate transaction history in under two taps), Credible (users question whether balance figures are current), and Valuable (returning users do not feel the dashboard helps them make better financial decisions). Using glare-define-user-needs, map each need type to its primary metric family, name the specific metrics to track for each, and flag the most common failure mode we should watch for during usability testing."

**Why this works:** The user needs are already defined, so the prompt focuses on how to measure them. It helps teams connect user needs to clear UX metrics.

**Best for:**

- test planning
- design handoffs
- metric planning
- leadership reviews

---

*Glare Framework · glare-define-user-needs · Define Area*
*Handoffs: glare-define-audience · glare-define-collecting · glare-define-ux-metrics · glare-design-signals*
# Comparing AI Skill

Focus Area · Comparing Move · Decision Map

---

## 1. What the Skill Does

The Comparing skill helps teams place signals side by side so results become meaningful. It is the third move inside the Focus area of Glare's Decision Map. This is where a single score — a usability rating, a satisfaction number, a task success rate — gets context by being measured against something else.

A score by itself does not tell a team what to do. A 72% task success rate sounds okay until it is compared against the previous version at 61%, or a competitor at 84%, or a new user segment at 48%. Comparison is what turns a number into a signal. The Comparing skill gives teams a consistent way to do that — by naming what is being compared, using a shared metric, and turning the result into a finding with a clear tradeoff.

Teams can compare along 12 different points depending on what the decision needs.

| Comparison Point | Use when |
|---|---|
| Iteration | The team needs to know if the new version improved |
| User Goals / Tasks | The team needs to know which task is easiest or hardest to complete |
| Competitors | The team needs to understand where the market sets user expectations |
| Feature Usage | The team needs to know which features create value and which get ignored |
| Timeline | The team needs to see whether signals are improving over time |
| Geographies | The experience needs to be checked across different markets or regions |
| Segments | Different audiences may respond differently to the same design |
| User Lifecycle | User needs change from new to habitual to dormant — the comparison shows where |
| Journeys | Friction may live at one step in the flow, not the whole experience |
| Behavioral Triggers | The team needs to understand what causes users to act or stop |
| Platforms / Devices | The experience may work on desktop but break on mobile |
| Season | Behavior may change by time of year — the team needs to know if the signal is durable |

**The Shared Metric Rule**

The most common mistake in Comparing is placing results side by side before agreeing on what is being measured. A team might compare two versions — one measured by satisfaction, another by task success — and walk away disagreeing about which one won because they were looking at different things.

The rule is simple: use the same metric for everything being compared. If the metric differs between options, name that clearly before interpreting the results.

A fair comparison requires a shared measure. Without it, the team is not comparing signals — they are comparing opinions about different things.

---

## 2. Business Benefit

Comparing gives data context. Without comparison, results stay isolated and decisions drift back into preference and opinion. With comparison, the team has a reason to choose one direction over another — and can explain that reason to anyone who asks.

This helps teams:

- stop debating results that have no reference point
- explain why one version, audience, or direction deserves more investment
- catch tradeoffs before committing — not after shipping
- build trust with stakeholders by showing the difference, not just the score
- make the same data useful across product, design, marketing, and leadership

Comparison is what turns evidence into direction.

---

## 3. Skill Output

When used correctly, the skill produces a clear comparison finding for a design decision.

The finding shows:

- what was compared and why
- which metric was used
- which signal was stronger
- what tradeoff appeared
- what the team should do next

The example below shows how this works for a mobile banking dashboard.

| Field | Example Output (Mobile Banking Dashboard) |
|---|---|
| Comparison Point | Iteration — comparing the current home screen against the redesigned version |
| Shared Metric | First-click success rate on balance and transaction history |
| Result | Current version: 61% first-click success. Redesigned version: 79% first-click success. |
| Strongest Signal | The redesigned version creates a significantly stronger first-click signal for habitual users |
| Tradeoff | The redesigned layout improves findability but reduces visible account actions — users in testing noted fewer quick shortcuts than the current version |
| Finding | Surfacing balance and transactions directly on the home screen improves first-click success by 18 percentage points. The tradeoff is reduced shortcut visibility, which may affect power users. |
| Next Step | → glare-focus-decisions to choose whether to implement, refine, or test another iteration based on this finding |
| Failure Mode to Watch | Asking which version won without naming the tradeoff. The strongest signal is not always the highest score — it is the best signal for the specific decision the team needs to make. |

The output connects directly to the other Focus moves:

- Initiatives provides the metric and decision that make the comparison fair
- Methods provides the frame that determines what gets placed side by side
- Decisions uses the finding and tradeoff to name the next move
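The finding above turns on an 18-point gap. The skill stops at naming the tradeoff, but before presenting a gap like this it is worth checking that it is unlikely to be sampling noise. Here is a minimal sketch using a two-proportion z-test, assuming 100 participants per version (the sample size used in this library's Decisions example; the test itself is our addition, not part of the skill):

```python
from math import sqrt, erf

def two_proportion_z(success_a, n_a, success_b, n_b):
    """z-test for a difference between two success rates."""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Redesign: 79/100 first-click success; current version: 61/100.
z, p = two_proportion_z(79, 100, 61, 100)
print(f"z = {z:.2f}, p = {p:.3f}")
# z is about 2.78 and p is about 0.005; at these sample sizes the
# 18-point gap is unlikely to be noise. Significance still says
# nothing about the tradeoff, which the team must name separately.
```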
---

## 4. Prompt Strategies

The prompts below show different ways to use this skill. Each example uses a mobile banking dashboard update.

---

### Prompt 1 — Diagnostic Entry: Make sense of a single result

"We ran a usability test on our mobile banking dashboard and first-click success on transaction history came back at 61%. Our stakeholders are not sure if that is good or bad. Using the glare-focus-comparing skill, help us identify what this score needs to be compared against to become meaningful, choose the right comparison point, and explain what a fair comparison using the same metric would look like."

**Why this works:** A 61% score with no reference point is just a number. This prompt uses the shared metric rule and comparison points to give the result context — whether that means comparing against the previous version, a competitor benchmark, or a user segment — so the team can explain whether the score represents progress or a problem.

**Best for:**

- making sense of a single research result before a review
- preparing a finding that needs to hold up in a stakeholder discussion
- any situation where the team has a number but not a direction

---

### Prompt 2 — Segment Entry: Compare across audiences

"We have first-click success data from our mobile banking dashboard test: habitual users scored 79%, new users scored 44%. We are not sure what to do with the gap. Using glare-focus-comparing, apply the Segments comparison point to this data, name the tradeoff the gap reveals, and tell us what finding and next step it supports."
**Why this works:** A gap between user segments is one of the most useful signals a team can find — it shows that the same design works differently for different people. This prompt uses the segment comparison to turn the gap into a finding with a clear tradeoff, so the team can decide whether to optimize for new users, habitual users, or both.

**Best for:**

- any test where results differ significantly between user groups
- identifying which audience a design decision should prioritize
- preparing a finding that explains why one group needs a different approach

---

### Prompt 3 — Tradeoff Entry: Explain competing strengths

"We compared two versions of the mobile banking dashboard. Version A scored higher on first-click success (79% vs 71%). Version B scored higher on post-task satisfaction (4.2 vs 3.7 out of 5). Our team is split on which one to move forward with. Using glare-focus-comparing, help us name the tradeoff between these two signals, explain what each version strengthens and weakens, and identify which finding best supports the decision."

**Why this works:** When two versions each win on a different metric, the team needs to name the tradeoff explicitly before choosing. This prompt uses the strongest-signal step to move the team past the score debate and toward the real question: which metric matters most for the decision the initiative is trying to support.

**Best for:**

- any comparison where two versions each have a clear advantage
- preparing a finding that explains why a decision is not as simple as picking the highest score
- moving a split team toward a decision by naming what each option gives up

---

*Glare Framework · glare-focus-comparing · Focus Area*
*Handoffs: glare-focus-initiatives · glare-focus-methods · glare-focus-decisions · glare-lead*
# Decisions AI Skill

Focus Area · Decisions Move · Decision Map

---

## 1. What the Skill Does

The Decisions skill helps teams turn evidence into a clear next move. It is the final move inside the Focus area of Glare's Decision Map. This is where the work stops circling and the team commits — naming what should happen next, why, and who owns it.

By the time a team reaches Decisions, the initiative is clear, the method has framed the data, and comparing has shown what is stronger. The only thing left is to choose. Without a decision, findings stay in decks. Meetings keep revisiting the same questions. Work that is ready to move stays open because nobody named the next step.

The Decisions skill gives teams five clear types to choose from. Each one tells the team exactly what happens after the meeting.

| Decision Type | When to use it |
|---|---|
| Implement | The signal is strong, the tradeoff is acceptable, and the next step is clear |
| Refine Design | The direction is right but something needs to improve before the team commits |
| Test Iteration | The direction is still open and a sharper comparison is needed |
| Revisit Later | The idea has potential but the timing or context is not right |
| Do Not Pursue | The signal is weak or the tradeoff is not worth more investment |

Every decision lands in one of these five. If the team leaves a meeting with "let's keep exploring," that is not a decision — it is a deferred one. The Decisions skill forces the team to name which type applies and what happens next.

**The Signal-First Rule**

Teams often make decisions from feeling. A stakeholder feels confident. The team feels uncertain. The design feels ready. Feeling is not a signal — and a decision without a signal behind it will get reopened the next time someone questions it.

The rule is simple: a decision should not be made until it is grounded in a signal that has been framed and compared clearly enough to support action. If the signal is not there yet, go back to Comparing or Methods.

A decision made without evidence is just an opinion that got written down.

---

## 2. Business Benefit

Decisions move work forward. Without them, research findings sit in documents and design effort loops back on itself. With a clear decision, every part of the business knows what to do next.

This helps teams:

- stop reopening discussions that have already produced a signal
- give product a reason to prioritize one direction over another
- give leadership something to back with confidence
- protect the team from spending more time on work that does not create value
- close the loop between research and the next sprint

A strong decision is the last thing Focus produces — and the first thing the rest of the business acts on.

---

## 3. Skill Output

When used correctly, the skill produces a clear decision record for a design effort.

The record shows:

- what was decided
- the signal behind the choice
- the tradeoff the team accepted
- the decision type
- the next step and who owns it

The example below shows how this works for a mobile banking dashboard.

| Field | Example Output (Mobile Banking Dashboard) |
|---|---|
| What Is Being Decided | Which home screen layout to move forward with for the mobile banking dashboard update |
| Signal | Redesigned version scored 79% on first-click success vs. 61% for the current version (Helio usability test, 100 habitual users) |
| Comparison Context | Redesigned version is 18 points stronger on findability. Tradeoff: reduced shortcut visibility may affect power users. |
| Tradeoff Accepted | Improved findability for habitual users is the higher priority. Power user shortcuts will be addressed in a follow-on initiative. |
| Decision Type | Implement — signal is strong, tradeoff is named and accepted, next step is clear |
| Next Step | Design team moves redesigned layout into production planning. Research team flags power user shortcut gap for the next sprint. |
| Failure Mode to Watch | Choosing "Test Iteration" as a way to avoid committing. Another round of testing is only valid if the team can name exactly what the new iteration would change and what a stronger signal would look like. Vague iteration decisions are deferred Implement or Do Not Pursue decisions. |
| Next Step Handoff | → glare-lead to connect this decision to business KPIs and prove impact at the leadership level |

The output connects directly to the other Focus moves:

- Initiatives names what the decision is actually supporting
- Methods names the frame the signal came from
- Comparing provides the finding and tradeoff that ground the choice
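A decision record only protects the team if it travels beyond the meeting, and one lightweight way to keep records consistent is to give them a fixed shape. The sketch below mirrors the field names from the record above as a Python dataclass; the structure is our illustration, not a format the skill mandates:

```python
from dataclasses import dataclass, field
from enum import Enum

class DecisionType(Enum):
    """The five decision types from the table in section 1."""
    IMPLEMENT = "Implement"
    REFINE_DESIGN = "Refine Design"
    TEST_ITERATION = "Test Iteration"
    REVISIT_LATER = "Revisit Later"
    DO_NOT_PURSUE = "Do Not Pursue"

@dataclass
class DecisionRecord:
    what_is_decided: str
    signal: str             # the evidence behind the choice
    tradeoff_accepted: str
    decision_type: DecisionType
    next_step: str
    owner: str
    handoffs: list[str] = field(default_factory=list)

# Populated from the example output above; the owner is a
# hypothetical placeholder.
record = DecisionRecord(
    what_is_decided="Home screen layout for the dashboard update",
    signal="79% vs 61% first-click success (100 habitual users)",
    tradeoff_accepted="Reduced shortcut visibility for power users",
    decision_type=DecisionType.IMPLEMENT,
    next_step="Move redesigned layout into production planning",
    owner="Design lead",
    handoffs=["glare-lead"],
)
print(record.decision_type.value, "->", record.next_step)
```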
---

## 4. Prompt Strategies

The prompts below show different ways to use this skill. Each example uses a mobile banking dashboard update.

---

### Prompt 1 — Diagnostic Entry: Break a stalled decision

"Our team has been discussing the mobile banking dashboard redesign for three sprints. We have usability data showing the new version performs better, but some stakeholders want another round of testing before committing. Using the glare-focus-decisions skill, help us apply the signal-first rule to this situation, name the decision type that fits our current evidence, and explain what would actually justify another round of testing vs. moving forward."

**Why this works:** Requests for more testing are often a sign the decision has not been grounded in the signal clearly enough. This prompt uses the five decision types and the signal-first rule to name whether the evidence already supports Implement, or whether a Test Iteration decision is genuinely needed — with specific criteria for what would make the next round useful.

**Best for:**

- any decision that keeps getting deferred to another research round
- preparing for a stakeholder review where someone will push back on moving forward
- making the case that the current evidence is strong enough to act on

---

### Prompt 2 — Tradeoff Entry: Make a decision with competing signals

"Our mobile banking dashboard comparison showed that Version A is stronger on first-click success (79% vs 71%) but Version B is stronger on post-task satisfaction (4.2 vs 3.7). Our initiative is focused on reducing session abandonment. Using glare-focus-decisions, help us name the tradeoff, choose the right decision type, and write the decision record — including the signal, the tradeoff accepted, the next step, and who owns it."

**Why this works:** When two signals point in different directions, the team needs to connect both back to the initiative goal to find the right decision. This prompt uses the five-step decision process to move the team off the score debate and toward a named tradeoff the whole team can stand behind.

**Best for:**

- any decision where two versions each win on a different metric
- preparing a decision record that needs to survive a stakeholder challenge
- connecting competing research results to the specific goal the initiative is trying to move

---

### Prompt 3 — Closure Entry: Write a complete decision record
"We have decided to move forward with the redesigned mobile banking dashboard home screen. The signal was strong, the tradeoff is named, and the team is aligned. Using glare-focus-decisions, help us write a complete decision record — covering the initiative, the signal, the comparison, the tradeoff accepted, the decision type, the next steps, and the handoffs to product, design, research, and leadership."

**Why this works:** A decision that is not recorded gets reopened. This prompt uses the decision record structure to capture everything the team agreed on — so product knows what to prioritize, design knows what to refine next, research knows what gap still needs evidence, and leadership has something concrete to back.

**Best for:**

- closing out a sprint or research cycle with a documented decision
- preparing a handoff that needs to travel to product, engineering, or leadership
- building a habit of recording decisions alongside the evidence that produced them

---

*Glare Framework · glare-focus-decisions · Focus Area*
*Handoffs: glare-focus-initiatives · glare-focus-methods · glare-focus-comparing · glare-lead*
# Initiatives AI Skill

Focus Area · Initiatives Move · Decision Map

---

## 1. What the Skill Does

The Initiatives skill helps teams name the area of work that deserves attention before any testing or comparing begins. It is the first move inside the Focus area of Glare's Decision Map. This is where scattered requests — redesign this page, fix this flow, test this feature — get organized into one focused effort with a clear goal.

Most teams have more ideas than capacity. Without a clear initiative, everything feels equally important at the same time. Design work scatters. Tests get run without a shared target. Results come back but do not connect to a decision. The Initiatives skill fixes that by giving the team a container for signals, methods, comparisons, and decisions.

A strong initiative connects three things.

| Part | What it names |
|---|---|
| User Need | What users are trying to do and where they are getting stuck |
| Business Goal | What outcome the business is trying to move |
| Measurable Part of the Experience | The specific area that needs to improve |

All three must be present. An initiative with no user need is a roadmap item. An initiative with no business goal is a design exercise. An initiative with no measurable part of the experience is a direction without a target.

**The Framing Rule**

The most common failure in Focus is jumping to methods and testing before the initiative is named. Teams start comparing versions before they agree on what problem the versions are supposed to solve. Results come in and the team interprets them differently because they never aligned on the goal.

The rule is simple: do not choose a method until the initiative names the area of experience, the audience, the user need, the business goal, and the UX metric that would show progress.

If any of those is missing, the initiative is not ready. Stop and frame it before moving forward.

---

## 2. Business Benefit

A clear initiative keeps design work focused on the problems that matter most to users and the business. It prevents teams from running tests that produce results nobody can act on.

This helps teams:

- connect scattered requests to a shared goal
- choose the right methods and metrics for the work
- make findings easier to explain to product and leadership
- avoid spending time on work that does not connect to a business outcome
- give stakeholders a clear picture of what the team is improving and why

Work becomes easier to prioritize, defend, and build on over time.

---

## 3. Skill Output

When used correctly, the skill produces a clear initiative brief for a design effort.

The brief shows:

- the area of experience being improved
- the audience affected
- the user need behind the work
- the business goal it connects to
- the UX metric that will show progress
- the decision the team needs to make next

The example below shows how this works for a mobile banking dashboard.

| Field | Example Output (Mobile Banking Dashboard) |
|---|---|
| Initiative Type | Optimize Navigation & Discovery |
| Area of Experience | Mobile banking dashboard home screen |
| Audience | Habitual users who log in three or more times per week |
| User Need | Findable — users need to locate balance and recent transactions without extra navigation |
| Business Goal | Reduce session abandonment — users who cannot find key information leave without completing any action |
| UX Metric | First-click success rate on balance and transaction history, session abandonment rate |
| Decision Needed | Which home screen layout creates the strongest first-click signal for habitual users |
| Failure Mode to Watch | Defining two or more initiatives at once. One initiative at a time forces the right focus. Multiple initiatives running in parallel is a sign the work has not been scoped yet. |
| Next Step Handoff | → glare-focus-methods to choose the right frame for bringing data into this initiative |

The output connects directly to the other Focus moves:

- Methods uses the initiative to choose the right research frame
- Comparing uses the user need and metric to place signals side by side fairly
- Decisions uses the initiative to name what is actually being chosen
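The framing rule doubles as a checklist. The sketch below expresses it as a readiness check in Python; the field names follow the rule in section 1, while the check itself and the sample values are our illustration, not part of the skill file:

```python
# The five fields the framing rule requires before a method is chosen.
REQUIRED_FIELDS = [
    "area_of_experience",
    "audience",
    "user_need",
    "business_goal",
    "ux_metric",
]

def framing_gaps(initiative: dict) -> list[str]:
    """Return the fields that are missing or empty."""
    return [f for f in REQUIRED_FIELDS if not initiative.get(f)]

# Sample brief, adapted from the example output above. The empty
# metric is deliberate: it is the gap teams most often leave.
initiative = {
    "area_of_experience": "Mobile banking dashboard home screen",
    "audience": "Habitual users (3+ logins per week)",
    "user_need": "Findable: locate balance without extra navigation",
    "business_goal": "Reduce session abandonment",
    "ux_metric": "",
}

gaps = framing_gaps(initiative)
print("Ready for methods." if not gaps else f"Stop and frame: missing {gaps}")
```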
---

## 4. Prompt Strategies

The prompts below show different ways to use this skill. Each example uses a mobile banking dashboard update.

---

### Prompt 1 — Diagnostic Entry: Start from scattered requests

"Our team has been asked to redesign the mobile banking dashboard home screen, fix the transaction history flow, and test a new summary card feature — all at the same time. We are not sure what to focus on first. Using the glare-focus-initiatives skill, help us map these requests to a single initiative by connecting them to one user need, one business goal, and one UX metric."

**Why this works:** Three separate requests without a shared frame is the exact symptom the Initiatives skill is built to fix. This prompt uses the five-step method to pull all three requests toward one clear area of improvement so the team can test and decide with focus.

**Best for:**

- sprint planning where the brief has too many directions
- any situation where the team is trying to run multiple tests at once
- connecting a list of design requests to a single business outcome

---

### Prompt 2 — Typing Entry: Name the right initiative type

"We are improving the mobile banking dashboard and we are not sure whether to frame this as a Navigation initiative, an Engagement initiative, or a Personalization initiative. Using glare-focus-initiatives, help us identify which initiative type fits our work, explain what that type is designed to improve, and confirm whether our current framing has the right user need, business goal, and metric."

**Why this works:** Choosing the wrong initiative type leads to the wrong methods, the wrong metrics, and comparisons that do not support the real decision. This prompt uses the ten initiative types to anchor the work in the right frame before testing begins.
**Best for:**

- teams starting a new focus effort without a clear category
- any project where the scope keeps expanding because the type is unclear
- preparing an initiative brief that needs to be shared with product or leadership

---

### Prompt 3 — Confidence Entry: Check an initiative before moving to methods

"We have defined our initiative as: improve the mobile banking dashboard home screen to help habitual users find their balance and transaction history faster, reduce session abandonment, and track first-click success rate. Using glare-focus-initiatives, check this initiative against the three-part framework — user need, business goal, measurable experience — and tell us whether it is ready to move to methods or needs more framing."

**Why this works:** Teams often think their initiative is clear but have left out the metric or left the user need too vague to test. This prompt uses the framing rule to catch gaps before methods are chosen, saving time and avoiding a round of research that cannot support a decision.

**Best for:**

- quality-checking an initiative before a research sprint
- preparing for a methods discussion with product or engineering
- any situation where the initiative has been described but not yet validated

---

*Glare Framework · glare-focus-initiatives · Focus Area*
*Handoffs: glare-focus-methods · glare-focus-comparing · glare-focus-decisions · glare-define · glare-measure*
# Methods AI Skill

Focus Area · Methods Move · Decision Map

---

## 1. What the Skill Does

The Methods skill helps teams choose the right frame for bringing data into an initiative. It is the second move inside the Focus area of Glare's Decision Map. This is where the team stops asking "what test should we run?" and starts asking "how should we look at this work?"

Most teams default to the methods closest to their craft — another prototype test, another usability study, another survey. Those are useful, but they are not always the right frame. The Methods skill expands that thinking. Depending on the initiative, the right frame might be comparing competitors, mapping a journey, segmenting audiences, reviewing feature usage, or evaluating risk. The method should match the decision, not the habit.

The skill organizes methods into 13 frames. Each one helps a team look at data differently.

| Frame | Use when |
|---|---|
| Competitors | The team needs market context or wants to see where expectations come from |
| Iterations | The team is refining a direction and needs to track improvement across versions |
| Timeline | The team needs to sequence work, manage priority, or understand what should happen now vs. later |
| Journeys | The problem spans multiple steps, touchpoints, or channels |
| Platforms / Devices | The experience changes across mobile, desktop, or other contexts |
| User Goals / Tasks | The initiative depends on how well the experience helps users get something done |
| Geographies | The experience needs to work across different markets or cultures |
| User Lifecycle | The work affects users at different stages of their relationship with the product |
| Behavioral Triggers | The team needs to understand what causes users to act or hesitate |
| Segments | Different audiences may respond differently to the same experience |
| Feature Usage | The team needs to decide what to build, improve, reduce, or remove |
| Risk and Proof | The cost of being wrong is high and the team needs to reduce uncertainty first |
| Frameworks | The team needs a structured model to organize thinking or explain tradeoffs |

**The Frame-First Rule**

Teams often choose a method before they know what decision it needs to support. A journey map gets created because someone likes journey maps. An A/B test gets run because the team ran one last sprint. The method feels productive but the results do not connect to any clear choice.

The rule is simple: name the decision before choosing the frame. Ask what the team needs to decide next — which version, which audience, which moment, which direction. The frame that best organizes evidence around that decision is the right one.

A method chosen for its own sake produces data that does not guide action.

---

## 2. Business Benefit

The right method makes data useful. The wrong method produces results that feel interesting but cannot support a decision. Choosing the frame before collecting evidence saves time and keeps research connected to the work.

This helps teams:

- avoid running research that generates discussion but not direction
- match the method to the actual decision, not the nearest available tool
- make existing data more useful before collecting more
- explain tradeoffs more clearly when one method frame is used consistently
- connect research results to specific business workflows

Methods make the initiative testable.

---

## 3. Skill Output

When used correctly, the skill produces a clear method plan for a design initiative.

The plan shows:

- the initiative objective and the decision the method needs to support
- the frame that best organizes the data
- the named methods inside that frame
- what gets compared and why
- what the team should have after the method runs

The example below shows how this works for a mobile banking dashboard.

| Field | Example Output (Mobile Banking Dashboard) |
|---|---|
| Initiative Objective | Understand where habitual users lose confidence on the home screen |
| Decision to Support | Which home screen layout creates the strongest first-click signal |
| Method Frame | Journeys — the problem spans the user's path from opening the app to completing a balance or transaction check |
| Named Methods | Funnel review, Drop-off mapping, Touchpoint analysis |
| What Gets Compared | Current home screen journey vs. redesigned home screen journey, measured by first-click success and session abandonment |
| Existing Data to Review | Session drop-off analytics, post-task survey scores from previous studies |
| Failure Mode to Watch | Choosing the Iterations frame too early. If the team does not yet understand where the journey breaks, testing two versions against each other will not reveal the root cause — it will only show which version performs less badly. |
| Next Step Handoff | → glare-focus-comparing to place signals side by side once the method has produced data |

The output connects directly to the other Focus moves:

- Initiatives provides the user need, business goal, and metric the method is organized around
- Comparing uses the method frame to ensure signals are placed side by side fairly
- Decisions uses the method output to ground the final choice in evidence
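To make the Journeys frame concrete, the sketch below runs a toy funnel review: walk the journey step by step and report where the largest share of sessions is lost. The step names and session counts are invented for illustration; the skill names funnel review as a method but does not prescribe this implementation:

```python
# Hypothetical journey steps with session counts at each step,
# ordered from app open to task completion.
funnel = [
    ("open_app", 1000),
    ("view_home_screen", 960),
    ("find_balance", 610),        # largest drop: the hypothetical Findable gap
    ("open_transaction_history", 540),
    ("complete_check", 520),
]

worst_step, worst_rate = None, 1.0
for (prev_name, prev_n), (name, n) in zip(funnel, funnel[1:]):
    rate = n / prev_n  # share of sessions carried to the next step
    print(f"{prev_name} -> {name}: {rate:.0%} carried forward")
    if rate < worst_rate:
        worst_step, worst_rate = name, rate

print(f"Journey first breaks at: {worst_step} ({worst_rate:.0%})")
# Here the break is at find_balance (64% carried forward), which is
# the kind of evidence that justifies moving to the Iterations frame.
```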
---

## 4. Prompt Strategies

The prompts below show different ways to use this skill. Each example uses a mobile banking dashboard update.

---

### Prompt 1 — Diagnostic Entry: Start from a method mismatch

"Our team has been running A/B tests on the mobile banking dashboard for two sprints but the results keep coming back inconclusive. We are not making progress. Using the glare-focus-methods skill, help us diagnose whether the Iterations frame is the right fit for our initiative, and recommend an alternative frame that would help us understand the problem more clearly before comparing versions."

**Why this works:** Inconclusive A/B results are often a sign the team is comparing versions before they understand the problem. This prompt uses the method-selection process to identify the right frame — likely Journeys or User Goals — so the next round of evidence can actually support a decision.

**Best for:**

- teams stuck in repeated testing cycles without a clear result
- any situation where data keeps coming back but decisions keep stalling
- diagnosing a method mismatch before another round of research is run

---

### Prompt 2 — Selection Entry: Choose a frame for a specific decision

"We need to decide whether to prioritize improving the transaction history flow or the balance summary card on our mobile banking dashboard. We have session data, a post-task survey, and stakeholder input from the product and finance teams. Using glare-focus-methods, identify the right frame for this decision, tell us what data to pull in from what we already have, and name the specific methods inside that frame we should use."
---

## 4. Prompt Strategies

The prompts below show different ways to use this skill. Each example uses a mobile banking dashboard update.

---

### Prompt 1 — Diagnostic Entry: Start from a method mismatch

"Our team has been running A/B tests on the mobile banking dashboard for two sprints but the results keep coming back inconclusive. We are not making progress. Using the glare-focus-methods skill, help us diagnose whether the Iterations frame is the right fit for our initiative, and recommend an alternative frame that would help us understand the problem more clearly before comparing versions."

**Why this works:** Inconclusive A/B results are often a sign the team is comparing versions before they understand the problem. This prompt uses the method-selection process to identify the right frame — likely Journeys or User Goals — so the next round of evidence can actually support a decision.

**Best for:**

- teams stuck in repeated testing cycles without a clear result
- any situation where data keeps coming back but decisions keep stalling
- diagnosing a method mismatch before another round of research is run

---

### Prompt 2 — Selection Entry: Choose a frame for a specific decision

"We need to decide whether to prioritize improving the transaction history flow or the balance summary card on our mobile banking dashboard. We have session data, a post-task survey, and stakeholder input from the product and finance teams. Using glare-focus-methods, identify the right frame for this decision, tell us what data to pull in from what we already have, and name the specific methods inside that frame we should use."

**Why this works:** A decision between two parts of the same experience is a User Goals or Feature Usage question, not an Iterations question. This prompt uses the five-step selection process to match the frame to the decision and make the most of the data the team already has.

**Best for:**

- prioritization decisions between two or more design areas
- any situation where the team has data but is not sure how to organize it
- choosing a frame that makes stakeholder input and user data comparable

---

### Prompt 3 — Competitive Entry: Add market context to an initiative

"We are redesigning the mobile banking dashboard and want to understand how our experience compares to other banking apps. Users sometimes mention competitors in feedback. Using glare-focus-methods, help us apply the Competitors frame to our initiative — name the specific methods to use, identify what we are looking for, and explain how to connect the findings to our UX metric and initiative goal."

**Why this works:** User expectations are often shaped by what they use outside your product. The Competitors frame gives the team market context before comparing their own versions — preventing the mistake of optimizing against a baseline that is already behind what users expect.

**Best for:**

- initiatives where user feedback mentions competitor experiences
- any redesign where the team does not know what benchmark to compare against
- connecting competitive analysis to specific UX metrics and initiative goals

---

*Glare Framework · glare-focus-methods · Focus Area*
*Handoffs: glare-focus-initiatives · glare-focus-comparing · glare-focus-decisions · glare-measure*
## DEFINE

### User Needs

**AI Prompt**

This prompt helps you identify which user need your design is actually solving and where the experience is breaking down. Start with a product, flow, or feature you're trying to improve. It guides you to:

- Walk the seven Honeycomb layers in order from Findable to Desirable
- Stop at the first layer that's broken rather than patching the surface
- Separate real needs from stated wants using the validation rule
- Match each named need to a metric so it becomes testable

You'll end with a prioritized needs list tied to specific Honeycomb layers, each with a metric attached. Use this before starting any design work to make sure you're solving the right problem at the right layer.

**AI Skill**

The User Needs skill file teaches your AI the full Honeycomb model so it can help you diagnose and anchor any design challenge to a real, testable user need. Load it when you need to go deeper than the prompt allows, including layer-by-layer diagnostics, needfinding practice, and want-versus-need validation. It gives your AI:

- The full per-layer breakdown with definition, diagnostic questions, signal types, and failure modes
- The needfinding practice guide including observation techniques and contradiction capture
- The validation rule for separating stated needs from behavioral evidence
- The metric mapping list for each Honeycomb layer

Download the skill file below to use the full User Needs framework with your AI assistant.

### Audience

**AI Prompt**

This prompt helps you define exactly who your design signals should come from and how much weight each voice should carry. Start with a project and a rough sense of who you're designing for. It guides you to:

- Name the four voices involved and assign signal weight to each
- Choose 3–5 attributes that turn a vague group into a testable one
- Separate internal voices that guide from external voices that validate
- Avoid the most common traps including designing for everyone and over-segmenting

You'll end with an audience profile your team can reference throughout the project to keep signals grounded. Use this at the start of any project before running tests or collecting feedback.

**AI Skill**

The Audience skill file teaches your AI the complete four-voice model and audience-build sequence so it can guide you through any audience definition question with the full framework behind it. Load it when you need to go deeper on lifecycle segmentation, balancing participant signals against customer behavior, or building a testable audience from scratch. It gives your AI:

- The four-audience model with per-audience signal weights and roles
- The five attribute types with examples for each
- The eight customer lifecycle segments with per-segment metric guidance
- The six-step define flow for building an external audience

Download the skill file below to use the full Audience framework with your AI assistant.

### UX Metrics

**AI Prompt**

This prompt helps you define a small, balanced set of UX metrics you can use to make decisions. Start with a flow, concept, or question you want to evaluate. It guides you to:

- Choose one metric for how users feel, what they do, how it performs, and what they understand
- Pressure test each metric so it's clear, comparable, and tied to behavior
- Remove weak or vanity metrics that won't change a decision

You'll end with a set of four metrics you can use to evaluate a test, compare options, and explain results. Use this before a test or research cycle to lock in measurement that holds up in leadership conversations and connects directly to decisions.

**AI Skill**

The UX Metrics skill file teaches your AI the full metrics taxonomy so it can help you choose, balance, and defend the right metrics for any design challenge. Load it when you need to go deeper on leading versus lagging indicators, diagnostic mismatches, or the four critical comparisons between metric types. It gives your AI:

- The four metric types with definitions, examples, and when to use each
- The four lenses for viewing metrics by type, stage, time, and engagement
- The four-principle quality test for pressure testing any candidate metric
- The diagnostic mismatch patterns including what high satisfaction with low completion actually means

Download the skill file below to use the full UX Metrics framework with your AI assistant.

### Collecting

**AI Prompt**

This prompt helps you choose the right research approach and instruments for your specific user need and business goal. Start with a user need and the business outcome you're trying to move. It guides you to:

- Pair your user need with a business goal and write the collection hypothesis
- Choose the right stack and mode for your situation
- Match techniques and instruments to the metrics you're tracking
- Plan how findings will be shared at the project, team, and leadership level

You'll end with a collection plan that's ready to execute, with technique, instrument, and audience all named. Use this before any fieldwork begins to make sure what you collect connects back to a decision.

**AI Skill**

The Collecting skill file teaches your AI the full five-step collection process and instrument library so it can recommend the right approach for any research situation. Load it when you need to go deeper on instrument selection, balancing the four feedback types, or connecting findings to a leadership-ready sharing format. It gives your AI:

- The five-step process from intent through to connecting findings
- The Research Stacks catalog with named instruments including SUS, SEQ, CES, and CASTLE
- The full Techniques table with metric mappings for every method
- The four-axis tool framework across attitudinal, behavioral, performance, and specialized tools

Download the skill file below to use the full Collecting framework with your AI assistant.

## MEASURE

### Concepts

**AI Prompt**

This prompt helps you frame a design effort as a focused, testable concept before any solutioning begins. Start with a business problem, friction point, or initiative your team is working on. It guides you to:

- Pull the relevant user need and the business goal it connects to
- Narrow the effort to one actionable concept rather than several competing ones
- Write a concept statement that passes the 5-minute test
- Name the signal that will confirm the concept worked

You'll end with a single concept statement your team can use as the intent anchor before moving into ideation or testing. Use this when a team is jumping to solutions before they've agreed on what problem they're solving.

**AI Skill**

The Concepts skill file teaches your AI the full Defining Intent process so it can help you frame any design effort as a focused, measurable concept rather than an open brief. Load it when you need to go deeper on the 5-minute test, the concept catalog, or structuring intent across a larger initiative with multiple moving pieces. It gives your AI:

- The Defining Intent step with the 5-minute test in full
- The user need plus business goal plus signal template with a worked example
- The 200-plus concept catalog across Products, Websites, Mobile Apps, E-commerce, and Marketing
- The From-Intent-to-Signals loop summary

Download the skill file below to use the full Concepts framework with your AI assistant.

### Hunches

**AI Prompt**

This prompt helps you turn a team instinct into a falsifiable hypothesis you can actually test. Start with an observation, instinct, or early pattern you've noticed in your product or research. It guides you to:

- Write a belief statement that names the user need behind the instinct
- Promote it into a hypothesis using the "We believe that..." template
- Pressure test it for falsifiability so it can be confirmed or disproven
- Tie it to a UX metric so results become a signal rather than an opinion

You'll end with a well-formed hunch ready to move into the Questioning step. Use this when your team has an instinct about what's wrong but hasn't found a way to test it yet.

**AI Skill**

The Hunches skill file teaches your AI the full belief and hypothesis formation process so it can help you shape and strengthen any instinct before it moves into questioning. Load it when you need to go deeper on spark techniques, rewriting weak hunches, or building the connection between a friction point and a testable metric. It gives your AI:

- The five spark techniques for surfacing instincts from existing evidence
- The belief and hypothesis templates with weak-to-strong rewriting examples
- The falsifiability pressure test with the questions that expose an opinion masquerading as a hunch
- The metric-linking step that turns a hypothesis into a comparable signal

Download the skill file below to use the full Hunches framework with your AI assistant.

### Questioning

**AI Prompt**

This prompt helps you turn a research challenge into a set of questions that will actually produce usable signals. Start with a hunch, design decision, or area of uncertainty your team needs to investigate. It guides you to:

- Sort your questions into the four types: People, Process, Product, and Problem
- Choose the right mode for where you are: Exploratory, Evaluative, or Comparative
- Run a bias check so no question leads, assumes, or confuses
- Match each question to a UX metric and a collection technique

You'll end with a prioritized, bias-checked question set ready to use in a test or research session. Use this before any research session or usability test to make sure your questions will produce signals rather than confirmation.

**AI Skill**

The Questioning skill file teaches your AI the full eight-step process and question taxonomy so it can help you write and refine research questions for any design challenge. Load it when you need to go deeper on the bias check criteria, testable question standards, or building a reusable Question Library across projects. It gives your AI:

- The four question types with definitions and examples for each
- The three measurement modes with guidance on when to use each
- The full bias check criteria across leading, assumption, emotional, and jargon dimensions
- The metric-and-technique mapping tables for turning questions into collection plans

Download the skill file below to use the full Questioning framework with your AI assistant.

### Findings

**AI Prompt**

This prompt helps you translate raw research data into a signal your team can act on. Start with a data set, test result, or research output you need to make sense of. It guides you to:

- Describe the behavior your data shows and name the source
- Tie it to the specific user need it reveals or threatens
- Connect it to a business metric it affects
- Write a recommendation with both a user metric and a business metric attached

You'll end with a completed signal that gives your team a clear direction and a recommendation ready to share. Use this after any test, research session, or analytics review when the team has data but not yet a decision.

**AI Skill**

The Findings skill file teaches your AI the full data-to-signal translation chain so it can help you close the loop between any data set and a clear, actionable recommendation. Load it when you need to go deeper on connecting findings to business results, cross-checking against the original hunch, or structuring findings so they travel beyond the design team. It gives your AI:

- The five-step translation chain from raw data through to a shareable signal
- The user value mapping guide for tying findings to specific Honeycomb needs
- The business results connection framework with a worked checkout example
- The signal definition test for knowing when a finding becomes a signal versus noise

Download the skill file below to use the full Findings framework with your AI assistant.

## FOCUS

### Initiatives

**AI Prompt**

This prompt helps you define a clear initiative so your team has a shared container for signals, methods, comparisons, and decisions. Start with a set of requests, a redesign, or a design effort that feels scattered or hard to scope. It guides you to:

- Map the request back to the larger business goal behind it
- Find the user need the work is actually trying to support
- Group the concepts that could create value under one shared outcome
- Name the UX metric and decision that will show whether the initiative moved

You'll end with a scoped initiative that connects a user need, a business goal, and a measurable part of the experience. Use this when everything feels important at once, or when scattered requests need a clearer frame before any testing or collection begins.

**AI Skill**

The Initiatives skill file teaches your AI the full five-step method for choosing and framing an initiative so it can help you move from scattered requests to focused work with a shared outcome. Load it when you need to go deeper on connecting requests to business goals, grouping concepts under an initiative, or using the 10 common initiative types as starting points. It gives your AI:

- The five-step method for choosing an initiative from competing requests
- The 10 common initiative types with their friction and signal patterns
- The six questions a strong initiative must answer before methods begin
- The guidance for knowing when scope is tight enough to learn quickly

Download the skill file below to use the full Initiatives framework with your AI assistant.

### Methods

**AI Prompt**

This prompt helps you choose the right frame for understanding your data rather than defaulting to the nearest test. Start with a named initiative and the decision your team needs to make next. It guides you to:

- Name the objective for the work so the method stays tied to an outcome
- Look at the data already available before collecting more
- Choose the frame that best organizes your evidence around the decision
- Define what gets compared so the method produces a finding rather than a report

You'll end with a named method frame and a clear comparison ready to bring into the next round of work. Use this after an initiative is scoped and before collection begins, when the team needs to agree on how to look at the data.

**AI Skill**

The Methods skill file teaches your AI all 13 method frames with named methods inside each one so it can help you choose the frame that fits the decision rather than the one most familiar to the team. Load it when you need to go deeper on matching a frame to an initiative objective, choosing between journey mapping, competitor analysis, or segment comparison, or knowing when to route back to initiatives before forcing a method. It gives your AI:

- The 13 method frames with named methods and when to use each
- The five-step selection process from naming the objective through defining what gets compared
- The guidance for knowing when data is ready to frame versus when the initiative needs more clarity first
- The full list of named methods organized by frame including JTBD, Kano Model, HEART, RICE, and more

Download the skill file below to use the full Methods framework with your AI assistant.

### Comparing

**AI Prompt**

This prompt helps you place your signals side by side so they produce a finding rather than a data display. Start with a set of results, scores, or signals from a completed round of research or testing. It guides you to:

- Choose the comparison point that best fits the decision in front of you
- Confirm a shared metric so the comparison is fair
- Look for the strongest signal and the tradeoff, not just the highest score
- Turn the comparison into a finding with a named next step

You'll end with a clear comparison backed by a shared metric and a finding that tells the team what to do next. Use this after a research round is complete and before a decision is made about whether to advance, refine, or change direction.

**AI Skill**

The Comparing skill file teaches your AI the full 12 comparison points and five-step comparing process so it can help you turn isolated signals into a clear direction. Load it when you need to go deeper on choosing between comparison points, ensuring the metric is shared before signals are placed side by side, or interpreting what a tradeoff means for the decision. It gives your AI:

- The 12 comparison points with guidance on when each one fits the decision
- The five-step process from naming the decision through turning the comparison into a finding
- The shared metric rule and guidance for naming tradeoffs explicitly
- The guidance for knowing when to route back to methods if the comparison won't produce a usable signal

Download the skill file below to use the full Comparing framework with your AI assistant.

### Decisions

**AI Prompt**

This prompt helps you turn a comparison finding into a named decision your team can act on. Start with a finding from a completed comparison, review, or research readout. It guides you to:

- Ground the decision in the strongest signal from the comparison
- Name the tradeoff the team is accepting
- Choose exactly one of the five decision types: Implement, Refine Design, Test Iteration, Revisit Later, or Do Not Pursue
- Lock the next move so someone owns what happens after the meeting

You'll end with a clear decision backed by signal, with a tradeoff named and a next step recorded. Use this in any design review, sprint, or stakeholder readout where the team has evidence but keeps circling the same questions without committing.

**AI Skill**

The Decisions skill file teaches your AI the full five-move taxonomy and decision-making process so it can help you convert any comparison finding into a named, recorded decision that downstream teams can act on. Load it when you need to go deeper on the criteria for each decision type, writing tradeoffs explicitly, or distinguishing strategic decisions about direction from tactical decisions about the next move on a specific concept. It gives your AI:

- The five tactical decision types with the criteria for choosing between them
- The five-step process from naming what is being decided through locking the next move
- The guidance for grounding every decision in a signal rather than a preference
- The decision record structure including the initiative, signal, tradeoff, type, and next step

Download the skill file below to use the full Decisions framework with your AI assistant.

## LEAD

### Business Goals

**AI Prompt**

This prompt helps you anchor your design work to a business outcome that will hold up in a leadership conversation. Start with a design initiative and a rough sense of the business problem it's connected to. It guides you to:

- Name which of the nine business pressures your work is primarily serving
- Choose a specific measurable goal from the twenty available
- Run your proposed metrics through the Quick Test to confirm they count as signals
- Remove any vanity metrics that won't survive a finance or executive review

You'll end with a goal anchor and a metric set that connects your design work directly to the outcomes leadership is already tracking. Use this before any readout, OKR review, or planning session where you need to show the business case for your work.

**AI Skill**

The Business Goals skill file teaches your AI the full three-layer KPI structure and 20-goal map so it can help you anchor any design initiative to the business outcome it is actually serving. Load it when you need to go deeper on the Quick Test, distinguishing Design KPIs from Product KPIs from Business KPIs, or calling out the three pitfalls before they undermine a leadership conversation. It gives your AI:

- The three-layer KPI structure with definitions and examples for each layer
- The complete map of 20 measurable goals across nine business pressures
- The Quick Test with the three questions every metric must pass
- The three pitfalls with examples of how each one shows up in practice

Download the skill file below to use the full Business Goals framework with your AI assistant.

### Workflows

**AI Prompt**

This prompt helps you translate a design signal into the language of a specific business function so it lands as their win rather than a design update. Start with a design signal and the function you most need to influence right now. It guides you to:

- Reframe the signal using that function's top metrics and vocabulary
- Name the lift opportunity that makes it relevant to their work
- Build the Design KPI to Business KPI signal chain for that function
- Run the Quick Test to confirm the signal will travel

You'll end with a function-specific translation of your design signal ready to use in a readout, planning session, or cross-functional update. Use this before any conversation with Sales, Marketing, Product, Engineering, Strategy, Operations, Finance, or Legal where design needs to show up in their terms.

**AI Skill**

The Workflows skill file teaches your AI all eight cross-functional templates in full so it can help you translate any design signal into the language of any business function. Load it when you need to go deeper on a specific function's metrics, questions, and lift opportunity, or when you're preparing to influence multiple functions at once. It gives your AI:

- All eight function templates with top metrics, questions, lift opportunity, and jargon glossary
- The Design KPI to Business KPI signal chain for each function
- The Workflows Quick Test and the 30-minute Quick Exercise for starting small
- Worked examples for each function showing how to reframe design language into business terms

Download the skill file below to use the full Workflows framework with your AI assistant.

### Mapping

**AI Prompt**

This prompt helps you draw the explicit chain that connects a user need to a business goal so the link is visible and defensible. Start with a design initiative that has produced UX signals and a business outcome you need to connect them to. It guides you to:

- Fill in all five rungs of the Chain of Proof by name
- Work upward from user need or downward from business goal depending on where your gap is
- Use the UX metrics rung as the connective tissue that makes the chain credible
- Check for any unnamed rung before sharing the chain with leadership

You'll end with a completed Chain of Proof that makes your UX metrics the link between what users do and what leadership tracks. Use this before any OKR review, roadmap session, or executive conversation where you need to show that design work connects to business outcomes.

**AI Skill**

The Mapping skill file teaches your AI the full Chain of Proof process so it can help you build and maintain a defensible connection between user needs and business goals for any initiative. Load it when you need to go deeper on the five steps to map, the Make-It-Practical routine for an in-flight project, or embedding the chain into dashboards and planning cycles so it stays visible over time. It gives your AI:

- The five-rung Chain of Proof with the failure point rule for any unnamed rung
- The v1.1 Five Steps to Map with collaborator guidance for each direction
- The Make-It-Practical routine and Quick Checklist for in-flight projects
- The onboarding-to-revenue worked example as a complete chain template

Download the skill file below to use the full Mapping framework with your AI assistant.

### Results

**AI Prompt**

This prompt helps you close the loop between a design initiative and the business outcome it was meant to drive. Start with one initiative that has gone through findings and decisions but hasn't yet been connected to a measurable outcome. It guides you to:

- Map the initiative through all four layers: Initiatives, Findings, Decisions, and Outcomes
- Identify which layer is missing or broken using the 15-symptom diagnostic
- Apply the right calibration from the five-dimension maturity model
- Frame the outcome as direction-of-travel proof rather than a single precise number

You'll end with structured proof of impact that's ready to share in a leadership update, not assembled after one. Use this after any significant design effort when you need to show what the work actually changed and connect it to a business outcome leadership can verify.

**AI Skill**

The Results skill file teaches your AI the full four-layer Project Work loop and maturity model so it can help you close the gap between any design effort and the business outcome it was meant to drive. Load it when you need to go deeper on the 15-symptom diagnostic, the five Dimensions of Design Maturity, or using the University Website case study as a reference for what a complete Results loop looks like in practice. It gives your AI:

- The four-layer Initiatives to Outcomes loop with the break-identification process
- The five Dimensions of Design Maturity with scoring rubric and micro-actions for each
- The complete 15-symptom diagnostic with verbatim calibration tips
- The Results Alignment Checklist and the University Website case study

Download the skill file below to use the full Results framework with your AI assistant.
# Design Review — Reference

Compressed source: Glare | Design Review v1.3 (master), with anchors into Scoring Model, Rubric, Run a Transcript, and Techniques.

## What a design review is

A design review is where ideas are discussed, feedback is shared, and decisions are made. Most teams already run them. The harder part is getting clear outcomes from the conversation. A review can feel useful in the moment and still leave the team with scattered feedback, unclear direction, and slow progress.

Glare treats the design review as a **decision system** and applies the SIGNAL framework to it. The goal is to help teams turn feedback into decisions that move the work forward.

## What SIGNAL solves

Design reviews break down in predictable ways:

- the real problem stays unclear
- success is not defined
- feedback turns into scattered opinions
- the conversation circles without direction
- ownership is vague after the call
- decisions are implied instead of confirmed

A stronger review helps the team focus on the right problem, align on what matters, use signals to guide the discussion, move toward a clear decision, and carry momentum into the next step.

## The SIGNAL flow

**Surface → Identify → Ground → Navigate → Align → Lock**

In parallel: **Tension → Clarity → Evidence → Commitment → Responsibility → Momentum**

Each step reduces uncertainty. By the end of the flow, the group moves from an open problem to a committed next step.

| Letter | Step | Focus |
|---|---|---|
| S | Surface Challenges | Clarify the underlying problem |
| I | Identify Outcomes | Define what "better" means |
| G | Ground in Signals | Bring evidence into the conversation |
| N | Navigate Decisions | Shape a clear direction |
| A | Align Ownership | Make responsibility visible |
| L | Lock Momentum | Turn the conversation into action |

When a meeting stalls or feels incomplete, the cause is almost always in an earlier step that did not fully develop.

## During the review

SIGNAL helps teams:

- surface the real problem
- identify what success should look like
- ground feedback in signals
- navigate options with more focus
- align on ownership
- lock the next step before the call ends

## After the review

The **SIGNAL Call Rubric** evaluates how well a review followed the flow. The easiest input is a **call transcript** — it shows what actually happened, not what people remember.

Using the transcript, teams can see where the problem became clear, where success was defined or left vague, where signals shaped the direction, where the conversation circled or moved forward, where ownership was clarified, and where momentum was locked into action.

## Scoring

Each dimension is scored 1–5 (see `glare-review-scoring-model` for the full rubric).

- **5 Leading** — the step clearly shaped how the group thought and decided
- **4 Strong** — clear and influential, minor gaps
- **3 Functional** — present and working but not shaping the conversation
- **2 Emerging** — shows up briefly but does not influence direction
- **1 Reactive** — mostly absent

Total /30 maps to bands: 25–30 Leading, 19–24 Advancing, 13–18 Functional, 6–12 Reactive.

**Patterns matter more than the total.** Look for the step where clarity first dropped — that's usually the best place to improve.
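As a worked example of the band arithmetic, here is a minimal sketch that totals six 1–5 dimension scores and maps the total to a band. The function name and dictionary shape are illustrative assumptions; the bands are the ones listed above, and `glare-review-scoring-model` remains the authoritative rubric.

```python
# Illustrative scoring sketch; names are hypothetical, bands come from the rubric above.
BANDS = [(25, "Leading"), (19, "Advancing"), (13, "Functional"), (6, "Reactive")]

def score_review(scores: dict[str, int]) -> tuple[int, str]:
    """Total six 1-5 dimension scores and map the total (6-30) to a band."""
    assert set(scores) == {"Surface", "Identify", "Ground", "Navigate", "Align", "Lock"}
    assert all(1 <= s <= 5 for s in scores.values())
    total = sum(scores.values())
    band = next(name for floor, name in BANDS if total >= floor)
    return total, band

total, band = score_review(
    {"Surface": 4, "Identify": 3, "Ground": 5, "Navigate": 3, "Align": 2, "Lock": 3}
)
print(total, band)  # 20 Advancing -- and Align (2) is where clarity first dropped
```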
## Run a transcript

- **Input:** a transcript from a real design review
- **Process:** evaluate across Surface, Identify, Ground, Navigate, Align, Lock
- **Output:** a clear view of where the review created clarity and where it lost strength

You don't need to adopt everything at once. Start by reviewing one recent call, finding where the conversation lost clarity, and applying one or two techniques in the next review. See `glare-review-run-transcript` for the full workflow (Skill / Prompt / Ray app paths).

## Techniques

Each SIGNAL letter has a small set of in-the-moment techniques you can use to bring the conversation back when it's losing strength. They are small moves — usually one or two well-timed ones can shift the entire review. See `glare-review-techniques` for the overview, then the individual dimension skills for full technique sets.

## Why this matters

Design reviews improve when teams can see how decisions are forming. SIGNAL makes that visible. It shows where the conversation gained clarity, where it lost strength, and what to do next. That makes design review repeatable. Teams learn how to turn feedback into decisions that move the work forward.

## Section map

| Skill | Use when |
|---|---|
| `glare-design-review` (this) | Overview, definitions, routing |
| `glare-review-surface` | Surfacing the real problem |
| `glare-review-identify` | Defining what better means |
| `glare-review-ground` | Bringing in evidence/signals |
| `glare-review-navigate` | Forming a recommendation |
| `glare-review-align` | Clarifying who owns the next move |
| `glare-review-lock` | Closing with action and timing |
| `glare-review-techniques` | Picking a technique by stuck-moment |
| `glare-review-scoring-model` | Scoring 1–5 per dimension |
| `glare-review-rubric` | Full evaluation output (scores + observations + behavioral metrics) |
| `glare-review-coach` | Pre-call prep + post-call coaching |
| `glare-review-run-transcript` | Practical end-to-end workflow |
# Business Goals AI Skill

Lead Area · Business Goals Move · Decision Map

---

## 1. What the Skill Does

The Business Goals skill helps teams connect design work to the outcomes leadership already tracks. It is the first move inside the Lead area of Glare's Decision Map. This is where a UX metric stops being a number on a dashboard and becomes a signal that leadership can act on.

Most teams measure the wrong things or measure the right things in the wrong layer. They track task completion without connecting it to product adoption. They track product adoption without connecting it to revenue. The metric sits in a report, looks fine, and gets ignored. The Business Goals skill fixes that by stacking three layers of KPIs and routing every signal to a named business pressure and a measurable goal.

Every metric belongs to one of three layers — and every layer needs to connect to the one above it.

| Layer | What it measures | Examples |
|---|---|---|
| Design KPIs | Whether users succeed in the moment | Task completion, error rate, time on task |
| Product KPIs | Whether users adopt and return | Feature usage, retention, trial-to-paid conversion |
| Business KPIs | Whether the company benefits | Revenue growth, churn reduction, lifetime value |

A metric that lives only in the Design layer is a design metric. It becomes a signal when it connects upward to a product outcome and a business outcome. Without that chain, leadership has no reason to care.

**The Vanity Metric Rule**

Teams often track numbers that look good but do not connect to any decision. Page views, click counts, and app store ratings are common examples. They go up, the team feels good, and nothing changes about how the product is built or funded.

The rule is simple: run every candidate metric through three questions before committing to it. Does it prove the design works for users? Does it show adoption or retention at the product level? Does it connect to growth, churn, or revenue at the business level? If any answer is no, the metric is missing a rung. Name the missing rung and fix it before the metric goes into a report.
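The three-question test reads naturally as a check over a candidate metric chain. Here is a minimal sketch, with hypothetical class and field names rather than a Glare API; any unanswered question surfaces as a missing rung.

```python
from dataclasses import dataclass

@dataclass
class MetricChain:
    # Hypothetical record for the three KPI layers; not a Glare API.
    design_kpi: str | None    # proves the design works for users
    product_kpi: str | None   # shows adoption or retention at the product level
    business_kpi: str | None  # connects to growth, churn, or revenue

def missing_rungs(chain: MetricChain) -> list[str]:
    """Run the three-question test; every unanswered question is a missing rung."""
    rungs = {
        "Design KPI": chain.design_kpi,
        "Product KPI": chain.product_kpi,
        "Business KPI": chain.business_kpi,
    }
    return [name for name, value in rungs.items() if not value]

chain = MetricChain(
    design_kpi="First-click success on balance and transaction history",
    product_kpi=None,  # the adoption layer has not been named yet
    business_kpi="90-day retention rate",
)
print(missing_rungs(chain))  # ['Product KPI'] -- fix before it goes into a report
```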
---

## 2. Business Benefit

When design metrics connect to business goals, design earns a seat in decisions that used to happen without it. Leadership starts treating UX signals as inputs to strategy, not outputs from a separate team.

This helps teams:

- stop presenting metrics that generate polite nodding but no action
- give product and finance a chain they can trace from a design change to a business result
- defend design investment in budget conversations with numbers leadership already uses
- move faster because priorities are connected to outcomes, not opinions
- build trust over time by being right about what the numbers will do

Design becomes a lever, not a report.

---

## 3. Skill Output

When used correctly, the skill produces a clear goal statement for a design effort — with each metric layer named and connected. The statement shows:

- the business pressure the work is serving
- the measurable goal inside that pressure
- the Design KPI, Product KPI, and Business KPI that form the chain
- any metrics that fail the three-question test

The example below shows how this works for a mobile banking dashboard.

| Field | Example Output (Mobile Banking Dashboard) |
|---|---|
| Business Pressure | Retention — keep customers and reduce churn |
| Measurable Goal | Build brand loyalty — returning users feel in control of their finances and trust the product |
| Design KPI | First-click success rate on balance and transaction history |
| Product KPI | Session return rate and monthly active user frequency |
| Business KPI | 90-day retention rate and reduction in account closure rate |
| Metric Chain | First-click success improves → users return more frequently → 90-day retention rises |
| Vanity Metric to Avoid | App store rating — it reflects general sentiment but does not connect to session return rate or retention and cannot guide a specific design decision |
| Failure Mode to Watch | Stopping at the Design KPI layer. A 79% first-click success rate is a useful design signal but it is not a business result until it connects to return frequency and retention. |
| Next Step Handoff | → glare-lead-mapping to build the full five-rung Chain of Proof from user need to business goal |

The output connects directly to the other Lead moves:

- Mapping builds the full ladder from user need to business goal using these KPIs
- Workflows translates the business goal into the language of each function
- Results tracks whether the goal is being moved over time

---

## 4. Prompt Strategies

The prompts below show different ways to use this skill. Each example uses a mobile banking dashboard update.

---

### Prompt 1 — Diagnostic Entry: Fix a disconnected metric

"Our team tracks first-click success, session duration, and app store rating for our mobile banking dashboard. Our VP of Product keeps asking how these connect to retention. Using the glare-lead-business-goals skill, run each metric through the three-question test, tell us which ones pass and which ones are vanity metrics, and recommend the right three-layer chain for this product."

**Why this works:** First-click success is a Design KPI, session duration is a partial Product KPI, and app store rating is a vanity metric. This prompt uses the three-layer framework and the three-question test to identify the gap — there is no Business KPI in the current set — and replace the vanity metric with one that completes the chain.

**Best for:**

- auditing an existing metric plan before a leadership review
- any situation where the team tracks numbers but cannot explain what they prove
- preparing for a budget conversation that requires connecting design work to business outcomes

---

### Prompt 2 — Pressure Entry: Choose the right business pressure

"We are updating our mobile banking dashboard and we need to decide which business pressure to connect our work to — Retention, User Experience, or Efficiency. Each one feels relevant. Using glare-lead-business-goals, help us pick the single primary pressure that best fits our initiative, name the measurable goal inside it, and identify the Design KPI, Product KPI, and Business KPI that would form the chain."

**Why this works:** Work that serves all pressures usually proves none of them. This prompt forces a single primary pressure and routes it to a named goal — giving the team a specific chain to build and a specific number to move, instead of a list of metrics that look relevant but point in different directions.
**Best for:**

- sprint planning where the team is not sure which outcome to optimize for
- connecting a dashboard redesign to a specific part of the business strategy
- preparing a goal statement that can anchor an entire research and design cycle

---

### Prompt 3 — Readout Entry: Prepare a metric story for leadership

"We ran a usability study on our mobile banking dashboard and first-click success on transaction history improved from 61% to 79%. We need to present this result to our CFO and VP of Product next week. Using glare-lead-business-goals, help us connect this Design KPI to a Product KPI and a Business KPI, name the business pressure it serves, and frame it as a signal — not just a number."

**Why this works:** A 79% first-click success rate means nothing to a CFO without a chain. This prompt uses the three-layer framework to translate a design result into the terms leadership tracks — retention, churn, or revenue — so the readout earns attention instead of polite acknowledgement.

**Best for:**

- preparing a leadership presentation after a research or testing sprint
- any situation where design results need to travel beyond the design team
- connecting a single strong metric to the business outcome it supports

---

*Glare Framework · glare-lead-business-goals · Lead Area*
*Handoffs: glare-lead-mapping · glare-lead-workflows · glare-lead-results · glare-focus*
# Mapping AI Skill

Lead Area · Mapping Move · Decision Map

---

## 1. What the Skill Does

The Mapping skill helps teams draw a visible line from what users are trying to do all the way to the outcome the business is trying to reach. It is the second move inside the Lead area of Glare's Decision Map. This is where a design metric stops being a design team concern and becomes evidence that every function in the business can trace and trust.

Most teams have metrics at each layer — a usability score, a retention number, a revenue target — but they live in separate documents owned by separate teams. Nobody connects them. Leadership sees lagging business numbers with no explanation. Design sees strong usability scores that nobody acts on. The Mapping skill fixes that by building one chain that links all five layers together, with UX metrics as the connective tissue in the middle.

The chain has five rungs. Every rung must be named.

| Rung | What it names | Example |
|---|---|---|
| 1. User Need | What users are trying to get done | Finish onboarding quickly and without confusion |
| 2. Design KPI | Whether users succeed in the moment | Onboarding completion rate: 55% → 85% |
| 3. Product KPI | Whether users adopt and return | Trial-to-paid conversion doubled |
| 4. Business KPI | Whether the company benefits | Pipeline revenue grew by 25% |
| 5. Business Goal | The pressure leadership already tracks | Revenue growth |

An unnamed rung is a broken link. When a Product KPI drops, the Design KPI explains why. When a Business KPI rises, the Design and Product KPIs make it credible. Without the chain, leaders only see results they cannot explain and cannot reproduce.

**The Missing Rung Rule**

Teams often present a Design KPI and a Business KPI in the same slide as if the connection between them is obvious. It is not. A 79% task success rate does not self-evidently produce revenue growth. The product layer in the middle — adoption, return rate, trial conversion — is what makes the connection credible and repeatable.

The rule is simple: before sharing any chain with leadership, fill in every rung by name. If any rung is blank, stop and find it before presenting. A chain with a gap in it is not a chain — it is two separate claims that happen to be on the same slide.
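The Missing Rung Rule is mechanical enough to sketch: walk the five rungs in order and refuse to present while any rung is blank. A minimal sketch with hypothetical function and variable names; the rung labels come from the table above.

```python
# Hypothetical Chain of Proof sketch; the five rung names come from the table above.
RUNGS = ["User Need", "Design KPI", "Product KPI", "Business KPI", "Business Goal"]

def broken_links(chain: dict[str, str | None]) -> list[str]:
    """Apply the Missing Rung Rule: any blank rung breaks the chain."""
    return [rung for rung in RUNGS if not chain.get(rung)]

chain = {
    "User Need": "Finish onboarding quickly and without confusion",
    "Design KPI": "Onboarding completion rate: 55% -> 85%",
    "Product KPI": None,  # adoption layer not yet pulled from analytics
    "Business KPI": "Pipeline revenue grew by 25%",
    "Business Goal": "Revenue growth",
}

if gaps := broken_links(chain):
    # Two claims on one slide are not a chain; name the rung first.
    print(f"Do not present yet. Unnamed rungs: {gaps}")
```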
---

## 2. Business Benefit

A complete Chain of Proof gives every function a shared way to see design's contribution. It makes design work explainable, defensible, and repeatable — not just in the sprint it was created, but in every review, roadmap session, and budget conversation that follows.

This helps teams:

- show leadership why a design metric matters without asking them to take it on faith
- give product managers a reason to prioritize design work in the roadmap
- make design investment easier to defend when budgets are reviewed
- keep the chain visible and updated so it grows more credible over time
- give every function — sales, finance, engineering — a rung they can point to

One complete chain is worth more than ten disconnected metrics.

---

## 3. Skill Output

When used correctly, the skill produces a complete Chain of Proof for a design effort. The chain shows:

- all five rungs named with specific metrics at each layer
- the direction of the connection from user need to business goal
- the UX metric that makes the product and business layers explainable
- a note on which rung is weakest or missing

The example below shows how this works for a mobile banking dashboard.

| Rung | Example Output (Mobile Banking Dashboard) |
|---|---|
| User Need | Users need to locate their balance and recent transactions within one tap so they feel in control of their finances |
| Design KPI | First-click success on balance and transaction history: improved from 61% to 79% |
| Product KPI | Session return rate increased — users who find information quickly return within 48 hours at a higher rate |
| Business KPI | 90-day retention rate improved; account closure rate dropped |
| Business Goal | Retention — keep customers and reduce churn |
| Connective Tissue | The Design KPI explains why the Product KPI moved: users who succeed on first click return more often. The Product KPI explains why the Business KPI moved: higher return frequency reduces account closure. |
| Weakest Rung | Product KPI — return rate data has not yet been pulled from analytics. This rung needs to be confirmed before the chain is shared with leadership. |
| Next Step Handoff | → glare-lead-workflows to translate this chain into the language of each function that needs to act on it |

The output connects directly to the other Lead moves:

- Business Goals names the pressure and goal that anchor the top rung
- Workflows uses the chain to translate each rung into a specific function's vocabulary
- Results tracks whether the chain is moving over time and where it is breaking

---

## 4. Prompt Strategies

The prompts below show different ways to use this skill. Each example uses a mobile banking dashboard update.

---

### Prompt 1 — Build Entry: Construct a chain from a design result

"Our mobile banking dashboard usability test showed first-click success on transaction history improved from 61% to 79%. We believe this connects to retention but we have not built the full chain yet. Using the glare-lead-mapping skill, help us fill in all five rungs of the Chain of Proof — from the user need at the bottom to the business goal at the top — and identify which rung we still need data for."

**Why this works:** A strong Design KPI with no chain is a design team result, not a business result. This prompt uses the five-rung structure to build the full connection and — critically — name which rung is currently missing so the team knows exactly what to find before the chain can be presented credibly.

**Best for:**

- teams that have a strong metric but have not connected it to a business outcome
- preparing a chain before a leadership review or roadmap discussion
- identifying which data gap needs to be filled before the chain can travel

---

### Prompt 2 — Audit Entry: Find the broken rung

"We have been presenting our mobile banking dashboard results to leadership for two quarters. We share usability scores and a retention number but leadership keeps asking how they connect. Using glare-lead-mapping, audit our current metric set — Design KPI: first-click success 79%, Business KPI: 90-day retention up 4% — and tell us which rung is missing, why the chain does not feel credible to leadership, and what we need to fill the gap."

**Why this works:** A Design KPI and a Business KPI with no Product KPI in between is the most common chain failure. This prompt uses the missing rung rule to name the gap precisely — the product adoption layer — and explain why the chain does not feel connected from a leadership perspective.
**Best for:**

- any situation where leadership acknowledges design results but does not act on them
- diagnosing why a metric presentation keeps generating questions instead of decisions
- building a more credible chain before the next review cycle

---

### Prompt 3 — Direction Entry: Build a chain downward from a business goal

"Our company's top priority this quarter is improving 90-day retention for mobile banking users. Our leadership wants design to contribute to this goal. Using glare-lead-mapping, help us build the Chain of Proof downward from this business goal — naming the Business KPI, Product KPI, Design KPI, and user need that would form the chain — and tell us which design work to prioritize based on where the chain connects most directly."

**Why this works:** Teams usually build chains upward from what they already measured. Building downward from a leadership goal is more powerful — it starts with what the business needs to move and traces back to the design work that can move it, so every research and design decision is anchored to the outcome from the start.

**Best for:**

- any sprint that needs to be explicitly connected to a company-level goal
- preparing a design proposal that needs leadership backing before work begins
- connecting a new initiative to an existing business priority without starting from scratch

---

*Glare Framework · glare-lead-mapping · Lead Area*
*Handoffs: glare-lead-business-goals · glare-lead-workflows · glare-lead-results · glare-focus*
# Results AI Skill

Lead Area · Results Move · Decision Map

---

## 1. What the Skill Does

The Results skill helps teams close the loop between the work they do and the outcomes leadership can see. It is the final move inside the Lead area of Glare's Decision Map. This is where individual findings, decisions, and metrics become continuous proof that design is moving the business forward.

Most teams produce results — usability scores, retention lifts, session improvements — but the results stay inside the design team. They do not connect to initiatives. They do not roll up to business goals. They get shared once, noted, and forgotten. The next sprint starts from zero. The Results skill fixes that by keeping the four-layer project loop connected and visible: Initiatives lead to Findings, Findings lead to Decisions, Decisions lead to Outcomes.

The loop has four layers. Every layer needs to be present.

| Layer | What it does |
|---|---|
| Initiatives | Frames a business pressure into a focused area of design work |
| Findings | Turns testing and measurement into evidence tied to user needs |
| Decisions | Converts evidence into a named next move |
| Outcomes | Connects the decision to a user, product, and business result |

When one layer is missing, the loop breaks. No Findings means the team is making decisions without evidence. No Decisions means Findings sit in decks and never reach action. No Outcomes means nobody can see whether the work moved anything. The Results skill finds the missing layer and fixes it.

**The Clarity Rule**

Teams often stall on results because they are trying to prove exact causation — they want to show that this specific design change caused this specific revenue number. That standard is almost impossible to meet, and chasing it keeps teams from sharing results that are genuinely useful.

The rule is simple: perfect attribution is rare and that is fine. What matters is direction and progress. A finding that shows first-click success improved from 61% to 79%, session return rate increased, and 90-day retention is trending up does not prove causation — but it shows the chain moving in the right direction. Clarity beats precision every time. Share the direction of travel and let the chain speak for itself.
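The four-layer check is simple enough to sketch: name each layer, and report the break a missing layer causes. A minimal sketch with hypothetical names; the per-layer consequences restate the paragraph above.

```python
# Hypothetical loop-diagnosis sketch; layer names come from the table above.
CONSEQUENCES = {
    "Initiatives": "work is not framed around a business pressure",
    "Findings": "decisions are being made without evidence",
    "Decisions": "findings sit in decks and never reach action",
    "Outcomes": "nobody can see whether the work moved anything",
}

def diagnose_loop(loop: dict[str, str | None]) -> list[str]:
    """Name each missing layer and the break it causes."""
    return [
        f"{layer} missing: {consequence}"
        for layer, consequence in CONSEQUENCES.items()
        if not loop.get(layer)
    ]

loop = {
    "Initiatives": "Optimize the home screen for habitual users",
    "Findings": "First-click success 61% -> 79%",
    "Decisions": "Implement: redesigned home screen moves to production",
    "Outcomes": None,  # retention trending up but not yet quantified
}
for issue in diagnose_loop(loop):
    print(issue)  # Outcomes missing: nobody can see whether the work moved anything
```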
---

## 2. Business Benefit

When the loop stays connected, design builds credibility with every sprint instead of starting over. Leadership stops asking whether design contributes — they can see the chain from initiative to outcome and follow it themselves.

This helps teams:

- stop starting each project from zero by building on connected evidence over time
- give leadership a way to see design's contribution without asking for a special presentation
- catch workflow breaks early — before they turn into quarters of invisible work
- translate wins into repeatable frameworks the whole team can use
- build the kind of trust with stakeholders that earns more design investment over time

Results are how design proves it belongs in the room.

---

## 3. Skill Output

When used correctly, the skill produces a connected project loop — or a clear diagnosis of where the loop is breaking. The output shows:

- how each layer of the loop connects for a specific initiative
- which layer is missing or disconnected
- which of the five maturity dimensions the break lives under
- the specific calibration to apply

The example below shows how this works for a mobile banking dashboard.

| Layer | Example Output (Mobile Banking Dashboard) |
|---|---|
| Initiative | Optimize the home screen for habitual users to reduce session abandonment and improve 90-day retention |
| Findings | First-click success on transaction history: 61% → 79%. Post-task satisfaction: 3.6 → 4.1. Habitual users return faster when key information is on the home screen. |
| Decision | Implement — redesigned home screen moves to production. Power user shortcuts flagged for a follow-on initiative. |
| Outcomes | Design KPI: first-click success +18 points. Product KPI: session return rate increasing. Business KPI: 90-day retention trending up. Business Goal: Retention. |
| Where the Loop is Breaking | Outcomes layer is incomplete — session return rate and 90-day retention are trending but not yet quantified. The chain cannot be shared with leadership until these numbers are confirmed. |
| Maturity Dimension | Building Proof — the team has findings and a decision but the outcome is not yet tied to a measurable business result. Calibration: define measurable success upfront for the next initiative so the outcome layer is ready before the decision is made. |
| Failure Mode to Watch | Sharing the Design KPI improvement without the full loop. A 79% first-click score is a strong design result but it is not a business outcome. The loop is only complete when all four layers have named, connected results. |
| Next Step Handoff | → glare-lead-business-goals or glare-lead-mapping to confirm and communicate the full outcome chain |

---

## 4. Prompt Strategies

The prompts below show different ways to use this skill. Each example uses a mobile banking dashboard update.

---

### Prompt 1 — Diagnostic Entry: Find where the loop is breaking

"Our team has been doing good design work on the mobile banking dashboard for two quarters but leadership keeps saying they cannot see the impact. We have usability findings and we make design decisions every sprint but our results do not seem to connect to the business. Using the glare-lead-results skill, help us map one recent initiative through the four-layer loop — Initiatives, Findings, Decisions, Outcomes — and identify which layer is missing or disconnected."

**Why this works:** "Leadership cannot see the impact" is almost always a loop break, not a results problem. This prompt uses the four-layer diagnostic to find the specific gap — usually a missing Outcomes layer or Findings that were never connected to a decision — so the team can fix the workflow instead of producing more reports.

**Best for:**

- any situation where design work feels invisible to leadership
- diagnosing a workflow problem before the next sprint planning session
- building a case for why the team needs a more structured way to track outcomes

---

### Prompt 2 — Maturity Entry: Diagnose which dimension is limiting results

"Our mobile banking dashboard team has strong research but our findings rarely change what gets built. Decisions get made in roadmap sessions without our evidence. The loudest voice usually wins. Using the glare-lead-results skill, match this symptom to the right maturity dimension, apply the calibration, and tell us what to change in our workflow this sprint."

**Why this works:** "Loudest voice wins" is a named symptom in the Guiding Decisions dimension. This prompt uses the 15-symptom diagnostic to match the pain precisely to a dimension and produce a specific calibration — not a general recommendation to "be more strategic," but a concrete change to make in the next sprint.
**Best for:**

- any recurring team frustration that feels structural rather than project-specific
- diagnosing a maturity gap before a team retrospective or process review
- preparing a case for a workflow change that needs leadership support

---

### Prompt 3 — Attribution Entry: Share results without perfect causation

"We improved first-click success on our mobile banking dashboard from 61% to 79% last quarter. Session return rate is up and 90-day retention is trending positive. We want to share these results with our executive team but we are worried they will ask for stronger proof that our design changes caused the retention improvement. Using glare-lead-results, help us frame these results using the clarity-over-precision principle — showing the chain and the direction of travel without overclaiming causation."

**Why this works:** Waiting for perfect attribution before sharing results means results never get shared. This prompt uses the clarity rule to frame a strong directional story — first-click success improved, users returned more often, retention is trending up — that shows the chain moving without claiming more certainty than the data supports.

**Best for:**

- preparing an executive update after a research and design cycle
- any situation where the team has strong directional evidence but not controlled proof
- building a habit of sharing results in progress rather than only after full confirmation

---

*Glare Framework · glare-lead-results · Lead Area*
*Handoffs: glare-lead-business-goals · glare-lead-mapping · glare-lead-workflows · glare-design-assessment*
# Workflows AI Skill

Lead Area · Workflows Move · Decision Map

---

## 1. What the Skill Does

The Workflows skill helps teams translate design signals into the language of the business function they are trying to influence. It is the third move inside the Lead area of Glare's Decision Map. This is where a UX finding stops sounding like a design report and starts sounding like something the receiving team already cares about.

Every company runs on workflows that are already in motion — sales cycles, marketing campaigns, engineering sprints, finance reviews. Design gets sidelined when it cannot connect to those flows. The same signal lands differently depending on who is in the room. A 79% task success rate means something different to a product manager than it does to a CFO or a legal team. The Workflows skill translates the same finding into eight different business languages so it reaches the right audience in the right terms.

Each function has its own frame for what matters.

| Function | What it tracks | Where design creates lift |
|---|---|---|
| Sales | Pipeline growth, conversion rates, quota attainment | Trial friction → smooth onboarding that accelerates revenue |
| Marketing | Lead quality, campaign ROI, customer acquisition cost | User signals → sharper messaging and stronger campaign performance |
| Product | Feature adoption, retention, product-market fit | Feature testing → early proof of adoption before launch |
| Engineering | Velocity, rework costs, defect rates | Usability failures caught early → fewer post-launch fixes |
| Strategy | Market share, innovation rate, competitive differentiation | Desirability signals → de-risked bets on new directions |
| Operations | Efficiency, support costs, internal adoption speed | Design fixes → operational savings and smoother processes |
| Finance | Revenue growth, margin, ROI | Design ROI made visible and defensible |
| Legal | Compliance rate, liability cost, risk exposure | Consent and accessibility clarity → reduced regulatory risk |

**The Function-First Rule**

Teams often present design findings using design language — "users struggled with the flow," "satisfaction was low," "comprehension dropped." That language is accurate but it does not land. The receiving function is not tracking comprehension. They are tracking pipeline, velocity, handle time, or compliance rate.

The rule is simple: before sharing any finding with a business function, translate it into their top three metrics. Do not lead with the design metric — lead with the number they already track, then show how the design signal explains it. A finding framed in the function's own language gets into decisions. One framed in design language gets noted and forgotten.
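In code terms, the Function-First Rule is a template lookup: pick the receiving function's frame first, then attach the design signal. A minimal sketch with two hypothetical templates loosely paraphrased from the table above; the actual skill file carries all eight function templates.

```python
# Hypothetical translation sketch; templates loosely paraphrase the table above.
TEMPLATES = {
    "Engineering": "Usability failures caught early reduce rework and protect velocity: {signal}.",
    "Finance": "A leading indicator for retention and lifetime value: {signal}.",
}

def translate(signal: str, function: str) -> str:
    """Lead with the function's own frame, then attach the design signal."""
    template = TEMPLATES.get(function)
    if template is None:
        # A finding framed in design language gets noted and forgotten.
        raise KeyError(f"No template for {function}; translate before sharing.")
    return template.format(signal=signal)

print(translate("first-click success improved from 61% to 79%", "Finance"))
# -> "A leading indicator for retention and lifetime value: first-click success..."
```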
---

## 3. Skill Output

When used correctly, the skill produces a translated finding for a specific business function. The translation shows:

- the function's top metrics
- the design signal reframed in the function's vocabulary
- the signal chain from Design KPI to Product KPI to Business KPI for that function
- the specific lift opportunity design creates for that function

The example below shows how this works for a mobile banking dashboard — translated for three different functions.

| Function | Translated Finding (Mobile Banking Dashboard) |
|---|---|
| Product | "Task success on finding transaction history improved from 61% to 79%. Low task success was limiting feature adoption. The fix should increase session return rate and reduce churn in the first 90 days." Jargon used: adoption rate, retention, churn. |
| Engineering | "The current transaction history flow has a 39% failure rate in testing. Every failure at this stage becomes a support ticket or a post-launch fix. Resolving it before handoff will reduce defect rate and protect sprint velocity." Jargon used: defect rate, rework, velocity. |
| Finance | "Improving first-click success from 61% to 79% on the core retention flow is a leading indicator for 90-day retention improvement. A 5% retention lift in this segment translates to a measurable reduction in account closures and an increase in lifetime value per user." Jargon used: retention, LTV, account closure. |
| Failure Mode to Watch | Presenting all three translations in one meeting. Each function needs its own conversation. Mixing vocabularies in a single readout makes the finding feel unfocused, and none of the audiences takes clear ownership. |
| Next Step Handoff | → glare-lead-results to track whether the translated signal entered each function's decision cycle and moved the outcome |

---

## 4. Prompt Strategies

The prompts below show different ways to use this skill. Each example uses a mobile banking dashboard update.

---

### Prompt 1 — Translation Entry: Reframe a finding for one function

"We have a usability finding from our mobile banking dashboard: first-click success on transaction history improved from 61% to 79% after the redesign. We need to present this to our VP of Engineering next week. Using the glare-lead-workflows skill, translate this finding into Engineering's language — using their top metrics, their vocabulary, and the specific lift opportunity design creates for their team."

**Why this works:** An engineering leader is not tracking task success. They are tracking defect rates, sprint velocity, and rework cost. This prompt uses the Engineering function template to reframe the same finding into a language that answers the question an engineering leader is already asking: will this reduce rework?

**Best for:**

- preparing a single-function readout after a research or testing sprint
- any situation where a finding needs to reach a team that does not normally engage with design results
- building a habit of translating findings before sharing them outside the design team

---

### Prompt 2 — Multi-Function Entry: Prepare the same finding for multiple audiences

"We need to share our mobile banking dashboard results with three different audiences next week: our Product Director, our CFO, and our Head of Operations. The core finding is that first-click success on transaction history improved from 61% to 79%, which we believe connects to retention.
Using glare-lead-workflows, translate this finding for each of the three functions — using their metrics, vocabulary, and lift opportunity — and tell us what to lead with in each conversation."

**Why this works:** The same finding has three different stories depending on who is listening. The product leader wants adoption and retention. The CFO wants margin and churn reduction. Operations wants ticket volume and handle time. This prompt translates once for each audience so every conversation opens with the number the function already tracks.

**Best for:**

- preparing for a week with multiple stakeholder presentations
- making one research cycle produce findings that are useful across the whole business
- connecting the same design result to the different priorities of different decision-makers

---

### Prompt 3 — Lift Entry: Find where design creates value for an unfamiliar function

"Our Legal team has raised concerns about the consent flow on our mobile banking dashboard. We have not worked closely with Legal before and we are not sure how to frame our design work in terms they care about. Using glare-lead-workflows, explain where design creates lift for a Legal audience, name the metrics they track, and help us frame our consent flow testing as a risk-reduction signal in their language."

**Why this works:** Legal teams track compliance rates, liability costs, and regulatory risk — not usability scores. This prompt uses the Legal function template to reframe consent flow testing as evidence of risk reduction, giving the team a way into a conversation with a function that has historically been a blocker rather than a collaborator.

**Best for:**

- any initiative that involves a function design does not regularly work with
- translating design testing into risk, compliance, or cost terms for non-product audiences
- building new cross-functional relationships by showing up in the function's language first

---

*Glare Framework · glare-lead-workflows · Lead Area*
*Handoffs: glare-lead-business-goals · glare-lead-mapping · glare-lead-results · glare-design-review*
# Concepts AI Skill

Measure Area · Concepts Move · Decision Map

---

## 1. What the Skill Does

The Concepts skill helps teams define exactly what they are trying to design and why before they start building or testing anything. It is the first move inside the Measure area of Glare's Decision Map. This is where teams turn a vague project direction into a focused, testable effort with a clear goal.

Without a clear concept, work scatters. Teams run tests without knowing what they are trying to prove. Metrics pile up but do not mean anything. The Concepts skill fixes that by connecting a user need from the Define area to a business goal from the Lead area — and naming the signal that will prove the concept worked.

Every concept has three parts.

| Part | Where it comes from | Example |
|---|---|---|
| User Need | Define area | Clarity in checkout — users need to choose a payment option without making errors |
| Business Goal | Lead area | Reduce payment drop-offs — lower abandonment rate by 15% this quarter |
| Concept | Measure area | Redesign the payment step to reduce abandonment |

All three parts must be present. A concept without a user need is a feature request. A concept without a business goal is a design exercise. A concept without a signal to measure is a guess.

**The 5-Minute Test**

Teams often think they have a clear concept when they do not. The problem shows up later — in testing, when results do not connect to any decision, or in reviews, when stakeholders ask what the work was actually trying to solve.

The test is simple: write the user need, the business goal, and the signal that would prove the concept worked — in under five minutes. If the team cannot do it, the concept is not clear yet. Stop and sharpen the framing before moving to hunches or questions.

---

## 2. Business Benefit

A clear concept keeps the whole team focused on the same problem. It connects design work to outcomes that leadership already cares about.

This helps teams:

- stop running tests that do not connect to a decision
- avoid building features that solve the wrong problem
- give every downstream hunch, question, and finding a target to validate
- compare results fairly across different projects
- replace debate with evidence faster

Work becomes easier to prioritize, test, and explain.

---

## 3. Skill Output

When used correctly, the skill produces a clear concept brief for a design effort. The brief shows:

- the user need and its UX metric
- the business goal and its business metric
- the concept framed as one focused design effort
- the signal that will prove it worked

The example below shows how this works for a mobile banking dashboard.

| Field | Example Output (Mobile Banking Dashboard) |
|---|---|
| User Need | Findable — users need to locate their balance and recent transactions within one tap |
| UX Metric | First-click success rate on balance and transaction history |
| Business Goal | Reduce session abandonment — users who cannot find key information leave without completing any action |
| Business Metric | Session abandonment rate |
| Concept | Redesign the mobile banking dashboard home screen to surface balance and transactions in one tap |
| Signal to Prove It | First-click success rate improves from current baseline and session abandonment rate drops |
| Failure Mode to Watch | Defining two or more concepts at once. One concept at a time forces the right focus. Multiple concepts in one brief is a sign the problem has not been narrowed yet. |
| Next Step Handoff | → glare-measure-hunches to form the first falsifiable hypothesis based on this concept |
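A brief like the one above can be treated as a small data structure with a built-in completeness check. The sketch below is a minimal, illustrative encoding of the three-part concept and the structural half of the 5-minute test; the `ConceptBrief` class and its field names are assumptions made for this example, not an official Glare schema.

```python
from dataclasses import dataclass

@dataclass
class ConceptBrief:
    """One focused design effort: a user need, a business goal, and a proof signal."""
    user_need: str       # from the Define area
    business_goal: str   # from the Lead area
    concept: str         # the single design effort being proposed
    signal: str          # what will prove the concept worked

    def missing_parts(self) -> list[str]:
        """Return whatever is still missing. An empty list means the brief
        passes the structural half of the 5-minute test; writing it in under
        five minutes is still up to the team."""
        missing = []
        if not self.user_need.strip():
            missing.append("user need (otherwise this is a feature request)")
        if not self.business_goal.strip():
            missing.append("business goal (otherwise this is a design exercise)")
        if not self.signal.strip():
            missing.append("signal to measure (otherwise this is a guess)")
        return missing

brief = ConceptBrief(
    user_need="Findable — locate balance and recent transactions within one tap",
    business_goal="Reduce session abandonment",
    concept="Redesign the home screen to surface balance and transactions in one tap",
    signal="First-click success improves from baseline; session abandonment drops",
)
assert brief.missing_parts() == []  # all parts present, so the brief is testable
```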
The output connects directly to the other Measure moves:

- Hunches takes the concept and forms a testable hypothesis
- Questioning turns the hypothesis into specific research questions
- Findings checks results against the original concept to confirm or disprove it

---

## 4. Prompt Strategies

The prompts below show different ways to use this skill. Each example uses a mobile banking dashboard update.

---

### Prompt 1 — Diagnostic Entry: Start from a scattered project

"Our team is updating the mobile banking dashboard but everyone has a different idea of what the project is trying to solve. Some people want to simplify the layout. Others want to add personalization. Others want to improve load speed. Using the glare-measure-concepts skill, help us run the 5-minute test and define one clear concept that connects a user need to a business goal and names the signal that will prove it worked."

**Why this works:** Scattered projects usually mean no one has named the concept yet. This prompt uses the 5-minute test to force the team to choose one problem, connect it to a measurable outcome, and stop treating all directions as equally valid.

**Best for:**

- projects where the team cannot agree on what they are solving
- sprint kickoffs where the brief is too broad
- any situation where research results keep going in circles

---

### Prompt 2 — Framing Entry: Build a concept from existing work

"We know from our Define area work that the user need is Findable — users cannot locate their transaction history within two taps. We know from our business goals that we need to reduce session abandonment. Using glare-measure-concepts, help us frame this into a single concept with the right user need, business goal, and signal — and check it against the 5-minute test."

**Why this works:** Teams that have already done Define work often have the right inputs but have not connected them yet. This prompt uses the concept framework to assemble the three parts into a brief that is ready for testing.

**Best for:**

- teams coming out of a Define phase with user needs already named
- connecting existing research to a new sprint goal
- preparing a concept before writing research questions

---

### Prompt 3 — Catalog Entry: Find the right concept type for a use case

"We are redesigning the mobile banking dashboard home screen. We are not sure whether to frame this as a Dashboard Engagement concept, an Onboarding concept, or something else. Using glare-measure-concepts, help us identify the right concept category for this work, explain what that category is designed to measure, and confirm whether our current framing fits."

**Why this works:** Naming the right concept category gives teams a consistent lens across projects and helps them borrow from comparable work. This prompt uses the concept catalog to anchor the framing before testing begins.

**Best for:**

- teams starting a new design effort without a clear category
- projects where the scope keeps expanding
- preparing a concept brief that needs to travel to other teams

---

*Glare Framework · glare-measure-concepts · Measure Area*
*Handoffs: glare-measure-hunches · glare-measure-questioning · glare-measure-findings · glare-design-signals*
# Findings AI Skill

Measure Area · Findings Move · Decision Map

---

## 1. What the Skill Does

The Findings skill helps teams turn raw data into something the whole team can act on. It is the final move inside the Measure area of Glare's Decision Map. This is where numbers, drop-off rates, survey scores, and task results stop being data and become direction.

Data alone tells you what happened. A finding tells you what it means. A signal tells you what to do next. The Findings skill closes that gap by connecting each piece of data to a user need, a business goal, and a design recommendation — in that order.

Without this step, teams stall. The data sits in a deck. The meeting ends without a decision. Research gets run again. The Findings skill fixes that by giving every result a chain that leads to action.

Every finding follows the same five-step chain.

| Step | What happens | Example |
|---|---|---|
| 1. Translate the data | Pair the metric with its source and describe the behavior | 48% of users abandon checkout at the payment step (checkout analytics) |
| 2. Tie to user value | Map to a specific user need | Clarity — users need payment options to be simple and error-free |
| 3. Tie to business results | Link to a metric leadership cares about | Reducing abandonment increases conversion and revenue |
| 4. Connect back to intent | Check against the original concept and hunch | Confirms the hunch that simplifying payment would reduce abandonment |
| 5. Write the signal | Name the recommendation with both a user and business metric | Simplifying payment options will increase completion rate and lower error rate |

If the team cannot complete all five steps, the finding is not ready. A result that cannot connect to a user need is noise. A result that cannot connect to a business goal is interesting but not actionable.

**The Signal Rule**

Teams often share data without completing the chain. They report that task success dropped to 61% but do not explain what user need that threatens or what business outcome it affects. The result is a number that generates discussion without generating decisions.

The rule is simple: a finding is not finished until it has a user metric and a business metric in the same sentence. If the team cannot write both, go back and complete steps two and three before sharing anything.

---

## 2. Business Benefit

Findings that complete the chain replace debate with evidence. They give product, engineering, and leadership a clear reason to act — tied to outcomes they already care about.

This helps teams:

- stop presenting data that generates discussion but not decisions
- connect every research result to a user need and a business outcome
- build trust with stakeholders by showing what the data means, not just what it says
- close the loop between research and the next design sprint
- make signals travel further by sharing questions alongside answers

Research earns its investment when findings lead to action.

---

## 3. Skill Output

When used correctly, the skill produces a clear signal for each finding. Each signal shows:

- the raw data and its source
- the finding described as user behavior
- the user need it connects to
- the business result it affects
- the recommendation with both a user and business metric
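These pieces can be captured as a small record with a built-in Signal Rule check. The sketch below is illustrative; the `Finding` structure and its field names are assumptions made for this document, not a formal Glare data model.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    """One result walked through the five-step chain."""
    raw_data: str          # step 1: metric + source + observed behavior
    user_need: str         # step 2: the user value it maps to
    business_result: str   # step 3: the metric leadership cares about
    original_intent: str   # step 4: the concept or hunch it checks against
    user_metric: str       # step 5a: user metric named in the recommendation
    business_metric: str   # step 5b: business metric named in the recommendation
    recommendation: str    # step 5: the signal itself

    def passes_signal_rule(self) -> bool:
        """The Signal Rule: the recommendation must carry both a user metric
        and a business metric in the same sentence."""
        return (self.user_metric in self.recommendation
                and self.business_metric in self.recommendation)

finding = Finding(
    raw_data="Task success on finding transaction history: 61% (usability test)",
    user_need="Findable — locate transaction history without extra navigation",
    business_result="Session abandonment rises when key information is hard to find",
    original_intent="Concept: surface balance and transactions in one tap",
    user_metric="task success rate",
    business_metric="session abandonment",
    recommendation=("Surfacing transaction history on the home screen will "
                    "increase task success rate and reduce session abandonment."),
)
assert finding.passes_signal_rule()  # both metrics appear in one sentence
```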
The example below shows how this works for a mobile banking dashboard.

| Field | Example Output (Mobile Banking Dashboard) |
|---|---|
| Raw Data | Task success on finding transaction history: 61% (Helio usability test, 100 participants) |
| Finding | Nearly four in ten users could not complete the task of finding their recent transactions without help |
| User Need | Findable — users need to locate transaction history without extra navigation or confusion |
| Business Result | Session abandonment rises when users cannot find key information quickly, reducing return visit rate |
| Signal | Surfacing transaction history on the home screen will increase task success rate (user metric) and reduce session abandonment (business metric) |
| What Would Disprove It | Task success rate does not improve after the change, or users still abandon at the same rate |
| Failure Mode to Watch | Sharing the raw number without completing the chain. A 61% task success rate is not a finding — it is a starting point. The finding is what it means for users and what it costs the business. |
| Next Step Handoff | → glare-focus to compare this signal against other versions or directions and decide what moves forward |

The output connects directly to the other Measure moves:

- Concepts provides the original intent to check the finding against
- Hunches provides the hypothesis the finding confirms or disproves
- Questioning provides the research prompts that produced the data

---

## 4. Prompt Strategies

The prompts below show different ways to use this skill. Each example uses a mobile banking dashboard update.

---

### Prompt 1 — Diagnostic Entry: Turn a raw result into a finding

"We ran a usability test on our mobile banking dashboard and task success on finding transaction history was 61%. We also have a post-task satisfaction score of 3.8 out of 5. We are not sure what to do with these numbers. Using the glare-measure-findings skill, walk the five-step chain for each result and produce a signal that connects to a user need and a business outcome."

**Why this works:** Raw numbers without context do not guide decisions. This prompt uses the five-step chain to complete both findings — connecting task success to findability and a session metric, and connecting satisfaction to trust and a retention metric — so the team leaves with two actionable signals instead of two numbers.

**Best for:**

- making sense of usability test results
- preparing findings for a sprint review or stakeholder readout
- any situation where the team has data but cannot agree on what it means

---

### Prompt 2 — Mismatch Entry: Diagnose conflicting results

"Our mobile banking dashboard testing showed high satisfaction scores but low task completion on the transaction history flow. Users say they like the app but keep abandoning the session. Using glare-measure-findings, explain what this mismatch means, which user need it threatens, what it signals for the business, and what we should do next."

**Why this works:** High satisfaction with low completion is one of the most common metric mismatches in UX research. It means users feel good about the product but cannot finish. This prompt uses the findings chain to name the gap precisely and produce a signal that the team can act on in the next sprint.
**Best for:**

- any situation where positive feedback and behavioral data contradict each other
- preparing a findings summary that needs to explain a counterintuitive result
- deciding which metric to prioritize when two are pulling in different directions

---

### Prompt 3 — Handoff Entry: Prepare findings for a leadership review

"We have three findings from our mobile banking dashboard research: task success on transaction history is 61%, session abandonment is up 18% this quarter, and new users rate trust at 3.2 out of 5. We need to present these to our VP of Product next week. Using glare-measure-findings, complete the five-step chain for each finding, write a signal for each one, and organize them in order of business impact."

**Why this works:** Leadership reviews need findings that are already connected to business outcomes. This prompt uses the findings chain to complete all three results and rank them by impact — so the presentation leads with what matters most to the business, not what was most interesting to the research team.

**Best for:**

- preparing a leadership readout after a research sprint
- prioritizing which findings to act on when there are more than two or three
- connecting multiple research results into a single, coherent story

---

*Glare Framework · glare-measure-findings · Measure Area*
*Handoffs: glare-measure-concepts · glare-measure-hunches · glare-measure-questioning · glare-focus · glare-design-signals*
# Hunches AI Skill

Measure Area · Hunches Move · Decision Map

---

## 1. What the Skill Does

The Hunches skill helps teams turn instinct into something they can actually test. It is the second move inside the Measure area of Glare's Decision Map. This is where teams take a shaped feeling about what might be wrong — or right — and write it down as a clear, falsifiable hypothesis.

Every team has hunches. The problem is that most hunches stay vague. They sound like opinions, not bets. They cannot be confirmed or disproved because they were never specific enough to test. The Hunches skill fixes that by giving instinct a structure — a template that forces teams to name the change, the audience, the expected impact, and the reason why.

A good hunch has four qualities.

| Quality | What it means |
|---|---|
| Tied to a user problem | It starts from friction, not a feature idea |
| Clear and specific | It names the change, the audience, and the expected impact |
| Linked to a metric | Results can be measured, not just observed |
| Falsifiable | It could be wrong — and that is the point |

The hunch template: **"We believe that [this change] for [this group] will [have this impact] because [supporting reason]."**

Weak: "We believe users like shorter forms."

Stronger: "We believe removing two steps from the signup form for first-time users will increase completion rates because users drop off at step 3."

The difference is specificity. The stronger version names exactly what to change, who it affects, what should happen, and why. That means it can be tested, confirmed, or disproved.

**The Falsifiability Rule**

Teams often write hunches that sound testable but are not. The sign is that no result could actually change the team's mind. If the hunch is framed so that any outcome confirms it, it is not a hunch — it is an opinion dressed up as a hypothesis.

The rule is simple: before moving to questions, ask what behavior would prove the hunch wrong. If the team cannot answer that, the hunch needs to be rewritten. A hunch that cannot be disproved cannot produce a signal.

---

## 2. Business Benefit

Strong hunches keep teams moving without waiting for perfect data. They replace open-ended debate with a specific bet that research can confirm or kill.

This helps teams:

- stop building features based on assumptions nobody has named out loud
- align the team around one specific belief before testing begins
- move faster because the research question is already inside the hunch
- reduce the cost of getting it wrong by testing early
- keep a learning loop alive after launch by treating results as input, not final verdicts

Hunches make instinct useful.

---

## 3. Skill Output

When used correctly, the skill produces a clear hypothesis for a design effort. The hypothesis shows:

- the change being proposed
- the audience it is designed for
- the expected impact and the reason for it
- the metric that will confirm or disprove the result
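The four template slots map naturally onto a tiny helper that fills in the sentence and refuses a hypothesis with no disproof condition. The sketch below is illustrative; the `Hunch` structure and its `falsified_by` field are assumptions made for this example, not part of the skill's formal definition.

```python
from dataclasses import dataclass

@dataclass
class Hunch:
    """The four slots of the hunch template, plus a disproof condition."""
    change: str          # [this change]
    group: str           # [this group]
    impact: str          # [have this impact]
    reason: str          # [supporting reason]
    falsified_by: str    # the behavior that would prove the hunch wrong

    def __post_init__(self) -> None:
        # The Falsifiability Rule: a hunch with no disproof condition
        # is an opinion dressed up as a hypothesis.
        if not self.falsified_by.strip():
            raise ValueError("Name the behavior that would prove this wrong.")

    def statement(self) -> str:
        """Render the hunch in the template's sentence form."""
        return (f"We believe that {self.change} for {self.group} "
                f"will {self.impact} because {self.reason}.")

hunch = Hunch(
    change="removing two steps from the signup form",
    group="first-time users",
    impact="increase completion rates",
    reason="users drop off at step 3",
    falsified_by="completion rate does not improve after the steps are removed",
)
print(hunch.statement())
```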
The example below shows how this works for a mobile banking dashboard.

| Field | Example Output (Mobile Banking Dashboard) |
|---|---|
| Belief Statement | We believe users need to see their balance and recent transactions immediately so they can feel in control of their money without extra navigation |
| Design Hypothesis | We believe that surfacing balance and the last three transactions on the home screen for habitual users will increase session completion because users currently abandon within 30 seconds when they cannot find this information quickly |
| Metric to Track | Session completion rate and first-click success on balance |
| What Would Disprove It | Session completion rate does not improve after the change, or users still abandon at the same rate despite the new layout |
| Failure Mode to Watch | Writing a hunch so broad it cannot be tested. "Users want a better experience" is not a hunch. A hunch names a specific change with a specific expected result. |
| Next Step Handoff | → glare-measure-questioning to turn this hypothesis into specific, testable research questions |

The output connects directly to the other Measure moves:

- Concepts provides the user need and business goal the hunch is built on
- Questioning turns the hunch into specific research prompts
- Findings checks whether the data confirms or disproves the hypothesis

---

## 4. Prompt Strategies

The prompts below show different ways to use this skill. Each example uses a mobile banking dashboard update.

---

### Prompt 1 — Diagnostic Entry: Strengthen a weak hunch

"Our team believes the mobile banking dashboard needs to be simpler. Users have said they find it confusing. Using the glare-measure-hunches skill, apply the hunch template to this belief and rewrite it as a strong, falsifiable hypothesis — naming the specific change, the audience, the expected impact, the reason why, and the metric we would track to confirm it."

**Why this works:** "The dashboard needs to be simpler" is an opinion, not a hunch. It cannot be tested because nothing is specific enough to change or measure. This prompt uses the template to force the team to name exactly what simpler means and who it is for.

**Best for:**

- sprint planning where the problem feels obvious but undefined
- any belief the team keeps repeating without testing
- turning stakeholder feedback into something researchable

---

### Prompt 2 — Evidence Entry: Build a hunch from existing data

"Our session data shows users abandon the mobile banking dashboard within 30 seconds. Drop-off is highest among users who are trying to find transaction history. Using glare-measure-hunches, help us write a design hypothesis that ties this behavior to a specific design change, names the expected impact, and identifies the metric we would track to confirm it."

**Why this works:** Existing data is one of the best inputs for a hunch because it names the friction point precisely. This prompt uses the behavioral evidence to build a hypothesis that is already grounded in real user behavior, not assumption.

**Best for:**

- teams that have analytics data but have not turned it into a research direction
- connecting session or funnel data to a design decision
- starting a test sprint from a known problem

---

### Prompt 3 — Pressure-Test Entry: Check a hunch before testing

"We have written this hypothesis: 'We believe that adding a summary card to the mobile banking dashboard home screen for new users will reduce time to find balance because users currently have to scroll to find it.'
Using glare-measure-hunches, pressure-test this hypothesis for falsifiability — tell us what would prove it wrong, whether the metric is strong enough to guide a decision, and whether anything needs to be rewritten before we move to research questions."

**Why this works:** Teams often move from hunch to testing without checking whether the hypothesis is actually falsifiable. This prompt uses the falsifiability rule to catch problems before research is run, saving time and avoiding results that cannot guide a decision.

**Best for:**

- quality-checking a hypothesis before a research sprint
- any hunch the team feels confident about but has not stress-tested
- preparing for a research review where findings need to hold up to scrutiny

---

*Glare Framework · glare-measure-hunches · Measure Area*
*Handoffs: glare-measure-concepts · glare-measure-questioning · glare-measure-findings · glare-design-signals*
# Questioning AI Skill

Measure Area · Questioning Move · Decision Map

---

## 1. What the Skill Does

The Questioning skill helps teams write research questions that actually produce useful answers. It is the third move inside the Measure area of Glare's Decision Map. This is where teams take a hypothesis and turn it into specific, testable prompts that can be run in a study, a test, or a survey.

Weak questions stall research. They are too broad to answer, too leading to trust, or too vague to connect to a metric. The Questioning skill fixes that by helping teams sort their questions into the right type, pick the right research mode, check for bias, and connect every question to a UX metric and a collection technique.

Every research question belongs to one of four types.

| Type | What it explores | Example |
|---|---|---|
| People | Habits, behaviors, and preferences | How often do users check their balance in the app? |
| Process | Steps, flows, and friction points | What steps do users take to find their transaction history? |
| Product | Clarity, usefulness, and comprehension | Do users understand what the summary card is showing them? |
| Problem | Barriers and drop-off causes | What stops users from completing a transaction review? |

People and Process questions give context. Product and Problem questions give clarity. A good research plan includes all four.

**The Bias Check Rule**

Teams often write questions that look neutral but quietly push toward the answer they already expect. The most common forms are leading questions ("Why do users prefer the simpler layout?"), questions loaded with internal jargon ("Does the information architecture feel intuitive?"), and questions that assume the problem exists before it has been confirmed ("What frustrates you most about finding your balance?").

The rule is simple: before running any question, check it against three filters. Is it leading? Does it echo an internal assumption? Is it too technical for the audience? If any answer is yes, rewrite it. A biased question produces data that confirms what the team already believed — which is not research, it is validation theater.

---

## 2. Business Benefit

Good research questions cut discovery time and produce findings that teams can act on immediately. They replace open-ended exploration with targeted prompts tied to real decisions.

This helps teams:

- stop running studies that produce interesting results but no clear direction
- connect every question to a metric before the study begins
- catch bias before it corrupts the data
- build a library of reusable questions across sprints
- share questions alongside answers so context travels with findings

Research becomes faster and easier to trust.

---

## 3. Skill Output

When used correctly, the skill produces a set of research questions ready to run. Each question shows:

- which type it belongs to: People, Process, Product, or Problem
- which research mode it fits: Exploratory, Evaluative, or Comparative
- which UX metric it connects to
- which collection technique to use
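These fields can be captured in a small record, with the three bias filters expressed as a rough screening pass. The sketch below is illustrative only; the `ResearchQuestion` class, the filter wording, and the keyword lists are assumptions made for this example, and a real bias check needs a human read rather than string matching.

```python
from dataclasses import dataclass

@dataclass
class ResearchQuestion:
    """One question, typed and wired to a metric and technique."""
    text: str
    qtype: str       # People, Process, Product, or Problem
    mode: str        # Exploratory, Evaluative, or Comparative
    metric: str      # the UX metric the answer connects to
    technique: str   # the collection technique to run it in

def bias_flags(question: str) -> list[str]:
    """A crude screen for the three bias filters. The keyword lists are
    illustrative; they catch the examples in this document, nothing more."""
    q = question.lower()
    flags = []
    if q.startswith("why do users") or "prefer" in q:
        flags.append("possibly leading")
    if any(term in q for term in ("information architecture", "intuitive")):
        flags.append("internal jargon")
    if any(term in q for term in ("frustrates", "struggle")):
        flags.append("assumes the problem exists")
    return flags

rq = ResearchQuestion(
    text="Where do users go first when looking for recent transactions?",
    qtype="Process",
    mode="Evaluative",
    metric="First-click success",
    technique="First Click Test",
)
assert bias_flags(rq.text) == []                                  # neutral framing
assert bias_flags("Why do users struggle to find transactions?")  # gets flagged
```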
The example below shows how this works for a mobile banking dashboard.

| Field | Example Output (Mobile Banking Dashboard) |
|---|---|
| Research Mode | Evaluative — the redesigned home screen exists and we need to know if it works |
| People Question | How often do habitual users check their balance and transactions in a single session? → Metric: Frequency, Technique: Survey |
| Process Question | What steps do users take to find their last three transactions on the current home screen? → Metric: Completion Rate, Technique: Task Success Test |
| Product Question | Can users identify what the summary card is showing them without any explanation? → Metric: Comprehension, Technique: First Click Test |
| Problem Question | At what point in the flow do users give up trying to find transaction history? → Metric: Drop-off Rate, Technique: Clickstream Analysis |
| Bias Check | "Why do users struggle to find transactions?" is leading — it assumes they struggle. Rewrite: "Where do users go first when looking for recent transactions?" |
| Failure Mode to Watch | Writing questions without a metric attached. A question that cannot connect to a number is an interview prompt, not a research question. It can produce useful context but cannot guide a design decision on its own. |
| Next Step Handoff | → glare-measure-findings to translate the data these questions produce into signals tied to user needs and business outcomes |

The output connects directly to the other Measure moves:

- Hunches provides the hypothesis each question is designed to test
- Findings uses the questions as context when interpreting the data
- Collecting from the Define area tells you which tools to run the questions in

---

## 4. Prompt Strategies

The prompts below show different ways to use this skill. Each example uses a mobile banking dashboard update.

---

### Prompt 1 — Diagnostic Entry: Fix a weak question set

"We are about to run a usability study on our mobile banking dashboard redesign. Our current research questions are: 'Do users like the new layout?' and 'Is the dashboard easier to use?' Using glare-measure-questioning, apply the bias check to these questions, explain what is wrong with each one, and rewrite them as testable questions aligned to a UX metric and a collection technique."

**Why this works:** "Do users like the layout?" is a leading question that assumes the team wants a yes. "Is it easier?" assumes easier is the right goal. This prompt uses the bias check and testable question criteria to replace both with questions that can actually produce actionable data.

**Best for:**

- auditing a research plan before a study runs
- any question set written quickly without a bias check
- preparing for a study where findings need to hold up in a stakeholder review

---

### Prompt 2 — Mode Entry: Choose the right research mode

"We have two versions of the mobile banking dashboard home screen. Version A shows balance and the last three transactions upfront. Version B uses a summary card users tap to expand. We need to run research to choose between them. Using glare-measure-questioning, identify the right research mode for this decision, write one question for each of the four question types, and match each to a UX metric and technique."

**Why this works:** Choosing between two versions is a Comparative question, not an Evaluative one. The research mode changes which techniques are valid and which metrics are meaningful. This prompt uses the mode framework to make sure the question set fits the actual decision.
**Best for:**

- any sprint where two design directions need a tiebreaker
- preparing questions for an A/B or preference test
- making sure the research method matches what is actually being decided

---

### Prompt 3 — Library Entry: Build reusable questions for a recurring topic

"Our team runs usability research on our mobile banking dashboard every quarter. We keep writing the same questions from scratch each time. Using glare-measure-questioning, help us build a reusable question library for dashboard research — covering all four question types, all three research modes, and the most common UX metrics we need to track."

**Why this works:** Teams that write questions from scratch every sprint waste time and introduce inconsistency. A question library makes results comparable across rounds. This prompt uses the skill to build a structured, reusable set of prompts the team can pull from each quarter.

**Best for:**

- teams running recurring research on the same product area
- building a shared research foundation across product, design, and marketing
- making quarterly results comparable over time

---

*Glare Framework · glare-measure-questioning · Measure Area*
*Handoffs: glare-measure-hunches · glare-measure-findings · glare-define-collecting · glare-design-signals*