# Methods

AI Skill Focus Area · Methods Move · Decision Map

---

## 1. What the Skill Does

The Methods skill helps teams choose the right frame for bringing data into an initiative. It is the second move inside the Focus area of Glare's Decision Map. This is where the team stops asking "what test should we run?" and starts asking "how should we look at this work?"

Most teams default to the methods closest to their craft — another prototype test, another usability study, another survey. Those are useful, but they are not always the right frame. The Methods skill expands that thinking. Depending on the initiative, the right frame might be comparing competitors, mapping a journey, segmenting audiences, reviewing feature usage, or evaluating risk. The method should match the decision, not the habit.

The skill organizes methods into 13 frames. Each one helps a team look at data differently.

| Frame | Use when |
|---|---|
| Competitors | The team needs market context or wants to see where expectations come from |
| Iterations | The team is refining a direction and needs to track improvement across versions |
| Timeline | The team needs to sequence work, manage priority, or understand what should happen now vs. later |
| Journeys | The problem spans multiple steps, touchpoints, or channels |
| Platforms / Devices | The experience changes across mobile, desktop, or other contexts |
| User Goals / Tasks | The initiative depends on how well the experience helps users get something done |
| Geographies | The experience needs to work across different markets or cultures |
| User Lifecycle | The work affects users at different stages of their relationship with the product |
| Behavioral Triggers | The team needs to understand what causes users to act or hesitate |
| Segments | Different audiences may respond differently to the same experience |
| Feature Usage | The team needs to decide what to build, improve, reduce, or remove |
| Risk and Proof | The cost of being wrong is high and the team needs to reduce uncertainty first |
| Frameworks | The team needs a structured model to organize thinking or explain tradeoffs |

**The Frame-First Rule**

Teams often choose a method before they know what decision it needs to support. A journey map gets created because someone likes journey maps. An A/B test gets run because the team ran one last sprint. The method feels productive but the results do not connect to any clear choice.

The rule is simple: name the decision before choosing the frame. Ask what the team needs to decide next — which version, which audience, which moment, which direction. The frame that best organizes evidence around that decision is the right one. A method chosen for its own sake produces data that does not guide action.

---

## 2. Business Benefit

The right method makes data useful. The wrong method produces results that feel interesting but cannot support a decision. Choosing the frame before collecting evidence saves time and keeps research connected to the work.
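In code terms, the frame-first rule is a lookup that runs in one direction only: from a named decision to a frame, never the reverse. The sketch below is a minimal illustration of that idea, not part of the Glare framework itself; every identifier and every decision-to-frame pairing in it is hypothetical.

```python
# Hypothetical sketch of the frame-first rule: the decision is named
# first, and the frame is derived from it. These names are
# illustrative only, not identifiers from the Glare framework.

# A few of the 13 frames, keyed by the kind of decision they support.
FRAME_FOR_DECISION = {
    "which version": "Iterations",
    "which audience": "Segments",
    "which moment": "Journeys",
    "which market": "Geographies",
    "what to build or remove": "Feature Usage",
}

def choose_frame(decision: str) -> str:
    """Return the frame that organizes evidence around a decision.

    Refuses to run when no decision is named: a method chosen
    without a decision produces data that does not guide action.
    """
    if not decision.strip():
        raise ValueError("Name the decision before choosing the frame.")
    # Unmapped decisions fall back to a structured model.
    return FRAME_FOR_DECISION.get(decision, "Frameworks")

print(choose_frame("which moment"))  # Journeys
```

The point of the sketch is the guard clause: the function cannot return a frame until a decision exists, which is exactly the order of operations the rule prescribes.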
This helps teams:

- avoid running research that generates discussion but not direction
- match the method to the actual decision, not the nearest available tool
- make existing data more useful before collecting more
- explain tradeoffs more clearly when one method frame is used consistently
- connect research results to specific business workflows

Methods make the initiative testable.

---

## 3. Skill Output

When used correctly, the skill produces a clear method plan for a design initiative. The plan shows:

- the initiative objective and the decision the method needs to support
- the frame that best organizes the data
- the named methods inside that frame
- what gets compared and why
- what the team should have after the method runs

The example below shows how this works for a mobile banking dashboard.

| Field | Example Output (Mobile Banking Dashboard) |
|---|---|
| Initiative Objective | Understand where habitual users lose confidence on the home screen |
| Decision to Support | Which home screen layout creates the strongest first-click signal |
| Method Frame | Journeys — the problem spans the user's path from opening the app to completing a balance or transaction check |
| Named Methods | Funnel review, Drop-off mapping, Touchpoint analysis |
| What Gets Compared | Current home screen journey vs. redesigned home screen journey, measured by first-click success and session abandonment |
| Existing Data to Review | Session drop-off analytics, post-task survey scores from previous studies |
| Failure Mode to Watch | Choosing the Iterations frame too early. If the team does not yet understand where the journey breaks, testing two versions against each other will not reveal the root cause — it will only show which version performs less badly. |
| Next Step Handoff | → glare-focus-comparing to place signals side by side once the method has produced data |

The output connects directly to the other Focus moves:

- Initiatives provides the user need, business goal, and metric the method is organized around
- Comparing uses the method frame to ensure signals are placed side by side fairly
- Decisions uses the method output to ground the final choice in evidence

---

## 4. Prompt Strategies

The prompts below show different ways to use this skill. Each example uses a mobile banking dashboard update.

---

### Prompt 1 — Diagnostic Entry: Start from a method mismatch

"Our team has been running A/B tests on the mobile banking dashboard for two sprints but the results keep coming back inconclusive. We are not making progress. Using the glare-focus-methods skill, help us diagnose whether the Iterations frame is the right fit for our initiative, and recommend an alternative frame that would help us understand the problem more clearly before comparing versions."

**Why this works:** Inconclusive A/B results are often a sign the team is comparing versions before it understands the problem. This prompt uses the method-selection process to identify the right frame — likely Journeys or User Goals — so the next round of evidence can actually support a decision.

**Best for:**

- teams stuck in repeated testing cycles without a clear result
- any situation where data keeps coming back but decisions keep stalling
- diagnosing a method mismatch before another round of research is run

---

### Prompt 2 — Selection Entry: Choose a frame for a specific decision

"We need to decide whether to prioritize improving the transaction history flow or the balance summary card on our mobile banking dashboard. We have session data, a post-task survey, and stakeholder input from the product and finance teams.
Using glare-focus-methods, identify the right frame for this decision, tell us what data to pull in from what we already have, and name the specific methods inside that frame we should use."

**Why this works:** A decision between two parts of the same experience is a User Goals or Feature Usage question, not an Iterations question. This prompt uses the five-step selection process to match the frame to the decision and make the most of the data the team already has.

**Best for:**

- prioritization decisions between two or more design areas
- any situation where the team has data but is not sure how to organize it
- choosing a frame that makes stakeholder input and user data comparable

---

### Prompt 3 — Competitive Entry: Add market context to an initiative

"We are redesigning the mobile banking dashboard and want to understand how our experience compares to other banking apps. Users sometimes mention competitors in feedback. Using glare-focus-methods, help us apply the Competitors frame to our initiative — name the specific methods to use, identify what we are looking for, and explain how to connect the findings to our UX metric and initiative goal."

**Why this works:** User expectations are often shaped by what they use outside your product. The Competitors frame gives the team market context before comparing its own versions — preventing the mistake of optimizing against a baseline that is already behind what users expect.

**Best for:**

- initiatives where user feedback mentions competitor experiences
- any redesign where the team does not know what benchmark to compare against
- connecting competitive analysis to specific UX metrics and initiative goals

---

*Glare Framework · glare-focus-methods · Focus Area*

*Handoffs: glare-focus-initiatives · glare-focus-comparing · glare-focus-decisions · glare-measure*