# Questioning AI Skill

Measure Area · Questioning Move · Decision Map

---

## 1. What the Skill Does

The Questioning skill helps teams write research questions that actually produce useful answers. It is the third move inside the Measure area of Glare's Decision Map. This is where teams take a hypothesis and turn it into specific, testable prompts that can be run in a study, a test, or a survey.

Weak questions stall research. They are too broad to answer, too leading to trust, or too vague to connect to a metric. The Questioning skill fixes that by helping teams sort their questions into the right type, pick the right research mode, check for bias, and connect every question to a UX metric and a collection technique.

Every research question belongs to one of four types.

| Type | What it explores | Example |
|---|---|---|
| People | Habits, behaviors, and preferences | How often do users check their balance in the app? |
| Process | Steps, flows, and friction points | What steps do users take to find their transaction history? |
| Product | Clarity, usefulness, and comprehension | Do users understand what the summary card is showing them? |
| Problem | Barriers and drop-off causes | What stops users from completing a transaction review? |

People and Process questions give context. Product and Problem questions give clarity. A good research plan includes all four.

**The Bias Check Rule**

Teams often write questions that look neutral but quietly push toward the answer they already expect. The most common forms are leading questions ("Why do users prefer the simpler layout?"), questions loaded with internal jargon ("Does the information architecture feel intuitive?"), and questions that assume the problem exists before it has been confirmed ("What frustrates you most about finding your balance?").

The rule is simple: before running any question, check it against three filters. Is it leading? Does it echo an internal assumption? Is it too technical for the audience?
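The three filters above can be run as a first-pass triage before human review. The sketch below is illustrative only, not part of the Glare framework: the marker lists are hypothetical examples, and a keyword scan can never replace a person reading the question.

```python
# Illustrative sketch: a naive keyword triage for the three bias filters.
# The marker lists are hypothetical examples; a real review needs judgment.
LEADING_MARKERS = ["why do users prefer", "why do users struggle",
                   "don't you think", "how much do you like"]
JARGON_MARKERS = ["information architecture", "affordance", "heuristic"]
ASSUMPTION_MARKERS = ["frustrates you most", "what stops you", "struggle to"]

def bias_check(question: str) -> dict:
    """Return which of the three bias filters a question trips."""
    q = question.lower()
    return {
        "leading": any(m in q for m in LEADING_MARKERS),
        "jargon": any(m in q for m in JARGON_MARKERS),
        "assumes_problem": any(m in q for m in ASSUMPTION_MARKERS),
    }

flags = bias_check("Why do users prefer the simpler layout?")
# flags == {"leading": True, "jargon": False, "assumes_problem": False}
```

A question that trips any filter goes back for a rewrite, exactly as the rule states; a clean result only means the scan found nothing, not that the question is neutral.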
If any answer is yes, rewrite it. A biased question produces data that confirms what the team already believed — which is not research, it is validation theater.

---

## 2. Business Benefit

Good research questions cut discovery time and produce findings that teams can act on immediately. They replace open-ended exploration with targeted prompts tied to real decisions.

This helps teams:

- stop running studies that produce interesting results but no clear direction
- connect every question to a metric before the study begins
- catch bias before it corrupts the data
- build a library of reusable questions across sprints
- share questions alongside answers so context travels with findings

Research becomes faster and easier to trust.

---

## 3. Skill Output

When used correctly, the skill produces a set of research questions ready to run. Each question shows:

- which type it belongs to: People, Process, Product, or Problem
- which research mode it fits: Exploratory, Evaluative, or Comparative
- which UX metric it connects to
- which collection technique to use

The example below shows how this works for a mobile banking dashboard.

| Field | Example Output (Mobile Banking Dashboard) |
|---|---|
| Research Mode | Evaluative — the redesigned home screen exists and we need to know if it works |
| People Question | How often do habitual users check their balance and transactions in a single session? → Metric: Frequency, Technique: Survey |
| Process Question | What steps do users take to find their last three transactions on the current home screen? → Metric: Completion Rate, Technique: Task Success Test |
| Product Question | Can users identify what the summary card is showing them without any explanation? → Metric: Comprehension, Technique: First Click Test |
| Problem Question | At what point in the flow do users give up trying to find transaction history? → Metric: Drop-off Rate, Technique: Clickstream Analysis |
| Bias Check | "Why do users struggle to find transactions?" is leading — it assumes they struggle. Rewrite: "Where do users go first when looking for recent transactions?" |
| Failure Mode to Watch | Writing questions without a metric attached. A question that cannot connect to a number is an interview prompt, not a research question. It can produce useful context but cannot guide a design decision on its own. |
| Next Step Handoff | → glare-measure-findings to translate the data these questions produce into signals tied to user needs and business outcomes |

The output connects directly to the other Measure moves:

- Hunches provides the hypothesis each question is designed to test
- Findings uses the questions as context when interpreting the data
- Collecting from the Define area tells you which tools to run the questions in

---

## 4. Prompt Strategies

The prompts below show different ways to use this skill. Each example uses a mobile banking dashboard update.

---

### Prompt 1 — Diagnostic Entry: Fix a weak question set

"We are about to run a usability study on our mobile banking dashboard redesign. Our current research questions are: 'Do users like the new layout?' and 'Is the dashboard easier to use?' Using glare-measure-questioning, apply the bias check to these questions, explain what is wrong with each one, and rewrite them as testable questions aligned to a UX metric and a collection technique."

**Why this works:** "Do users like the layout?" is a leading question that assumes the team wants a yes. "Is it easier?" assumes easier is the right goal. This prompt uses the bias check and testable question criteria to replace both with questions that can actually produce actionable data.

**Best for:**

- auditing a research plan before a study runs
- any question set written quickly without a bias check
- preparing for a study where findings need to hold up in a stakeholder review

---

### Prompt 2 — Mode Entry: Choose the right research mode

"We have two versions of the mobile banking dashboard home screen.
Version A shows balance and the last three transactions upfront. Version B uses a summary card users tap to expand. We need to run research to choose between them. Using glare-measure-questioning, identify the right research mode for this decision, write one question for each of the four question types, and match each to a UX metric and technique."

**Why this works:** Choosing between two versions is a Comparative question, not an Evaluative one. The research mode changes which techniques are valid and which metrics are meaningful. This prompt uses the mode framework to make sure the question set fits the actual decision.

**Best for:**

- any sprint where two design directions need a tiebreaker
- preparing questions for an A/B or preference test
- making sure the research method matches what is actually being decided

---

### Prompt 3 — Library Entry: Build reusable questions for a recurring topic

"Our team runs usability research on our mobile banking dashboard every quarter. We keep writing the same questions from scratch each time. Using glare-measure-questioning, help us build a reusable question library for dashboard research — covering all four question types, all three research modes, and the most common UX metrics we need to track."

**Why this works:** Teams that write questions from scratch every sprint waste time and introduce inconsistency. A question library makes results comparable across rounds. This prompt uses the skill to build a structured, reusable set of prompts the team can pull from each quarter.

**Best for:**

- teams running recurring research on the same product area
- building a shared research foundation across product, design, and marketing
- making quarterly results comparable over time

---

*Glare Framework · glare-measure-questioning · Measure Area*

*Handoffs: glare-measure-hunches · glare-measure-findings · glare-define-collecting · glare-design-signals*