Pages and links tagged with Methods.
Bindiya Thakkar
LogRocket guide explaining what a prototype is and walking through feasibility, low-fidelity, high-fidelity, and data-driven prototypes. Useful when picking the right prototype type for a question you want to answer.
Jack O'Donoghue
Lists 22 creative ways to prototype, like role-play, video explainers, packaging mockups, and Lego layouts, all aimed at low-cost learning. Useful when a team feels stuck on the same Figma prototype and wants new ways to test ideas.
Nam Pham
Offers tips for keeping participants focused during async usability tests, like clear briefs, simple linear tasks, and tools that capture short clips. Useful when running unmoderated tests and worried that users zone out without a moderator.
FasterCapital
Outlines four key elements of a successful investor pitch (structure, skillsets, experience, and gaps) along with the core content to cover: problem, business model, team, and financials. Useful when prepping for an investor pitch and you want a clean checklist.
Tim Bailey
Maps a pitch framework onto product manager interviews, using a shortened 10-slide style and emphasizing that vision is more than features. Useful when prepping for a PM interview and you need a clear way to pitch a product vision.
Luke Grimstrup
Walks through making a sharp product presentation using the Pain-Agitate-Solution framework so stakeholders quickly align on direction. Useful when you have to present a product update or vision and want stakeholder buy-in fast.
Stacie Shaw
Walk-through for employees pitching ideas inside a company, leaning on their close-up view of customers and operations. Useful when an employee has an internal innovation idea and wants to pitch it well to leadership.
Misael Neto
Walks through experiment types like PoC, prototype, and single-feature MVP and links them to continuous discovery and Torres' opportunity solution tree. Useful when picking the right experiment for the assumption you need to test.
Gregory Ciotti
Copyblogger's classic on the five most persuasive words: you, because, instantly, new, and free. Useful when writing CTAs, emails, or headlines and you want a quick set of trigger words backed by behavioral research.
Nick Babich
Lays out 16 rules for clear UX writing, like keeping it simple, using active voice, and avoiding jargon, especially in errors. Useful when reviewing product copy and you want a quick rules list to apply.
Shane Barker
Shares five less common ways to test products, like TikTok videos and micro-influencers, to get fast and honest feedback. Useful when traditional surveys feel stale and you want fresh signal from real users.
Noah Kagan
Noah Kagan shares a three-part framework for product marketing using the 3Ws (problem, customer, where) and an experiment framework to test tactics. Useful when launching a small business or new line and you want a tactical loop for testing channels.
Tope Longe
Lays out the benefits, best practices, and tools for UX concept testing, including surveys, interviews, prototypes, and analytics. Useful when picking the right testing method for an early-stage idea.
Kevan Chew
Shares three case studies for qualitative concept testing and explains how to write and run a strong test, noting that fewer than half of new launches hit their targets. Useful when you want examples and steps for quickly de-risking new product ideas.
Gabriella Lanning
Pitches concept validation as a UX research method that fits the middle stage between early interviews and full prototype tests, using mid-fidelity scenarios. Useful when the team has rough ideas but isn't ready for a full prototype test.
Liz Schemanski
Shares real examples of using sacrificial concepts, which are extreme, sketchy ideas you show users to spark strong reactions and learn what they really care about. Useful when interviews feel polite or when the team needs deeper insight on user needs.
Ligia Fascioni
Reviews Ellen Lupton's book to show how storyboards, the rule of three, and scenarios push design beyond pretty pictures into clearer thinking. Useful when a designer wants narrative tools to test ideas and explain choices.
Kevin Szpak
Argues that conversion rate work is much bigger than tweaking button colors and lays out methods like 5-second tests, heatmaps, exit surveys, and interviews. Useful when a team is stuck on shallow tests and wants more meaningful CRO inputs.
Chitika, Inc
Walks through how to use usability testing to pick colors that are easy to read, accessible, and match the right tone. Useful when deciding on colors for a site or app and you want feedback from real users, not opinions.
Alex Ponomarev
Explains iteration as a cycle of building, testing, and improving small versions of a product based on feedback. Useful when teams want to avoid long planning cycles and instead ship fast and learn from each round.
Yasin Altaf
Looks at whether market research is needed before building a Minimum Viable Product, weighing the cost of research against the risk of building the wrong thing. Useful when leaders debate skipping research to move faster on an MVP.
Userpilot
Process guide for product design testing, covering goals, method choice (A/B, usability, session replay), and participant selection. Useful when standing up a design testing routine and you need a clear template.
Zuzanna Sobczyk
Walks through how to run product design tests that pair quantitative data with empathy from real user feedback. Useful when leaders ask for data without losing the qualitative why behind a design choice.
Zasima Razack
Step-by-step guide to product validation including market analysis, MVP testing, and customer feedback loops. Useful when planning a validation pass for a new product before investing further.
Vy Alechnavicius
Video tour of UX research methods beyond the standard interview and A/B test, including diary studies and participatory design. Useful when a team is over-reliant on the same two methods and missing other useful signals.
Jack O'Donoghue
Catalog of 41 design testing methods with guidance on how to choose the right one. Useful when picking a test method for a specific design question and you want options beyond the usual two or three.
Aleks Petrov
A long-form guide to user research methods including heuristic evaluation, with a focus on data-intensive products and design systems. Useful when a team wants one shared reference covering common research methods.
Klaas Hermans
Six barriers to product adoption (awareness, complexity, switching costs, fit, support, trust) and how to address each. Useful when launches do not stick and you want a checklist of likely culprits.
Serra Alban
Sixteen onboarding mistakes including too much copy, missing segmentation, and no NPS or session-heatmap signals. Useful when an onboarding flow needs a critique pass before launch.
Alexander Estner
Thirteen B2B SaaS activation tips spanning customer journey maps, CTAs, personalized messaging, and social proof, with a note that PLG and sales-led plays differ. Useful when activation needs a quick refresh and you want lots of small bets to test.
Maria Margarida
Walks through Trouva’s product design audit: gather quant and qual signals, focus on a high-traffic page, then write a report with fixes. Useful when small UX issues keep slipping past prioritization.
Hailey Friedman
Primer on market research, covering surveys, interviews, focus groups, and online methods, with a growth-marketing lens. Useful when a non-research lead needs a clean intro before scoping a project.
Bud Hennekes
Walks through a positioning process with examples and warns against common mistakes like being too broad or never refining. Useful when reworking positioning and you need a clear set of steps and pitfalls to avoid.
Shaun Price
Network-marketing focused take on attention marketing, including a 15-second prospecting strategy and content tactics for visibility. Useful when running a small business or solo brand that needs to be noticed without big ad budgets.
Jenifer Bulcock
Lays out a research sequence for innovation work: exploratory first, then generative, then evaluative, with clear goals for each stage. Useful when planning research for a new product space and you need to pick the right method for the moment.
TeamLearnable
TeamLearnable shows how MVPs help startups quickly validate ideas and avoid building the wrong thing. Useful when explaining MVP value to a non-technical founder.
Maven
Maven's guide to mastering MVP testing for product managers, with tactics PMs can apply directly. Useful when a PM wants a clean MVP testing primer with examples.
Sergey Krasotin
Sergey Krasotin shows how to test an MVP in two weeks or less, with focus on the smallest viable test. Useful when leadership pressure is on but the team has no signal yet.
Sudeep Srivastava
Sudeep Srivastava covers 21 ways to validate an MVP, from landing pages to concierge tests. Useful when a team needs a long menu of low-cost validation methods.
Tope Longe
Tope Longe at UXCam explains MVP testing strategies and how to validate the minimum viable product with real signals. Useful when a team wants a process to confirm their MVP works.
Monika Mejs
Mood Up Team explains how usability testing is the practical way to validate designs before scaling them. Useful when a team is shipping designs without any user check.
Mark Peter Davis
Mark Peter Davis interview on testing MVPs with real users, including starting with users who already love the product. Useful when a founder is unsure who to recruit for first user tests.
Amit Manchanda
Net Solutions covers fifteen ways to test an MVP, split into low-fidelity and high-fidelity techniques. Useful when a team needs to match testing depth to stage and budget.
Abhishek Jaiswal
Six proven strategies for testing an MVP, from customer interviews and explainer videos to single-feature tests. Useful when a PM needs a short menu of test methods to pick from.
Payline
Payline outlines how Agile, Lean, and modern AI tools combine for rapid MVP iteration. Useful when a team picking a delivery method wants a current view that includes AI.
Michael Seibel
Michael Seibel of Y Combinator explains how to build a ridiculously simple MVP using examples from Airbnb, Twitch, and Stripe. Useful when a founder needs permission to start tiny and grow from real users.
James Zhao
James Zhao explains how to compress the MVP timeline, from validating demand first to picking the smallest feature set. Useful when a startup is burning time before having anything in users' hands.
Billy Sweetman
Billy Sweetman walks through the mindset and tools needed before building an MVP, so the team builds the right thing. Useful when a founder is about to spend on engineering and wants the basics first.
Aman Sharma
Aman Sharma lists ten MVP iteration strategies for reaching product-market fit faster, like changing one thing at a time. Useful when an early-stage team is iterating fast and feels lost.
Trinity Nguyen
Argues that tracking champions when they change jobs is a high-performing demand channel that drove 17% of UserGems' pipeline. Useful when a B2B marketer wants new pipeline ideas and is willing to chase former buyers.
Joanne Duong
Lists seven tips for running guerrilla tests, like picking high-traffic spots and approaching individuals not groups. Useful when a designer is about to head out for street-level user testing and wants quick rules of thumb.
Nick Babich
Long-form guide to guerrilla UX testing for validating critical assumptions cheaply and fast. Useful when a team needs a clear playbook for setting up its first guerrilla test.
Kartik Malviya
Tips for running guerrilla testing well, including setting clear goals and matching the test to a paper or finished prototype. Useful when a designer wants to avoid wasting a guerrilla session and is unsure what to test.
John Crabill
John Crabill explains how iterative testing drives better eLearning marketing campaigns by tweaking parts like CTAs, taglines, and ads. Useful when a marketing team wants to learn how iterative testing applies beyond classic A/B work.
Sara Mansell
Sara Mansell explains when the RITE method (Rapid Iterative Testing and Evaluation) is the right fit — and when slower, more rigorous tests serve better. Useful when a team is choosing between RITE and traditional usability testing and needs criteria to pick.
Berto Arroyo
Berto Arroyo shows how small microinteractions — button states, transitions, and feedback cues — can bring delight without slowing the user down. Useful when a designer wants quick wins that lift product feel without rebuilding the whole experience.
Dylan Dotolo
Guide to rapid prototyping with AI using Cursor as the main tool. Useful when designers want to try code-based prototyping with AI help.
Jeff Sauro
Lays out three mixed-methods designs: explanatory sequential, exploratory sequential, and convergent parallel. Useful when a researcher needs a clear pattern for combining qual and quant in a single study.
BetterEvaluation
Practical guide on combining qualitative and quantitative data using concurrent, sequential, or component design approaches. Useful when an evaluator or researcher is planning a mixed methods study and needs a clear structure.
Daljinder (DJ) Sanghera
Argues that founders should skip formal user research and use sales calls to learn while trying to close deals. Useful when an early founder needs to validate a product but cannot afford pure research time.
Kobiruo Otebele
Frames guerrilla testing as fast, low-cost early design feedback that fits before bigger usability studies. Useful when a team wants to learn quickly during iteration without committing to a full study.
Markus Pirker
Seven-step DIY guide to guerrilla usability testing covering tasks, scenarios, and recruiting on the fly. Useful when a small team wants a checklist to run their first guerrilla test on their own.
Emily Grace Adiseshiah
Walk-through of guerrilla testing as a fast way to spot trends and lift conversion when budgets are tight. Useful when a UX team needs to make the case for cheap testing and wants to know what results to expect.
Eric Chung
Walks through guerrilla usability testing as a cheap way to get candid feedback from people in cafes or parks. Useful when a small team needs quick user feedback on a prototype but has no research budget.
Matt Cowell
Questions whether the explosion of contact channels has actually helped customers or just sold more software. Useful when a CX or contact center leader is rethinking their channel mix and chasing focus over breadth.
Marshall Hargrave
Lays out 10 growth tactics like influencer partnerships and micro-influencer marketing aimed at fast user acquisition. Useful when a startup needs a list of proven tactics to test for fast user growth.
Toni Koraza
Lists nine SaaS growth hacks for 2024 covering email sequences, segmentation, and automation to convert leads into paying users. Useful when a SaaS team wants quick growth tactics that have already worked for others.
Dana Mitroff Silvers
Dana Mitroff Silvers applies the double diamond to collaboration and problem solving in museums. Useful for facilitators using divergent and convergent thinking outside tech.
Sakshi Bhardwaj
Sakshi Bhardwaj surveys design processes beyond the double diamond and shows where each fits. Useful for design leads choosing a process for a new team or project.
Victory Brown
Victory Brown rethinks the double diamond for modern, fast product teams. Useful when a UX lead wants a simple process explainer for new team members.
Yuri Teodorowych
Yuri Teodorowych argues why the double diamond is too clean for real design work and offers a richer model. Useful when teams want a more honest, messy version of the process.
Dan Ramsden
Dan Ramsden shares process models that go beyond the classic double diamond. Useful when a team feels the diamond no longer fits their work and wants other options.
Kiran Bir Sethi
Describes the FIDS method (Feel, Imagine, Do, Share) used by Design for Change to help students drive social change. Useful for educators and facilitators planning change-based projects.
Saviour Egbe
Looppanel explains generative research — what it is, when to use it, and how it differs from evaluative work. Useful when a researcher needs to defend the choice of generative versus evaluative for an upcoming project.
Ivano Aquilano
Proposes a chunking approach that breaks the design process into small repeatable pieces. Useful when bigger frameworks feel too heavy for a small team or fast cycle.
Scott Middleton
Compares three popular product design processes and shows where each one fits best. Useful when a team needs to pick a process that matches the stage and risk of their project.
Silvia Vitali
Compares lo-fi and hi-fi wireframes and stresses why mapping the flow matters more than pixel detail early on. Useful when a team is stuck polishing screens before nailing down the user flow.
Kara Pernice
Compares low and high fidelity prototypes and explains when each one is the right tool. Useful when picking the cheapest format that still answers the question you have.
Moyosore Ale
Walks through how to move from rough sketches to polished prototypes from a researcher's view, and what to test at each stage. Useful when a team needs a step-by-step path from idea to testable design.
Yael Ben-David
Shows a small, scrappy A/B test on an onboarding screen and what the team learned from cheap qualitative signals. Useful when a content or UX writer wants quick proof for a copy choice without a full research setup.
David Di Sipio
David Di Sipio shows how lean usability testing with about five users finds 85% of problems and lets teams act with confidence. Useful when a team thinks they can't afford usability testing in tight cycles.
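The "five users find ~85% of problems" claim above comes from the classic usability-testing math (the Nielsen/Landauer model): the expected share of problems found with n users is 1 − (1 − p)^n, where p is the average chance a single user encounters a given problem (roughly 0.31 in the original studies). A minimal sketch, with p as an assumed illustrative value:

```python
# Sketch of the Nielsen/Landauer problem-discovery curve.
# p = 0.31 is the commonly cited average from the original studies;
# real values vary by product and task, so treat this as illustrative.

def problems_found(n_users: int, p: float = 0.31) -> float:
    """Expected fraction of usability problems uncovered by n test users."""
    return 1 - (1 - p) ** n_users

for n in (1, 3, 5, 10):
    print(f"{n} users: {problems_found(n):.0%}")
```

With the assumed p of 0.31, five users land in the mid-80s percent range, which is why small-batch testing earns its keep in tight cycles.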
Userpilot
Userpilot compares empathy maps and personas, with personas explaining who users are and empathy maps explaining how they feel. Useful when teams confuse the two and want to choose when to use each.
Rajat Chauhan
Rajat Chauhan walks through the anatomy of a high-converting landing page: headline, hero, value prop, CTA, social proof, and speed. Useful when a team is scoring its landing page and wants a clear checklist.
Jayson DeMers
Jayson DeMers explains how to build perfect landing pages for SEO or PPC, with custom messages per audience and A/B testing. Useful when running paid traffic to multiple pages and you need consistent quality.
Tudor Baidoc
Tudor Baidoc shares ways to optimize landing-page UX, including F-pattern reading, fast loads, minimal navigation, and clean layouts. Useful when a marketer-designer pair wants research-backed UX rules for a landing page.
Distillery Tech
Distillery Tech walks through how to design an effective landing page by understanding personas, avoiding clutter, and writing motivating CTAs. Useful when a team is launching a landing page and needs a quick checklist.
Peep Laja
Peep Laja makes the case that iterative A/B testing is essential when you cannot predict user behavior — small repeated tests beat big one-shot redesigns. Useful when a leader is asking for a single big bet and the team needs an argument for testing instead.
Andrew Harder
Andrew Harder's Mind the Product talk shows how user research can keep product discovery on track, especially the difference between Discovery and Creation mindsets. Useful when teams keep arguing whether to ask users or invent for them.
Michael Loboyko
Michael Loboyko maps four canvases (Business Model, Lean, Product, Opportunity) to different stages of building a product. Useful when teams pull out the wrong canvas and confuse strategy with execution.
Marty Cagan
Marty Cagan compares the Lean Canvas and Opportunity Assessment, picking Canvas for new businesses and Opportunity Assessment for existing products. Useful when a PM is deciding which lightweight tool fits their current investment.
Lalatendu Satpathy
Lalatendu Satpathy combines How Might We with the Value Proposition Canvas into a single Opportunity Canvas for early product work. Useful when teams find pure HMW too vague and value prop canvases too detailed.
Erik Messaki
Erik Messaki shares a Board Game Method, plus motion, color, and reward triggers that designers can use to keep users coming back. Useful when a designer wants concrete UI moves to improve a specific feature's stickiness.
Ksenia Sternina
Ksenia Sternina explains common UX research methods like field studies, interviews, focus groups, and eye tracking using short GIFs. Useful when junior team members need a friendly visual primer before reading dense method guides.
Micah Bowers
Micah Bowers walks through UX research methods that build empathy, like card sorting that exposes how differently users group ideas. Useful when a team needs to lower their own assumptions before they design.
User Interviews
User Interviews' field guide module covers the main UX research methods and when to use each, with a tilt toward fundamentals like interviews and surveys. Useful when a new team needs a single starter library to pick from.
Martina Dove PhD
Martina Dove shares ways to test generative AI features, including UX audits and onboarding research that reveal how confused users are. Useful when teams ship a chatbot or AI feature and traditional testing methods miss the new failure modes.
Christian Rohrer
Christian Rohrer maps 20 UX research methods across attitudinal vs behavioral, qualitative vs quantitative, and product phase. Useful when a team needs a single chart to pick the right method fast.
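Rohrer's landscape boils down to placing each method on two axes: attitudinal (what people say) vs behavioral (what people do), and qualitative vs quantitative. A hypothetical sketch of that lookup as a tiny table — the method placements follow the framing above, but the specific set listed here is illustrative, not Rohrer's full chart of 20:

```python
# Illustrative subset of a method landscape keyed on Rohrer's two axes.
# Axis values: "attitudinal" vs "behavioral", "qualitative" vs "quantitative".

METHODS = {
    "interviews":        ("attitudinal", "qualitative"),
    "surveys":           ("attitudinal", "quantitative"),
    "usability testing": ("behavioral",  "qualitative"),
    "a/b testing":       ("behavioral",  "quantitative"),
    "eye tracking":      ("behavioral",  "quantitative"),
}

def pick(axis: str, data_type: str) -> list[str]:
    """Return methods that fall in one cell of the 2x2."""
    return [m for m, (a, d) in METHODS.items() if a == axis and d == data_type]

print(pick("behavioral", "quantitative"))
```

The point of the chart (and this sketch) is that "which method?" is usually answered by first deciding which cell of the 2x2 your question lives in.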
Nick Groeneveld
Nick Groeneveld places UX research methods on a chart of qualitative vs quantitative and explore vs validate so designers can pick by goal. Useful when a designer is new to research and needs a simple way to choose where to start.
Simone Viani
Simone Viani lays out interaction patterns that help users notice and trust AI features, from labels to confirmations. Useful when designers are adding AI to an existing product and need conventions users can recognize fast.
Jesse Showalter
Jesse Showalter's video on prototype and test in ten minutes or less — covering what you can actually learn from very fast prototype rounds. Useful when a designer wants permission to run extremely lightweight tests instead of waiting for a real research project.
Johnathan Dane
Johnathan Dane lists 14 website usability testing methods — heuristic reviews, click maps, five-second tests, and more — with simple use cases. Useful when a CRO or marketing team wants a menu of lightweight UX tests they can run themselves.
Kathryn Whitenton
Kathryn Whitenton from NN/g covers advanced user testing methods that accelerate innovation — beyond standard usability testing. Useful when a research lead has standard testing dialed and wants a tour of newer methods to add to the toolkit.
David DeSanto
GitLab's UX research handbook entry on RITE — Rapid Iterative Testing and Evaluation — covering when to run it and how the team should be set up. Useful when a remote team wants a public playbook for RITE that they can copy or adapt.
Reddit, Inc.
Reddit thread of practitioners sharing real opinions on design sprints — what worked, what flopped, and which contexts they avoid. Useful when a team is on the fence about sprints and wants honest, varied takes from working designers.
Kelly Dern
Kelly Dern explains rapid usability testing for designers — small-batch sessions, quick analysis, and fast fixes between rounds. Useful when a designer wants to add lightweight testing to their workflow without a research team.
Richard Rutter
Richard Rutter shares the unexpected benefits of design sprints — like team bonding, sharper questions, and better stakeholder alignment, beyond the prototype. Useful when a leader is weighing the cost of a sprint and a designer wants to make the case for the side benefits.
Brian Sullivan
Brian Sullivan offers five ways to make rapid usability testing even faster — covering recruiting, scripts, and parallel runs. Useful when a usability team is already running tests but the cycle still feels too slow for the product cadence.
Mauricio Wolff
Mauricio Wolff argues design sprints are not for everyone — they need a specific kind of team and problem, and forcing them on the wrong context wastes a week. Useful when a leader is shopping sprints around the org and a designer wants to make the case for selectivity.
Robert Skrobe
Robert Skrobe explains when a design sprint is not a good idea — for problems that are too big, too vague, or already decided. Useful when a leader keeps proposing a sprint and a facilitator needs criteria to push back when it does not fit.
Marc Fonteijn
Marc Fonteijn's video on the problems with design sprints — and how to fix them by adjusting prep, team, and follow-through. Useful when a team has done sprints that fizzled and wants a take from a service-design lens on what to fix.
Google's Design Sprint Kit — agendas, exercises, and templates teams can copy to run a sprint. Useful when a facilitator wants ready-made artifacts and does not want to design the sprint themselves.
Braden Kowitz
Companion site for The Design Sprint by Jake Knapp, John Zeratsky, and Braden Kowitz — the original five-day sprint framework. Useful when a team is about to run their first sprint and wants the source material before adapting.
Skjoldbroder
Skjoldbroder shares a Google design sprint that went wrong — and the lessons that came from it about scope, team, and prep. Useful when a team is planning their first sprint and wants honest failure stories before they over-promise.
Erica Kucharczyk
Erica Kucharczyk gives step-by-step guidance for using generative research to build the right thing — from framing to synthesis. Useful when a team wants a checklist they can run end-to-end on their first generative study.
Anusha Bollimuntha
Anusha Bollimuntha breaks down generative research — what it is, when to run it, and why it sits at the front of the design process. Useful when a researcher is making the case for early-stage discovery work to skeptical leaders.
Alexandra W.
Argues generative research uncovers blind spots that traditional studies miss — surfacing unmet needs and hidden contexts. Useful when traditional studies keep coming up empty and a team needs a frame to invest in deeper exploration.
Kate Conrick
Kate Conrick offers a clean breakdown of UX research types — generative, evaluative, qualitative, quantitative — and which fits which question. Useful when a team or new researcher is confused about which method to use and wants a one-page reference.
Andréa Crofts
Andréa Crofts walks through generative and evaluative research for visually impaired users — covering recruitment, methods, and what to listen for. Useful when a team is planning accessibility research and wants a video walkthrough from someone who has done it.
Jasmine Kim
Jasmine Kim explains the RITE method — what it is, when to use it, and how to set up the team to act on findings between sessions. Useful when a team is brand new to RITE and needs a primer on the rules of engagement.
Jas Nijhar
BTS design team shares how they use RITE to resolve experience issues fast — testing, fixing, and re-testing in the same week. Useful when a team has known UX problems and a release window and wants a method that ships fixes in days, not months.
Greg Nudelman
Greg Nudelman argues classic usability tests fail for AI-driven products — because outputs are non-deterministic — and proposes new methods focused on trust and prompt patterns. Useful when a team is shipping AI features and finding their old test plans miss the real risks.
Userpilot
Userpilot shows how iterative testing helps build better products — with examples of test types and what each surfaces. Useful when a PM wants a quick reference of test types and outcomes to plan their own program.
Paula Barraza
Six tips for running iterative usability testing well — set tight goals, plan small rounds, recruit fast, and act between rounds. Useful when a UX team plans usability tests but never closes the loop and needs a checklist to keep iterations honest.
Andrew Birgiolas
Andrew Birgiolas shares the six steps the Sephora app team followed to redesign through iterative testing and user feedback. Useful when a team is planning a big app redesign and wants a sequenced playbook from a real consumer product.
Ricardo Gerstl
Case study on Berst, a lean UX research project — small samples, fast iterations, and how the team turned signals into product changes weekly. Useful when a team wants a real example of how lean UX research runs in practice, not just in theory.
Userlane
Userlane lists proven product storytelling techniques — hero's journey, before-and-after, and customer voice — with examples teams can copy. Useful when marketing or PM needs a small toolkit of story shapes for upcoming launches and reviews.
Helena Liu
Helena Liu walks through using ChatGPT to draft user stories — prompts, edits, and how to keep the human voice in the final output. Useful when a PM wants to speed up story drafting but is worried about losing the user perspective when AI writes them.
cindyangelira
Walks through using clickstream data to map how users actually move through a product, and how to spot common paths and dead ends. Useful when a team has analytics data but needs a method to turn raw clicks into a journey view.
Alita Kendrick
Nielsen Norman Group video covers three onboarding approaches — guided tours, contextual help, and progressive disclosure — and when each one fits. Useful when a team is choosing an onboarding pattern and wants research-backed guidance on the trade-offs.
Alexander Estner
Compiles 22 B2B SaaS marketing tactics for getting more qualified leads — covering positioning, content, paid, and lifecycle plays. Useful when a small SaaS team needs a checklist to find the next tactic to test instead of guessing.
Michael Boysen
Five quick tips for analyzing a customer journey by treating it as a series of jobs to be done, not just steps in a funnel. Useful when a team is staring at a long journey map and needs a fast way to spot the moments that actually drive value.
Jordan Gronkowski
Walks through eight ways to analyze a customer journey — from funnel and cohort views to path, sentiment, and drop-off analysis — with notes on when each one earns its keep. Useful when a team has journey data but is not sure which lens will surface the friction that matters most.
Tomasz Bąk
Seven ways to use AI to build an MVP fast. Useful when teams want to compress build time using AI tools.
Jeff Humble
Top five methods for continuous discovery and UX research. Useful when teams want a shortlist of practical methods to start with.
Evelyn So, M.Sc
Lessons from skipping wireframes in favor of AI-driven prototypes. Useful when teams wonder if classic wireframes are still worth the time.
Henrik Kniberg
Henrik Kniberg shows rapid prototyping powered by Claude in this video. Useful when teams want a live demo of AI-driven prototyping speed.
Reddit, Inc.
Reddit thread where no-code makers share how they use AI for early prototypes. Useful when teams want real-world tips before trying AI prototyping.
Ekta Srivastava
Lists practical techniques for rapid prototyping at different fidelity levels. Useful when a team wants a simple menu of methods to try.
Nicolle Merrill
Monthly meeting recording covering AI prototyping tools for design work. Useful when designers are weighing which AI tools to add to their stack.
Andrea Saez
Shows how to run experiments off an Opportunity Solution Tree to compare ideas fairly. Useful when teams have many ideas and need a structured way to test them.
Esther Han
Defines root cause analysis and gives an eight-step framework with HBS's congruence model. Useful when leaders want a more structured RCA process across the org.
Rock Content Team
Roundup of eight RCA tools, including 5 Whys, fishbone, Pareto, FMEA, and DMAIC. Useful when a team wants to pick the right RCA tool for their kind of problem.
Owen Fay
Shows how fast, low-cost usability tests fit agile work and break research bottlenecks. Useful when teams feel research is too slow to keep up with builds.
Manoj R.
Side-by-side breakdown of attitudinal vs behavioral research approaches. Useful when you need a quick teaching aid to explain the difference to a team.
Page Laubheimer
NN/g piece on the difference between attitudinal and behavioral research in UX. Useful when you are choosing research methods and need to pick the right type.
Neal O'Grady
Quick rundown of copywriting frameworks like AIDA and PAS that pull a lot of weight. Useful when you want a small toolbox of frameworks to lean on.
Brian Byun
How to set up unmoderated concept tests that still produce strong signals. Useful when you can't run live sessions but still need real feedback fast.
Zoe Dimov, PhD
Smashing Magazine guide to running UX research quickly under tight timelines. Useful when you have a few days to learn from users and need a leaner research plan.
Rick Pastoor
Pushes past the Eisenhower urgent/important matrix toward more nuanced personal prioritization. Useful when you feel the Eisenhower matrix is too coarse for real work choices.
Sarah Gibbons
NN/g overview of five prioritization methods for UX roadmaps, with when to use each. Useful when you need a clear comparison of methods to share with stakeholders.
Reddit, Inc.
Reddit thread where PMs share the prioritization frameworks they actually use on their roadmaps. Useful when you want unfiltered peer input on RICE, MoSCoW, and other methods.
Rian van der Merwe
Reviews several product prioritization methods and asks how we really know what matters most. Useful when you are picking a prioritization method and want a tour of the options.
Clayton Kjos
Lists six reasons the impact-vs-effort matrix can mislead teams and slow down good decisions. Useful when your team leans on the matrix by default and you want to challenge it.
Robert Drury
Argues early-stage teams should do unscalable things like one-on-one interviews and personalized emails to find product-market fit. Useful when you are pre-PMF and feel pressure to automate too early.
Kaemon Lovendahl
First-person story of building a full blog by prompting ChatGPT to generate 90% of the components. Useful when you are curious how vibe coding actually plays out on a small project before trying it yourself.
Kat Allen
Kat Allen on how to simplify complex technical information for non-technical readers. Useful when a designer or writer is translating engineering content for a wider audience.
Florence Vincent
Florence Vincent on simplifying complex content step by step in higher-ed contexts. Useful when a content designer or writer is breaking down a tangled topic into something a reader can follow.
Taras Bakusevych
Taras Bakusevych offers a practical guide to hyper-personalization in UX, including tactics and traps. Useful when a designer is asked to add hyper-personalization and wants a clear-eyed playbook.
Aleksandr P.
Argues for using iterative prototyping to fight scope creep by validating before building too much. Useful when a team keeps adding features mid-project and wants prototyping as a check.
Formclick
Formclick on the importance of design validation as part of UX, with simple ways to bake it into a project. Useful when a team is shipping designs without validating them and wants quick checks to add.
Sherry W.
Asks how much research is enough and gives heuristics for not over- or under-researching. Useful when a researcher is being pushed to do more or less research and wants a way to decide.
VMware Tanzu Team
VMware Tanzu's take on running lean experiments inside enterprise teams, with patterns that work in big orgs. Useful when a team in a large company wants to run small experiments without getting blocked by process.
Timan Rebel
Eighteen of the most used Lean Startup experiments, each with examples and when to use them. Useful when a team has an assumption to test and wants a menu of experiment styles to pick from.
Gorilla
Gorilla post on how gamification is used inside behavioral science research, with study examples. Useful when a researcher is designing a study and wondering if gamified tasks would help engagement.
Karl Purcell
Karl Purcell shares seven lessons from behavioral science that game designers use to shape user behavior. Useful when a product team wants science-backed plays they can borrow without copying game mechanics blindly.
Yu-kai Chou
Yu-kai Chou explains the Octalysis framework and its eight core drives that motivate users. Useful when a team wants a deeper model than 'add points' to understand why users do what they do.
Irina Nikulina
Shows how to apply gamification and behavioral design techniques to lift engagement, with concrete examples. Useful when a PM or designer is trying to boost a key behavior without slapping on points and badges.
Cynthia Vinney
Walks through how to design intuitive UIs, with patterns and gotchas pulled from real products. Useful when a designer feels their interface is right but users keep stumbling.
Reddit, Inc.
Reddit thread of UX designers swapping ways they use AI tools in product design work. Useful when a designer wants peer-tested workflows for AI in their daily process, not vendor pitches.
Srikanth R B
Walks through a product discovery process with frameworks and tool picks like opportunity solution trees, interviews, and prototypes. Useful when a team is building its discovery toolkit from scratch and wants a starter map.
Neil Turner
Lists twelve practical tips for getting more out of data in UX work, from picking the right question to mixing methods. Useful when a team wants a quick checklist to upgrade how they use data day to day.
Sean Ryan
Sean Ryan's PGE (Plan, Generate, Execute) design pattern breaks AI generation into stages so the user can review and steer in between, lowering perceived latency. Useful when an AI feature feels slow or surprising and you want a way to keep users engaged mid-flow.
Lekha Priyadarshini Bhan
Lekha Priyadarshini Bhan's six agentic AI patterns include ReAct, with details on how each splits reasoning and action loops. Useful when implementing autonomous workflows and you want a specific reference for agent loop design.
Chris Butler
Chris Butler argues for nuanced use of AI design patterns, mixing classic frameworks with new heuristics built for AI/ML uncertainty. Useful when teams adopt AI patterns blindly and you want a frame to balance basics with novelty.
Sandip Das
Sandip Das walks through gen AI architecture patterns including RAG, prompt engineering, and data pipelines. Useful when shipping a gen AI feature and you need an architecture refresher before designing the user flow.
Bonnie Y.
Seven UI patterns for AI products — collaborative canvases, agent task orchestration, form-based interfaces, and more — with practical sub-patterns. Useful when designing AI-powered tools and you need vocabulary for inline suggestions, slash commands, or task logs.
Avi Chawla
Avi Chawla's visual breakdown of five agentic AI patterns: Reflection, Tool Use, ReAct, Planning, Multi-Agent. Useful when explaining agent design choices to your team or comparing patterns side by side.
Anil Kumar Jain
Anil Jain's overview of agentic AI architectures and design patterns including Reflection, Tool Use, ReAct, Planning, and Multi-Agent Collaboration. Useful when designing autonomous AI workflows and you want named patterns and frameworks like LangChain and AutoGen.
Rahul Suresh
Five buckets of AI design patterns — prompting, responsible AI, UX, AI-Ops, optimization — with code examples for each. Useful when building AI systems and you want a structured catalog of patterns beyond the classic Gang of Four.
Sarah Gold
Sarah Gold's IF studio shares 16 design patterns for AI focused on trust, collaboration, and robustness. Useful when designing AI features and you need real, named patterns for showing confidence, provenance, and digital proofs.
Mahmood R.
Compares SWOT and SOAR, arguing SOAR (Strengths, Opportunities, Aspirations, Results) is more action-oriented and forward-looking. Useful when a strategy session needs more energy and you want a positive frame.
Brett Farmiloe
Brett Farmiloe's SCORE roundup lists nine business-leader-recommended alternatives to SWOT, including NOISE, customer feedback, and cross-org review. Useful when a small business wants to plan and SWOT feels stale.
Tyler Hilker
ROWS reframes a personal SWOT by replacing passive Threats with proactive choices that fit personal evaluation. Useful when running a personal review and SWOT feels too defensive for what you actually need.
NMBL Strategies
NMBL Strategies lists four alternatives to SWOT — including PESTEL and NOISE — for teams who want different framings. Useful when a team has outgrown SWOT and you need quick options to try next.
Mukundan Sankar
Shows how neurosymbolic AI deepens SWOT analysis by tying strengths and weaknesses into a strategic narrative beyond what GPT-3.5 can do. Useful when SWOT outputs feel shallow and you want to see how AI can sharpen the analysis.
Kate Kaplan
Explains why and when to make a customer journey map and walks through the five-step process from skeleton to narrative visualization. Useful when starting a journey mapping project and you want a clear framework before pulling stakeholders in.
Sarah Gibbons
Cheat sheet comparing empathy maps, journey maps, experience maps, and service blueprints, with notes on when each is best. Useful when picking a map type for a project and you need a quick side-by-side reference.
Walter Lima
Argues that usability tests alone miss whether a product is wanted, and pairs qualitative interviews with Kano/JTBD surveys to fill the gap. Useful when usability looks fine but you suspect the product still won't sell or stick.
Adyasha Panda
Lists low-cost ways to test desirability — surveys, interviews, explainer videos, emails, coming-soon pages, and shadow buttons. Useful when a feature is on the roadmap and you want to test demand before investing real engineering effort.
Drew Freeman
Frames desirability testing as a way to go past usability and ask whether users actually want the product on an emotional level. Useful when usability scores look fine but adoption is weak and you suspect emotional fit is the gap.
Michael Hawley
Shows how Microsoft Reaction Cards can quickly capture emotional response to a design and align it to brand attributes. Useful when you need a low-cost way to sanity check a visual direction with users before committing to it.
Julie Anderson
Introduces desirability studies as a method for testing aesthetic appeal and emotional response on visual designs. Useful when launching a new visual direction and you want a quick read on whether users feel the right way about it.
Ananda Nadya
Explains a 2x2 matrix that sorts work into ship-and-measure, research-light, design-heavy, or research-heavy modes based on certainty and risk. Useful when you need a quick way to label a project and pick how much research it really calls for.
Gabriel Steinhardt
Gabriel Steinhardt's Blackblot algorithmic prioritization model walks PMs through a finite set of decisions to deterministically rank features. Useful when teams distrust subjective scoring methods like RICE or MoSCoW.
Rameez Kakodker
Rameez Kakodker walks through feature prioritization frameworks like Impact/Effort and Magic Quadrant, and argues no single framework wins everywhere. Useful when PMs want a tour of the prioritization landscape before picking one.
Lena Sesardic
Lena Sesardic shares how she validated her first product idea (Hippokite) with a simple landing page before building. Useful as an entrepreneur's example of moving from idea to first signal cheaply.
Silvia Romanelli
Silvia Romanelli argues internal KPI dashboards rarely get user-tested but should, and shares simple task-based tests she ran. Useful when teams ship internal dashboards that nobody actually uses.
Sarah Tan
Sarah Tan shows how she applied a 5-step Human-Centered AI methodology to a real travel-concierge MVP, starting with the user's inner narrative not the tech stack. Useful when teams want a concrete example of HCAI design from blank page to MVP.
Thomas Nagels
Explains how teams can reduce product risk by identifying their riskiest assumptions and deliberately testing them first with lightweight experiments. Useful when deciding what to validate before building and how to sequence learning so you avoid investing in the wrong ideas.
Rosie Hoggmascall
Explores how Monzo, the UK challenger bank, builds assumption testing into its product process: identifying demand, defining key hypotheses about user behaviour, and validating them with research and early signals before investing in solutions. Useful when deciding how to operationalize assumption testing in your discovery work so learning reduces risk and informs decisions.
Paweł Huryn
Introduces the Assumption Prioritization Canvas, a practical tool for surfacing assumptions, assessing them by risk and uncertainty, and deciding which to test first. Useful when deciding where to focus discovery effort, reduce product risk early, and align the team on what needs validation before building.
Cesar Tapia
Explains how product teams at carwow identify assumptions, prioritize them by risk, and test them early to reduce uncertainty before committing to delivery. Useful when deciding what to validate first and how to structure learning so you don't build on untested beliefs.
Michael Storrs
Argues that testing assumptions, not whole product ideas or features, is the fastest, cheapest way to learn whether a product direction is viable: identify the riskiest assumptions and validate them early with low-cost experiments. Useful when deciding which parts of an idea to test first, so you reduce risk before building prototypes or solutions.