Card Sorting FAQ

Find answers to frequently asked questions about card sorting methodology, online tools, best practices, and implementation strategies.

Card Sorting Basics

Card sorting is a user research method where participants organize content items (cards) into categories that make sense to them. It helps designers understand how users naturally think about and group information, which informs the creation of intuitive navigation structures and information architecture.

Each "card" represents a piece of content, feature, or topic. Participants sort these cards into groups and, in open card sorts, name those groups using terminology that makes sense to them.

Open card sorting: Participants create and name their own categories. This approach is best for discovering how users naturally group content and what terminology they use. It's ideal when you're starting from scratch or want to understand user mental models.

Closed card sorting: Participants sort cards into predefined categories. This is ideal for validating an existing structure, testing proposed category names, or comparing different organizational schemes.

Hybrid card sorting: Combines both approaches, giving participants predefined categories with the option to create new ones if needed. This is useful when you have a partial structure but want to remain open to new organizational ideas.

Card sorting is most valuable early in the design process when establishing information architecture. Use it:

• Before designing navigation for a new website or application
• When restructuring or reorganizing existing content
• When validating proposed categories or navigation schemes
• When you need to understand user mental models for a new product or feature
• During a redesign to ensure the new structure aligns with user expectations

Card sorting works for any digital product requiring information organization: websites, mobile apps, software applications, intranets, knowledge bases, documentation systems, and more. The methodology remains the same—you're discovering how users mentally group features, functions, or content regardless of platform.

For mobile apps, card sorting can help organize navigation menus, feature sets, settings, or any content that users need to browse or search.

Planning Your Study

Research suggests that 15-30 participants are ideal for most card sorting studies. With 15 participants, you'll identify most of the major patterns in how users group content, while 20-30 participants provide stronger statistical confidence and reveal more nuanced groupings.

More than 30 participants yields diminishing returns for most studies, though larger samples can be valuable if you're comparing different user segments or need very high confidence in your results.

Most successful card sorts use 30-60 cards. This range provides enough complexity to reveal meaningful patterns without overwhelming participants.

Fewer than 30 cards may not provide enough complexity to reveal how users would organize a realistic amount of content. More than 60 cards can overwhelm participants and lead to fatigue, reducing the quality of results as they rush to complete the task.

If you have more than 60 items to test, consider running multiple studies focusing on different sections or content types.

Recruit participants who represent your target audience—typically regular users, not experts. Experts often have specialized knowledge and mental models that don't reflect how most users think about your content.

You want to understand your typical users' mental models, as they're the ones who will navigate your site or application. Participants should be familiar enough with your domain to understand the card labels, but not so expert that they sort content based on industry taxonomies rather than intuitive groupings.

Effective card labels are clear, concise, and use language your users understand:

• Avoid jargon and internal terminology
• Keep labels brief (2-5 words typically)
• Use concrete terms rather than abstract concepts
• Be consistent in voice and style across all cards
• Test labels with a few users first to ensure clarity
• Add descriptions or tooltips for complex items if your tool supports it

Good: "Return a product" / Bad: "RMA process initiation"

Conducting the Study

Online card sorting offers scalability, remote participation, automated analysis, and lower costs. It's ideal for unmoderated studies, large sample sizes, and when you need quantitative data quickly.

Physical card sorting allows for better observation of participant behavior, easier facilitation of think-aloud sessions, and more natural interaction. It's best for exploratory research where you want deep qualitative insights.

Choose based on your research goals, budget, timeline, and whether you need primarily quantitative data or qualitative insights.

Online tools also enable unmoderated studies, where participants complete the sort independently without a researcher present. This approach allows for larger sample sizes, geographic diversity, participants completing the study at their convenience, and lower costs per participant.

However, you lose the ability to observe behavior in real-time, ask follow-up questions, or clarify misunderstandings. Unmoderated studies work best when your cards are clearly labeled and your instructions are unambiguous.

Most card sorting sessions take 20-40 minutes, depending on the number of cards and complexity. Keep sessions under 45 minutes to avoid participant fatigue, which can lead to rushed decisions and lower quality data.

If your study requires more time, consider breaking it into multiple sessions, reducing the number of cards, or using a hybrid sort with some predefined categories to speed up the process.

For open sorts, allow flexibility but consider setting reasonable limits—typically 5-15 categories. Too few categories force artificial groupings that don't reflect users' true mental models. Too many categories create unusable fragmentation.

Some tools let you set minimum and maximum category numbers to guide participants while still capturing their mental models. This prevents extreme results (like one participant creating 30 categories) while maintaining the value of open sorting.

Analyzing Results

Analysis includes several key steps:

• Review similarity matrices to see which cards participants commonly grouped together
• Examine dendrograms to identify natural hierarchies and clusters
• Analyze category names from open sorts for common terminology
• Identify outlier cards that participants struggled to categorize
• Look for patterns and consensus across multiple participants
• Note cards that were consistently grouped together vs. those with high disagreement

Most online tools provide automated analysis features including these visualizations.
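To make the similarity matrix concrete, here is a minimal Python sketch of the underlying calculation. The sorts data structure, card labels, and similarity_matrix function are hypothetical illustrations rather than the export format or API of any particular tool; the idea is simply to count, for every pair of cards, how many participants placed them in the same group.

```python
from collections import defaultdict
from itertools import combinations

# Hypothetical raw open-sort results: one dict per participant,
# mapping their category name -> list of card labels placed in it.
sorts = [
    {"Orders": ["Return a product", "Track my order"], "Account": ["Change password"]},
    {"Shopping": ["Track my order", "Return a product", "Change password"]},
]

def similarity_matrix(sorts):
    """Count, for each pair of cards, how many participants grouped them together."""
    counts = defaultdict(int)
    for participant in sorts:
        for cards in participant.values():
            for a, b in combinations(sorted(cards), 2):
                counts[(a, b)] += 1
    return counts

for (a, b), n in sorted(similarity_matrix(sorts).items()):
    print(f"{a} + {b}: grouped together by {n} of {len(sorts)} participants")
```

Pairs with high counts relative to the number of participants are strong candidates for living in the same category; pairs that are never grouped together probably belong in different parts of the structure.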

A dendrogram is a tree diagram showing hierarchical relationships between cards based on how frequently they were grouped together by participants. Cards that appear closer together on the tree were more commonly grouped together.

The diagram helps identify natural clusters and potential category structures. Branch points show where groups split, with cards that stay together longer indicating stronger relationships. Use dendrograms to determine how many main categories make sense and which cards belong together.
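For a sense of how a dendrogram is derived from agreement data, the sketch below applies hierarchical clustering with SciPy, assuming NumPy and SciPy are installed. The card names and agreement fractions are made-up example values; a real tool would compute them from your study results.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, dendrogram
from scipy.spatial.distance import squareform

# Hypothetical example: fraction of participants who grouped each pair of
# cards together, for four cards. Distance = 1 - agreement.
cards = ["Return a product", "Track my order", "Change password", "Contact support"]
agreement = np.array([
    [1.0, 0.9, 0.2, 0.3],
    [0.9, 1.0, 0.1, 0.4],
    [0.2, 0.1, 1.0, 0.6],
    [0.3, 0.4, 0.6, 1.0],
])
distance = 1.0 - agreement
np.fill_diagonal(distance, 0.0)

# Convert the square distance matrix to condensed form and cluster.
linkage_matrix = linkage(squareform(distance), method="average")

# Extract the leaf order; a plotting library could render this as a tree.
dendro = dendrogram(linkage_matrix, labels=cards, no_plot=True)
print(dendro["ivl"])  # cards that cluster together appear next to each other
```

In this toy example, "Return a product" and "Track my order" merge first because they have the highest agreement, which is exactly the pattern you would look for when deciding which cards share a category.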

Cards consistently left unsorted or placed in many different categories may indicate several things:

• The card label is unclear or ambiguous
• The item doesn't belong in your information architecture
• The content needs its own dedicated category
• The item genuinely fits in multiple places (consider cross-linking)
• Participants don't understand the concept

Review these outliers carefully—they often reveal important insights about your content or your users' understanding. Don't ignore them; investigate why they're problematic.
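One simple way to flag such cards programmatically in a closed sort is to check how concentrated each card's placements are. The sketch below assumes a hypothetical results format (one dict per participant mapping card labels to the chosen category) and flags any card whose most popular category attracted fewer than half of participants.

```python
from collections import Counter, defaultdict

# Hypothetical closed-sort results: one dict per participant,
# mapping card label -> category the participant chose for it.
placements = [
    {"Return a product": "Orders", "Gift cards": "Orders"},
    {"Return a product": "Orders", "Gift cards": "Payments"},
    {"Return a product": "Orders", "Gift cards": "Help"},
]

by_card = defaultdict(Counter)
for participant in placements:
    for card, category in participant.items():
        by_card[card][category] += 1

for card, counts in by_card.items():
    top_category, top_votes = counts.most_common(1)[0]
    agreement = top_votes / sum(counts.values())
    if agreement < 0.5:  # low consensus: candidate for relabeling or cross-linking
        print(f"Low agreement on '{card}': best fit '{top_category}' at {agreement:.0%}")
```

Cards flagged this way are the ones worth revisiting in follow-up interviews or a second round of sorting.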

Online Card Sorting Tools

Online card sorting tools offer significant advantages:

• Enable remote, unmoderated studies with participants anywhere in the world
• Provide automated data analysis, saving hours of manual work
• Generate dendrograms, similarity matrices, and other visualizations
• Support larger sample sizes more easily than in-person studies
• Cost less overall than in-person research
• Allow participants to complete studies at their convenience
• Make it easy to recruit participants from diverse geographic locations

Basic paid plans for online card sorting tools typically range from $49 to $249 per month. Costs vary based on features, number of concurrent studies, participant limits, and analysis capabilities. Some tools offer free trials or free tiers with limited features.

Additional costs may include participant recruitment if you don't have your own user base. Some platforms offer built-in participant recruitment for an additional fee. See our tools comparison page for detailed pricing across platforms.

Best Practices

Common mistakes that can compromise your results include:

• Using too many or too few cards (stick to 30-60)
• Writing ambiguous or jargon-filled card labels
• Recruiting participants who don't represent your actual users
• Not piloting your study with a few users first
• Forcing participants into unrealistic time constraints
• Using terminology users won't understand
• Failing to act on the results or implement findings
• Not combining with other research methods for validation

Card sorting also combines well with other research methods:

Tree testing: Validate the resulting structure by having users find specific items
Follow-up interviews: Gain qualitative insights into why users made certain grouping decisions
Surveys: Collect demographic data and additional context
Usability testing: Verify the implemented navigation works as expected
Analytics review: Compare card sort results with actual user behavior data

This multi-method approach provides more comprehensive insights than any single method alone.

Card sorting and tree testing serve different but complementary purposes:

Card sorting is generative—it helps you discover and create information architecture by seeing how users naturally group content. Use it when you're building or restructuring navigation.

Tree testing is evaluative—it validates existing or proposed navigation structures by having users find specific items within a hierarchy. Use it to test if your structure actually works.

These methods work well together sequentially: use card sorting to create structure, then tree testing to validate it works.

Ready to Choose a Card Sorting Tool?

Now that you understand card sorting methodology, explore our comprehensive comparison of online tools to find the best platform for your research needs.
