Blog

  • User Research Is Storytelling

    User Research Is Storytelling

    I’ve been fascinated by movies since I was a child. I loved the characters and the excitement, but most of all the stories. I aspired to be a filmmaker, and I believed that I’d get to do the things that Indiana Jones did and go on exciting adventures. My friends and I even had ideas for movies we’d make and star in, but they never went any further. Instead, I ended up working in user experience (UX). Today, I realize that there’s an element of drama to UX as well. I hadn’t really considered it before, but user research is storytelling. And to get the most out of user research, you need to tell a compelling story that entices stakeholders, such as the product team and decision-makers, to listen and to want to learn more.

    Think of your favorite film. It more than likely follows the three-act structure that’s common in movies: the setup, the conflict, and the resolution. The first act establishes what exists now; it helps you get to know the characters and the challenges and problems that they face. The conflict begins in act two, which introduces the core issue, and here the problems grow or get worse. The resolution comes in the third and final act, where the issues are resolved and the characters learn and change. This structure, in my opinion, is also a fantastic way to think about user research, and it can be particularly useful for explaining user research to others.

    Use story as a framework for conducting research

    It’s sad to say, but many have come to see user research as dispensable. Research is frequently one of the first things to go when budgets or deadlines are tight. Instead of investing in research, some product professionals rely on designers or, worse, their own judgment to make the “right” choices for users based on their experience or accepted best practices. That might get teams part of the way, but it’s too easy to overlook the real problems facing users. If we want to be user-centered, this is something we must avoid. User research improves design. It keeps design on track, pointing to problems and opportunities. Being aware of problems with your product and taking corrective action can help you stay ahead of your competition.

    In the three-act structure, each act corresponds to a part of the research process, and each part is important to telling the whole story. Let’s take a look at each act and how it relates to user research.

    Act one: setup

    The setup is all about understanding the context, and this is where foundational research comes in. Foundational research (also called generative, discovery, or exploratory research) helps you understand users and identify their problems. You’re learning about the problems people face now, what options are available to them, and how those challenges impact them, just like in the movies. To do foundational research, you might conduct contextual inquiries or diary studies (or both!), which can help you identify both opportunities and problems. It doesn’t need to be a huge investment in time or money.

    Erika Hall describes the leanest possible ethnographic study: spend fifteen minutes with a user and say, “Walk me through your day yesterday. That’s it. Ask that one question. Open up and listen to them for 15 minutes. Do everything in your power to keep your own objectives, and yourself, out of it. Bam, you’re doing ethnography.” Hall predicts that “[this] will likely prove quite fascinating. In the very unlikely event that you didn’t learn anything new or helpful, carry on with increased confidence in your way.”

    This makes complete sense to me. And I love that it makes user research so accessible. You just need to recruit participants and do it! You don’t need to create a lot of documentation. This can offer a wealth of knowledge about your users, and it’ll help you better understand them and what’s going on in their lives. That’s exactly what act one is all about: understanding where people are coming from.

    Jared Spool talks about the importance of foundational research and how it really forms the bulk of your research. You can supplement what you’ve heard in foundational research with other user data you can obtain, such as surveys or analytics, to corroborate findings or to highlight areas that need more investigation. Together, all this information creates a clearer picture of the state of things and all its deficiencies. And that’s the start of a gripping tale. It’s the point in the story where you realize that the protagonists (the users, in this case) are facing problems that they need to overcome. This is where you begin to develop empathy for the characters and root for their success, much like in the movies. And hopefully stakeholders are now doing the same. Maybe their business is losing money because users can’t complete particular tasks. Or perhaps they genuinely empathize with users’ problems. In either case, act one serves as your main tool for piquing stakeholders’ interest and investment.

    When stakeholders begin to understand the value of foundational research, it can open doors to more opportunities to involve users in the decision-making process. And that can help product teams become more user-centric. This benefits everyone: users, the product, and stakeholders. It’s similar to a film winning an Oscar, because it frequently results in a favorable and successful outcome for your product. And it can be an incentive for stakeholders to repeat this process with other products. Storytelling is the key to this process: to convince stakeholders to care about doing more research, you need to know how to tell a good story.

    This brings us to act two, where you iteratively evaluate a design or concept to see whether it addresses the issues.

    Act two: conflict

    Act two is all about digging deeper into the problems that you identified in act one. This typically involves conducting directional research, such as usability tests, where you evaluate a potential solution ( such as a design ) to see if it addresses the issues you identified. The issues could include unmet needs or problems with a flow or process that’s tripping users up. More issues will come up in the process, much like in act two of a movie. It’s here that you learn more about the characters as they grow and develop through this act.

    Usability tests should typically include about five participants, according to Jakob Nielsen, who found that that number of users can typically identify the majority of the issues: “As you add more and more users, you learn less and less because you will keep seeing the same things again and again… After the fifth user, you are wasting your time by observing the same findings repeatedly but not learning much new.”

    There are parallels with storytelling here too: if you try to tell a story with too many characters, the plot may get lost. With fewer participants, each user’s struggles will be more memorable and accessible to stakeholders when you present the research. This can help convey the issues that need to be addressed while also highlighting the value of doing the research in the first place.
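    Nielsen’s five-user guideline comes from a simple diminishing-returns model he published with Tom Landauer: the share of usability problems found with n participants is roughly 1 − (1 − L)^n, where L is the proportion of problems a single participant uncovers (about 31% on average in their data). A quick sketch of that curve (the 31% value is Nielsen’s published average, not a universal constant):

```python
# Nielsen & Landauer's diminishing-returns model for usability testing:
# proportion of problems found with n participants = 1 - (1 - L)^n,
# where L is the share of problems a single participant reveals
# (Nielsen reports L ~= 0.31 as an average across studies).
def problems_found(n, L=0.31):
    """Estimated share of total usability problems uncovered by n users."""
    return 1 - (1 - L) ** n

for n in [1, 3, 5, 10]:
    print(f"{n:2d} participants -> about {problems_found(n):.0%} of problems")
```

    The curve flattens quickly, which is the basis of Nielsen’s advice: after about five participants, each additional session mostly re-confirms what earlier sessions already showed.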

    Usability tests have been conducted in person for decades, but you can also conduct them remotely using Microsoft Teams, Zoom, or other teleconferencing software. This approach has become increasingly popular since the beginning of the pandemic, and it works well. You might think of in-person usability testing as attending a play, and remote testing as watching a movie. There are advantages and disadvantages to each. In-person usability research is a much richer shared learning experience: stakeholders can experience the sessions together and react in real time to what they’re seeing, including surprises, disagreements, and the discussions that follow. Much like going to a play, where audiences get to take in the stage, the costumes, the lighting, and the actors’ interactions, in-person research lets you see users up close, including their body language, how they interact with the moderator, and how the scene is set up.

    If lab-based usability testing is like a play that is staged and controlled, then usability testing in the field is more like improvised theater, where any two sessions may be very different from one another. You can take usability testing into the field by creating a replica of the space where users interact with the product and conducting your research there. Or you can meet users at their location. With either option, you get to see how things work in context; things come up that wouldn’t have in a lab environment, and the conversation can shift in entirely different directions. As researchers, you have less control over how these sessions unfold, but this can occasionally help you understand users even better. Meeting users where they are can provide clues to the external forces that could be affecting how they use your product. In-person usability tests add a level of detail that is frequently absent from remote ones.

    That’s not to say that the “movies” (remote sessions) aren’t a good option. Remote sessions can reach a wider audience. They allow a lot more stakeholders to be involved in the research and to see what’s going on. Additionally, they open the doors to a much wider range of users. But with any remote session, there is the potential for wasted time if participants can’t log in or get their microphone working.

    Whether conducted remotely or in person, usability testing lets you ask real users questions to understand their thoughts on, and understanding of, the solution. This can help you not only identify problems but also glean why they’re problems in the first place. You can also test your own assumptions and determine whether they hold true. By the end of the sessions, you’ll have a much clearer picture of how usable the designs are and whether they work for their intended purposes. Act two is where the tension and excitement are at the heart of the narrative, and there is potential for surprises. The same is true of usability tests: sometimes participants say unexpected things that alter the way you look at the problem, which can lead to unexpected turns in the story.

    Unfortunately, user research is sometimes seen as expendable, and this is especially true of usability testing, which some stakeholders believe is the only research they ever need. In fact, if the designs that you’re evaluating in a usability test aren’t grounded in a solid understanding of your users (foundational research), there’s not much to be gained by doing usability testing in the first place. Without understanding the needs of the users, you’re narrowing the scope of what you’re receiving feedback on, and there’s no way of knowing whether the designs might solve a problem that users actually have. In the context of a usability test, it’s only feedback on a particular design.

    On the other hand, if you only do foundational research, you might have set out to solve the right problem, but you won’t know whether the thing that you’re building will actually solve it. This demonstrates the value of conducting both foundational and directional research.

    In act two, stakeholders will, hopefully, get to watch the story unfold in the user sessions, which surfaces the conflict and tension in the current design through its highs and lows. And in turn, this can encourage stakeholders to take action on the issues raised.

    Act three: resolution

    While the first two acts are about understanding the context and building the tension that compels action, the third act is about resolving the issues they raised. It’s important to have an audience for the first two acts, but it’s crucial that they stick around for the final one. That includes all members of the product team: developers, UX experts, business analysts, delivery managers, product managers, and any other parties who have a say in what gets built next. It allows the whole team to hear users’ feedback together, ask questions, and discuss what’s possible within the project’s constraints. And it gives the UX design and research teams a chance to clarify, suggest alternatives, or provide more context for their choices. That way, you can get everyone on the same page and reach agreement on the way forward.

    This act typically takes the form of a presentation with audience input. The researcher is the narrator, painting a picture of the issues and of what the future of the product could look like given what the team has learned. They provide the stakeholders with recommendations and direction for realizing this vision.

    Nancy Duarte, in the Harvard Business Review, offers an approach to structuring presentations that follow a persuasive story. The most effective presenters “set up a conflict that needs to be resolved” using the same methods as great storytellers, Duarte writes. “That tension helps them persuade the audience to adopt a new mindset or behave differently.”

    This type of structure aligns well with research results, particularly results from usability tests. It provides proof of “what is,” the issues you’ve identified, and of “what could be,” your recommendations on how to address them, moving back and forth between the two.

    You can reinforce your recommendations with examples of things that competitors are doing that could address these issues, or with examples of where competitors are gaining an edge. Recommendations can also be as simple as quick sketches of a potential solution to a problem. These can help generate conversation and momentum. You conclude the session by bridging the gaps and offering suggestions for improvement, reiterating the main themes or problems and what they mean for the product: the denouement of the story. This stage provides stakeholders with the next steps and, hopefully, the motivation to take them!

    While we are nearly at the end of this story, let’s reflect on the idea that user research is storytelling. The three-act structure of user research contains all the components for a good story:

      Act one: You meet the protagonists (the users) and the antagonists (the problems affecting users). The plot begins here. In act one, researchers might use methods including contextual inquiry, ethnography, diary studies, surveys, and analytics. These techniques can produce personas, empathy maps, user journeys, and analytics dashboards as outputs.
      Act two: Next, there’s character development. The protagonists encounter problems and difficulties that they must overcome, and there is conflict and tension. In act two, researchers might use methods including usability testing, competitive benchmarking, and heuristic evaluation. The outputs of these can include usability findings reports, UX strategy documents, usability guidelines, and best practices.
      Act three: The protagonists triumph, and you see what a better future looks like. In act three, researchers might use techniques like presentations, storytelling, and digital media. The outputs of these can be presentation decks, video clips, audio clips, and pictures.

    The researcher performs a number of roles: producer, director, and storyteller. The participants play a small but significant part (they’re the characters in the research). And the audience are the stakeholders. But the most important thing is to get the story right and to use storytelling to tell users’ stories through research. By the end, stakeholders should leave with a purpose and an eagerness to address the product’s flaws.

    So the next time that you’re planning research with clients or you’re speaking to stakeholders about research that you’ve done, think about how you can weave in some storytelling. User research is ultimately a win-win situation for everyone, and all you need to do is pique stakeholders ‘ interest in how the story ends.

  • From Beta to Bedrock: Build Products that Stick.

    From Beta to Bedrock: Build Products that Stick.

    In my many years as a product designer, I’ve lost count of the times I’ve watched promising ideas go from zero to hero in a few days, only to fail to deliver within weeks.

    Financial products, the area of my specialization, are no exception. Because people’s real, hard-earned money is on the line, user expectations are high, and the market is crowded, it’s tempting to throw as many features at the wall as possible and hope something sticks. However, this strategy is a formula for disaster. Here’s why:

    The pitfalls of feature-first development

    It’s easy to get swept up in the enthusiasm of developing innovative features when you’re building a financial product from scratch or migrating existing client journeys from paper or phone channels to online banking or mobile apps. You may think, “If I just add one more thing that solves this particular user problem, they’ll love it!” But what happens when you hit a roadblock as a result of your security team’s due diligence? When users simply don’t like it? When a hard-fought feature isn’t as popular as you anticipated, or fails due to unforeseen complexity?

    This is where the concept of the Minimum Viable Product (MVP) applies. Even if Jason Fried doesn’t usually use the term, his book Getting Real and his podcast Rework frequently discuss the idea. An MVP is a product that offers just enough value to keep your users interested, but not so much that it becomes difficult to maintain. Although it seems like an easy idea, it requires a razor-sharp eye, a ruthless edge, and the courage to stand your ground, because it is easy to fall for “the Columbo Effect,” where there is always “just one more thing…” to add.

    The issue with most banking apps is that they frequently turn out to be reflections of the company’s internal politics rather than an experience created for the customer. Priority goes to delivering as many features and functionalities as possible to satisfy the requirements and wishes of competing internal departments, as opposed to crafting a compelling value proposition focused on what people in the real world actually want. These products can therefore quickly become a muddled mess of confusing, disjointed, and ultimately unlovable customer experiences: a feature salad, you might say.

    The importance of bedrock

    What’s a better strategy, then? How can we create products that are user-friendly, solid, and, most importantly, stick?

    This is where the concept of “bedrock” comes into play. Bedrock is the core of your product that really matters to customers. It serves as the fundamental building block that creates value and maintains relevance over time.

    In the retail banking industry, where I work, the bedrock lies in and around the standard servicing journeys. People open a new account once in a blue moon, but they check their existing account every day. They take out a credit card once a year or every other year, but they check their balance and pay their bills at least once a month.

    The key is identifying the main tasks that people want to complete, and then relentlessly striving to make them simple, reliable, and trustworthy.

    But how do you reach bedrock? By embracing the MVP mindset, giving clarity top priority, and working toward a distinct value proposition. This means avoiding unnecessary functionality, putting your users first, and adding real value.

    It also requires some fortitude, as your coworkers might not always agree with you immediately. And counterintuitively, it can occasionally even mean making it clear to customers that you won’t be coming to their house to make their breakfast. Sometimes you need to use the occasional “opinionated user interface design” (i.e., a clunky workaround for edge cases) to test a concept or to buy yourself more time to work on something more crucial.

    Practical methods for creating financial products that stick

    What are the main lessons I’ve learned from my own research and practice, then?

    1. Start with a distinct “why”: What problem are you trying to solve, first and foremost? For whom? Before beginning any project, make sure your goal is completely clear. Make certain it also aligns with the goals of your business.
    2. Avoid the temptation to ship too many features at once. Choose one that actually adds value, get it right first, and build from there.
    3. When it comes to financial products, clarity is more important than complexity. Eliminate unwanted details and concentrate on what matters most.
    4. Embrace constant iteration: bedrock is an ongoing process rather than a set destination. Continuously collect customer feedback, improve your product, and work toward that foundational position.
    5. Stop, look, and listen: Don’t just test your product as part of the delivery process; test it consistently in the field. Use it yourself. Run A/B tests. Gather user feedback. Speak to the people who use it and make adjustments accordingly.
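    To make “run A/B tests” concrete, here’s a minimal sketch of the kind of significance check that usually sits behind one, a standard two-proportion z-test. The function name, the payment-flow scenario, and all the numbers are hypothetical, chosen only for illustration:

```python
from math import erf, sqrt

def ab_test_p_value(conv_a, n_a, conv_b, n_b):
    """Two-sided p-value for the difference in conversion rates
    between variant A and variant B (two-proportion z-test)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)          # pooled rate under H0
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # two-sided p-value from the standard normal CDF
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

# Hypothetical example: 120/2000 vs 158/2000 completed payment flows
print(f"p-value: {ab_test_p_value(120, 2000, 158, 2000):.3f}")
```

    A small p-value (conventionally below 0.05) suggests the difference is unlikely to be noise; with the made-up numbers above it comes out around 0.02, so variant B’s higher completion rate would be worth acting on.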

    The bedrock paradox

    Building toward bedrock implies sacrificing some short-term growth potential in favor of long-term stability, which is the interesting paradox at play here. But the reward is worthwhile, because products built on bedrock will outlast and outperform their rivals over time and provide users with long-term value.

    So how do you begin your quest for bedrock? Start small. Begin by identifying the essential tasks that your customers actually care about. Concentrate on developing and improving a single, powerful feature that delivers real value. And most importantly, be obsessive about it, because whether you attribute the quote to Abraham Lincoln, Alan Kay, or Peter Drucker, you can’t deny it: the best way to predict the future is to build it.

  • A Holistic Framework for Shared Design Leadership

    A Holistic Framework for Shared Design Leadership

    Imagine this: two people are in a conference room at your software company, discussing what appears to be the same design problem. One is asking whether the team has the right skills to handle it. The other is examining whether the solution really addresses the user’s issue. Same room, same problem, entirely different perspectives.

    This is the lovely, sometimes messy reality of having both a Design Manager and a Lead Designer on the same team. And you’re asking the right question if you’re wondering how to make this work without creating confusion, overlap, or the dreaded “too many cooks” situation.

    The conventional solution has been to create an org chart with clear lines: the Design Manager handles people, the Lead Designer handles craft. Great, problem solved, right? Except that clean organizational charts are a fantasy. In reality, both roles care deeply about team health, design quality, and shipping great work.

    The magic happens when you start thinking of your design team as a living organism and embrace the overlap rather than fighting it.

    A Healthy Design Team’s Biology

    Here’s what I’ve learned from years of being on both sides of this equation: think of your design team as a living organism. The Design Manager tends to the mind (the team dynamics, psychological safety, and career growth). The Lead Designer focuses on the body (the craft, the design standards, the hands-on work that is delivered to users).

    But just as mind and body aren’t totally separate systems, neither are these roles: they overlap in significant ways. You can’t have a healthy person without the two working in harmony. The trick is to recognize those overlaps and learn how to manage them gracefully.

    When we look at how healthy teams really function, three critical systems emerge. Each requires contributions from both roles, but one role has to take the lead in keeping that system sturdy.

    People & Psychology: The Nervous System

    Primary caretaker: Design Manager
    Supporting role: Lead Designer

    The nervous system is all about mental health, feedback, and signals. When this system is healthy, information flows easily, people feel safe to take risks, and the team can react quickly to new problems.

    Here, the Design Manager is the primary caretaker. They keep track of the team’s emotional state, make sure feedback loops are healthy, and create the conditions for growth. They’re hosting career conversations, managing workload, and making sure no one burns out.

    However, the Lead Designer has a vital supporting role. They’re offering craft-specific feedback on skill development needs, identifying stagnating design skills, and pointing out potential growth opportunities that the Design Manager might overlook.

    Design Manager tends to:

    • Career conversations and career development
    • Psychological safety and team dynamics
    • Workload management and resource planning
    • Performance reviews and feedback systems
    • Providing learning opportunities

    Lead Designer supports by:

    • Providing craft-specific evaluation of team member growth
    • Identifying design skill gaps and opportunities for growth
    • Giving design mentoring and coaching
    • Flagging when team members are ready for more challenging problems

    Craft & Execution: The Muscular System

    Primary caretaker: Lead Designer
    Supporting role: Design Manager

    The muscular system is about strength, coordination, and skill development. When this system is healthy, the team can execute complicated design work with precision, maintain consistent quality, and adapt their craft to fresh challenges.

    The Lead Designer takes the lead here. They set design standards, define quality bars, and provide craft coaching. They’re the ones who can tell you whether a design decision is sound and whether we’re solving the right problem.

    However, the Design Manager has a significant supporting role. They make sure the team has the resources and support they need to perform their best work, much like ensuring that an athlete receives adequate nutrition and time for recovery.

    Lead Designer tends to:

    • Design standards and design-system usage
    • Feedback on design output against the required standards
    • Experience direction for the product
    • Design decisions and product-wide alignment
    • Advancement of craft and innovation

    Design Manager supports by:

    • Ensuring that all members of the team are aware of and adopt design standards
    • Confirming that the right course of action is being taken
    • Supporting practices and systems that scale without bottlenecking
    • Facilitating design alignment among all teams
    • Providing resources and removing obstacles to outstanding craft work

    Strategy & Flow: The Circulatory System

    Primary caretakers: both the Lead Designer and the Design Manager

    The circulatory system is about how decisions, energy, and information flow through the team. When this system is healthy, strategic direction is clear, priorities are aligned, and the team can respond quickly to new opportunities or challenges.

    This is where true partnership occurs. Both roles are responsible for keeping the circulation strong, but each brings a different viewpoint.

    Lead Designer contributes:

    • Whether the finished product satisfies user needs
    • Overall experience and product quality
    • Strategic design initiatives
    • Research-based user requirements for each initiative

    Design Manager contributes:

    • Communication to team and stakeholders
    • Management of stakeholders and alignment
    • Team accountability across all levels
    • Strategic business initiatives

    Both parties work together on:

    • Co-creation of strategy and leadership
    • Team goals and prioritization approach
    • Organizational structure decisions
    • Success frameworks and measures

    Keeping the Organism Healthy

    Understanding that all three systems must work together is the key to making this partnership sing. A team with excellent craftsmanship but poor psychological safety will eventually burn out. A team with great culture but weak craft execution will ship mediocre work. A team that has both but poor strategic flow will work hard on the wrong things.

    Be Explicit About Which System You’re Tending

    When you’re in a meeting about a design problem, it helps to acknowledge which system you’re primarily focused on: “I’m thinking about this from a team-capacity perspective” (nervous system) or “I’m looking at this through the lens of user needs” (muscular system). That way, everyone has context for your input.

    This is not about staying in your lane. It’s about being transparent about which lens you’re using, so the other person knows how best to add their perspective.

    Create Positive Feedback Loops

    The most effective partnerships create clear feedback loops between the systems:

    Nervous system signals to muscular system: “The team is struggling with confidence in their design skills” → the Lead Designer provides more craft coaching and clearer standards.

    Muscular system signals to nervous system: “The team’s craft skills are progressing more quickly than their project complexity” → the Design Manager finds more challenging opportunities for them.

    Both systems signal to the circulatory system: “We’re seeing patterns in team health and craft development that suggest we need to adjust our strategic priorities.”

    Handle Handoffs Gracefully

    This partnership’s most crucial moments occur when something moves from one system to another. This might happen when a team growth need (nervous system) calls for a new design standard (muscular system), or when a strategic initiative (circulatory system) needs specific craft execution (muscular system).

    Make these transitions explicit: “I’ve defined the new component standards. Can you figure out how to get the team up to speed?” or “We’ve agreed on this strategic direction. From here, I’ll concentrate on the specific user experience approach.”

    Stay Curious, Don’t Be a Tourist

    The Design Manager who never thinks about craft, or the Lead Designer who never considers team dynamics, is like a doctor who only looks at one body system. Great design leadership requires both parties to be concerned with the entire organism, even when they are not the primary caregiver.

    Ask questions rather than making assumptions. “What do you think about the team’s craft development in this area?” or “How do you think this is affecting team morale and workload?” keeps both viewpoints present in every decision.

    When the Organism Gets Sick

    This partnership has the potential to go wrong, even with clear roles. Here are the most typical failure modes I’ve seen:

    System Isolation

    The Design Manager ignores craft development and concentrates solely on the nervous system. The Lead Designer ignores team dynamics and concentrates solely on the muscular system. Both people retreat to their comfort zones and stop collaborating.

    The signs: team members receive mixed messages, morale drops, and quality suffers.

    The treatment: reconnect and discuss shared outcomes. What are you both trying to achieve? It’s typically excellent design work, delivered on time, by a capable team. Work out how both systems contribute to that goal.

    Poor Circulation

    Neither person takes responsibility for keeping information flowing: strategic direction is unclear, priorities shift without explanation, and no one owns communication.

    The signs: Team members are unsure of their priorities, work is duplicated or dropped, and deadlines are missed.

    The treatment: Explicitly assign responsibility for circulation. Who is communicating with whom? How frequently? What’s the feedback loop?

    Autoimmune Response

    Each person feels threatened by the other’s expertise. The Design Manager thinks the Lead Designer is undermining their authority. The Lead Designer thinks the Design Manager misunderstands the craft.

    The signs: defensive behavior, territorial disputes, team members sucked into the middle.

    The treatment: Remember that you’re both caretakers of the same organism. The entire team suffers when one system fails. The team thrives when both systems are strong.

    The Payoff

    Yes, there is more communication required with this model. Yes, it requires that both parties be able to assume full responsibility for team health. But the payoff is worth it: better decisions, stronger teams, and design work that’s both excellent and sustainable.

    When both roles are balanced and functioning well together, you get the best of both worlds: strong people leadership and deep craft expertise. When one person is sick, on vacation, or overloaded, the other can help maintain the team’s health. When a decision needs both the people perspective and the craft perspective, you’ve got both right there in the room.

    Most importantly, the framework scales. As your team grows, you can apply the same systems thinking to new challenges. Need to launch a design system? That’s the muscular system (standards and implementation), the nervous system (team adoption and change management), and the circulatory system (communication and stakeholder alignment) all working together.

    The End Result

    The relationship between a Design Manager and a Lead Designer isn’t about dividing territory. It’s about multiplying impact. The magic happens when both roles understand they’re tending different parts of the same healthy organism.

    The mind and body work together. The team gets both the craft excellence and the strategic thinking it needs. And most importantly, the work that reaches users benefits from both perspectives.

    So the next time you’re in that meeting room, wondering why two people are talking about the same problem from different angles, remember: you’re watching shared leadership in action. And if it’s working well, your design team’s mind and body are both getting stronger.

  • Design Dialects: Breaking the Rules, Not the System

    Design Dialects: Breaking the Rules, Not the System

    Language is not merely a set of unrelated sounds, clauses, rules, and meanings, but a total coherent system bound to context and behavior. — Kenneth L. Pike

    Accents and dialects are everywhere on the web. But what about our design systems?

    Design Systems as Living Languages

    Design systems are living languages, not component libraries. Components are words, patterns are phrases, and layouts are sentences. Tokens are phonemes. The stories our products tell are shaped by the conversations they have with people.

    But we’ve forgotten something about languages: the more accents a language can support without losing its meaning, the more widely it is spoken. English in Scotland and English in Sydney are undeniably different, yet both are clearly English. The language adapts to its context while preserving its core meaning. As a Brazilian Portuguese speaker who grew up in Sydney and learned English with an American accent, this was even more apparent to me.

    Our design systems must work the same way. Rigid adherence to visual conventions produces brittle systems that shatter under real-world pressure. Flexible systems stretch without breaking.

    When Consistency Becomes a Prison

    Design systems made a simple promise: consistent components would speed up development and unify experiences. But as systems matured and products grew more sophisticated, that promise has hardened into a prison. Teams file countless “exception” requests. Products ship with workarounds instead of system components. Designers spend more time policing consistency than solving customer problems.

    To work properly, our design systems need dialects.

    A design dialect is a systematic adaptation of a design system that preserves its foundational principles while creating new patterns for particular contexts. Unlike one-off customizations or product themes, dialects keep the system’s essential grammar while expanding its vocabulary to fit different users, conditions, or constraints.

    When Perfect Consistency Is a Failure

    I learned this lesson unexpectedly at Booking.com. We A/B tested everything: colors, copy, buttons, even logo variations. As someone with a background in graphic design and corporate brand guidelines, I found this shocking. Yet while everyone adored Airbnb’s flawless design system, Booking grew into a giant without ever prioritizing visual consistency.

    The lesson: consistency isn’t the ROI; solved problems are.

    Then came Shopify. Our crown jewel was Polaris, a mature design language that worked beautifully for desktop merchants. As a product team, we were expected to follow Polaris as-is. Then my fulfillment team hit an “Oh, ship!” moment: we had to build an app for warehouse inventory pickers using shared, battered Android scanners in dark aisles, wearing heavy gloves, scanning dozens of items in quick succession, some with only minimal English comprehension.

    Standard Polaris: 0% task completion.

    Every component that worked wonders for merchants failed completely for warehouse workers. Bright backgrounds produced glare. 44px tap targets disappeared under gloved fingers. Sentence-case labels took too long to parse. Multi-step flows confused non-native speakers.

    We could abandon Polaris entirely, or we could teach it to speak warehouse.

    The Birth of a Dialect

    We chose evolution over revolution. By staying true to Polaris’s core values of clarity, efficiency, and consistency, we created what we now call a design dialect.

    Constraint | Fluent Shift | Rationale
    Low light, glare-prone screens | Light text on dark surfaces | Reduce glare on low-DPI screens
    Gloves & urgency | 90px tap targets (~2cm) | Hittable with gloved fingers
    Multilingual workforce | Single-task screens in plain language | Reduce cognitive load

    Results: task completion went from 0% to 100%. Onboarding time was cut from three days to one shift.

    This wasn’t customization or theming; it was a dialect: a systematic translation that preserved Polaris’s underlying grammar while coining new words for a particular context. Polaris hadn’t failed; it had learned to speak warehouse.
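    In token terms, a dialect like the warehouse one can be pictured as a thin override layer on top of the parent language, inheriting everything it doesn’t need to change. Here is a minimal TypeScript sketch; the token names and values are illustrative, not actual Polaris tokens:

```typescript
// Illustrative base tokens for the parent design language
// (hypothetical names and values, not real Polaris tokens).
type Tokens = {
  surface: string;      // background color
  text: string;         // foreground color
  tapTargetPx: number;  // minimum touch-target size
};

const base: Tokens = {
  surface: "#ffffff",   // bright surfaces for merchant desktops
  text: "#202223",
  tapTargetPx: 44,      // fine for bare thumbs
};

// The dialect overrides only what the context demands;
// everything else is inherited unchanged from the base.
const warehouseDialect: Tokens = {
  ...base,
  surface: "#121212",   // dark surfaces cut glare in dark aisles
  text: "#f5f5f5",
  tapTargetPx: 90,      // ~2cm, hittable with gloved fingers
};
```

    Because the dialect is expressed as overrides, a diff against the base doubles as documentation of the dialect: what changed, and why.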

    The Flexibility Framework

    At Atlassian, working on the Jira platform within the larger Atlassian Design System, I advocated for formalizing this insight. Dozens of products shared a design language across different codebases, so we needed structured flexibility built directly into our ways of working. The previous model of exception requests and ad-hoc approvals was failing at scale.

    We created the Flexibility Framework to help teams decide how flexible each component should be.

    Tier | Action | Ownership
    Consistent | Adopt as is | System locks style and behavior
    Opinionated | Adapt within limits | System provides smart defaults; products can adjust
    Flexible | Extend freely | System defines behavior; products define presentation

    During a navigation redesign, we tiered every element. Global search and the logo stayed Consistent. Contextual actions and breadcrumbs became Flexible. Product teams could see at a glance where innovation was welcome and where consistency mattered.
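    One way to make the tiers machine-readable is a small component registry that teams can query before bending a part. A hypothetical TypeScript sketch (the component names and tier assignments are invented for illustration, not Atlassian’s real registry):

```typescript
// The three tiers from the Flexibility Framework.
type Tier = "consistent" | "opinionated" | "flexible";

interface ComponentPolicy {
  name: string;
  tier: Tier;
  ownership: string; // who owns style vs. behavior at this tier
}

// Hypothetical entries from a navigation redesign.
const registry: ComponentPolicy[] = [
  { name: "global-search", tier: "consistent",
    ownership: "system locks style and behavior" },
  { name: "contextual-actions", tier: "flexible",
    ownership: "system defines behavior; product defines presentation" },
  { name: "breadcrumbs", tier: "flexible",
    ownership: "system defines behavior; product defines presentation" },
];

// A product team checks the tier before customizing a component.
function mayCustomize(name: string): boolean {
  const policy = registry.find((c) => c.name === name);
  return policy !== undefined && policy.tier !== "consistent";
}
```

    Encoding the policy as data rather than folklore means the answer to “may we bend this?” is the same for every team that asks.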

    The Decision Ladder

    Freedom needs guardrails. We built a simple ladder for deciding when to bend the rules:

    Good: ship with existing system components. Fast, consistent, and reliable.

    Better: bend a component slightly. Document the change. Contribute the improvement back to the system so everyone can use it.

    Best: build the ideal experience first. If user testing proves its value, update the system to accommodate it.

    The key question: which solution helps users succeed fastest?

    Rules are tools, not objectives.

    Unity Beats Uniformity

    Gmail, Drive, and Maps each speak with their own accent, yet all are clearly Google. They achieve coherence through shared principles rather than identical components. Meanwhile, one extra month of debating button colors burns roughly $30K in engineering time.

    Uniformity is a brand outcome; usability is a user outcome. When the two conflict, side with the user.

    Governing Your Dialects

    How do you enable dialects while maintaining alignment? Treat your system like a living lexicon:

    Document every variation – keep a directory of dialects, like the warehouse one, with screenshots and rationale.

    Promote shared patterns – when three teams independently adopt the same dialect, review it for inclusion in the core.

    Retire old idioms – deprecate with feature flags, migration notes, and context, not a big-bang purge.

    A living vocabulary performs better than a frozen one.
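    “Retire old idioms” can be as simple as deprecation metadata that resolves to the replacement while surfacing a migration note, instead of deleting the pattern outright. A sketch with hypothetical pattern names:

```typescript
interface PatternEntry {
  name: string;
  deprecated?: {
    since: string;     // version the deprecation landed in
    migrateTo: string; // replacement pattern
    note: string;      // context for the change
  };
}

// Hypothetical lexicon entries.
const lexicon: PatternEntry[] = [
  {
    name: "multi-step-wizard",
    deprecated: {
      since: "2.4.0",
      migrateTo: "single-task-screen",
      note: "Multi-step flows confused non-native speakers.",
    },
  },
  { name: "single-task-screen" },
];

// Resolve a pattern name, warning (not failing) on deprecated idioms.
function resolvePattern(name: string): string {
  const entry = lexicon.find((e) => e.name === name);
  if (!entry) throw new Error(`unknown pattern: ${name}`);
  if (entry.deprecated) {
    console.warn(
      `${name} deprecated since ${entry.deprecated.since}: ` +
      `use ${entry.deprecated.migrateTo}. ${entry.deprecated.note}`
    );
    return entry.deprecated.migrateTo;
  }
  return entry.name;
}
```

    Old idioms keep working while the migration note travels with every use, so teams move off them in context rather than in a purge.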

    Your First Dialect: Start Small

    Ready to introduce dialects? Start with a pain point:

    Identify one user flow this week where perfect consistency is blocking task completion. Maybe mobile users struggle with desktop-sized components, or your standard patterns don’t address an accessibility need.

    Document the context: why do standard patterns fail here? Environmental constraints? User capabilities? Task urgency?

    Design one systematic change: prioritize function over form. If gloves are the issue, bigger targets aren’t “breaking the system”; they’re serving the user. Make the change and document it.

    Test and measure: does the change improve task completion? Time on task? User satisfaction?

    Share the savings: if that dialect freed up even a sprint, it has already paid for itself.

    Beyond the Component Library

    We’re no longer managing design systems; we’re cultivating design languages. Languages that evolve with their speakers. Languages spoken in many accents without losing their meaning. Languages that put human needs above visual ideals.

    The warehouse workers who went from 0% to 100% task completion didn’t care that our buttons broke the style guide. They cared that the buttons finally worked.

    Your users feel the same way. Give your design system permission to speak their language.

  • The Coziest Games to Play This Fall

    The Coziest Games to Play This Fall

    Video games are frequently praised for their ability to challenge players to overcome a variety of difficulties. Whether it’s intense multitasking, sniping rivals from afar, or racing friends through high-speed tracks, the hobby is known to raise the hairs on our necks and push us to break a sweat. Fortunately, not all […]

    The post The Coziest Games to Play This Fall appeared first on Den of Geek.


  • Goodfellas: Ray Liotta’s Iconic Laugh Is A Lot More Than Just a Meme

    Goodfellas: Ray Liotta’s Iconic Laugh Is A Lot More Than Just a Meme

    Goodfellas is unquestionably a classic. Since it first released 35 years ago, Martin Scorsese’s electrifying adaptation of Nicholas Pileggi’s non-fiction book Wiseguy has earned respect for its dazzling filmmaking and its unflinching depiction of gangland violence. It has had an enormous impact on a number of filmmakers, including Paul Thomas Anderson, whose most recent film One Battle After Another has received […]

    The post Goodfellas: Ray Liotta’s Iconic Laugh Is A Lot More Than Just a Meme appeared first on Den of Geek.


  • Our Favorite Things at Tokyo Game Show 2025

    Our Favorite Things at Tokyo Game Show 2025

    Tokyo Game Show, an international exhibition and networking stage where game developers from around the world, though naturally most heavily from Japan, offer glimpses of the future during the last weekend in September, is one of the last major video game expos of the year. And we were on the ground to cover the event in person, checking […]

    The post Our Favorite Things at Tokyo Game Show 2025 appeared first on Den of Geek.

    Tokyo Game Show, an international function and integration place where activity developers from all over the world, but obviously most strongly from Japan, offer hints of the future during the week’s last weekend in September, is one of the year’s last key video game exhibitions. And we were on the surface to cover the event in-person, checking out various hands-on previews, gaining access to walk peeks from future titles, and interviews with some developers. Simply put, TGS 2025 was a user’s vision, honoring a field that has attracted both artistic talent and fans from all over the world for a three-day event in Japan.

    Here are the highlights of the things we enjoyed watching the most at this year’s Tokyo Game Show that should be high on your radar going forward. Also keep an eye out for more exclusive Tokyo Game Show 2025 coverage, including several notable interviews, here at Den of Geek!

    cnx. cmd. push ( function ( ) {cnx ( {playerId:” 106e33c0-3911-473c-b599-b1426db57530″, }). render ( “0270c398a82f44f49c23c16122516796” ),

    MIRESI: Invisible Future Reveals a Time-Bending Fantasy Tale.

    The first hands-on demo we got to play at Tokyo Game Show 2025 was MIRESI: Invisible Future, which is published by Smilegate. The game was developed by CONTROL9 and centers primarily on time travel and using future knowledge to pass through an impending cataclysm. In turn-based arena combat, the player and a group of characters with specializations in various kinds of magic form a team to battle monsters.

    Beyond the combat, the MIRESI demo also included a look at the game’s story, with the player going back to a more peaceful time before the calamity ensued. In frenetic combat that allows players to rewind turns and events and unleash ultimate attacks, this allows the players to get to know their companions before all chaos breaks loose once more. MIRESI: Invisible Future, which is scheduled to be released on PC and mobile in 2026, has a strong replayability appeal because players compete against various characters to save the world before it is forever, as its demo brilliantly illustrates.

    Chaos Zero Nightmare Brings Card-Based RPG Action

    The upcoming fantasy RPG Chaos Zero Nightmare, which is optimized for mobile platforms, was the second hands-on we had. With its branching level progression, turn-based combat, and small party focus, the game, which was developed by Super Creative, is reminiscent of the well-known RPG Darkest Dungeon. Where Chaos Zero Nightmare distinguishes itself and excels is its combat system, revolving around extensively developed card-based mechanics.

    One of the most enjoyable demos we played at TGS 2025 was Chaos Zero Nightmare, which was scheduled for an Oct. 22 release on PC and mobile. The card system was engaging and user-friendly, and the accompanying animation as attacks took shape strengthened the appeal overall. For those looking for an anime-inspired twist on games like Darkest Dungeon or Slay the Spire, Chaos Zero Nightmare is ready to provide a fresh experience to the dark fantasy genre.

    Its Newest Challenger in Street Fighter 6 is highlighted.

    As Capcom’s fighting game moves through its third season of DLC, Street Fighter 6 still has plenty of tricks up its sleeve as it comes more than two years after its launch. Capcom offered several different Street Fighter 6-centric demos at its TGS 2025 booth, which proved popular with gamers. The real highlight, however, was the first playable demo of the new DLC character Crimson Viper, aside from sharing the game’s crossplay capabilities and Nintendo Switch 2 build.

    C. Viper, who was first portrayed in Street Fighter IV in 2008, hasn’t appeared in any Capcom games since 2011’s Ultimate Marvel vs. Capcom 3. Capcom’s TGS presentation and demo revealed that the shadowy figure has since been working under an alias when she’s reintroduced in Street Fighter 6‘s story mode to confront the player character. On October 15, you can purchase Crime Viper in Street Fighter 6 either as a standalone item or as a component of the Season 3 DLC pass.

    Sega Recreates a Yakuza Classic with a Twist.

    The Yakuza/Like a Dragon franchise has become a cornerstone of Sega’s gaming properties over the past several years as it &#8217, s been embraced by global audiences. Sega announced that the third game will finally receive its own remake after remaking the series ‘ first two games using contemporary graphical and gameplay engines. The original PlayStation 3 release, Yakuza Kiwami 3, from 2009, will be renamed Yakuza Kiwami 3 for the PlayStation 5 and 4, Xbox 360 Series X|S, Nintendo Switch 2, and PC.

    Set for release on Feb. 12, 2026, Yakuza Kiwami 3 will also come bundled with the brand-new spinoff game, Dark Ties. The full-length game is more action-focused than the Yakuza 3 antagonist Yoshitaka Mine, with the usual minigames included in Mine’s side story. Gamers who preorder Yakuza Kiwami 3 will also be able to add fan-favorite Ichiban Kasuga to their motorcycle gang as an added bonus.

    Ghost of Yōtei Gets One Last Sneak Peek

    In the final months leading up to the PlayStation 5’s launch, Ghost of Tsushima, one of the best games to end the PlayStation 4 era, was one of the best. One of the most eagerly awaited titles of 2025 is its standalone sequel Ghost of Ytei, which was showcased at TGS prior to its Oct. 2 release. Taking place over 300 years after the events of the previous game, Ghost of Yōtei follows new protagonist Atsu as she pursues the six samurai who destroyed her family and nearly killed her.

    At its TGS 2025 booth, PlayStation Studios highlighted a number of upcoming and new games, including Marvel Tkon: Fighting Souls, but Ghost of Ytei unquestionably had us the most excited. Ghost of Ytei expands on these and makes full use of the technical capabilities on the PS5 hardware while maintaining the combat system and celebration of Japanese culture from Tsushima. Judging from what we saw, Ghost of Yōtei doubles down on samurai spectacle to great effect.

    Ninja Gaiden 4 Is Available This October.

    With an impressive remaster of Ninja Gaiden 2 and the side-scroller Ninja Gaiden: Ragebound, which was released earlier this year, 2025 has been a fantastic year for Ninja Gaiden fans. Koei Tecmo unveiled the upcoming Ninja Gaiden 4 in all of its bloody glory at TGS 2025, unveiling what developers Team Ninja and PlatinumGames have been working on, including the series ‘ new protagonist Yakumo. This included a look at the Master Ninja difficulty setting, which raises the shinobi-fueled fury to a fever pitch.

    While Yakumo definitely shares a few gameplay elements with Ryu Hayabusa, the series ‘ iconic protagonist, it’s obvious that he also has his own distinctive combat skills. Most notably, Yakumo can control both his own blood and spilled enemy blood to battle hordes of opponents to sanguine effect. Ninja Gaiden 4 marks the culmination of Koei Tecmo’s beloved franchise’s successful year as it enjoys its modern resurgence and is scheduled for release on October 21 for the PlayStation 5, Xbox Series X|S, and PC.

    Surprise Guest Stars Join CrossWorlds of Sonic Racing

    While Sega’s all-star racing title Sonic Racing: CrossWorlds may have just dropped earlier this month, the game’s roster had some big guests announced at TGS 2025. The paid DLC for the video game already had confirmed the roster for Minecraft and SpongeBob Squarepants, but Sega recently released a new trailer featuring two unexpected additions. Mega Man and Proto Man will both be playable racers in a special collaboration with Capcom, both of which can be purchased separately or as a season pass or digital deluxe edition.

    In addition to the Capcom franchise adding two new racers to the mix, the collaboration includes a car based on Mega Man’s loyal canine buddy Rush and track based on the castle hideout of his longtime nemesis Doctor Wily. This completes a packed DLC season with content based on Pac-Man, Bandai Namco’s Pac-Man, and Mutant Mayhem, the voice of the Teenage Mutant Ninja Turtles, and Avatar Legends. Sonic Racing: CrossWorlds lives up to its name with surprising outside properties that start the race while highlighting Sega’s extensive library of fan-favorite characters.

    Capcom Remasters Mega Man Star Force

    At TGS 2025, Capcom gave in-depth analyses of several compilations of classic games for contemporary consoles, mentioning Mega Man. The Mega Man Star Force Legacy Collection, which compiles the seven games that make up one of the Blue Bomber’s more underappreciated eras, is one of the biggest surprises. Originally released for the Nintendo DS, the Star Force titles have this particular iteration of Mega Man use Battle Cards to defeat his enemies in a grid-like arena setting.

    The collection, which is scheduled for release in 2026, includes a dual-screen layout as well as other quality-of-life adjustments to make the games more accessible to modern audiences. Capcom also announced the release of a significant new update to its Phoenix Wright: Ace Attorney Trilogy Collection, which includes an expanded language selection, expanded options for players, and a guided system to assist stumped players. These announcements continue to cement Capcom’s reputation as a publisher leading in game preservation, remastering classic titles while retaining what made them great.

    The second post Our Favourite Things at Tokyo Game Show 2025 appeared on Den of Geek.

  • Asynchronous Design Critique: Getting Feedback

    Asynchronous Design Critique: Getting Feedback

    “Any comment?” is probably one of the worst ways to ask for feedback. It’s vague and open-ended, and it doesn’t provide any indication of what we’re looking for. Getting good feedback starts earlier than we might expect: it starts with the request.

    It might seem counterintuitive to start the process of receiving feedback with a question, but that makes sense if we realize that getting feedback can be thought of as a form of design research. In the same way that we wouldn’t do any research without the right questions to get the insights that we need, the best way to ask for feedback is also to craft sharp questions.

    Design critique is not a one-shot process. Sure, any good feedback workflow continues until the project is finished, but this is particularly true for design because design work continues iteration after iteration, from a high level to the finest details. Each level needs its own set of questions.

    And finally, as with any good research, we need to review what we got back, get to the core of its insights, and take action. Question, iteration, and review. Let’s look at each of those.

    The question

    Being open to feedback is essential, but we need to be precise about what we’re looking for. Just saying “Any comment?”, “What do you think?”, or “I’d love to get your opinion” at the end of a presentation—whether it’s in person, over video, or through a written post—is likely to get a number of varied opinions or, even worse, get everyone to follow the direction of the first person who speaks up. And then we get frustrated because vague questions like those can turn a high-level flow review into people commenting on the borders of buttons. That might be a hearty topic in its own right, but at that point it’s hard to redirect the team to the subject that you had wanted to focus on.

    But how do we get into this situation? It’s a mix of factors. One is that we don’t usually consider asking as a part of the feedback process. Another is how natural it is to just leave the question implied, expecting the others to be on the same page. Another is that in nonprofessional discussions, there’s often no need to be that precise. In short, we tend to underestimate the importance of the questions, so we don’t work on improving them.

    The act of asking good questions guides and focuses the critique. It’s also a form of consent: it makes it clear that you’re open to comments and what kind of comments you’d like to get. It puts people in the right mental state, especially in situations when they weren’t expecting to give feedback.

    There isn’t a single best way to ask for feedback. It just needs to be specific, and specificity can take many shapes. A model for design critique that I’ve found particularly useful in my coaching is the one of stage versus depth.

    “Stage” refers to each of the steps of the process—in our case, the design process. In progressing from user research to the final design, the kind of feedback evolves. But within a single step, one might still review whether some assumptions are correct and whether there’s been a proper translation of the amassed feedback into updated designs as the project has evolved. A starting point for potential questions could derive from the layers of user experience. What do you want to know: Project objectives? User needs? Functionality? Content? Interaction design? Information architecture? UI design? Navigation design? Visual design? Branding?

    Here are a few example questions, precise and to the point, that refer to different layers:

    • Functionality: Is automating account creation desirable?
    • Interaction design: Take a look through the updated flow and let me know whether you see any steps or error states that I might’ve missed.
    • Information architecture: We have two competing bits of information on this page. Is the structure effective in communicating them both?
    • UI design: What are your thoughts on the error counter at the top of the page that makes sure that you see the next error, even if the error is out of the viewport? 
    • Navigation design: From research, we identified these second-level navigation items, but once you’re on the page, the list feels too long and hard to navigate. Are there any suggestions to address this?
    • Visual design: Are the sticky notifications in the bottom-right corner visible enough?

    The other axis of specificity is about how deep you’d like to go on what’s being presented. For example, we might have introduced a new end-to-end flow, but there was a specific view that you found particularly challenging and you’d like a detailed review of that. This can be especially useful from one iteration to the next where it’s important to highlight the parts that have changed.

    There are other things that we can consider when we want to achieve more specific—and more effective—questions.

    A simple trick is to remove generic qualifiers from your questions like “good,” “well,” “nice,” “bad,” “okay,” and “cool.” For example, asking, “When the block opens and the buttons appear, is this interaction good?” might look specific, but you can spot the “good” qualifier, and convert it to an even better question: “When the block opens and the buttons appear, is it clear what the next action is?”

    Sometimes we actually do want broad feedback. That’s rare, but it can happen. In that sense, you might still make it explicit that you’re looking for a wide range of opinions, whether at a high level or with details. Or maybe just say, “At first glance, what do you think?” so that it’s clear that what you’re asking is open ended but focused on someone’s impression after their first five seconds of looking at it.

    Sometimes the project is particularly expansive, and some areas may have already been explored in detail. In these situations, it might be useful to explicitly say that some parts are already locked in and aren’t open to feedback. It’s not something that I’d recommend in general, but I’ve found it useful to avoid falling again into rabbit holes of the sort that might lead to further refinement but aren’t what’s most important right now.

    Asking specific questions can completely change the quality of the feedback that you receive. People with less refined critique skills will now be able to offer more actionable feedback, and even expert designers will welcome the clarity and efficiency that comes from focusing only on what’s needed. It can save a lot of time and frustration.

    The iteration

    Design iterations are probably the most visible part of the design work, and they provide a natural checkpoint for feedback. Yet a lot of design tools with inline commenting tend to show changes as a single fluid stream in the same file: they make conversations disappear once they’re resolved, update shared UI components automatically, and compel designs to always show the latest version—unless these would-be helpful features are manually turned off. The implied goal of these tools seems to be to arrive at a single final copy with all discussions closed, probably because they inherited patterns from how written documents are collaboratively edited. That’s probably not the best way to approach design critiques, though I don’t want to be too prescriptive here: it could work for some teams.

    The asynchronous design-critique approach that I find most effective is to create explicit checkpoints for discussion. I’m going to use the term iteration post for this. It refers to a write-up or presentation of the design iteration, followed by a discussion thread of some kind. Any platform that can accommodate this structure will work. By the way, when I refer to a “write-up or presentation,” I’m including video recordings or other media too: as long as it’s asynchronous, it works.

    Using iteration posts has many advantages:

    • It creates a rhythm in the design work so that the designer can review feedback from each iteration and prepare for the next.
    • It makes decisions visible for future review, and conversations are likewise always available.
    • It creates a record of how the design changed over time.
    • Depending on the tool, it might also make it easier to collect feedback and act on it.

    These posts of course don’t mean that no other feedback approach should be used, just that iteration posts could be the primary rhythm for a remote design team to use. And other feedback approaches (such as live critique, pair designing, or inline comments) can build from there.

    I don’t think there’s a standard format for iteration posts. But there are a few high-level elements that make sense to include as a baseline:

    1. The goal
    2. The design
    3. The list of changes
    4. The questions

    Each project is likely to have a goal, and hopefully it’s something that’s already been summarized in a single sentence somewhere else, such as the client brief, the product manager’s outline, or the project owner’s request. So this is something that I’d repeat in every iteration post—literally copy and pasting it. The idea is to provide context and to repeat what’s essential to make each iteration post complete so that there’s no need to find information spread across multiple posts. If I want to know about the latest design, the latest iteration post will have all that I need.

    This copy-and-paste part introduces another relevant concept: alignment comes from repetition. So having posts that repeat information is actually very effective toward making sure that everyone is on the same page.

    The design is then the actual series of information-architecture outlines, diagrams, flows, maps, wireframes, screens, visuals, and any other kind of design work that’s been done. In short, it’s any design artifact. For the final stages of work, I prefer the term blueprint to emphasize that I’ll be showing full flows instead of individual screens to make it easier to understand the bigger picture. 

    It can also be useful to label the artifacts with clear titles because that can make it easier to refer to them. Write the post in a way that helps people understand the work. It’s not too different from organizing a good live presentation. 

    For an efficient discussion, you should also include a bullet list of the changes from the previous iteration to let people focus on what’s new, which can be especially useful for larger pieces of work where keeping track, iteration after iteration, could become a challenge.

    And finally, as noted earlier, it’s essential that you include a list of the questions to drive the design critique in the direction you want. Doing this as a numbered list can also help make it easier to refer to each question by its number.

    Not all iterations are the same. Earlier iterations don’t need to be as tightly focused—they can be more exploratory and experimental, maybe even breaking some of the design-language guidelines to see what’s possible. Then later, the iterations start settling on a solution and refining it until the design process reaches its end and the feature ships.

    I want to highlight that even if these iteration posts are written and conceived as checkpoints, by no means do they need to be exhaustive. A post might be a draft—just a concept to get a conversation going—or it could be a cumulative list of each feature that was added over the course of each iteration until the full picture is done.

    Over time, I also started using specific labels for incremental iterations: i1, i2, i3, and so on. This might look like a minor labeling tip, but it can help in multiple ways:

    • Unique—It’s a clear unique marker. Within each project, one can easily say, “This was discussed in i4,” and everyone knows where they can go to review things.
    • Unassuming—It works like versions (such as v1, v2, and v3) but in contrast, versions create the impression of something that’s big, exhaustive, and complete. Iterations must be able to be exploratory, incomplete, partial.
    • Future proof—It resolves the “final” naming problem that you can run into with versions. No more files named “final final complete no-really-its-done.” Within each project, the largest number always represents the latest iteration.

    To mark when a design is complete enough to be worked on, even if there might be some bits still in need of attention and in turn more iterations needed, the wording release candidate (RC) could be used to describe it: “with i8, we reached RC” or “i12 is an RC.”

    The review

    What usually happens during a design critique is an open discussion, with a back and forth between people that can be very productive. This approach is particularly effective during live, synchronous feedback. But when we work asynchronously, it’s more effective to use a different approach: we can shift to a user-research mindset. Written feedback from teammates, stakeholders, or others can be treated as if it were the result of user interviews and surveys, and we can analyze it accordingly.

    This shift has some major benefits that make asynchronous feedback particularly effective, especially around these friction points:

    1. It removes the pressure to reply to everyone.
    2. It reduces the frustration from swoop-by comments.
    3. It lessens our personal stake.

    The first friction point is feeling a pressure to reply to every single comment. Sometimes we write the iteration post, and we get replies from our team. It’s just a few of them, it’s easy, and it doesn’t feel like a problem. But other times, some solutions might require more in-depth discussions, and the amount of replies can quickly increase, which can create a tension between trying to be a good team player by replying to everyone and doing the next design iteration. This might be especially true if the person who’s replying is a stakeholder or someone directly involved in the project who we feel that we need to listen to. We need to accept that this pressure is absolutely normal, and it’s human nature to try to accommodate people who we care about. Sometimes replying to all comments can be effective, but if we treat a design critique more like user research, we realize that we don’t have to reply to every comment, and in asynchronous spaces, there are alternatives:

    • One is to let the next iteration speak for itself. When the design evolves and we post a follow-up iteration, that’s the reply. You might tag all the people who were involved in the previous discussion, but even that’s a choice, not a requirement. 
    • Another is to briefly reply to acknowledge each comment, such as “Understood. Thank you,” “Good points—I’ll review,” or “Thanks. I’ll include these in the next iteration.” In some cases, this could also be just a single top-level comment along the lines of “Thanks for all the feedback everyone—the next iteration is coming soon!”
    • Another is to provide a quick summary of the comments before moving on. Depending on your workflow, this can be particularly useful as it can provide a simplified checklist that you can then use for the next iteration.

    The second friction point is the swoop-by comment, which is the kind of feedback that comes from someone outside the project or team who might not be aware of the context, restrictions, decisions, or requirements—or of the previous iterations’ discussions. Swoop-by comments often trigger the simple thought “We’ve already discussed this…”, and it can be frustrating to have to repeat the same reply over and over. On their side, there’s something one can hope they might learn: they could start to acknowledge that they’re doing this, and they could be more conscious in outlining where they’re coming from.

    Let’s begin by acknowledging again that there’s no need to reply to every comment. If, however, replying to a previously litigated point might be useful, a short reply with a link to the previous discussion for extra details is usually enough. Remember, alignment comes from repetition, so it’s okay to repeat things sometimes!

    Swoop-by comments can still be useful for two reasons: they might point out something that still isn’t clear, and they have the potential to stand in for the point of view of a user who’s seeing the design for the first time. Sure, you’ll still be frustrated, but knowing this might at least help in dealing with it.

    The third friction point is the personal stake we could have with the design, which could make us feel defensive if the review were to feel more like a discussion. Treating feedback as user research helps us create a healthy distance between the people giving us feedback and our ego (because yes, even if we don’t want to admit it, it’s there). And ultimately, treating everything in aggregated form allows us to better prioritize our work.

    Always remember that while you need to listen to stakeholders, project owners, and specific advice, you don’t have to accept every piece of feedback. You have to analyze it and make a decision that you can justify, but sometimes “no” is the right answer. 

    As the designer leading the project, you’re in charge of that decision. Ultimately, everyone has their specialty, and as the designer, you’re the one who has the most knowledge and the most context to make the right decision. And by listening to the feedback that you’ve received, you’re making sure that it’s also the best and most balanced decision.

    Thanks to Brie Anne Demkiw and Mike Shelton for reviewing the first draft of this article.

  • Designing for the Unexpected

    Designing for the Unexpected

    I’m not sure when I first heard this quote, but it’s something that has stayed with me over the years. How do you create services for situations you can’t imagine? Or design products that work on devices yet to be invented?

    Flash, Photoshop, and responsive design

    When I first started designing websites, my go-to software was Photoshop. I created a 960px canvas and set about creating a layout that I would later drop content in. The development phase was about attaining pixel-perfect accuracy using fixed widths, fixed heights, and absolute positioning.

    Ethan Marcotte’s talk at An Event Apart and subsequent article “Responsive Web Design” in A List Apart in 2010 changed all this. I was sold on responsive design as soon as I heard about it, but I was also terrified. The pixel-perfect designs full of magic numbers that I had previously prided myself on producing were no longer good enough.

    The fear wasn’t helped by my first experience with responsive design. My first project was to take an existing fixed-width website and make it responsive. What I learned the hard way was that you can’t just add responsiveness at the end of a project. To create fluid layouts, you need to plan throughout the design phase.

    A new way to design

    Designing responsive or fluid sites has always been about removing limitations, producing content that can be viewed on any device. It relies on the use of percentage-based layouts, which I initially achieved with native CSS and utility classes:

    .column-span-6 {
      width: 49%;
      float: left;
      margin-right: 0.5%;
      margin-left: 0.5%;
    }
    
    
    .column-span-4 {
      width: 32%;
      float: left;
      margin-right: 0.5%;
      margin-left: 0.5%;
    }
    
    .column-span-3 {
      width: 24%;
      float: left;
      margin-right: 0.5%;
      margin-left: 0.5%;
    }

    Then I switched to Sass so that I could take advantage of @include to reuse repeated blocks of code and move back to more semantic markup:

    .logo {
      @include colSpan(6);
    }
    
    .search {
      @include colSpan(3);
    }
    
    .social-share {
      @include colSpan(3);
    }
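
    The colSpan mixin itself isn’t shown above. A minimal sketch of what it might look like—assuming the same 12-column grid and 0.5% gutters as the utility classes, and ignoring that the original widths were rounded to whole percentages—is:

    ```scss
    @use "sass:math";

    // Hypothetical mixin: span a number of columns out of a 12-column
    // grid, leaving 1% in total for the left and right gutters.
    @mixin colSpan($span) {
      width: math.div($span, 12) * 100% - 1%;
      float: left;
      margin-right: 0.5%;
      margin-left: 0.5%;
    }
    ```

    With this, colSpan(6) yields 49% and colSpan(3) yields 24%, matching the utility classes. (In the Sass of that era, the division would have been written `$span / 12` rather than `math.div`.)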

    Media queries

    The second ingredient for responsive design is media queries. Without them, content would shrink to fit the available space regardless of whether that content remained readable. (The exact opposite problem occurred with the introduction of a mobile-first approach.)

    Media queries prevented this by allowing us to add breakpoints where the design could adapt. Like most people, I started out with three breakpoints: one for desktop, one for tablets, and one for mobile. Over the years, I added more and more for phablets, wide screens, and so on. 
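
    As an illustration, a three-breakpoint setup along those lines might look like this (the pixel values here are assumptions, not the ones I actually used):

    ```css
    /* Default (desktop): a quarter-width column */
    .column-span-3 {
      width: 24%;
      float: left;
      margin-right: 0.5%;
      margin-left: 0.5%;
    }

    /* Tablet: two columns per row */
    @media (max-width: 768px) {
      .column-span-3 {
        width: 49%;
      }
    }

    /* Mobile: stack columns full-width */
    @media (max-width: 480px) {
      .column-span-3 {
        width: 99%;
        float: none;
      }
    }
    ```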

    For years, I happily worked this way and improved both my design and front-end skills in the process. The only problem I encountered was making changes to content: with our Sass grid system in place, there was no way for the site owners to add content without amending the markup—something a small-business owner might struggle with. This was because each row in the grid was defined using a div as a container, so adding content meant creating new row markup, which requires a level of HTML knowledge.

    Row markup was a staple of early responsive design, present in all the widely used frameworks like Bootstrap and Skeleton.

    <div class="row">
      <div class="column">1 of 7</div>
      <div class="column">2 of 7</div>
      <div class="column">3 of 7</div>
      <div class="column">4 of 7</div>
      <div class="column">5 of 7</div>
      <div class="column">6 of 7</div>
      <div class="column">7 of 7</div>
    </div>

    Another problem arose as I moved from a design agency building websites for small- to medium-sized businesses, to larger in-house teams where I worked across a suite of related sites. In those roles I started to work much more with reusable components. 

    Our reliance on media queries resulted in components that were tied to common viewport sizes. If the goal of component libraries is reuse, then this is a real problem because you can only use these components if the devices you’re designing for correspond to the viewport sizes used in the pattern library—in the process never really hitting that “devices that don’t yet exist” goal.

    Then there’s the problem of space. Media queries allow components to adapt based on the viewport size, but what if I put a component into a sidebar, like in the figure below?

    Container queries: our savior or a false dawn?

    Container queries have long been touted as an improvement upon media queries, but at the time of writing are unsupported in most browsers. There are JavaScript workarounds, but they can create dependency and compatibility issues. The basic theory underlying container queries is that elements should change based on the size of their parent container and not the viewport width, as seen in the following illustrations.

    One of the biggest arguments in favor of container queries is that they help us create components or design patterns that are truly reusable because they can be picked up and placed anywhere in a layout. This is an important step in moving toward a form of component-based design that works at any size on any device.

    In other words, responsive components to replace responsive layouts.

    Container queries will help us move from designing pages that respond to the browser or device size to designing components that can be placed in a sidebar or in the main content, and respond accordingly.

    My concern is that we are still using layout to determine when a design needs to adapt. This approach will always be restrictive, as we will still need pre-defined breakpoints. For this reason, my main question with container queries is, How would we decide when to change the CSS used by a component? 

    A component library removed from context and real content is probably not the best place for that decision. 

    As the diagrams below illustrate, we can use container queries to create designs for specific container widths, but what if I want to change the design based on the image size or ratio?

    In this example, the dimensions of the container are not what should dictate the design; rather, the image is.

    It’s hard to say for sure whether container queries will be a success story until we have solid cross-browser support for them. Responsive component libraries would definitely evolve how we design and would improve the possibilities for reuse and design at scale. But maybe we will always need to adjust these components to suit our content.

    CSS is changing

    Whilst the container query debate rumbles on, there have been numerous advances in CSS that change the way we think about design. The days of fixed-width elements measured in pixels and floated div elements used to cobble layouts together are long gone, consigned to history along with table layouts. Flexbox and CSS Grid have revolutionized layouts for the web. We can now create elements that wrap onto new rows when they run out of space, not when the device changes.

    .wrapper {
      display: grid;
      grid-template-columns: repeat(auto-fit, 450px);
      gap: 10px;
    }

    The repeat() function paired with auto-fit or auto-fill allows us to specify how much space each column should use while leaving it up to the browser to decide when to spill the columns onto a new line. Similar things can be achieved with Flexbox, as elements can wrap over multiple rows and “flex” to fill available space. 

    .wrapper {
      display: flex;
      flex-wrap: wrap;
      justify-content: space-between;
    }
    
    .child {
      flex-basis: 32%;
      margin-bottom: 20px;
    }

    The biggest benefit of all this is you don’t need to wrap elements in container rows. Without rows, content isn’t tied to page markup in quite the same way, allowing for removals or additions of content without additional development.

    This is a big step forward when it comes to creating designs that allow for evolving content, but the real game changer for flexible designs is CSS Subgrid. 

    Remember the days of crafting perfectly aligned interfaces, only for the customer to add an unbelievably long header almost as soon as they’re given CMS access, like the illustration below?

    Subgrid allows elements to respond to adjustments in their own content and in the content of sibling elements, helping us create designs more resilient to change.

    .wrapper {
      display: grid;
      grid-template-columns: repeat(auto-fit, minmax(150px, 1fr));
      grid-template-rows: auto 1fr auto;
      gap: 10px;
    }
    
    .sub-grid {
      display: grid;
      grid-row: span 3;
      grid-template-rows: subgrid; /* sets rows to parent grid */
    }

    CSS Grid allows us to separate layout and content, thereby enabling flexible designs. Meanwhile, Subgrid allows us to create designs that can adapt in order to suit morphing content. Subgrid at the time of writing is only supported in Firefox but the above code can be implemented behind an @supports feature query. 
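    Such a feature query guard might look like the following sketch, which lets browsers without subgrid fall back to the component’s own row sizing (reusing the .sub-grid class from the example above):

    ```css
    /* Only apply subgrid where the browser understands it */
    @supports (grid-template-rows: subgrid) {
      .sub-grid {
        grid-template-rows: subgrid;
      }
    }
    ```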

    Intrinsic layouts 

    I’d be remiss not to mention intrinsic layouts, the term created by Jen Simmons to describe a mixture of new and old CSS features used to create layouts that respond to available space. 

    Responsive layouts have flexible columns using percentages. Intrinsic layouts, on the other hand, use the fr unit to create flexible columns that won’t ever shrink so much that they render the content illegible.

    fr units is a way to say I want you to distribute the extra space in this way, but…don’t ever make it smaller than the content that’s inside of it.

    —Jen Simmons, “Designing Intrinsic Layouts”

    Intrinsic layouts can also utilize a mixture of fixed and flexible units, allowing the content to dictate the space it takes up.
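    For example, a grid can pair a fixed-width column with content-driven and space-filling ones. This is a hypothetical layout, not one from a specific site:

    ```css
    /* Fixed sidebar, a column sized by its widest content, and a flexible main area */
    .layout {
      display: grid;
      grid-template-columns: 250px max-content 1fr;
      gap: 10px;
    }
    ```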

    What makes intrinsic design stand out is that it not only creates designs that can withstand future devices but also helps scale design without losing flexibility. Components and patterns can be lifted and reused without the prerequisite of having the same breakpoints or the same amount of content as in the previous implementation. 

    We can now create designs that adapt to the space they have, the content within them, and the content around them. With an intrinsic approach, we can construct responsive components without depending on container queries.

    Another 2010 moment?

    This intrinsic approach should in my view be every bit as groundbreaking as responsive web design was ten years ago. For me, it’s another “everything changed” moment. 

    But it doesn’t seem to be moving quite as fast; I haven’t yet had that same career-changing moment I had with responsive design, despite the widely shared and brilliant talk that brought it to my attention. 

    One reason for that could be that I now work in a large organization, which is quite different from the design agency role I had in 2010. In my agency days, every new project was a clean slate, a chance to try something new. Nowadays, projects use existing tools and frameworks and are often improvements to existing websites with an existing codebase. 

    Another could be that I feel more prepared for change now. In 2010 I was new to design in general; the shift was frightening and required a lot of learning. Also, an intrinsic approach isn’t exactly all-new; it’s about using existing skills and existing CSS knowledge in a different way. 

    You can’t framework your way out of a content problem

    Another reason for the slightly slower adoption of intrinsic design could be the lack of quick-fix framework solutions available to kick-start the change. 

    Responsive grid systems were all over the place ten years ago. With a framework like Bootstrap or Skeleton, you had a responsive design template at your fingertips.

    Intrinsic design and frameworks do not go hand in hand quite so well because the benefit of having a selection of units is a hindrance when it comes to creating layout templates. The beauty of intrinsic design is combining different units and experimenting with techniques to get the best for your content.

    And then there are design tools. We probably all, at some point in our careers, used Photoshop templates for desktop, tablet, and mobile devices to drop designs in and show how the site would look at all three stages.

    How do you do that now, with each component responding to content and layouts flexing as and when they need to? This type of design must happen in the browser, which personally I’m a big fan of. 

    The debate about “whether designers should code” is another that has rumbled on for years. When designing a digital product, we should, at the very least, design for a best- and worst-case scenario when it comes to content. To do this in a graphics-based software package is far from ideal. In code, we can add longer sentences, more radio buttons, and extra tabs, and watch in real time as the design adapts. Does it still work? Is the design too reliant on the current content?

    Personally, I look forward to the day intrinsic design is the standard for design, when a design component can be truly flexible and adapt to both its space and content with no reliance on device or container dimensions.

    Content first 

    Content is not constant. After all, to design for the unknown or unexpected we need to account for content changes, as in our earlier Subgrid example, which allowed elements to respond to adjustments in their own content and in the content of sibling elements.

    Thankfully, there’s more to CSS than layout, and plenty of properties and values can help us put content first. Subgrid and pseudo-elements like ::first-line and ::first-letter help to separate design from markup so we can create designs that allow for changes.

    Instead of old markup hacks like this—

    <p>
      <span class="first-line">First line of text with different styling...</span>
    </p>

    —we can target content based on where it appears.

    .element::first-line {
      font-size: 1.4em;
    }
    
    .element::first-letter {
      color: red;
    }

    Much bigger additions to CSS include logical properties, which change the way we construct designs using logical dimensions (start and end) instead of physical ones (left and right), something CSS Grid also does with functions like min(), max(), and clamp().

    This flexibility allows for directional changes according to content, a common requirement when we need to present content in multiple languages. In the past, this was often achieved with Sass mixins but was typically limited to switching from left-to-right to right-to-left orientation.

    In the Sass version, directional variables need to be set.

    $direction: rtl;
    $opposite-direction: ltr;
    
    $start-direction: right;
    $end-direction: left;

    These variables can be used as values—

    body {
      direction: $direction;
      text-align: $start-direction;
    }

    —or as properties.

    margin-#{$end-direction}: 10px;
    padding-#{$start-direction}: 10px;

    However, now we have native logical properties, removing the reliance on both Sass (or a similar tool) and pre-planning that necessitated using variables throughout a codebase. These properties also start to break apart the tight coupling between a design and strict physical dimensions, creating more flexibility for changes in language and in direction.

    margin-inline-end: 10px;
    padding-inline-start: 10px;

    There are also native start and end values for properties like text-align, which means we can replace text-align: right with text-align: start.

    Like the earlier examples, these properties help to build out designs that aren’t constrained to one language; the design will reflect the content’s needs.

    Fixed and fluid 

    We briefly covered the power of combining fixed widths with fluid widths with intrinsic layouts. The min() and max() functions are a similar concept, allowing you to specify a fixed value with a flexible alternative. 

    For min() this means setting a fluid minimum value and a maximum fixed value.

    .element {
      width: min(50%, 300px);
    }

    The element in the figure above will be 50% of its container as long as the element’s width doesn’t exceed 300px.

    For max() we can set a flexible max value and a minimum fixed value.

    .element {
      width: max(50%, 300px);
    }

    Now the element will be 50% of its container as long as the element’s width is at least 300px. This means we can set limits but allow content to react to the available space. 

    The clamp() function builds on this by allowing us to set a preferred value with a third parameter. Now we can allow the element to shrink or grow if it needs to without getting to a point where it becomes unusable.

    .element {
      width: clamp(300px, 50%, 600px);
    }

    This time, the element’s width will be 50% (the preferred value) of its container but never less than 300px and never more than 600px.

    With these techniques, we have a content-first approach to responsive design. We can separate content from markup, meaning the changes users make will not affect the design. We can start to future-proof designs by planning for unexpected changes in language or direction. And we can increase flexibility by setting desired dimensions alongside flexible alternatives, allowing for more or less content to be displayed correctly.

    Situation first

    Thanks to what we’ve discussed so far, we can cover device flexibility by changing our approach, designing around content and space instead of catering to devices. But what about that last bit of Jeffrey Zeldman’s quote, “…situations you haven’t imagined”?

    It’s a very different thing to design for someone seated at a desktop computer as opposed to someone using a mobile phone and moving through a crowded street in glaring sunshine. Situations and environments are hard to plan for or predict because they change as people react to their own unique challenges and tasks.

    This is why choice is so important. One size never fits all, so we need to design for multiple scenarios to create equal experiences for all our users.

    Thankfully, there is a lot we can do to provide choice.

    Responsible design 

    “There are parts of the world where mobile data is prohibitively expensive, and where there is little or no broadband infrastructure.”

    —Chris Ashton, “I Used the Web for a Day on a 50 MB Budget”

    One of the biggest assumptions we make is that people interacting with our designs have a good Wi-Fi connection and a widescreen monitor. But in the real world, our users may be commuters traveling on trains or other forms of transport, using smaller mobile devices that experience drops in connectivity. There is nothing more frustrating than a web page that won’t load, but there are ways we can help users use less data or deal with sporadic connectivity.

    The srcset attribute allows the browser to decide which image file to serve. This means we can create smaller, cropped images for display on mobile devices, in turn using less bandwidth and less data.

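    A sketch of how srcset might be used, with hypothetical file names and sizes:

    ```html
    <!-- The browser picks the most appropriate file for the current viewport -->
    <img
      src="photo-medium.jpg"
      srcset="photo-small.jpg 320w,
              photo-medium.jpg 640w,
              photo-large.jpg 1280w"
      sizes="(max-width: 600px) 100vw, 50vw"
      alt="Image alt text">
    ```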

    Preload (via the link element’s rel attribute) can also help us to think about how and when media is downloaded. It can be used to tell a browser about any critical assets that need to be downloaded with high priority, improving perceived performance and the user experience. 
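    A sketch with hypothetical asset names:

    ```html
    <!-- Fetch critical assets early, with high priority -->
    <link rel="preload" href="main-font.woff2" as="font" type="font/woff2" crossorigin>
    <link rel="preload" href="hero-image.jpg" as="image">
    ```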

     
     

    There’s also native lazy loading, which indicates assets that should only be downloaded when they are needed.

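    A sketch, again with a hypothetical file name:

    ```html
    <!-- Defer this below-the-fold image until the user scrolls near it -->
    <img src="gallery-photo.jpg" loading="lazy" alt="A photo from the gallery">
    ```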

    With srcset, preload, and lazy loading, we can start to tailor a user’s experience based on the situation they find themselves in. What none of this does, however, is allow the user themselves to decide what they want downloaded, as the decision is usually the browser’s to make. 

    So how can we put users in control?

    The return of media queries 

    Media queries have always been about much more than device sizes. They allow content to adapt to different situations, with screen size being just one of them.

    We’ve long been able to check for media types like print and speech and features such as hover, resolution, and color. These checks allow us to provide options that suit more than one scenario; it’s less about one-size-fits-all and more about serving adaptable content. 

    As of this writing, the Media Queries Level 5 spec is still under development. It introduces some really exciting queries that in the future will help us design for multiple other unexpected situations.

    For example, there’s a light-level feature that allows you to modify styles if a user is in sunlight or darkness. Paired with custom properties, these features allow us to quickly create designs or themes for specific environments.

    @media (light-level: normal) {
      :root {
        --background-color: #fff;
        --text-color: #0b0c0c;
      }
    }
    
    @media (light-level: dim) {
      :root {
        --background-color: #efd226;
        --text-color: #0b0c0c;
      }
    }

    Another key feature of the Level 5 spec is personalization. Instead of creating designs that are the same for everyone, users can choose what works for them. This is achieved by using features like prefers-reduced-data, prefers-color-scheme, and prefers-reduced-motion, the latter two of which already enjoy broad browser support. These features tap into preferences set via the operating system or browser so people don’t have to spend time making each site they visit more usable. 
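    As a sketch of how these preferences can be honored (the custom property names here echo the earlier light-level example):

    ```css
    /* Respect an OS-level request for less animation */
    @media (prefers-reduced-motion: reduce) {
      * {
        animation: none;
        transition: none;
      }
    }

    /* Hypothetical dark theme driven by the user's color-scheme preference */
    @media (prefers-color-scheme: dark) {
      :root {
        --background-color: #0b0c0c;
        --text-color: #fff;
      }
    }
    ```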

    Media queries like this go beyond choices made by a browser to grant more control to the user.

    Expect the unexpected

    In the end, the one thing we should always expect is for things to change. Devices in particular change faster than we can keep up, with foldable screens already on the market.

    We can’t design the same way we have for this ever-changing landscape, but we can design for content. By putting content first and allowing that content to adapt to whatever space surrounds it, we can create more robust, flexible designs that increase the longevity of our products. 

    A lot of the CSS discussed here is about moving away from layouts and putting content at the heart of design. From responsive components to fixed and fluid units, there is so much more we can do to take a more intrinsic approach. Even better, we can test these techniques during the design phase by designing in-browser and watching how our designs adapt in real time.

    When it comes to unexpected situations, we need to make sure our products are usable when people need them, whenever and wherever that might be. We can move closer to achieving this by involving users in our design decisions, by creating choice via browsers, and by giving control to our users with user-preference-based media queries. 

    Good design for the unexpected should allow for change, provide choice, and give control to those we serve: our users themselves.

  • Voice Content and Usability

    Voice Content and Usability

    We’ve been conversing for a very long time. Whether to convey information, conduct transactions, or simply to check in on one another, people have yammered away, chattering and gesticulating, through spoken conversation for countless generations. Only relatively recently have we begun to commit our conversations to writing, and only recently have we outsourced them to the computer, a machine that shows much more affinity for written correspondence than for the messy vagaries of spoken language.

    Computers struggle with spoken language because, between spoken and written language, speech is the more primordial. Machines must grapple with the messiness of human speech: the disfluencies and pauses, the gestures and body language, and the variations in word choice and spoken dialect that can stymie even the most carefully crafted human-computer interaction. Among humans, spoken language also has the advantage of face-to-face conversation, where we can readily interpret nonverbal social cues.

    In contrast, as we record it, written language leaves behind its own fossil record of dated terms and phrases that remain in use long after they have disappeared from spoken communication (for example, the salutation “To whom it may concern”). Because it tends to be more consistent, polished, and formal, written text is fundamentally much easier for machines to parse and understand.

    Spoken language has no such luxury. Besides the nonverbal cues that decorate conversations with emphasis and emotional context, there are also verbal cues and vocal behaviors that modulate conversation in nuanced ways: how something is said, not what. Our spoken language conveys much more than the written word can ever contain, whether it’s rapid-fire, low-pitched, high-decibel, sarcastic, stilted, or sighing. So when it comes to voice interfaces—the machines we conduct spoken conversations with—we face exciting challenges as designers and content strategists.

    Voice Interactions

    We interact with voice interfaces for a variety of reasons, but according to Michael McTear, Zoraida Callejas, and David Griol in The Conversational Interface, those motivations by and large mirror the reasons we initiate conversations with other people, too. We typically strike up a conversation because:

    • we need something done (such as a transaction),
    • we seek knowledge of something (some kind of information), or
    • we are social beings and want someone to talk to (conversation for conversation’s sake).

    These three categories, which I refer to as transactional, informational, and prosocial, also describe essentially every voice interaction: a single conversation from beginning to end that achieves some outcome for the user, starting with the voice interface’s first greeting and ending with the user exiting the interface. Note here that a conversation in our human sense, a chat between people that leads to some result and lasts an arbitrary length of time, could encompass multiple transactional, informational, and prosocial voice interactions in succession. In other words, a voice interaction is a conversation, but a conversation is not always a single voice interaction.

    Purely prosocial conversations are more gimmicky than captivating in most voice interfaces, because machines don’t yet have the capacity to really want to know how we’re doing and to do the sort of glad-handing humans crave. There’s also some question as to whether users even want the sort of organic human conversation that begins with a prosocial volley and shifts seamlessly into other types of interaction. In fact, in Voice User Interface Design, Michael Cohen, James Giangola, and Jennifer Balogh recommend sticking to users’ expectations by mimicking how they interact with other voice interfaces rather than trying too hard to be human, and potentially alienating them in the process.

    That leaves two genres of conversations we can have with one another that a voice interface can also easily have with us: a transactional voice interaction realizing some outcome (“buy iced tea”) and an informational voice interaction teaching us something new (“discuss a musical”).

    Transactional voice interactions

    Unless you’re tapping buttons on a food delivery app, you’re generally having a conversation, and therefore a voice interaction, when you order a Hawaiian pizza with extra pineapple. Even when we walk up to the counter and place an order, the conversation quickly pivots from an initial smattering of neighborly small talk to the real mission at hand: ordering a pizza (generously topped with pineapple, as it should be).

    Alison: Hey, how are things going?

    Burhan: Hi, welcome to Crust Deluxe! It’s chilly outside. How can I help you?

    Alison: Can I get a Hawaiian pizza with extra pineapple?

    Burhan: Sure, what size?

    Alison: Large.

    Burhan: Anything else?

    Alison: No, that’s it.

    Burhan: Something to drink?

    Alison: I’ll have a bottle of Coke.

    Burhan: You got it. That will be $13.55 and take about fifteen minutes.

    Each progressive disclosure in this transactional conversation reveals more and more of the desired outcome of the transaction: a service rendered or a product delivered. Transactional conversations are direct, precise, and economical: they quickly dispense with pleasantries.

    Informational voice interactions

    Meanwhile, some conversations are primarily about obtaining information. Though Alison might visit Crust Deluxe with the intention of placing an order, she might want to leave without a pizza at all. She might be just as interested in whether they serve halal or kosher dishes, gluten-free options, or something else. Here, too, we begin with a prosocial mini-conversation to establish politeness, but we’re after much more.

    Alison: Hey, how are things going?

    Burhan: Hi, welcome to Crust Deluxe! It’s chilly outside. How can I help you?

    Alison: Can I ask a few questions?

    Burhan: Of course! Go right ahead.

    Alison: Do you have any halal options on the menu?

    Burhan: Absolutely! On request, we can make any pie halal. We also have lots of vegetarian, ovo-lacto, and vegan options. Do you have any other dietary restrictions in mind?

    Alison: What about gluten-free pizzas?

    Burhan: For both our deep-dish and thin-crust pizzas, we can definitely make a gluten-free crust for you. Anything else I can answer for you?

    Alison: That’s it for now. Good to know. Thank you!

    Burhan: Anytime, come back soon!

    This dialogue is radically different. Here, the goal is to get a certain set of facts. Informational conversations are research expeditions to gather data, news, or facts, or they are investigative quests for the truth. Voice interactions that are informational might be more long-winded than transactional conversations by necessity. In order for the customer to understand the key takeaways, responses are typically longer, more in-depth, and carefully communicated.

    Voice Interfaces

    Voice interfaces, in essence, use speech to assist users in accomplishing their goals. But simply because an interface has a voice component doesn’t mean that every user interaction with it is mediated through voice. Because multimodal voice interfaces can lean on visual components like screens as crutches, we’re most concerned here with pure voice interfaces, which depend entirely on spoken conversation, lack any visual component, and are therefore much more nuanced and challenging to tackle.

    Though voice interfaces have long been integral to the imagined future of humanity in science fiction, only recently have those lofty visions become fully realized in genuine voice interfaces.

    IVR (interactive voice response) systems

    Though written conversational interfaces have been fixtures of computing for many decades, voice interfaces first emerged in the early 1990s with text-to-speech (TTS) dictation programs that recited written text aloud, as well as speech-enabled in-car systems that gave directions to a user-provided address. The first true voice interfaces we could actually speak with arrived in the form of interactive voice response (IVR) systems, which were developed as an alternative to overburdened customer service representatives.

    IVR systems allowed organizations to reduce their reliance on call centers but soon became notorious for their clunkiness. In the corporate world, these systems were primarily created as metaphorical switchboards to route customers to a live agent (“Say Reservations to book a flight or check an itinerary”); chances are you’ll have a conversation with one when you call an airline or hotel conglomerate. Despite their functional issues and users’ frustration with their inability to speak to an actual human right away, IVR systems proliferated in the early 1990s across a variety of industries.

    IVR systems have a reputation for having less scintillating conversation than we’re used to in real life (or even in science fiction), but they are great for highly repetitive, monotonous conversations that typically don’t veer from a single format.

    Screen readers

    Parallel to the evolution of IVR systems was the invention of the screen reader, a program that transcribes visual content into synthesized speech. For Blind or visually impaired website users, it’s the predominant method of interacting with text, multimedia, or form elements. Screen readers are perhaps the closest thing we have today to an out-of-the-box implementation of content delivered through voice.

    Among the first screen readers known by that moniker was the Screen Reader for the BBC Micro and NEC Portable, developed by the Research Centre for the Education of the Visually Handicapped (RCEVH) at the University of Birmingham in 1986. In the same year, Jim Thatcher created the first IBM Screen Reader for text-based computers, later recreated for computers with graphical user interfaces (GUIs).

    With the rapid growth of the web in the 1990s, the demand for accessible tools for websites exploded. With the introduction of semantic HTML and especially ARIA roles in 2008, screen readers began facilitating speedy interactions with web pages that ostensibly allow disabled users to traverse the page as an aural and temporal space rather than a visual and physical one. In other words, screen readers on the web “provide mechanisms that translate visual design constructs—proximity, proportion, etc.—into useful information,” writes Aaron Gustafson in A List Apart. “At least they do when documents are authored thoughtfully.”

    But screen readers have a serious drawback: though incredibly instructive for voice interface designers, they can be challenging to use and relentlessly verbose. The visual structures of websites and web navigation don’t translate well to screen readers, sometimes resulting in unwieldy pronouncements that name every manipulable HTML element and announce every formatting change. For many screen reader users, working with web-based interfaces is a significant cognitive burden.

    In Wired, accessibility advocate and voice engineer Chris Maury considers why the screen reader experience is ill-suited to users relying on voice:

    From the beginning, I hated the way that Screen Readers work. Why are they designed the way they are? It makes no sense to present information visually and only then translate that into audio. All of the time and energy that goes into creating the perfect user experience for an app is wasted, or even worse, adversely impacting the experience for blind users.

    In many cases, well-designed voice interfaces can speed users to their destination better than long-winded screen reader monologues. After all, users of visual interfaces have the luxury of darting around the viewport to find information, skipping over anything that doesn’t apply to them. Blind users, meanwhile, are obligated to listen to every utterance synthesized into speech and therefore prize brevity and efficiency. Users with disabilities who have long had no choice but to use clumsy screen readers might benefit from more streamlined voice interfaces, especially more advanced voice assistants.

    Voice assistants

    Many of us immediately associate voice assistants, the popular subset of voice interfaces found in living rooms, smart homes, and offices, with the film 2001: A Space Odyssey or with Majel Barrett’s voice as the omniscient computer from Star Trek. Voice assistants are akin to personal concierges that can answer questions, schedule appointments, conduct searches, and perform other common day-to-day tasks. And because of their assistive potential, they are quickly gaining more and more attention from accessibility advocates.

    Before the earliest IVR systems found success in the enterprise, Apple published a demonstration video in 1987 depicting the Knowledge Navigator, a voice assistant that could transcribe spoken words and recognize human speech to a great degree of accuracy. Then, in 2001, Tim Berners-Lee and others shared their vision for a “semantic web agent” that would carry out routine tasks like “checking calendars, making appointments, and finding locations” (behind a paywall). It wasn’t until 2011 that Apple’s Siri finally entered the picture, making voice assistants a tangible reality for consumers.

    Today’s voice assistants vary widely in their programmability and customizability (Fig. 1). At one extreme, everything except vendor-provided features is locked down: at the time of their release, for example, the core functionality of Apple’s Siri and Microsoft’s Cortana couldn’t be extended beyond their existing capabilities. Even today, there is no way for developers to interact with Siri at a low level beyond predefined categories of tasks like sending messages, hailing rideshares, and making restaurant reservations.

    At the opposite end of the spectrum, voice assistants like Amazon Alexa and Google Home offer a core foundation on which developers can build custom voice interfaces. For this reason, developers who feel stifled by the limitations of Siri and Cortana are increasingly using programmable voice assistants that allow for customization and extensibility. Amazon offers the Alexa Skills Kit, a developer framework for building custom voice interfaces for Amazon Alexa, while Google Home offers the ability to program arbitrary Google Assistant skills. Users can choose from among the thousands of custom-built skills available today in the Google Assistant and Amazon Alexa ecosystems.
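    To make the extensibility contrast concrete, here is a minimal, purely illustrative sketch of the intent-handler pattern that frameworks like the Alexa Skills Kit formalize. None of the names below come from any real SDK; they are assumptions invented for this example, showing only the core idea of routing a recognized intent to a custom handler.

    ```python
    # Illustrative sketch of intent routing in a custom voice skill.
    # All class and intent names are hypothetical, not a real SDK API.

    class Skill:
        """Routes a recognized intent name to a registered handler."""

        def __init__(self):
            self._handlers = {}

        def intent(self, name):
            """Decorator registering a handler for one intent."""
            def register(func):
                self._handlers[name] = func
                return func
            return register

        def handle(self, intent_name, slots=None):
            """Dispatch a recognized intent, with a fallback response."""
            handler = self._handlers.get(intent_name)
            if handler is None:
                return "Sorry, I can't help with that yet."
            return handler(slots or {})


    skill = Skill()

    @skill.intent("GetWeather")
    def get_weather(slots):
        # "Slots" are the parameters a voice platform extracts from speech.
        city = slots.get("city", "your area")
        return f"Here's the forecast for {city}: sunny and mild."

    print(skill.handle("GetWeather", {"city": "Portland"}))
    ```

    The draw of programmable assistants is precisely this hook: developers supply the handlers, while the platform supplies speech recognition and intent matching.
    
    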

    As businesses like Amazon, Apple, Microsoft, and Google continue to jockey for position, they are also selling and open-sourcing an unprecedented array of tools and frameworks for designers and developers, aiming to make creating voice interfaces as simple as possible, even without code.

    Often by necessity, voice assistants like Amazon Alexa tend to be monochannel—they’re tightly coupled to a device and can’t be accessed on a computer or smartphone instead. In contrast, many development platforms, such as Google’s Dialogflow, have omnichannel capabilities that allow users to create a single conversational interface that then becomes a voice interface, textual chatbot, and IVR system upon deployment. I don’t prescribe any specific implementation approaches in this design-focused book, but in Chapter 4 we’ll get into some of the implications these variables might have on the way you build out your design artifacts.
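    The omnichannel idea can be sketched in a few lines: one canonical conversational response, rendered differently depending on whether it's deployed as a voice interface, a textual chatbot, or an IVR system. This is a simplified illustration of the concept, not Dialogflow's actual API; every name here is an assumption for demonstration purposes.

    ```python
    # Sketch of omnichannel rendering: one response, many channels.
    # Names and channel behaviors are illustrative assumptions only.

    from dataclasses import dataclass

    @dataclass
    class Response:
        text: str           # canonical copy shared by every channel
        reprompt: str = ""  # follow-up used by audio-only channels

    def render(response, channel):
        if channel == "voice":
            # Voice assistants typically wrap copy in SSML for synthesis.
            return f"<speak>{response.text}</speak>"
        if channel == "ivr":
            # Phone hotlines append a reprompt, since there's no screen
            # for the caller to fall back on.
            return f"{response.text} {response.reprompt}".strip()
        # Text chatbots can display the canonical copy verbatim.
        return response.text

    r = Response(text="Your table is booked for 7 p.m.",
                 reprompt="Say 'change' to pick another time.")
    print(render(r, "voice"))  # <speak>Your table is booked for 7 p.m.</speak>
    ```

    The design implication is that the content itself stays channel-agnostic, while presentation details like SSML markup or reprompts are applied at the edges.
    
    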

    Voice content

    Simply put, voice content is content delivered through voice. To preserve what makes human conversation so compelling in the first place, voice content must be free-flowing and organic, contextless and concise: everything written content is not.

    Our world is replete with voice content in various forms: screen readers reciting website content, voice assistants rattling off a weather forecast, and automated phone hotline responses governed by IVR systems. In this book, we're most concerned with content delivered through voice as a requirement rather than an option.

    For many of us, our first foray into informational voice interfaces will be to deliver content to users. But there's a problem: the content we already have is rarely, if ever, suited to this new medium. So how do we make the content trapped on our websites more conversational? And how do we write fresh copy that lends itself to voice interactions?

    Lately, we’ve begun slicing and dicing our content in unprecedented ways. Websites are, in many ways, colossal vaults of what I call macrocontent: lengthy prose that can last for miles in a browser window, like microfilm viewers of newspaper archives. Back in 2002, well before the present-day ubiquity of voice assistants, technologist Anil Dash defined microcontent as permalinked pieces of content that stay legible regardless of environment, such as email or text messages:

    An example of microcontent can be a day's weather forecast [sic], an airplane flight's arrival and departure times, an abstract from a lengthy publication, or a single instant message.

    I would update Dash's definition of microcontent to include all instances of bite-sized content that transcend written communiqués. After all, today we encounter microcontent in interfaces where a small snippet of copy is displayed alone, unmoored from the browser, like a chatbot's confirmation of a restaurant reservation. Microcontent offers the best template for stretching our content to the limits of its potential, informing delivery channels both established and emerging.

    As microcontent, voice content is unique because it's an example of how content is experienced in time rather than in space. We can glance at a digital sign in the subway to see when the next train is coming, but voice interfaces hold our attention captive for stretches of time we can't skim or skip, a constraint all too familiar to screen reader users.
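    The time-versus-space distinction can be made tangible with a back-of-the-envelope calculation. Assuming a synthesis rate of roughly 150 words per minute (a common ballpark for conversational speech, not a standard), we can estimate how long a given snippet will hold a listener's attention; the function and sample text below are illustrative.

    ```python
    # Rough estimate of how long a snippet of voice content occupies a
    # listener, assuming ~150 spoken words per minute (an approximation).

    def speaking_seconds(text, words_per_minute=150):
        words = len(text.split())
        return round(words / words_per_minute * 60, 1)

    forecast = ("Today will be partly cloudy with a high of sixty-two degrees "
                "and a light northwest breeze in the afternoon.")
    print(speaking_seconds(forecast))  # → 7.6
    ```

    Even a two-sentence forecast costs the listener several seconds of undivided attention, which is why voice content rewards ruthless concision in a way written content does not.
    
    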

    Because microcontent is fundamentally made up of isolated blobs with no relation to the channels where they’ll eventually end up, we need to ensure that our microcontent truly performs well as voice content—and that means focusing on the two most important traits of robust voice content: voice content legibility and voice content discoverability.

    Our voice content’s legibility and discoverability in general both depend on how it manifests in terms of perceived space and time.