Blog

  • The Wax and the Wane of the Web

    The Wax and the Wane of the Web

Just when you think you have it all figured out, everything changes. As soon as you get the hang of shots, diapers, and regular sleep, it’s time for solid foods, potty training, and sleeping through the night. Once those are sorted, school and sleepovers are in order. The cycle goes on and on.

The same holds true for those of us who work in design and development. Having worked on the web for nearly 30 years at this point, I’ve seen the cyclical wax and wane of ideas, techniques, and technologies. Just as we developers and designers settle into a routine, a brand-new concept or technology emerges to shake things up and completely change our world.

How we got here

    I built my first website in the mid-’90s. Design and development on the web back then was a free-for-all, with few established norms. For any layout aside from a single column, we used table elements, often with empty cells containing a single pixel spacer GIF to add empty space. We styled text with numerous font tags, nesting the tags every time we wanted to vary the font style. And we had only three or four typefaces to choose from: Arial, Courier, or Times New Roman. When Verdana and Georgia came out in 1996, we rejoiced because our options had nearly doubled. The only safe colors to choose from were the 216 “web safe” colors known to work across platforms. The few interactive elements (like contact forms, guest books, and counters) were mostly powered by CGI scripts (predominantly written in Perl at the time). Achieving any kind of unique look involved a pile of hacks all the way down. Interaction was often limited to specific pages in a site.

The evolution of web standards

At the turn of the century, a new cycle started. Crufty code littered with table layouts and font tags waned, and a push for web standards waxed. Newer technologies like CSS got more widespread adoption by browser makers, developers, and designers. This shift toward standards didn’t happen accidentally or overnight. It took active engagement between the W3C and browser vendors and heavy evangelism from folks like the Web Standards Project to promote standards. A List Apart and books like Designing with Web Standards by Jeffrey Zeldman played key roles in teaching developers and designers why standards are important, how to implement them, and how to sell them to their organizations. And approaches like progressive enhancement introduced the idea that content should be available for all browsers—with additional enhancements available for more advanced browsers. Meanwhile, sites like the CSS Zen Garden showcased just how powerful and versatile CSS can be when combined with a solid semantic HTML structure.

Server-side languages like PHP, Java, and .NET overtook Perl as the primary back-end languages, and the cgi-bin was tossed in the trash bin. With these more capable server-side tools, the first era of web applications arrived, starting with content-management systems (especially those that powered blogs, like Blogger, Grey Matter, Movable Type, and WordPress). In the mid-2000s, AJAX opened doors for asynchronous interaction between the front end and the back end. Pages could now update their content without reloading. JavaScript frameworks like Prototype, YUI, and jQuery arose to help developers build more reliable client-side interaction across browsers that had wildly varying levels of standards support. Techniques like image replacement let savvy designers and developers use a wider range of fonts. And technologies like Flash made it possible to add animations, games, and even more interactivity.
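To make the AJAX idea concrete, here’s a minimal sketch in TypeScript: fetch an HTML fragment from the server and swap it into part of the page without a full reload. The URL, selector, and helper names are hypothetical, and the fetch and DOM hooks are injected so the update logic can run outside a browser.

```typescript
// Minimal sketch of the AJAX pattern described above: fetch an HTML
// fragment and swap it into part of the page without a full reload.
// The URL, selector, and injected dependencies are illustrative.

type Dom = { setHtml(selector: string, html: string): void };
type FetchLike = (url: string) => Promise<{ text(): Promise<string> }>;

async function refreshSection(
  url: string,
  selector: string,
  dom: Dom,
  fetchFn: FetchLike
): Promise<string> {
  const response = await fetchFn(url); // ask the server for just this fragment
  const html = await response.text();
  dom.setHtml(selector, html); // update the page in place—no reload
  return html;
}

// In a browser, this might be wired up as (hypothetical names):
// refreshSection("/comments.html", "#comments",
//   { setHtml: (sel, html) => { document.querySelector(sel)!.innerHTML = html; } },
//   fetch);
```

Passing the fetch and DOM dependencies in, rather than reaching for globals, is just one way to keep a sketch like this testable; the pattern itself is what the libraries of that era wrapped in friendlier APIs.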

These new techniques, standards, and technologies greatly reenergized the industry. Web design flourished as designers and developers explored more varied styles and layouts. But we still relied on numerous hacks. Early CSS was a huge improvement over table-based layouts when it came to basic layout and text styling, but its limitations at the time meant that designers and developers still relied heavily on images for complex shapes (such as rounded or angled corners) and tiled backgrounds for the appearance of full-length columns (among other hacks). Complicated layouts required all kinds of nested floats or absolute positioning (or both). Flash and image replacement for custom fonts were a great start toward moving beyond the big five typefaces, but both hacks introduced accessibility and performance issues. And JavaScript libraries made it simple for anyone to add a dash of interactivity to pages, albeit at the cost of doubling or even quadrupling the download size of simple websites.

    The web as software platform

The balance between the front end and the back end continued to improve, leading to the development of the current web application era. Between expanded server-side programming languages (which kept growing to include Ruby, Python, Go, and others) and newer front-end tools like React, Vue, and Angular, we could build fully capable software on the web. Along with these tools, there were additional options, such as shared package libraries, build automation, and collaborative version control. What was once primarily an environment for linked documents became a realm of infinite possibilities.

Mobile devices also became more capable, giving us access to the internet in our pockets. Mobile apps and responsive design opened up opportunities for new interactions anywhere, at any time.

    This fusion of potent mobile devices and potent development tools contributed to the growth of social media and other centralized tools for people to use and interact with. As it became easier and more common to connect with others directly on Twitter, Facebook, and even Slack, the desire for hosted personal sites waned. Social media made connections on a global scale, with both positive and negative outcomes.

Want a much more extensive history of how we got here, with some other takes on ways that we can improve? Jeremy Keith wrote “Of Time and the Web.” Or check out the “Web Design History Timeline” at the Web Design Museum. Neal Agarwal also offers a fascinating tour of “Internet Artifacts.”

    Where we are now

It seems like we’ve reached yet another significant turning point in recent years. As social-media platforms fracture and wane, there’s been a growing interest in owning our own content again. From the tried-and-true classic of hosting plain HTML files to static site generators and content management systems of all kinds, there are many different ways to create websites. The fracturing of social media also comes with a cost: we lose crucial infrastructure for discovery and connection. The IndieWeb’s Webmentions, RSS, ActivityPub, and other tools can assist with this, but they’re still largely underdeveloped and difficult to use for the less geeky. We can build amazing personal websites and add to them regularly, but without discovery and connection, it can sometimes feel like we may as well be shouting into the void.

Browser support for CSS, JavaScript, and other web standards has increased, particularly with initiatives like Interop. New technologies gain support across the board in a fraction of the time that they used to. When I first learn about a new feature, I frequently discover that its coverage is already over 80% when I check browser support. Nowadays, the barrier to using newer techniques often isn’t browser support but simply the limits of how quickly designers and developers can learn what’s available and how to adopt it.

With a few commands and a few lines of code, we can now prototype almost any idea. All the tools that we have available make it easier than ever to start something new. But the upfront costs that these frameworks save us often come due later as technical debt, once we have to upgrade and maintain them.

If we rely on third-party frameworks, adopting new standards can sometimes take longer, since we may have to wait for those frameworks to adopt them. Frameworks that once helped us adopt new techniques sooner can instead become obstacles to adopting newer standards. These same frameworks often come with performance costs too, forcing users to wait for scripts to load before they can read or interact with pages. And when scripts fail (whether due to poor code, network issues, or other environmental factors), there’s often no fallback, leaving users with blank or broken pages.

    Where do we go from here?

The hacks of today help shape the standards of tomorrow. And there’s nothing inherently wrong with embracing hacks—for now—to move the present forward. Problems only arise when we’re unwilling to admit that they’re hacks or when we refuse to let them go once better options arrive. So what can we do to create the future we want for the web?

Build for the long haul. Optimize for performance, for accessibility, and for the user. Weigh the costs of those developer-friendly tools. They may make your job a little easier today, but how do they affect everything else? What’s the cost to users? To future developers? To adoption of standards? Sometimes the convenience may be worth it. Sometimes it’s just a hack that you’ve gotten used to. And sometimes it’s holding you back from even better options.

Start with standards. Standards continue to evolve over time, but browsers have done a remarkably good job of continuing to support older standards. Sites built with even the hackiest of HTML from the ’90s still work just fine today. The same can’t always be said for sites built on third-party frameworks from even a few years ago.

Design with care. Whether your craft is code, pixels, or processes, consider the effects of each choice. The convenience of many a modern tool comes at the cost of not always understanding the underlying decisions that went into its design and not always considering the impact that those decisions can have. Use the time saved by modern tools to think more carefully and make decisions with care, rather than rushing to “move fast and break things.”

Always be learning. Constant learning is how we grow. It may sometimes be hard to pinpoint what’s worth learning and what’s just today’s hack. Even if you were to concentrate solely on learning standards, you might end up focusing on something that won’t matter next year. (Remember XHTML?) But ongoing learning creates new connections, and what you learn today may inform your future experiments.

Play, experiment, and be weird! This web that we’ve built is the ultimate experiment. It’s the single largest human endeavor in history, and yet each of us can create our own pocket within it. Be bold and try something new. Build a playground for ideas. Make absurd experiments in your own mad-science lab. Start your own small business. There’s no better place to explore, take risks, and express your creativity.

    Share and amplify. Share what you think has worked for you as you go through testing, playing, and learning. Write on your own website, post on whichever social media site you prefer, or shout it from a TikTok. Write something for A List Apart! But take the time to amplify others too: find new voices, learn from them, and share what they’ve taught you.

    Go ahead and create a masterpiece.

As designers and developers for the web (and beyond), we’re responsible for building the future every day, whether that takes the shape of personal websites, social media tools used by billions, or anything in between. Let’s imbue our values into the things that we create, and let’s make the world a better place for everyone. Create that thing that only you are uniquely qualified to make. Then share it, make it better, make it again, or make something new. Learn. Make. Share. Grow. Rinse and repeat. And just when you think you’ve got the web figured out, everything will change.

  • To Ignite a Personalization Practice, Run this Prepersonalization Workshop

    To Ignite a Personalization Practice, Run this Prepersonalization Workshop

Picture this. You’ve joined a team at your company that’s designing new product features with an emphasis on automation or AI. Or your company has just implemented a personalization engine. Either way, you’re designing with data. Now what? When it comes to designing for personalization, there are many cautionary tales, no overnight successes, and few guidelines for the perplexed.

Between the dream of getting it right and the fear of it going wrong—like when we encounter “persofails” in the vein of a company repeatedly imploring everyday consumers to buy more toilet seats—the personalization gap is real. It’s an especially confusing place for a working professional to be without a map, a compass, or a plan.

For those of you venturing into personalization, there’s no Lonely Planet and few tour guides, because effective personalization is so specific to each organization’s skills, systems, and marketplace.

    But you can ensure that your team has packed its bags sensibly.

There’s a DIY formula to increase your chances of success. At minimum, you’ll defuse your boss’s irrational exuberance. Before the party, you’ll need to prepare effectively.

    We call it prepersonalization.

    Behind the music

    Consider Spotify’s DJ feature, which debuted this past year.

    We’re used to seeing the polished final result of a personalization feature. Before the year-end award, the making-of backstory, or the behind-the-scenes victory lap, a personalized feature had to be conceived, budgeted, and prioritized. Before any personalization feature goes live in your product or service, it lives amid a backlog of worthy ideas for expressing customer experiences more dynamically.

    So how do you know where to place your personalization bets? How do you design consistent interactions that won’t trip up users or—worse—breed mistrust? We’ve found that for many budgeted programs to justify their ongoing investments, they first needed one or more workshops to convene key stakeholders and internal customers of the technology. Make yours count.

From Big Tech to fledgling startups, we’ve seen the same evolution up close with our clients. In our experience working on small and large personalization efforts, a program’s ultimate track record—and its ability to weather tough questions, work steadily toward shared answers, and organize its design and technology efforts—turns on how effectively these prepersonalization activities play out.

    Time and again, we’ve seen effective workshops separate future success stories from unsuccessful efforts, saving countless time, resources, and collective well-being in the process.

    A personalization practice involves a multiyear effort of testing and feature development. It’s not a switch-flip moment in your tech stack. It’s best managed as a backlog that often evolves through three steps:

1. customer experience optimization (CXO, also known as A/B testing or experimentation)
2. always-on automations (whether rules-based or machine-generated)
3. mature features or standalone product development (such as Spotify’s DJ experience)

This is why we created our progressive personalization framework and why we’re field-testing an accompanying deck of cards: we believe that there’s a base grammar, a set of “nouns and verbs,” that your organization can use to design experiences that are customized, personalized, or automated. You won’t necessarily need our cards, but we strongly recommend that you create something similar, whether digital or physical.

    Set your kitchen timer

How long does it take to cook up a prepersonalization workshop? The surrounding assessment activities that we recommend including can (and often do) span weeks. For the core workshop, we recommend aiming for two to three days. Here’s a summary of our broader approach along with details on the essential first-day activities.

    The full arc of the wider workshop is threefold:

1. Kickstart: This sets the terms of engagement as you focus on the opportunity as well as the readiness and drive of your team and your leadership.
2. Plan your work: This is the heart of the card-based workshop activities, where you specify a plan of attack and the scope of work.
3. Work your plan: This phase is all about creating a competitive environment for team participants to individually pitch their own pilots, each containing a proof-of-concept project, its business case, and its operating model.

    Give yourself at least a day, split into two large time blocks, to power through a concentrated version of those first two phases.

    Kickstart: Whet your appetite

We call the first activity the “landscape of connected experience.” It explores the personalization possibilities in your organization. A connected experience, in our parlance, is any UX requiring the orchestration of multiple systems of record on the back end. This could be a content-management system combined with a marketing-automation platform. It could be a digital-asset manager combined with a customer-data platform.

Spark conversation by naming consumer examples and business-to-business examples of connected experience interactions that you admire, find familiar, or even dislike. This should cover a representative range of personalization patterns, including automated app-based interactions (such as onboarding sequences or wizards), notifications, and recommenders. We have a catalog of these in the cards. Here’s a list of 142 different interactions to jog your thinking.

    This is all about setting the table. What are the possible paths for the practice in your organization? If you want a broader view, here’s a long-form primer and a strategic framework.

Assess each example that you discuss for its complexity and the level of effort that you estimate that it would take for your team to deliver that feature (or something similar). In our cards, we divide connected experiences into five levels: functions, features, experiences, complete products, and portfolios. Size your own build here. This will help to focus the conversation on the merits of ongoing investment as well as the gap between what you deliver today and what you want to deliver in the future.

Next, have your team plot each idea on the following 2×2 grid, which lays out the four enduring arguments for a personalized experience. This is critical because it emphasizes how personalization can not only help your external customers but also affect your own ways of working. It’s also a reminder (which is why we used the word argument earlier) of the broader effort beyond these tactical interventions.

    Each team member should vote on where they see your product or service putting its emphasis. Naturally, you can’t prioritize all of them. The intention here is to flesh out how different departments may view their own upsides to the effort, which can vary from one to the next. Documenting your desired outcomes lets you know how the team internally aligns across representatives from different departments or functional areas.

The third and final kickstart activity is about naming your personalization gap. Is your customer journey well documented? Will data and privacy compliance be too big of a challenge? Do you have content metadata needs that you have to address? (We’re pretty sure that you do: it’s just a matter of recognizing the relative size of that need and its remedy.) In our cards, we’ve noted a number of program risks, including common team dispositions. Our Detractor card, for example, lists six stakeholder behaviors that hinder progress.

    Effectively collaborating and managing expectations is critical to your success. Consider the potential barriers to your future progress. Press the participants to name specific steps to overcome or mitigate those barriers in your organization. As studies have shown, personalization efforts face many common barriers.

At this point, you’ve hopefully discussed sample interactions, settled on a key area of benefit, and flagged key gaps. Good—you’re ready to continue.

    Hit that test kitchen

    Next, let’s look at what you’ll need to bring your personalization recipes to life. Personalization engines, which are robust software suites for automating and expressing dynamic content, can intimidate new customers. Their capabilities are sweeping and powerful, and they present broad options for how your organization can conduct its activities. This presents the question: Where do you begin when you’re configuring a connected experience?

What’s important here is to avoid treating the installed software as if it were a dream kitchen from some fantasy remodeling project (as one of our client executives memorably put it). These software engines are more like test kitchens where your team can begin devising, tasting, and refining the snacks and meals that will become part of your personalization program’s regularly evolving menu.

    The ultimate menu of the prioritized backlog will come together over the course of the workshop. And creating “dishes” is the way that you’ll have individual team stakeholders construct personalized interactions that serve their needs or the needs of others.

    The dishes will come from recipes, and those recipes have set ingredients.

    Verify your ingredients

Like a good product manager, you’ll make sure—and you’ll validate with the right stakeholders present—that you have all the ingredients on hand to cook up your desired interaction (or that you can work out what needs to be added to your pantry). These ingredients include the audience that you’re targeting, content and design elements, the context for the interaction, and your measure for how it’ll come together.

    This isn’t just about discovering requirements. Documenting your personalizations as a series of if-then statements lets the team:

    1. compare findings toward a unified approach for developing features, not unlike when artists paint with the same palette,
    2. specify a consistent set of interactions that users find uniform or familiar,
    3. and develop parity across performance measurements and key performance indicators too.

    This helps you streamline your designs and your technical efforts while you deliver a shared palette of core motifs of your personalized or automated experience.

    Compose your recipe

    What ingredients are important to you? Think of a who-what-when-why construct:

    • Who are your key audience segments or groups?
    • What kind of content will you give them, in what design elements, and under what circumstances?
    • And for which business and user benefits?

    We first developed these cards and card categories five years ago. We regularly play-test their fit with conference audiences and clients. And we still encounter new possibilities. But they all follow an underlying who-what-when-why logic.

    Here are three examples for a subscription-based reading app, which you can generally follow along with right to left in the cards in the accompanying photo below.

    1. Nurture personalization: When a guest or an unknown visitor interacts with a product title, a banner or alert bar appears that makes it easier for them to encounter a related title they may want to read, saving them time.
    2. Welcome automation: When there’s a newly registered user, an email is generated to call out the breadth of the content catalog and to make them a happier subscriber.
    3. Winback automation: Before their subscription lapses or after a recent failed renewal, a user is sent an email that gives them a promotional offer to suggest that they reconsider renewing or to remind them to renew.
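The if-then framing mentioned earlier pairs naturally with recipes like these: each one can be captured as plain data that a rules engine matches against a user’s segment and context. Here’s a hypothetical sketch in TypeScript—the segment, trigger, and action names are illustrative, not any real personalization engine’s API:

```typescript
// Hypothetical sketch: personalization recipes as if-then data,
// following the who-what-when-why construct. All names are invented
// for illustration.

interface Recipe {
  who: string;  // audience segment
  when: string; // triggering context
  what: string; // content or design element to deliver
  why: string;  // business or user benefit
}

const recipes: Recipe[] = [
  { who: "guest", when: "views-product-title", what: "related-title-banner", why: "save-time" },
  { who: "new-user", when: "registration-complete", what: "catalog-welcome-email", why: "subscriber-happiness" },
  { who: "lapsing-subscriber", when: "renewal-at-risk", what: "winback-offer-email", why: "retention" },
];

// If a recipe's "who" and "when" match the current user and context,
// then its "what" fires.
function match(recipes: Recipe[], who: string, when: string): Recipe[] {
  return recipes.filter((r) => r.who === who && r.when === when);
}
```

Keeping recipes as data rather than scattered conditionals is one way to get the consistency and measurability benefits described above: every interaction shares the same palette of ingredients.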

    A useful preworkshop activity may be to think through a first draft of what these cards might be for your organization, although we’ve also found that this process sometimes flows best through cocreating the recipes themselves. Start with a set of blank cards, and begin labeling and grouping them through the design process, eventually distilling them to a refined subset of highly useful candidate cards.

You can think of the later stages of the workshop as moving from recipes toward a cookbook—like a more nuanced customer-journey mapping. Individual “cooks” will pitch their recipes to the team, using a common jobs-to-be-done format so that measurability and results are baked in, and from there, the resulting collection will be prioritized for finished design and delivery to production.

    Better kitchens require better architecture

Simplifying a customer experience is a complicated effort for those on the inside delivering it. Beware anyone who says otherwise. That said, “complicated problems can be hard to solve, but they are addressable with rules and recipes.”

    When personalization becomes a laugh line, it’s because a team is overfitting: they aren’t designing with their best data. Like a sparse pantry, every organization has metadata debt to go along with its technical debt, and this creates a drag on personalization effectiveness. Your AI’s output quality, for example, is indeed limited by your IA. Spotify’s poster-child prowess today was unfathomable before they acquired a seemingly modest metadata startup that now powers its underlying information architecture.

    You can definitely stand the heat …

Personalization technology opens a doorway into a confounding ocean of possible designs. Only a disciplined and highly collaborative approach will bring about the focus and intention needed to succeed. So banish the dream kitchen. Instead, hit the test kitchen to save time, preserve job satisfaction and security, and safely dispense with the fanciful ideas that originate far from the doers in your organization. There are meals to serve and mouths to feed.

    This workshop framework gives you a fighting shot at lasting success as well as sound beginnings. Wiring up your information layer isn’t an overnight affair. But if you use the same cookbook and shared recipes, you’ll have solid footing for success. We designed these activities to make your organization’s needs concrete and clear, long before the hazards pile up.

While there are costs associated with investing in this kind of technology and product design, the effort to size up and confront your unique situation and your digital capabilities is time well spent. Don’t squander it. The proof, as they say, is in the pudding.

  • User Research Is Storytelling

    User Research Is Storytelling

Ever since I was a child, I’ve been fascinated with movies. I loved the characters and the excitement—but most of all the stories. I wanted to be an actor, and I believed that I’d get to do the things that Indiana Jones did and go on exciting adventures. I even dreamed up ideas for movies that my friends and I could make and star in. But they never went any further. I did, however, end up working in user experience (UX). Today, I realize that there’s an element of storytelling to UX that I hadn’t really considered before: user research is storytelling. And to get the most out of user research, you need to tell a good story, one that brings stakeholders—the product team and decision makers—along and gets them interested in learning more.

Think of your favorite film. More than likely it follows a three-act structure that’s commonly seen in storytelling: the setup, the conflict, and the resolution. Act one shows what exists today, and it helps you get to know the characters and the challenges and problems that they face. Act two introduces the conflict, where the action is. Here, challenges grow or get worse. And the third and final act is the resolution. This is where the issues are resolved and the characters learn and change. I believe that this structure is also a great way to think about user research, and I think that it can be particularly helpful in explaining user research to others.

Use storytelling as a framework to conduct research

It’s sad to say, but many have come to see research as dispensable. If budgets or timelines are tight, research tends to be one of the first things to go. Instead of investing in research, some product managers rely on designers or—worse—their own judgment to make the “right” choices for users based on their experience or accepted best practices. That may get teams some of the way, but that strategy can all too easily miss solving people’s real problems. To remain user-centered, this is something we should avoid. User research elevates design. It keeps design on track, pointing to problems and opportunities. Being aware of the issues with your product and reacting to them can help you stay ahead of your competitors.

    In the three-act structure, each act corresponds to a part of the process, and each part is critical to telling the whole story. Let’s look at the different acts and how they align with user research.

    Act one: setup

The setup is all about understanding the background, and that’s where foundational research comes in. Foundational research (also called generative, discovery, or initial research) helps you understand users and identify their problems. You’re learning about what exists today, the challenges users have, and how the challenges affect them—just like in the movies. To do foundational research, you can conduct contextual inquiries or diary studies (or both!), which can help you start to identify problems as well as opportunities. It doesn’t need to be a huge investment in time or money.

Erika Hall writes about minimum viable ethnography, which can be as simple as spending 15 minutes with a user and asking them one thing: “‘Walk me through your day yesterday.’ That’s it. Ask that one question. Shut up and listen to them for 15 minutes. Do your damndest to keep yourself and your interests out of it. Bam, you’re doing ethnography.” According to Hall, “[This] will probably prove quite illuminating. In the highly unlikely case that you didn’t learn anything new or useful, carry on with enhanced confidence in your direction.”

    This makes total sense to me. And I love that this makes user research so accessible. You don’t need to prepare a lot of documentation, you can just recruit participants and do it! This can yield a wealth of information about your users, and it’ll help you better understand them and what’s going on in their lives. That’s really what act one is all about: understanding where users are coming from.

Jared Spool talks about the importance of foundational research and how it should form the bulk of your research. If you can draw from any additional user data that you can get your hands on, such as surveys or analytics, that can supplement what you’ve heard in the foundational studies or even point to areas that need further investigation. Together, all this data paints a clearer picture of the state of things and all its shortcomings. And that’s the beginning of a compelling story. It’s the point in the plot where you realize that the main characters—or the users in this case—are facing challenges that they need to overcome. Like in the movies, this is where you start to build empathy for the characters and root for them to succeed. And hopefully stakeholders are now doing the same. Their empathy may lie with the business, which could be losing money because users can’t complete certain tasks. Or maybe they do empathize with users’ struggles. Either way, act one is your initial hook to get the stakeholders interested and invested.

    Once stakeholders begin to understand the value of foundational research, that can open doors to more opportunities that involve users in the decision-making process. And that can guide product teams toward being more user-centered. This benefits everyone—users, the product, and stakeholders. It’s like winning an Oscar in movie terms—it often leads to your product being well received and successful. And this can be an incentive for stakeholders to repeat this process with other products. Storytelling is the key to this process, and knowing how to tell a good story is the only way to get stakeholders to really care about doing more research.

    This brings us to act two, where you iteratively evaluate a design or concept to see whether it addresses the issues.

    Act two: conflict

    Act two is all about digging deeper into the problems that you identified in act one. This usually involves directional research, such as usability tests, where you assess a potential solution (such as a design) to see whether it addresses the issues that you found. The issues could include unmet needs or problems with a flow or process that’s tripping users up. Like act two in a movie, more issues will crop up along the way. It’s here that you learn more about the characters as they grow and develop through this act.

    Usability tests should typically include around five participants, according to Jakob Nielsen, who found that that number of users can usually identify most of the problems: “As you add more and more users, you learn less and less because you will keep seeing the same things again and again… After the fifth user, you are wasting your time by observing the same findings repeatedly but not learning much new.”
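
    Nielsen’s “five users” figure comes from a simple discovery model (from his work with Tom Landauer): the expected share of problems found by n users is 1 − (1 − p)^n, where p is the chance that a single participant surfaces any given problem, and Nielsen’s oft-quoted average for p is about 31%. A minimal sketch of that arithmetic, assuming that average:

```python
def share_of_problems_found(n_users: int, p: float = 0.31) -> float:
    """Expected share of usability problems uncovered by n_users,
    per the Nielsen/Landauer model: 1 - (1 - p)^n, where p is the
    probability that one test user surfaces a given problem."""
    return 1 - (1 - p) ** n_users

# Diminishing returns: each extra user mostly re-finds known problems.
for n in (1, 3, 5, 10):
    print(f"{n:2d} users: {share_of_problems_found(n):.0%} of problems")
```

    With p = 0.31, five users uncover roughly 85% of the problems, which is the figure behind Nielsen’s advice; the curve flattens quickly after that.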

    There are parallels with storytelling here too: if you try to tell a story with too many characters, the plot may get lost. Having fewer participants means that each user’s struggles will be more memorable and easier to relay to other stakeholders when talking about the research. This can help convey the issues that need to be addressed while also highlighting the value of doing the research in the first place.

    Researchers have run usability tests in person for decades, but you can also conduct usability tests remotely using tools like Microsoft Teams, Zoom, or other teleconferencing software. This approach has become increasingly popular since the beginning of the pandemic, and it works well. You can think of in-person usability tests like going to a play and remote sessions as more like watching a movie. There are advantages and disadvantages to each. In-person usability research is a much richer experience. Stakeholders can experience the sessions with other stakeholders. You also get real-time reactions—including surprise, agreement, disagreement, and discussions about what they’re seeing. Much like going to a play, where audiences get to take in the stage, the costumes, the lighting, and the actors’ interactions, in-person research lets you see users up close, including their body language, how they interact with the moderator, and how the scene is set up.

    If in-person usability testing is like watching a play—staged and controlled—then conducting usability testing in the field is like immersive theater, where any two sessions might be very different from one another. You can take usability testing into the field by creating a replica of the space where users interact with the product and then conducting your research there. Or you can go out to meet users at their location to do your research. With either option, you get to see how things work in context; things come up that wouldn’t have in a lab environment, and conversations can shift in entirely different directions. As researchers, you have less control over how these sessions go, but this can sometimes help you understand users even better. Meeting users where they are can provide clues to the external forces that could be affecting how they use your product. In-person usability tests provide another level of detail that’s often missing from remote usability tests.

    That’s not to say that the “movies”—remote sessions—aren’t a good option. Remote sessions can reach a wider audience. They allow a lot more stakeholders to be involved in the research and to see what’s going on. And they open the doors to a much wider geographical pool of users. But with any remote session, there’s the potential for wasted time if participants can’t log in or get their microphones working.

    The benefit of usability testing, whether remote or in person, is that you get to see real users interact with the designs in real time, and you can ask them questions to understand their thought processes and grasp of the solution. This can help you not only identify problems but also glean why they’re problems in the first place. Furthermore, you can test hypotheses and gauge whether your thinking is correct. By the end of the sessions, you’ll have a much clearer picture of how usable the designs are and whether they work for their intended purposes. Act two is the heart of the story—where the excitement is—but there can be surprises too. This is equally true of usability tests. Often, participants will say unexpected things, which change the way that you look at things—and these twists in the story can move things in new directions.

    Unfortunately, user research is sometimes seen as expendable. And too often usability testing is the only research process that some stakeholders think that they ever need. In fact, if the designs that you’re evaluating in the usability test aren’t grounded in a solid understanding of your users (foundational research), there’s not much to be gained by doing usability testing in the first place. That’s because you’re narrowing the focus of what you’re getting feedback on, without understanding the users’ needs. As a result, there’s no way of knowing whether the designs might solve a problem that users have. It’s only feedback on a particular design in the context of a usability test.

    On the other hand, if you only do foundational research, while you might have set out to solve the right problem, you won’t know whether the thing that you’re building will actually solve it. This illustrates the importance of doing both foundational and directional research.

    In act two, stakeholders will—hopefully—get to watch the story unfold in the user sessions, which creates conflict and tension around the current design by surfacing its highs and lows. And in turn, this can help motivate stakeholders to address the issues that come up.

    Act three: resolution

    While the first two acts are about understanding the background and the tensions that can propel stakeholders into action, the third part is about resolving the problems from the first two acts. While it’s important to have an audience for the first two acts, it’s crucial that they stick around for the final act. That means the whole product team, including developers, UX practitioners, business analysts, delivery managers, product managers, and any other stakeholders that have a say in the next steps. It allows the whole team to hear users’ feedback together, ask questions, and discuss what’s possible within the project’s constraints. And it lets the UX research and design teams clarify, suggest alternatives, or give more context behind their decisions. So you can get everyone on the same page and get agreement on the way forward.

    This act is mostly told in voiceover with some audience participation. The researcher is the narrator, who paints a picture of the issues and what the future of the product could look like given the things that the team has learned. They give the stakeholders their recommendations and their guidance on creating this vision.

    Nancy Duarte, in the Harvard Business Review, offers an approach to structuring presentations that follow a persuasive story. “The most effective presenters use the same techniques as great storytellers: By reminding people of the status quo and then revealing the path to a better way, they set up a conflict that needs to be resolved,” writes Duarte. “That tension helps them persuade the audience to adopt a new mindset or behave differently.”

    This type of structure aligns well with research results, particularly results from usability tests. It provides evidence for “what is”—the problems that you’ve identified—and for “what could be”—your recommendations on how to address them.

    You can reinforce your recommendations with examples of things that competitors are doing that could address these issues or with examples where competitors are gaining an edge. Or they can be visual, like quick mockups of how a new design could look that solves a problem. These can help generate conversation and momentum. And this continues until the end of the session when you’ve wrapped everything up in the conclusion by summarizing the main issues and suggesting a way forward. This is the part where you reiterate the main themes or problems and what they mean for the product—the denouement of the story. This stage gives stakeholders the next steps and hopefully the momentum to take those steps!

    While we are nearly at the end of this story, let’s reflect on the idea that user research is storytelling. All the elements of a good story are there in the three-act structure of user research:

      Act one: You meet the protagonists (the users) and the antagonists (the problems affecting users). This is the beginning of the plot. In act one, researchers might use methods including contextual inquiry, ethnography, diary studies, surveys, and analytics. The output of these methods can include personas, empathy maps, user journeys, and analytics dashboards.
      Act two: Next, there’s character development. There’s conflict and tension as the protagonists encounter problems and challenges, which they must overcome. In act two, researchers might use methods including usability testing, competitive benchmarking, and heuristic evaluation. The output of these can include usability findings reports, UX strategy documents, usability guidelines, and best practices.
      Act three: The protagonists triumph and you see what a better future looks like. In act three, researchers may use methods including presentation decks, storytelling, and digital media. The output of these can be: presentation decks, video clips, audio clips, and pictures.

    The researcher has multiple roles: they’re the storyteller, the director, and the producer. The participants have a small role, but they are significant characters (in the research). And the stakeholders are the audience. But the most important thing is to get the story right and to use storytelling to tell users’ stories through research. By the end, the stakeholders should walk away with a purpose and an eagerness to resolve the product’s ills.

    So the next time that you’re planning research with clients or you’re speaking to stakeholders about research that you’ve done, think about how you can weave in some storytelling. Ultimately, user research is a win-win for everyone, and you just need to get stakeholders interested in how the story ends.

  • From Beta to Bedrock: Build Products that Stick.

    From Beta to Bedrock: Build Products that Stick.

    As a product builder over too many years to mention, I’ve lost count of the number of times I’ve seen promising ideas go from zero to hero in a few weeks, only to fizzle out within months.

    Financial products, which is the field I work in, are no exception. With people’s real hard-earned money on the line, user expectations running high, and a crowded market, it’s tempting to throw as many features at the wall as possible and hope something sticks. But this approach is a recipe for disaster. Here’s why:

    The pitfalls of feature-first development

    When you start building a financial product from the ground up, or are migrating existing customer journeys from paper or telephony channels onto online banking or mobile apps, it’s easy to get caught up in the excitement of creating new features. You might think, “If I can just add one more thing that solves this particular user problem, they’ll love me!” But what happens when you inevitably hit a roadblock because the narcs (your security team!) don’t like it? When a hard-fought feature isn’t as popular as you thought, or it breaks due to unforeseen complexity?

    This is where the concept of Minimum Viable Product (MVP) comes in. Jason Fried’s book Getting Real and his podcast Rework often touch on this idea, even if he doesn’t always call it that. An MVP is a product that provides just enough value to your users to keep them engaged, but not so much that it becomes overwhelming or difficult to maintain. It sounds like an easy concept, but it requires a razor-sharp eye, a ruthless edge, and the courage to stick by your opinion, because it’s easy to be seduced by “the Columbo Effect”… when there’s always “just one more thing…” that someone wants to add.

    The problem with most finance apps, however, is that they often become a reflection of the internal politics of the business rather than an experience solely designed around the customer. This means that the focus is on delivering as many features and functionalities as possible to satisfy the needs and desires of competing internal departments, rather than providing a clear value proposition that is focused on what the people out there in the real world want. As a result, these products can very easily bloat to become a mixed bag of confusing, unrelated and ultimately unlovable customer experiences—a feature salad, you might say.

    The importance of bedrock

    So what’s a better approach? How can we build products that are stable, user-friendly, and—most importantly—stick?

    That’s where the concept of “bedrock” comes in. Bedrock is the core element of your product that truly matters to users. It’s the fundamental building block that provides value and stays relevant over time.

    In the world of retail banking, which is where I work, the bedrock has got to be in and around the regular servicing journeys. People open their current account once in a blue moon but they look at it every day. They sign up for a credit card every year or two, but they check their balance and pay their bill at least once a month.

    Identifying the core tasks that people want to do and then relentlessly striving to make them easy to do, dependable, and trustworthy is where the gravy’s at.

    But how do you get to bedrock? By focusing on the “MVP” approach, prioritizing simplicity, and iterating towards a clear value proposition. This means cutting out unnecessary features and focusing on delivering real value to your users.

    It also means having some guts, because your colleagues might not always instantly share your vision to start with. And controversially, sometimes it can even mean making it clear to customers that you’re not going to come to their house and make their dinner. The occasional “opinionated user interface design” (i.e. clunky workaround for edge cases) might sometimes be what you need to use to test a concept or buy you space to work on something more important.

    Practical strategies for building financial products that stick

    So what are the key strategies I’ve learned from my own experience and research?

    1. Start with a clear “why”: What problem are you trying to solve? For whom? Make sure your mission is crystal clear before building anything. Make sure it aligns with your company’s objectives, too.
    2. Focus on a single, core feature and obsess on getting that right before moving on to something else: Resist the temptation to add too many features at once. Instead, choose one that delivers real value and iterate from there.
    3. Prioritize simplicity over complexity: Less is often more when it comes to financial products. Cut out unnecessary bells and whistles and keep the focus on what matters most.
    4. Embrace continuous iteration: Bedrock isn’t a fixed destination—it’s a dynamic process. Continuously gather user feedback, refine your product, and iterate towards that bedrock state.
    5. Stop, look and listen: Don’t just test your product as part of your delivery process—test it repeatedly in the field. Use it yourself. Run A/B tests. Gather user feedback. Talk to people who use it, and refine accordingly.

    The bedrock paradox

    There’s an interesting paradox at play here: building towards bedrock means sacrificing some short-term growth potential in favour of long-term stability. But the payoff is worth it—products built with a focus on bedrock will outlast and outperform their competitors, and deliver sustained value to users over time.

    So, how do you start your journey towards bedrock? Take it one step at a time. Start by identifying those core elements that truly matter to your users. Focus on building and refining a single, powerful feature that delivers real value. And above all, test obsessively—for, in the words of Abraham Lincoln, Alan Kay, or Peter Drucker (whomever you believe!!), “The best way to predict the future is to create it.”

  • An Holistic Framework for Shared Design Leadership

    An Holistic Framework for Shared Design Leadership

    Picture this: You’re in a meeting room at your tech company, and two people are having what looks like the same conversation about the same design problem. One is talking about whether the team has the right skills to tackle it. The other is diving deep into whether the solution actually solves the user’s problem. Same room, same problem, completely different lenses.

    This is the beautiful, sometimes messy reality of having both a Design Manager and a Lead Designer on the same team. And if you’re wondering how to make this work without creating confusion, overlap, or the dreaded “too many cooks” scenario, you’re asking the right question.

    The traditional answer has been to draw clean lines on an org chart. The Design Manager handles people, the Lead Designer handles craft. Problem solved, right? Except clean org charts are fantasy. In reality, both roles care deeply about team health, design quality, and shipping great work. 

    The magic happens when you embrace the overlap instead of fighting it—when you start thinking of your design org as a design organism.

    The Anatomy of a Healthy Design Team

    Here’s what I’ve learned from years of being on both sides of this equation: think of your design team as a living organism. The Design Manager tends to the mind (the psychological safety, the career growth, the team dynamics). The Lead Designer tends to the body (the craft skills, the design standards, the hands-on work that ships to users).

    But just as mind and body aren’t completely separate systems, these roles overlap in important ways. You can’t have a healthy person without both working in harmony. The trick is knowing where those overlaps are and how to navigate them gracefully.

    When we look at how healthy teams actually function, three critical systems emerge. Each requires both roles to work together, but with one taking primary responsibility for keeping that system strong.

    The Nervous System: People & Psychology

    Primary caretaker: Design Manager
    Supporting role: Lead Designer

    The nervous system is all about signals, feedback, and psychological safety. When this system is healthy, information flows freely, people feel safe to take risks, and the team can adapt quickly to new challenges.

    The Design Manager is the primary caretaker here. They’re monitoring the team’s psychological pulse, ensuring feedback loops are healthy, and creating the conditions for people to grow. They’re hosting career conversations, managing workload, and making sure no one burns out.

    But the Lead Designer plays a crucial supporting role. They’re providing sensory input about craft development needs, spotting when someone’s design skills are stagnating, and helping identify growth opportunities that the Design Manager might miss.

    Design Manager tends to:

    • Career conversations and growth planning
    • Team psychological safety and dynamics
    • Workload management and resource allocation
    • Performance reviews and feedback systems
    • Creating learning opportunities

    Lead Designer supports by:

    • Providing craft-specific feedback on team member development
    • Identifying design skill gaps and growth opportunities
    • Offering design mentorship and guidance
    • Signaling when team members are ready for more complex challenges

    The Muscular System: Craft & Execution

    Primary caretaker: Lead Designer
    Supporting role: Design Manager

    The muscular system is about strength, coordination, and skill development. When this system is healthy, the team can execute complex design work with precision, maintain consistent quality, and adapt their craft to new challenges.

    The Lead Designer is the primary caretaker here. They’re setting design standards, providing craft coaching, and ensuring that shipping work meets the quality bar. They’re the ones who can tell you whether a design decision is sound or whether you’re solving the right problem.

    But the Design Manager plays a crucial supporting role. They’re ensuring the team has the resources and support to do their best craft work, like proper nutrition and recovery time for an athlete.

    Lead Designer tends to:

    • Definition of design standards and system usage
    • Feedback on what design work meets the standard
    • Experience direction for the product
    • Design decisions and product-wide alignment
    • Innovation and craft advancement

    Design Manager supports by:

    • Ensuring design standards are understood and adopted across the team
    • Confirming experience direction is being followed
    • Supporting practices and systems that scale without bottlenecking
    • Facilitating design alignment across teams
    • Providing resources and removing obstacles to great craft work

    The Circulatory System: Strategy & Flow

    Shared caretakers: Both Design Manager and Lead Designer

    The circulatory system is about how information, decisions, and energy flow through the team. When this system is healthy, strategic direction is clear, priorities are aligned, and the team can respond quickly to new opportunities or challenges.

    This is where true partnership happens. Both roles are responsible for keeping the circulation strong, but they’re bringing different perspectives to the table.

    Lead Designer contributes:

    • Ensuring that user needs are met by the product
    • Overall product quality and experience
    • Strategic design initiatives
    • Research-based user needs for each initiative

    Design Manager contributes:

    • Communication to team and stakeholders
    • Stakeholder management and alignment
    • Cross-functional team accountability
    • Strategic business initiatives

    Both collaborate on:

    • Co-creation of strategy with leadership
    • Team goals and prioritization approach
    • Organizational structure decisions
    • Success measures and frameworks

    Keeping the Organism Healthy

    The key to making this partnership sing is understanding that all three systems need to work together. A team with great craft skills but poor psychological safety will burn out. A team with great culture but weak craft execution will ship mediocre work. A team with both but poor strategic circulation will work hard on the wrong things.

    Be Explicit About Which System You’re Tending

    When you’re in a meeting about a design problem, it helps to acknowledge which system you’re primarily focused on. “I’m thinking about this from a team capacity perspective” (nervous system) or “I’m looking at this through the lens of user needs” (muscular system) gives everyone context for your input.

    This isn’t about staying in your lane. It’s about being transparent as to which lens you’re using, so the other person knows how to best add their perspective.

    Create Healthy Feedback Loops

    The most successful partnerships I’ve seen establish clear feedback loops between the systems:

    Nervous system signals to muscular system: “The team is struggling with confidence in their design skills” → Lead Designer provides more craft coaching and clearer standards.

    Muscular system signals to nervous system: “The team’s craft skills are advancing faster than their project complexity” → Design Manager finds more challenging growth opportunities.

    Both systems signal to circulatory system: “We’re seeing patterns in team health and craft development that suggest we need to adjust our strategic priorities.”

    Handle Handoffs Gracefully

    The most critical moments in this partnership are when something moves from one system to another. This might be when a design standard (muscular system) needs to be rolled out across the team (nervous system), or when a strategic initiative (circulatory system) needs specific craft execution (muscular system).

    Make these transitions explicit. “I’ve defined the new component standards. Can you help me think through how to get the team up to speed?” or “We’ve agreed on this strategic direction. I’m going to focus on the specific user experience approach from here.”

    Stay Curious, Not Territorial

    The Design Manager who never thinks about craft, or the Lead Designer who never considers team dynamics, is like a doctor who only looks at one body system. Great design leadership requires both people to care about the whole organism, even when they’re not the primary caretaker.

    This means asking questions rather than making assumptions. “What do you think about the team’s craft development in this area?” or “How do you see this impacting team morale and workload?” keeps both perspectives active in every decision.

    When the Organism Gets Sick

    Even with clear roles, this partnership can go sideways. Here are the most common failure modes I’ve seen:

    System Isolation

    The Design Manager focuses only on the nervous system and ignores craft development. The Lead Designer focuses only on the muscular system and ignores team dynamics. Both people retreat to their comfort zones and stop collaborating.

    The symptoms: Team members get mixed messages, work quality suffers, morale drops.

    The treatment: Reconnect around shared outcomes. What are you both trying to achieve? Usually it’s great design work that ships on time from a healthy team. Figure out how both systems serve that goal.

    Poor Circulation

    Strategic direction is unclear, priorities keep shifting, and neither role is taking responsibility for keeping information flowing.

    The symptoms: Team members are confused about priorities, work gets duplicated or dropped, deadlines are missed.

    The treatment: Explicitly assign responsibility for circulation. Who’s communicating what to whom? How often? What’s the feedback loop?

    Autoimmune Response

    One person feels threatened by the other’s expertise. The Design Manager thinks the Lead Designer is undermining their authority. The Lead Designer thinks the Design Manager doesn’t understand craft.

    The symptoms: Defensive behavior, territorial disputes, team members caught in the middle.

    The treatment: Remember that you’re both caretakers of the same organism. When one system fails, the whole team suffers. When both systems are healthy, the team thrives.

    The Payoff

    Yes, this model requires more communication. Yes, it requires both people to be secure enough to share responsibility for team health. But the payoff is worth it: better decisions, stronger teams, and design work that’s both excellent and sustainable.

    When both roles are healthy and working well together, you get the best of both worlds: deep craft expertise and strong people leadership. When one person is out sick, on vacation, or overwhelmed, the other can help maintain the team’s health. When a decision requires both the people perspective and the craft perspective, you’ve got both right there in the room.

    Most importantly, the framework scales. As your team grows, you can apply the same system thinking to new challenges. Need to launch a design system? Lead Designer tends to the muscular system (standards and implementation), Design Manager tends to the nervous system (team adoption and change management), and both tend to circulation (communication and stakeholder alignment).

    The Bottom Line

    The relationship between a Design Manager and Lead Designer isn’t about dividing territories. It’s about multiplying impact. When both roles understand they’re tending to different aspects of the same healthy organism, magic happens.

    The mind and body work together. The team gets both the strategic thinking and the craft excellence they need. And most importantly, the work that ships to users benefits from both perspectives.

    So the next time you’re in that meeting room, wondering why two people are talking about the same problem from different angles, remember: you’re watching shared leadership in action. And if it’s working well, both the mind and body of your design team are getting stronger.

  • Rush of Ikorr Brings Classic Mythology to Trading Card Games

    Rush of Ikorr Brings Classic Mythology to Trading Card Games

    For anyone looking to see classic mythological icons duke it out for divine supremacy, the fast-paced and fun to play trading card game Rush of Ikorr presents an epic opportunity to see myths and monsters throw down. Players pick from a variety of starter decks based on different global pantheons, drawn from mythological figures Ancient […]

    The post Rush of Ikorr Brings Classic Mythology to Trading Card Games appeared first on Den of Geek.

    Science fiction is very serious business, dealing with philosophical and social themes in ways that other genres simply can’t, asking questions about humanity and the nature of existence. Unfortunately, while science fiction is very, very serious, sometimes people have felt the need to make fun of it, even creating elaborate parodies. Recently we had a look at the complicated relationship Star Trek has had with the various works that have parodied it.

    But perhaps even more than Star Trek, the biggest target for folks looking for something to spoof has been Doctor Who. We don’t know why; all those sets looked really convincing to us, and the special effects are pretty impressive if you think about the budget constraints they were working under.


    There is one reason though. One dark secret nestled in the heart of everyone who has ever decided to put on a comically long scarf and shake the screwdriver at some bins “for a laugh.”

    Every parody is secretly a completely sincere audition.

    And an even darker secret? Sometimes they work.

    The Lenny Henry Show

    Lenny Henry’s 1985 Doctor Who sketch features a Doctor who wears a leather jacket, has a companion who fancies him, and battles Cybermen led by an evil Cyber Thatcher in the far-off year of 2010. While the leather jacket, Black Time Lord, and implied TARDIS hanky-panky are all extremely Nu Who, the Thatcher-parody Cybermen could be straight out of Andrew Cartmel’s era on the show.

    As parodies-that-are-secretly-auditions go, Henry hits all the right notes. He delivers technobabble, does weird stuff to the TARDIS console, and of course, runs up and down lots of corridors.

    And the work pays off, eventually.

    A mere 35 years after his Doctor Who sketch, Henry appeared in the show itself as the villain Daniel Barton in the story “Spyfall.”

    The comedian Alasdair Beckett-King is best known for his online sketches, including Every Single Scandinavian Crime Drama, Every Mind-Bending TV Show, and eventually, inevitably, Every Episode of Popular Time Travel Show.

    “I was quite nervous about doing Doctor Who because I don’t have an encyclopaedic knowledge of the lore,” Beckett-King says. “Usually I write sketches on my own, but for that one I recruited my comedy pals Declan Kennedy and Angus Dunican, who gave me lots of gags. I think I was most excited about spoofing the new-Who era visual effects, and doing a dodgy impression of Dan Starkey’s Strax.”

    The thing is, the way a comedian approaches playing a parodic version of the Doctor is not all that different from an actor taking on the lead role in the show. In an interview with the Radio Times, Tom Baker said of playing the Doctor, “It’s just me trying to be amusing, or trying to be heroic in an amusing way.”

    Meanwhile, when Beckett-King performed his sketch he says, “I suppose I did end up playing the Doctor as quite like myself, more due to a lack of acting range than a deliberate attempt to place my stamp on the character.”

    He adds, “I had no choice about doing a generic Doctor, because I can’t really do Tom Baker, except occasionally when aiming for Patrick Stewart and missing. But I think veering between the generic and the specific is part of the fun of a parody: trying to do a supermarket own-brand version of the thing you’re spoofing and still hit all the familiar notes: a scarf, a jaunty hat, a vaguely professorial insouciance.”

    Not long after Every Episode of Popular Time Travel Show went out, Beckett-King found himself in the BBC-produced audio series Doctor Who: Redacted.

    “Who says MANIFESTING doesn’t work? Me, I say that,” Beckett-King laughs. “I don’t know why I was cast, but I do wonder if the sketch was part of the reason. I played an alien foetus nicknamed ‘The Floater’ who was trying to kill the Doctor, in spite of being an interdimensional turd in a jar. I respect the hustle. It was a comic character, but I tried to approach it the way I generally approach spoofs – by playing it straight as I could.”

    Inspector Spacetime

    Inspector Spacetime started off as a one-note gag in the sitcom Community (created by Dan Harmon of Rick and Morty, if you want to talk “stuff that really wishes it were Doctor Who”). The character Abed becomes bereft at learning that one of his new favourite shows dies after six episodes (it’s British), only to then discover “Inspector Spacetime,” a series about a detective who travels through space and time in a phone box fighting robotic bins called “Blorgons.”

    Nobody from the show-within-a-show has appeared on Doctor Who (yet), but Abed does meet an Inspector Spacetime superfan played by Matt Lucas … who goes on to become the Doctor’s companion Nardole.

    Doctor Who Night

    Let’s talk about Doctor Who’s “Wilderness Years,” the 16 years between Sylvester McCoy’s final story, “Survival,” and Christopher Eccleston grabbing Billie Piper’s hand at the start of “Rose,” with only Paul McGann’s movie in between.

    Why should we talk about lengthy Doctor Who hiatuses? No reason. No reason at all. Because obviously Doctor Who is alive and well and we’ve got a UNIT miniseries coming out in 2026 and producer Jane Tranter has said “it will keep going, one way or another” even if Russell T Davies is off writing for Channel 4 and searching Google’s News tab for “Doctor Who” mostly brings up articles about medical malpractice… we’re fine! We are fine.

    Anyway, during the last (sorry, I mean, only) Wilderness Years, a brief crack of light in the darkness was BBC 2’s “Doctor Who Night” on November 13, 1999. It featured documentaries, introductory segments filmed by an ambiguously-in-character Tom Baker (cue a slew of fan theories that he’s the “Curator” from “The Day of the Doctor”), a disappointing paucity of actual Doctor Who episodes (they only managed the final episode of “The Daleks” and a rerun of Paul McGann’s movie), and a selection of short sketches starring Mark Gatiss and David Walliams.

    Those sketches included “The Pitch of Fear,” which imagined Sydney Newman pitching Doctor Who as a show that would run for 26 years; “The Kidnappers,” the weakest of the three, which saw Gatiss and Walliams playing obsessive fans who’ve kidnapped Peter Davison; and finally, “The Web of Caves.” This is the only outright Who parody of the three, and is obviously the one where they’re having the most fun. It’s shot in black and white, in a quarry, with Walliams as an ineffectual Doctor Who baddie. Gatiss plays the Doctor, again, not as an outright impression of any one incarnation, but as an audition for his own spin. When he steps out of the TARDIS and says, “Where have you brought me to this time, old girl,” he’s not performing a sketch, he’s living out a fantasy.

    And sure enough, when Doctor Who came back, Mark Gatiss was involved, writing several episodes of the show and appearing in it as Professor Richard Lazarus of “The Lazarus Experiment,” while Walliams would later turn up as the cowardly, oppressor-appeasing alien Gibbis in “The God Complex.”

    Curse of the Fatal Death

    1999 was in many ways a high point of the Wilderness Years. In addition to getting “Doctor Who Night,” fans were also treated to a Comic Relief sketch, “The Curse of the Fatal Death.” Once again, the Doctor here is not an impression of an existing Doctor, but a new “Ninth” Doctor, played by Rowan Atkinson with just a tiny whiff of Blackadder. It has plenty of gags, but those gags come with production values at the more polished end of the classic series, and a real sense that everyone involved just really wanted to make some Doctor Who.

    “I’m pretty certain the first Who I ever saw was the Comic Relief parody with Rowan Atkinson, and based on that I wanted to grow up to wear tank tops and be Doctor Who,” Beckett-King recalls. “I still think of the Doctor as ‘Doctor Who’, to the irritation of Whovians everywhere. So, I came to Who through parody, like I came to Citizen Kane via The Simpsons.”

    As far as future CVs go, Curse of the Fatal Death may be the most successful Doctor Who parody ever. The Doctor dies and regenerates multiple times over the course of the episode, and among others he turns into Hugh Grant, who got offered the role for real when Russell T Davies revived the show.

    Grant has said, “I was offered the role of the Doctor a few years back and was highly flattered. The danger with those things is that it’s only when you see it on screen that you think, ‘Damn, that was good, why did I say no?’ But then, knowing me, I’d probably make a mess of it.”

    Another of those incarnations, Richard E. Grant, would later go on to play the Ninth Doctor in the animated revival “Scream of the Shalka,” although some enjoyed that more than others. Russell T Davies has told Doctor Who Magazine, “I thought he was terrible. I thought he took the money and ran, to be honest. It was a lazy performance. He was never on our list to play the Doctor.”

    Yet Richard E. Grant returned to play the Great Intelligence in season seven, and when the episode “Rogue,” under Davies’ second tenure as showrunner, revealed all the Doctor’s past incarnations, his face was in there.

    But the big success story from “Curse of the Fatal Death” was the writer, one Steven Moffat, and here’s where things get weird. Because obviously Moffat eventually went on to write some of the best-loved episodes of the 2005 Doctor Who revival, and then became the showrunner himself.

    And if you watch Curse of the Fatal Death having seen Moffat’s series of Doctor Who, you start to notice certain things. Like the fact that the Doctor faces the Master and the Daleks at the same time, something the Doctor wouldn’t actually do until “The Magician’s Apprentice”/“The Witch’s Familiar,” written by Moffat. Both even feature a joke about why the Daleks would have chairs.

    And the plotline features a lot of characters going backwards in time to set up events that they can take advantage of in the present, something fans have come to know as “Timey Wimey,” a phrase coined by Moffat and used to describe a lot of his Doctor Who plots.

    In “The Curse of the Fatal Death,” the Doctor uses up their final regenerations, and then the universe, unable to do without him, allows the Doctor to regenerate into a Thirteenth, female incarnation (Joanna Lumley). Under Steven Moffat, the Doctor would use up their final regenerations, and then, with the universe unable to do without him, Gallifrey would allow the Doctor to regenerate into a Thirteenth, female incarnation (Jodie Whittaker). “The Curse of the Fatal Death” isn’t just an audition for writing Doctor Who, it’s practically a speed run of everything Moffat wanted to do with it.

    The post The Doctor Who Parodies That Were Actually Auditions appeared first on Den of Geek.

  • The Doctor Who Parodies That Were Actually Auditions

    The Doctor Who Parodies That Were Actually Auditions

    Science fiction is very serious business, dealing with philosophical and social themes in ways that other genres simply can’t, asking questions about humanity and the nature of existence. Unfortunately, while science fiction is very, very serious, sometimes people have felt the need to make fun of it, even creating elaborate parodies. Recently we had a […]

    The post The Doctor Who Parodies That Were Actually Auditions appeared first on Den of Geek.


  • The Conjuring Box Office, Warner Bros, and the Value of a Diverse Slate

    The Conjuring Box Office, Warner Bros, and the Value of a Diverse Slate

    As you probably already know, Warner Bros. Pictures is having a very good fall that follows a fantastic spring and a fantastic summer. That’s because the studio’s fourth and supposedly final mainline Conjuring picture, The Conjuring: Last Rites, proved that there is still life in the demonic franchise, even as trades […]

    The post The Conjuring Box Office, Warner Bros, and the Value of a Diverse Slate appeared first on Den of Geek.

    Science fiction is a very serious business that addresses intellectual and social issues in ways that different genres just doesn’t, while also posing questions about society and the nature of existence. However, while science fiction is very, very severe, often citizens have felt the need to make fun of it, yet creating elaborate caricatures. Recently, we just examined the complex connection Star Trek has had with the different functions that have parodied it.

    The main goal for people looking for something the pastiche has, perhaps even more than Star Trek, been Doctor Who. We don’t understand why, all those pieces looked actually convincing to us, and the special effects are quite amazing if you think about the budget considerations they’re working under.

    cnx. powershell. push ( function ( ) {cnx ( {playerId:” 106e33c0-3911-473c-b599-b1426db57530″, }). render ( “0270c398a82f44f49c23c16122516796” ),

    However, there is one explanation. One dark underground nestled in the heart of everyone who has actually decided to put on a awkwardly long robe and shake the hammer at some boxes” for a laugh”.

    Every parody is a secretly sincere audition in disguise.

    And what’s the secret behind it all? Sometimes they work.

    The Lenny Henry Show

    A Doctor in a leather jacket and with a companion who fancies him, as depicted in a 1985 Lenny Henry Doctor Who sketch, battles Cybermen led by an evil Cyber Thatcher in the distant year of 2010 and in the far future. While the leather jacket, Black Time Lord and implied TARDIS hanky panky are all extremely Nu Who, the Thatcher-parody Cybermen could be straight out of Andrew Cartmel’s era on the show.

    Henry hits all the right notes in the world of parodies-that-are-secretly-auditions. He writes technobabble, sends strange things to the TARDIS console, and runs up and down numerous corridors.

    And the work pays off, eventually.

    Henry first appeared in the show as the villain Daniel Barton in the story” Spyfall” just 35 years after his Doctor Who sketch.

    The comedian Alasdair Beckett-King is best known for his online sketches, including Every Single Scandinavian Crime Drama, Every Mind-Bending TV Show, and eventually, inevitably, Every episode of the well-known time travel television program.

    ” Doctor Who made me feel a little hesitant because I don’t have an encyclopaedic knowledge of the lore,” Beckett-King says. ” Usually I write sketches on my own, but for that one I teamed up with my comedy pals, Declan Kennedy and Angus Dunican, who gave me a lot of jokes. I think I was most excited about spoofing the new-Who era visual effects, and doing a dodgy impression of Dan Starkey’s Strax”.

    The funny thing is that playing a parodic version of the Doctor is not all that different from an actor taking the lead role in the show. Tom Baker said in an interview with the Radio Times,” It&#8217, s just me trying to be amusing, or trying to be heroic in an amusing way,” when he spoke of playing the Doctor.

    Meanwhile, when Beckett-King performed his sketch he says,” I suppose I did end up playing the Doctor as quite like myself, more due to a lack of acting range than a deliberate attempt to place my stamp on the character”.

    He continues,” I had no choice about doing a generic Doctor, because I can’t really do Tom Baker, except occasionally when aiming for Patrick Stewart and missing.” However, I believe that trying to do a supermarket own-brand version of the thing you’re spoofing with all the familiar elements like a scarf, a jaunty hat, and a vaguely professorial insouciance is a part of the fun of a parody.

    Not long after Every episode of the well-known time travel television program went out, Beckett-King found himself in the BBC produced audio series Doctor Who: Redacted.

    ” Who says manipulating isn’t effective? Beckett-King laughs,” Me, I say that.” ” I don’t know why I was cast, but I do wonder if the sketch was part of the reason. Despite being an interdimensional turd in a jar, I played an alien foetus known as” The Floater” who was attempting to kill the Doctor. I appreciate the hustle. It was a comic character, but I tried to approach it the way I generally approach spoofs – by playing it straight as I could”.

    Inspector Spacetime

    If you want to talk” stuff that really wishes it was Doctor Who,” watch Inspector Spacetime on the sitcom Community ( created by Dan Harmon of Rick and Morty ). The character Abed becomes bereft at learning that one of his new favourite shows dies after six episodes ( it’s British ), only to then discover” Inspector Spacetime”, a series about a detective who travels through space and time in a phone box fighting robotic bins called” Blorgons”.

    No one from the show-within-a-show has ever appeared on Doctor Who (yet ), but Abed does run into an Inspector Spacetime superfan Matt Lucas, who later becomes the Doctor’s companion Nardole.

    Doctor Who Night

    Let’s talk about Doctor Who’s” Wilderness Years”, the 16 years between Sylvester McCoy’s final story,” Survival” and Christopher Eccleston grabbing Billie Piper’s hand at the start of” Rose”, with only Paul McGann’s movie in between.

    Why should we discuss protracted Doctor Who hiatuses? No justification. No reason at all. Because Doctor Who is undoubtedly alive and well, and there will be a UNIT miniseries in 2026, and producer Jane Tranter has stated that “it will continue to grow, one way or the other,” even though Russell T. Davies is no longer writing for Channel 4 and searching Google’s News tab for” Doctor Who” frequently brings up articles about medical malpractice, we’re fine. We are all fine.

    Anyway, during the last ( sorry, I mean, only ) Wilderness Years, a brief crack of light in the darkness was BBC 2’s” Doctor Who Night” on November 13, 1999. There were documentaries, introductions, and a host of fan theories that Tom Baker is the” Curator” from” The Day of the Doctor” ( cue a slew of fan theories that he’s the” Curator” from” The Day of the Doctor” ), as well as a few sketchy short sketches starring Mark Gatiss and David Walliams.

    The Pitch of Fear, which depicted Sydney Newman pitching Doctor Who as a 26-year-old show,” The Kidnappers,” which saw Gatiss and Walliams playing obsessive fans who’ve kidnapped Peter Davison, and” The Web of Caves,” were just a few sketches from those sketches. This is the only outright Who parody of the three, and is obviously the one where they’re having the most fun. Walliams plays an ineffective Doctor Who baddie, and it was shot in black and white in a quarry. Gatiss portrays the Doctor once more as an audition for his own style, not as an outright representation of any one person. When he steps out of the TARDIS and says,” Where have you bought me to this time old girl”, he’s not performing a sketch, he’s living out a fantasy.

    And sure enough, Mark Gatiss was involved when Doctor Who returned, writing several episodes of the show and appearing as Professor Richard Lazarus of” The Lazarus Experiment,” Walliams would later appear as the cowardly, oppressor appeasing alien Gibbis in” The God Complex.”

    Curse of the Fatal Death

    1999 was in many ways a highpoint of the Wilderness Years. Fans were also given a Comic Relief sketch titled” The Curse of the Fatal Death” along with” Doctor Who Night.” Once again, Rowan Atkinson’s portrayal of the Doctor in this case is a new” Ninth” Doctor, with just a small hint of Blackadder. It has plenty of gags, but those gags come with production values at the more polished end of the classic series, and real sense that everyone involved just really wanted to make some Doctor Who.

    According to Beckett-King,” I’m pretty certain the first Who I ever saw was the Comic Relief parody with Rowan Atkinson, and based on that, I wanted to grow up wearing tank tops and be Doctor Who.” ” To the annoyance of Whovians everywhere, I still think of the Doctor as” Doctor Who”. So, I came to Who through parody, like I came to Citizen Kane via The Simpsons“.

    Curse of the Fatal Death may be the most successful Doctor Who parody ever in terms of future CV performances. The Doctor repeatedly regenerates throughout the episode, including turning into Hugh Grant, who was given the role when Russell T. Davies revived the show.

    Grant has said, &#8220, I was offered the role of the Doctor a few years back and was highly flattered. The issue with those things is that only when you see them on screen do you realize,” Damn, that was good, why did I say no?” But knowing me, I’d probably make a mess of it. &#8221,

    Richard E. Grant would later reprise his role as the Ninth Doctor in the animated film” Scream of the Shalka,” but some people preferred that role more than others. Russell T Davies once stated to Doctor Who Magazine,” I thought he was terrible. I thought he took the money and ran, to be honest. It was a sluggish performance. He was never put on our wish list to play the Doctor. &#8221,

    However, Richard E. Grant made a comeback as the Great Intelligence in season seven, and Richard E. Grant’s face was revealed when the episode” Rogue,” which was produced by Davies in his second year as showrunner, was revealed.

    However, Steven Moffat, the author of” Curse of the Fatal Death,” was the big success story, and this is where things start to get strange. Because obviously Moffat eventually went on to write some of the best beloved episodes of the Doctor Who 2005 revival, and then became the showrunner himself.

    And if you watch Curse of the Fatal Death after watching Moffat’s Doctor Who series, you start to notice some things. Like that the Doctor and the Daleks are confronted at the same time, which the Doctor wouldn’t actually do until” The Magician’s Apprentice/The Witch’s Familiar,” written by Moffat. Both even feature a joke about why the Daleks would have chairs.

    And the plotline includes a lot of characters going back in time to create situations that they can exploit in the future, which fans have come to know as” Timey Wimey,” a term used to describe many of Steven Moffat’s Doctor Who plots.

    The Doctor uses up their final regenerations in” The Curse of the Fatal Death,” and the universe, unable to do without him, grants the Doctor a Thirteenth, female incarnation ( Joanna Lumley ). Under Steven Moffat, the Doctor would use up their final regenerations, then realising the universe is unable to do without him, Gallifrey allows the Doctor to regenerate into a Thirteenth, female incarnation ( Jodie Whittaker )”. The Fatal Death’s Curse is “practically a speed run of everything Moffat wanted to do with it.” It isn’t just an audition for writing Doctor Who.

    The post The Doctor Who Parodies That Were Actually Auditions appeared first on Den of Geek.

  • Asynchronous Design Critique: Giving Feedback

    Asynchronous Design Critique: Giving Feedback

    Regardless of the form it takes or what it might be called, feedback is one of the most powerful soft skills we have at our disposal: the ability to work together to improve our designs while developing our own skills and perspectives.

    Feedback is also one of the most underrated tools, and we generally settle by assuming that we’re already great at it, forgetting that it’s a skill that can be trained, grown, and improved. Bad feedback can cause conflict at work, lower motivation, and erode trust and teamwork over the long term. Good feedback can be a transformative force.

    Practicing is surely a good way to improve, but the learning gets even faster when it’s paired with a solid foundation that channels and focuses the practice. What are some foundational aspects of giving good feedback? And how can feedback be adapted for remote and distributed work environments?

    On the web, we can find a long history of asynchronous feedback: code has been written and reviewed on mailing lists since the beginning of open source. Today, engineers comment on pull requests, designers comment in their favorite design tools, product managers and scrum masters exchange ideas on tickets, and so on.

    Design critique is often the name used for the kind of feedback that’s given to make our work better, collectively. So it generally follows the same principles as other feedback, but it also has some differences.

    The content

    The content of the feedback is the foundation of every good critique, so we need to start there. There are many models that you can use to structure your feedback. The one that I personally like best—because it’s clear and actionable—is this one from Lara Hogan.

    This formula is typically used to provide feedback to people, but it also fits really well in a design critique, because it ultimately addresses the main questions that we work with: What? Where? Why? How? Imagine that you’re giving feedback on a design flow that spans several screens, like an onboarding flow: there are some screens shown, a flow diagram, and an outline of the decisions made. You notice something that could be improved. If you keep the three parts of the formula in mind, you’ll have a mental model that can help you be more precise and actionable.

    Here’s a comment that might seem acceptable at first glance, as it appears to partially fulfill the formula’s requirements. But does it?

    Not sure about the buttons’ styles and hierarchy—it feels off. Can you change them?

    Observation in design feedback doesn’t only mean pointing out which part of the interface your feedback refers to; it also means offering a point of view that’s as specific as possible. Are you offering the user’s perspective? Your expert perspective? A business perspective? The project manager’s perspective? A first-time user’s perspective?

    When I see these two buttons, I anticipate one to go forward and the other to go back.

    Impact is about the why. Just pointing out a UI element might sometimes be enough if the issue is obvious, but more often than not, you should add an explanation of what you’re pointing out.

    When I see these two buttons, I anticipate one to go forward and the other to go back. But this is the only screen where this happens, as before we just used a single button and an “×” to close. This seems to be breaking the consistency in the flow.

    The question approach is meant to provide open guidance by eliciting the critical thinking in the designer receiving the feedback. Notably, in Lara’s equation she provides a second approach: request, which instead provides guidance toward a specific solution. While that’s a viable option for general feedback, in my experience, going back to the question approach typically leads to the best solutions because designers are generally more at ease with having an open space to experiment with.

    The difference between the two can be exemplified with, for the question approach:

    When I see these two buttons, I anticipate one to go forward and the other to go back. But this is the only screen where this happens, as before we just used a single button and an “×” to close. This seems to be breaking the consistency in the flow. Would it make sense to unify them?

    Or, for the request approach:

    When I see these two buttons, I anticipate one to go forward and the other to go back. But this is the only screen where this happens, as before we just used a single button and an “×” to close. This seems to be breaking the consistency in the flow. Let’s make sure that all screens have the same pair of forward and back buttons.

    At this point, in some situations, it might be useful to augment the request with an extra why: why you consider the given suggestion to be better.

    When I see these two buttons, I anticipate one to go forward and the other to go back. But this is the only screen where this happens, as before we just used a single button and an “×” to close. This seems to be breaking the consistency in the flow. Let’s make sure that all screens have the same two forward and back buttons so that users don’t get confused.
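    To make the structure concrete, here’s a tiny sketch. The class and field names are my own illustration, not part of Lara Hogan’s formula; it just shows how an observation, its impact, and a question (or request) combine into one actionable comment:

    ```python
    # Illustrative sketch only: the dataclass and its field names are
    # hypothetical, not part of Lara Hogan's formula itself.
    from dataclasses import dataclass

    @dataclass
    class Feedback:
        observation: str  # what you see, from a specific point of view
        impact: str       # why it matters
        ask: str          # an open question, or a specific request

        def compose(self) -> str:
            """Join the three parts into a single feedback comment."""
            return f"{self.observation} {self.impact} {self.ask}"

    comment = Feedback(
        observation="When I see these two buttons, I anticipate one to go "
                    "forward and the other to go back.",
        impact="But this is the only screen where this happens, which seems "
               "to break the consistency in the flow.",
        ask="Would it make sense to unify them?",
    )
    print(comment.compose())
    ```

    Swapping the `ask` field between a question and a request is all it takes to move between the two approaches described above.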

    Choosing the question approach or the request approach can also at times be a matter of personal preference. I spent a while working on improving my feedback, conducting anonymous feedback reviews and sharing feedback with others. After a few rounds of this work, a year later, I got a positive response: my feedback came across as effective and grounded. Until I changed teams. Quite unexpectedly, my next round of critique from one particular person wasn’t very positive. The reason was that I had deliberately tried not to be prescriptive in my suggestions, because the people I had previously been working with preferred the open-ended question format over the request style. But one person on this new team preferred specific guidance instead. So I adapted my feedback for them to include requests.

    One comment that I heard come up a few times is that this kind of feedback is quite long, and it doesn’t seem very efficient. No, but also yes. Let’s explore both sides.

    No: this kind of feedback is effective because the length is a byproduct of clarity, and giving this kind of feedback provides precisely enough information for a sound fix. And if we zoom out, it can reduce future back-and-forth conversations and misunderstandings, improving the overall efficiency and effectiveness of collaboration beyond the single comment. Imagine that in the example above the feedback were instead just, “Let’s make sure that all screens have the same two forward and back buttons.” Since the designer receiving this feedback wouldn’t have much to go by, they might just implement the change. In later iterations, the interface might change or new features might be introduced—and maybe that change wouldn’t make sense anymore. Without the explanation, the designer might assume the change was about consistency, but what if it wasn’t? There could now be an underlying concern that changing the buttons would be perceived as a regression.

    Yes: this style of feedback is not always efficient, because the points in some comments don’t always need to be exhaustive: sometimes because certain changes may be obvious (“The font used doesn’t follow our guidelines”), and sometimes because the team may have so much internal knowledge that some of the whys are implied.

    Therefore, the formula above serves as a mnemonic to reflect on and improve the practice, rather than a strict template for feedback. Even after years of actively working on my critiques, I still go back to this formula from time to time and reflect on whether what I just wrote is effective.

    The tone

    Well-grounded content is the foundation of feedback, but it’s not enough on its own. The soft skills of the person who’s providing the critique can multiply the chances that the feedback will be well received and understood: it has been shown that feedback delivered with the wrong tone rarely leads to sustained change, and that tone alone can determine whether content is rejected or welcomed.

    Since our goal is to be understood and to keep a positive working environment, tone is essential to work on. Over the years, I’ve tried to summarize the necessary soft skills in a formula that mirrors the content formula.

    Respectful feedback comes across as grounded, solid, and constructive. It’s the kind of feedback that, whether it’s positive or negative, is perceived as useful and fair.

    Timing refers to the moment when the feedback occurs. Even to-the-point feedback doesn’t have much hope of being well received if it’s given at the wrong time. Questioning a new feature’s entire high-level information architecture when it’s about to go live might still be relevant if it raises a significant blocker that no one saw, but otherwise such concerns will more likely have to wait for a later revision. So in general, attune your feedback to the stage of the project. Early exploration? Later iteration? Polishing work in progress? Each of these has different needs. The right timing makes it more likely that your feedback will be well received.

    Attitude is about intent, and in the context of person-to-person feedback, it can be related to radical candor. Before writing, it’s important to make sure that what we’re about to write will actually benefit the person receiving it and improve the overall project. This can be a hard reflection at times, because maybe we don’t want to admit that we don’t really appreciate that person. Hopefully that’s not the case, but it can happen, and that’s okay. Acknowledging and owning that can help you compensate for it: how would I write if I really cared about them? How can I avoid being passive-aggressive? How can I encourage constructive behavior?

    Form is relevant especially in diverse and cross-cultural work environments, because great content, perfect timing, and the right attitude might not come across if the way we write creates misunderstandings. There could be many reasons for this: some words might trigger particular reactions, some non-native speakers might not grasp all the nuances of some sentences, and at other times our brains might simply work differently and we might perceive the world differently; neurodiversity must be taken into account. Whatever the reason, it’s important to review not just what we write but how.

    A few years back, I asked for some feedback on how I give feedback. I was given some helpful advice, but one comment surprised me: they pointed out that when I wrote “Oh, […],” I made them feel stupid. That wasn’t my intention at all! I felt really bad, realizing that I had been giving them feedback for months, and every time I might have made them feel stupid. I was horrified… but also thankful. I quickly fixed the problem by adding “oh” to my list of replaced words (your choice of aText, TextExpander, or others) so that whenever I typed “oh,” it was immediately deleted.
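    Text expanders do this substitution live as you type, but the same safety net can be sketched as a pre-send pass over a draft. This is a hypothetical illustration (the blocklist and helper are mine, not how aText or TextExpander work internally):

    ```python
    import re

    # Hypothetical personal blocklist: words you've learned land badly in
    # your feedback, each mapped to a replacement ("" means delete it).
    BLOCKLIST = {
        "oh": "",
        "obviously": "",
        "just": "",
    }

    def clean_draft(draft: str) -> str:
        """Remove blocklisted words from a feedback draft before sending."""
        for word, replacement in BLOCKLIST.items():
            # \b word boundaries keep us from mangling words that merely
            # contain the pattern (e.g. "adjust" is left alone).
            draft = re.sub(rf"\b{re.escape(word)}\b,?\s*", replacement,
                           draft, flags=re.IGNORECASE)
        return draft.strip()

    print(clean_draft("Oh, just move the button to the left."))
    # → "move the button to the left."
    ```

    A real text expander rewrites as you type; a pass like this is only a last-resort check before hitting send.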

    Something to highlight, because it’s quite frequent (especially in teams with a strong group spirit), is that people tend to beat around the bush. It’s important to keep in mind that having a positive attitude doesn’t mean sugarcoating the feedback; it means delivering it constructively and respectfully, whether it’s tough or positive. The nicest thing you can do for someone is to help them grow.

    Giving feedback in written form has a great advantage: it can be reviewed by another person who isn’t directly involved, which can help reduce or remove any bias that might be there. The best, most insightful moments for me came when I shared a comment and asked a trusted person how it sounded, how I could do better, or even, “How would you have written it?” Seeing the two versions side by side taught me a lot.

    The format

    Asynchronous feedback also has a significant inherent benefit: we can devote more time to making sure that our suggestions fulfill the two main objectives of clarity of communication and actionability.

    Let’s imagine that someone shared a design iteration for a project. You are reviewing it and leaving a comment. There are many ways to accomplish this, and context is of course important, but let’s try to think about some things that might be worthwhile to take into account.

    In terms of clarity, start by grounding the critique that you’re about to give by providing context. This includes specifically describing where you’re coming from: do you have a thorough understanding of the project, or is this your first time seeing it? Are you coming from a high-level perspective, or are you figuring out the details? Are there regressions? Which user’s point of view are you addressing when offering your feedback? Is the design iteration at a point where it would be okay to ship this, or are there major things that need to be addressed first?

    Even if you’re giving feedback to a team that already has some project information, providing context is helpful. And context is absolutely essential when giving cross-team feedback. If I were to review a design that might be indirectly related to my work, and if I had no knowledge about how the project arrived at that point, I would say so, highlighting my take as external.

    We frequently concentrate on the negatives and attempt to list every improvement that could be made. That’s of course important, but it’s just as important—if not more—to focus on the positives, especially if you saw progress from the previous iteration. Although this may seem superfluous, it’s important to remember that design has a number of possible solutions to each problem. So pointing out that the design solution that was chosen is good and explaining why it’s good has two major benefits: it confirms that the approach taken was solid, and it helps to ground your negative feedback. In the longer term, sharing positive feedback can help prevent regressions on things that are going well because those things will have been highlighted as important. Positive feedback can also help, as an added bonus, prevent impostor syndrome.

    There’s one powerful approach that combines both context and a focus on the positives: frame how the design is better than the status quo ( compared to a previous iteration, competitors, or benchmarks ) and why, and then on that foundation, you can add what could be improved. There is a significant difference between a critique of a design that is already in good shape and one that isn’t quite there yet.

    Another way to improve your feedback is to depersonalize it: comments should always be about the work, never about the person who made it. It’s “This button isn’t well aligned” versus “You haven’t aligned this button well.” Just before sending, review your writing and adjust for this.

    In terms of actionability, one of the best approaches to help the designer who’s reading through your feedback is to split it into bullet points or paragraphs, which are easier to review and analyze one by one. You might also think about breaking up the feedback into sections or even across multiple comments if it is longer. Of course, adding screenshots or signifying markers of the specific part of the interface you’re referring to can also be especially useful.

    One approach that I’ve personally used effectively in some contexts is to enhance the bullet points with four markers using emojis. A red square indicates that it is something I consider blocking, a yellow diamond indicates that it should be changed, and a green circle indicates that it is fully confirmed. I also use a blue spiral � � for either something that I’m not sure about, an exploration, an open alternative, or just a note. However, I’d only use this strategy on teams where I’ve already established a high level of trust because the impact could be quite demoralizing if I had to deliver a lot of red squares, and I’d change how I’d communicate that a little.

    Let’s see how this would work by reusing the example that we used earlier as the first bullet point in this list:

    • 🔶 Navigation—When I see these two buttons, I anticipate one to go forward and the other to go back. But this is the only screen where this happens, as before we just used a single button and an “×” to close. This seems to be breaking the consistency in the flow. Let’s make sure that all screens have the same two forward and back buttons so that users don’t get confused.
    • 🟢 Overall—I think the page is solid, and this is good enough to be our release candidate for a version 1.0.
    • 🟢 Metrics—Good improvement in the buttons on the metrics area; the improved contrast and new focus style make them more accessible.
    • 🔶 Button style—Using the green accent in this context gives the impression that it’s a positive action, because green is typically seen as a confirmation color. Do we need to explore a different color?
    • 🔶 Typography—Considering the number of items on the page and the overall page hierarchy, it seems to me that the tiles should use Subtitle 2 instead of Subtitle 1. This would keep the visual hierarchy more consistent.
    • 🌀 Background—Using a light texture works well, but I wonder whether it adds too much noise on this kind of page. What’s the purpose behind using it?

    What about giving feedback directly in Figma or another design tool that allows in-place comments? In general, I find these difficult to use because they tend to hide discussions and are harder to follow, but they can be very useful in the right context. Just make sure that each comment stands on its own, so that it’s easier to match each discussion to a single task, similar to the splitting idea mentioned above.

    One final note: say the obvious. Sometimes we might feel good or bad about something but not say it. Or we might have a doubt that we don’t express because the question might sound stupid. Say it anyway; that’s fine. You might have to reword it a little to make the reader more comfortable, but don’t hold back. Good feedback is transparent, even when it may seem obvious.

    Another benefit of asynchronous feedback is that written feedback automatically tracks decisions. Especially in large projects, “Why did we do this?” is a question that can come up from time to time, and there’s nothing better than open, transparent discussions that can be reviewed at any time. For this reason, I recommend using software that saves these discussions without hiding them once they’re resolved.

    Content, tone, and format. Each of these subjects offers a useful model, but focusing on improving all eight of their focus points (observation, impact, question, timing, attitude, form, clarity, and actionability) is a lot of work to take on at once. One effective approach is to tackle them one by one: first identify the area where you’re weakest (either from your own perspective or from others’ feedback) and start there. Then move to the second, the third, and so on. At first you’ll have to put in extra time for every piece of feedback you give, but after a while it’ll become second nature, and your impact on the work will multiply.

    Thanks to Brie Anne Demkiw and Mike Shelton for reviewing the first draft of this article.

  • Asynchronous Design Critique: Getting Feedback

    Asynchronous Design Critique: Getting Feedback

    “Any feedback?” is perhaps one of the worst ways to ask for feedback. It’s vague and open ended, and it doesn’t give a clear picture of what we’re looking for. Getting good feedback starts earlier than we might expect: it starts with the request.

    It might seem counterintuitive to begin the process of receiving feedback with a question, but it makes sense once we realize that getting feedback can be viewed as a form of design research. In the same way that we wouldn’t do any research without the right questions to get the insights we need, the best way to ask for feedback is to craft sharp questions.

    Design critique is not a one-shot process. Sure, any good feedback process continues until the project is finished, but this is especially true for design, because design work continues iteration after iteration, from a high level down to the finest details. Each stage requires its own set of questions.

    And finally, as with any good research, we need to review what we got back, get to the core of its insights, and take action. Question, iteration, and review. Let’s look at each of those.

    The question

    Being open to feedback is important, but we need to be precise about what we’re looking for. Asking for “any feedback,” “What do you think?” or “I’d love to hear your thoughts” at the end of a presentation is likely to elicit a lot of scattered opinions or, worse, to make people follow the lead of the first person who speaks up. And then… we get frustrated, because vague questions like these can turn a high-level flow review into people commenting on the borders of buttons. Every topic may be important to someone, so it can be hard to get the team to focus on the one you wanted to concentrate on.

    But how do we end up in this scenario? It’s a combination of factors. One is that we don’t often consider the asking to be part of the feedback process. Another is how tempting it is to leave the question open, assuming that everyone will know what to focus on. Another is that in informal conversation there’s usually no need to be that precise. In short, we tend to underestimate the importance of the questions, so we don’t work on making them better.

    The work of asking good questions guides and focuses the critique. It also acts as a form of consent, outlining your openness to feedback and the types of feedback you want to receive. And it puts people in the right mental state, especially in situations where they weren’t expecting to give feedback.

    There isn’t a second best method to request comments. It simply needs to be certain, and precision may take several shapes. The period than depth model for design critique has been a particularly helpful tool for my coaching.

    “Stage” refers to each of the steps of the process—in our case, the design process. The kind of feedback you need changes as the work moves from early user research to the final design. But within a single stage, you might also want to check whether certain assumptions are correct, or whether the amassed feedback has been suitably translated into updated designs as the work evolved. The layers of user experience can serve as a starting point for potential questions. What do you want to know about: Project objectives? User needs? Functionality? Content? Interaction design? Information architecture? UI design? Navigation design? Visual design? Branding?

    Here’re a few example questions that are precise and to the point that refer to different layers:

    • Functionality: Is it desirable to automate account creation?
    • Interaction design: Take a look through the updated flow and let me know whether you see any steps or error states that I might’ve missed.
    • Information architecture: This page contains two competing pieces of information. Is the structure effective in communicating them both?
    • User interface design: What do you think about the top-most error counter, which ensures that you can see the next error even when the error is outside the viewport?
    • Navigation design: From research, we identified these second-level navigation items, but once you’re on the page, the list feels too long and hard to navigate. Are there any ways to deal with this?
    • Visual design: Are the sticky notifications in the bottom-right corner visible enough?

    How much of a presentation’s depth would be on the other axis of specificity. For example, we might have introduced a new end-to-end flow, but there was a specific view that you found particularly challenging and you’d like a detailed review of that. This can be especially helpful when switching between iterations because it’s crucial to highlight the changes made.

    There are other things we can consider when we want to ask more specific—and more effective—questions.

    A quick fix is to get rid of generic qualifiers like “good,” “well,” “nice,” “bad,” “okay,” and “cool.” For example, asking, “When the block opens and the buttons appear, is this interaction good?” might look specific, but you can spot the “good” qualifier and convert the question into an even better one: “When the block opens and the buttons appear, is it clear what the next action is?”

    Sometimes we actually do want broad feedback. That’s rare, but it can happen. In that case, you can still make it explicit that you’re looking for a wide range of opinions, whether at a high level or in the details. Or you might just ask, “At first glance, what do you think?” so that it’s clear that what you’re asking is open ended but focused on someone’s impression after their first five seconds of looking at it.

    Sometimes the project is particularly broad, and some areas may have already been thoroughly explored. In these situations, it might be useful to explicitly say that some parts are locked in and not open to feedback. It’s not something I’d recommend in general, but I’ve found it helpful in avoiding rabbit holes that could lead to further refinement but aren’t what’s most important right now.

    Asking specific questions can completely change the quality of the feedback that you receive. People with less refined criticism will now be able to provide more actionable feedback, and even expert designers will appreciate the clarity and effectiveness gained from concentrating solely on what’s needed. It can save a lot of time and frustration.

    The iteration

    Design iterations are probably the most visible part of the design work, and they provide a natural checkpoint for feedback. Many design tools include inline commenting, but they typically display changes as a single fluid stream in the same file: conversations vanish once they’re resolved, shared UI components update automatically, and designs always display the most recent version (unless these would-be useful features are manually turned off). The implied goal that these design tools seem to have is to arrive at just one final copy with all discussions closed, probably because they inherited patterns from how written documents are collaboratively edited. That’s probably not the best way to approach design critiques, but I don’t want to be too prescriptive: it might work for some teams.

    The asynchronous design-critique approach that I find most effective is to create explicit checkpoints for discussion. For this, I’ll use the term iteration post: a write-up or presentation of the design iteration, followed by a discussion thread of some kind. This can be done in any tool that can accommodate this type of structure. By the way, when I refer to a “write-up or presentation,” I’m including video recordings and other media too: as long as it’s asynchronous, it works.

    There are many benefits to using iteration posts:

    • It creates a rhythm in the design work so that the designer can review feedback from each iteration and prepare for the next.
    • It makes decisions available for future review, and even closed conversations remain accessible.
    • It creates a record of how the design changed over time.
    • It might also make it simpler to collect and act on feedback depending on the tool.

    These posts of course don’t mean that no other feedback approach should be used; rather, iteration posts can be the primary rhythm for a remote design team. Additional feedback techniques (such as live critique, pair designing, or inline comments) can then develop from there.

    I don’t think there’s a standard format for iteration posts. However, there are a few high-level elements that make sense to include as a baseline:

    1. The goal
    2. The layout
    3. The list of changes
    4. The questions

    Each project is likely to have a goal, and hopefully it’s already been summarized in a single sentence somewhere else, such as the client brief, the product manager’s outline, or the project owner’s request. I copy and paste that goal into every iteration post. The idea is to provide context and to repeat what’s essential, so that each iteration post is complete and there’s no need to find information spread across multiple posts. If I want to know about the most recent design, the most recent iteration post will have everything I need.

    This copy-and-paste part introduces another relevant concept: alignment comes from repetition. Therefore, repeating information in posts is actually very effective at ensuring that everyone is on the same page.

    The design is then the actual series of information-architecture outlines, diagrams, flows, maps, wireframes, screens, visuals, and any other kind of design work that’s been done. In short, it’s any design artifact. For the later stages of work, I prefer the term blueprint to emphasize that I’ll be showing full flows instead of individual screens, to make it easier to understand the bigger picture.

    It can also be helpful to give the artifacts clear names, because that makes them easier to refer to. Write the post in a way that helps people understand the work; it’s not much different from crafting a good live presentation.

    For an efficient discussion, you should also include a bullet list of the changes from the previous iteration to let people focus on what’s new, which can be especially useful for larger pieces of work where keeping track, iteration after iteration, could become a challenge.

    Finally, as mentioned earlier, it’s crucial that you include a list of the questions to help you guide the design critique in the desired direction. Doing this as a numbered list can also help make it easier to refer to each question by its number.

    Not every iteration is the same. Earlier iterations don’t need to be as tightly focused—they can be more exploratory and experimental, maybe even breaking some of the design-language guidelines to see what’s possible. Then, later, the iterations begin coming to a decision and improving it until the feature development is complete.

    I want to highlight that even if these iteration posts are written and conceived as checkpoints, by no means do they need to be exhaustive. A post might be a draft, just a concept to start a discussion, or it might be a cumulative list of all the features that have been added over the course of each iteration until the full picture is achieved.

    Over time, I also started using specific labels for incremental iterations: i1, i2, i3, and so on. Although this may seem like a minor labeling tip, it can be useful in many ways:

    • Unique—It’s a clear, unique marker. Everyone knows where to go to review things, and within each project it’s easy to say, “This was discussed in i4.”
    • Unassuming—It works like versions (such as v1, v2, and v3), but in contrast, versions create the impression of something big, exhaustive, and complete. An iteration, by definition, can be exploratory, incomplete, or partial.
    • Future proof—It resolves the “final” naming problem that you can run into with versions: no more files titled “final final complete no-really-its-done.” Within each project, the largest number always represents the latest iteration.

    The term release candidate (RC) can be used to indicate when a design is complete enough to be developed, even if some areas still need refinement and, in turn, more iterations. For example: “with i8 we reached RC” or “i12 is an RC.”

    The review

    What typically occurs during a design critique is an open discussion, with a back and forth between parties that can be very productive. This approach is particularly effective during live, synchronous feedback. However, when we work asynchronously, using a different approach is more effective: we can adopt a user-research mindset. Written feedback from teammates, stakeholders, or others can be treated as if it were the result of user interviews and surveys, and we can analyze it accordingly.

    This shift has significant benefits around three friction points that are common in asynchronous feedback:

    1. It removes the pressure to reply to everyone.
    2. It lessens the annoyance of swoop-by comments.
    3. It lessens our personal stake.

    The first friction point is feeling forced to respond to every comment. Sometimes we write the iteration post and we get replies from our team. It’s simple, straightforward, and doesn’t cause any issues. But other times, some solutions might require more in-depth discussions, and the number of replies can quickly increase, creating tension between trying to be a good team player by replying to everyone and getting on with the next design iteration. This is especially true when the person replying is a stakeholder or someone directly involved in the project. We need to accept that this pressure is absolutely normal; it’s human nature to try to accommodate people we care about. Responding to all comments can at times be effective, but when we treat a design critique more like user research, we realize that we don’t need to respond to every comment, and asynchronous spaces offer alternatives:

    • One option is to let the next iteration speak for itself. The response is delivered when the design changes and a follow-up iteration is posted. You might tag all the people who were involved in the previous discussion, but even that’s a choice, not a requirement.
    • Another option is to respond politely to acknowledge each comment, such as “Understood, thank you,” “Good points, I’ll review,” or “Thanks, these will be included in the upcoming iteration.” In some cases, this could also be just a single top-level comment along the lines of “Thanks for all the feedback, everyone; the next iteration is coming soon!”
    • Another option is to quickly summarize the comments before moving on. Depending on your workflow, this can be particularly useful as it can provide a simplified checklist that you can then use for the next iteration.

    The second friction point is the swoop-by comment: the kind of feedback that comes from a member of the project or team who might not be aware of the context, constraints, decisions, or requirements, or of the discussions from earlier iterations. Swoop-by comments frequently prompt the thought, “We’ve already discussed this,” and it can be frustrating to have to keep explaining the same thing over and over. On their side, we can hope for some learning: they might come to recognize when they’re doing this and become more conscious about outlining where they’re coming from.

    Let’s begin by acknowledging again that there’s no need to reply to every comment. If a response to a point that was already settled seems helpful, a brief reply with a link to the previous discussion is typically sufficient. Remember, alignment comes from repetition, so it’s okay to repeat things sometimes!

    Swoop-by comments can still be useful for two reasons: first, they might point out something that isn’t clear, and second, they offer something close to a user’s perspective, since the commenter is seeing the design for the first time. Sure, you might still be frustrated, but that framing can at least help in dealing with it.

    The third friction point is the personal stake we might have in the design, which can make us feel defensive if the review turns into more of a discussion. Treating feedback as user research helps us create a healthy distance between the people giving us feedback and our ego (because yes, even if we don’t want to admit it, it’s there). And in the end, analyzing everything in aggregate helps us prioritize our work.

    Always remember that while you need to listen to stakeholders and project owners and weigh their advice, you don’t have to accept every piece of feedback. You should examine it and have a rationale for your decision, but sometimes “no” is the right choice.

    As the designer leading the project, you’re in charge of that decision. Everyone has their area of expertise, and as the designer, you have the most context and knowledge to make the right call. And by listening to the feedback you’ve received, you’re making sure it’s also the best and most balanced decision.

    Thanks to Mike Shelton and Brie Anne Demkiw for their initial review of this article.