“Any feedback you might have?” is perhaps one of the worst ways to ask for feedback. It’s vague and unfocused, and it doesn’t give people a clear picture of what we’re looking for. Great feedback begins earlier than we might expect: it begins with the request.
It might seem counterintuitive to start the process of receiving feedback with a request, but it makes sense once we realize that getting feedback can be thought of as a form of design research. Just as we wouldn’t run a study without the right questions to get the insights we need, the best way to ask for feedback is to craft sharp questions.
Design critique is not a one-shot process. Sure, any good feedback loop continues until the project is finished, but this is especially true for design, because design work proceeds iteration after iteration, from a high level down to the finest details. Each stage requires its own set of questions.
And finally, as with any good research, we need to review what we received, distill its insights, and take action. Request, iteration, and analysis. Let’s look at each of them.
The request
Being open to feedback is essential, but we need to be precise about what we’re looking for. Tacking “Any comments?”, “What do you think?”, or “I’d love to hear your thoughts” onto the end of a presentation is likely to elicit a scattering of unrelated opinions or, worse, to make people follow the lead of the first person who speaks up. And then we get frustrated, because vague requests like those can end with people commenting on the border radius of buttons during a high-level flow review. That can be a thorny topic in itself, and it might be hard at that point to redirect the team to the subjects you actually wanted to focus on.
How do we end up in this situation, though? It’s a combination of factors. One is that we don’t usually consider the request to be part of the feedback process. Another is how tempting it is to leave the question open and assume that everyone shares our perspective. Another is that in everyday conversation we rarely need to be extremely precise. In short, we tend to underestimate the importance of the request, so we don’t work on improving it.
Good questions guide and focus the critique. They also act as a form of consent, outlining what kind of feedback you’re open to and want to receive. And they put people in the right mental state, especially in situations where they weren’t expecting to give feedback.
There isn’t a single best way to ask for feedback. Precision can take many forms; it just needs to be there. One framework for design critique that I’ve found especially helpful in my experience is stage versus depth.
“Stage” refers to each of the steps of the process, in our case, the design process. The kind of feedback evolves as the project moves from early user research to the final design. But within a single stage, one might also check whether some assumptions are correct and whether the accumulated feedback has been properly translated into updated designs as the work evolves. The levels of the user experience can serve as a starting point for possible questions: Do you want to know about project goals? User needs? Functionality? Content? Interaction design? Information architecture? User-interface design? Navigation design? Visual design? Brand?
Here are a few example questions that are precise and to the point, and that refer to different levels:
- Functionality: Is automated account creation desirable?
- Interaction design: Take a look at the updated flow and let me know whether there are any steps or error states I may have missed.
- Information architecture: We have two competing pieces of information on this page. Does the structure effectively communicate both of them?
- User-interface design: What do you think of the error summary at the top of the page, which makes sure you notice the error even if it’s outside the viewport?
- Navigation design: From research, we identified these second-level navigation items, but when you’re on the page, the list feels overly long and hard to scan. Are there any ways to address this?
- Visual design: Are the sticky notifications in the bottom-right corner of the page noticeable enough?
The other axis, depth, is about how deep you’d like to go on what’s being presented. For instance, we might have just introduced a new end-to-end flow, but there’s a specific interaction you found especially tricky and would like focused feedback on. This can be particularly helpful from one iteration to the next, when it’s important to highlight the areas that have changed.
There are a few other techniques we can use to make our questions more specific, and more effective.
One simple strategy is to eliminate generic qualifiers from your questions, such as “good,” “well,” “nice,” “bad,” “okay,” and “cool.” For example, the question “When the panel opens and the buttons appear, is this interaction good?” might seem precise, but you can spot the generic “good” and turn it into an even better question: “When the panel opens and the buttons appear, is it clear what the next action is?”
Sometimes we do want broad feedback. That’s rare, but it happens. In that case, you can still make it explicit that you’re looking for a wide range of opinions, whether at a high level or in the details. Or you might simply ask, “At first glance, what do you think?” so it’s clear that the question is open ended but focused on someone’s impression after their first five seconds of looking at the work.
Sometimes the project is especially expansive, and some areas may already have been explored in detail. In these situations, it can be useful to state explicitly that certain parts are already locked in and aren’t open for feedback. It’s not something I’d recommend in general, but I’ve found it helpful for avoiding rabbit holes of the kind that could lead to further refinement but aren’t what matters most right now.
Asking specific questions can completely change the quality of the feedback you receive. People with less refined critique skills will now be able to offer more actionable feedback, and even expert designers will appreciate the clarity and efficiency that come from focusing only on what’s needed. It can save a lot of time and frustration.
The iteration
Design iterations are probably the most visible part of the design process, and they create a natural feedback loop. Many design tools include inline commenting, but those tools typically display changes as a single fluid stream in the same file: conversations disappear once they’re resolved, shared UI components update automatically, and the design always shows the latest version, unless these would-be useful features are manually turned off. The implied goal of these tools seems to be to arrive at a single final copy with all discussions closed, probably because they inherited their patterns from the way written documents are collaboratively edited. That’s probably not the best way to approach design critique, though I don’t want to be too prescriptive: some teams might still find that it works for them.
The asynchronous design-critique approach that I find most effective is to create explicit checkpoints for discussion. I’m going to use the term iteration post for this: a write-up or presentation of the design iteration, followed by a discussion thread of some kind. Any platform that can accommodate this structure will work. And when I say “a write-up or presentation,” I’m including video recordings and other media too: as long as it’s asynchronous, it works.
There are many benefits to using iteration posts:
- It creates a rhythm in the design process, so the designer can review the feedback from each iteration and prepare for the next.
- It makes decisions visible for future review, and the conversations stay available as well.
- It keeps track of how the design evolved over time.
- Depending on the tool, it might also make it simpler to collect and act on feedback.
These posts, of course, don’t mean that no other feedback approach should be used; rather, iteration posts can be the primary rhythm for a remote design team, with other feedback techniques (such as live critique, pair designing, or inline comments) layered on top.
There isn’t, in my opinion, a canonical format for iteration posts. But a few high-level elements make sense to include as a baseline:
- The goal
- The design
- The list of changes
- The questions
Each project is likely to have a goal, and it has most likely already been summarized in a single sentence somewhere else, such as the client brief, the product manager’s outline, or the project owner’s request. So this is something I’d repeat in every iteration post, literally copying and pasting it. The point is to provide context and repeat what’s essential, so that each iteration post is self-contained and nobody has to dig through multiple posts for information. If I want to know about the latest design, the most recent iteration post will have everything I need.
This copy-and-paste habit introduces another relevant concept: alignment comes from repetition. Repeating information across posts helps ensure that everyone stays on the same page.
The design is the actual series of information-architecture outlines, diagrams, flows, maps, wireframes, screens, visuals, and any other design work that has been done. In short, it’s any design artifact. For the final stages of the project, I prefer the term blueprint, to signal that I’ll be showing complete flows rather than individual screens, which makes it easier to grasp the bigger picture.
It can also help to give the artifacts clear labels so they’re easy to refer to. Write the post in a way that helps people understand the work; it’s not that different from preparing a good live presentation.
For a productive discussion, you should also include a bullet list of the changes from the previous iteration, to help people concentrate on what’s new. This is especially useful for larger pieces of work, where keeping track of everything, iteration after iteration, can be difficult.
And finally, as noted earlier, it’s essential to include a list of questions to drive the design critique in the direction you want. Numbering the questions also makes it easier to refer to each one.
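To make this concrete, the four baseline elements could even be stitched together mechanically. The sketch below is purely illustrative: the function, section headings, and all the sample content are hypothetical placeholders, not a prescribed format.

```python
# Sketch: assembling a plain-text iteration post from the four baseline
# elements (goal, design, list of changes, questions). All names and
# content are hypothetical examples.

def iteration_post(label, goal, design_links, changes, questions):
    """Render an iteration post ready for an async discussion thread."""
    lines = [f"Iteration {label}", "", "Goal", goal, "", "Design"]
    lines += [f"- {link}" for link in design_links]
    lines += ["", "Changes since last iteration"]
    lines += [f"- {change}" for change in changes]
    lines += ["", "Questions"]
    # Numbered questions make replies easier: "re: question 2".
    lines += [f"{n}. {q}" for n, q in enumerate(questions, start=1)]
    return "\n".join(lines)

post = iteration_post(
    "i3",
    "Let users recover a forgotten password without contacting support.",
    ["Flow: password reset (wireframes)"],
    ["Added an error state for expired reset links."],
    [
        "Is it clear what the next action is after the panel opens?",
        "Did I miss any error states in the updated flow?",
    ],
)
print(post)
```

Whether you automate it or not, the useful part is the discipline: the goal is pasted in verbatim every time, the changes are a short bullet list, and the questions are numbered so the thread can reference them precisely.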
Not every iteration is the same. Earlier iterations don’t need to be as tightly focused: they can be more exploratory and experimental, maybe even breaking some of the design-language guidelines to see what’s possible. Later iterations start converging on a decision and refining it, until the design process is complete and the feature is ready to be built.
Even though these iteration posts are written and intended as checkpoints, they are by no means exhaustive. A post might be a draft, just a concept to get a conversation going, or it might be cumulative, adding one feature per iteration until the full picture is done.
Over time, I also started using specific labels for incremental iterations: i1, i2, i3, and so on. It might seem like a minor labeling tip, but it can be useful in several ways:
- Unique: it’s a clear, unambiguous marker. Everyone knows where to go to review things, and within each project it’s easy to say, “This was discussed in i4.”
- Unassuming: version labels (such as v1, v2, and v3) give the impression of something big, exhaustive, and complete. Iterations need to be able to be exploratory, incomplete, partial.
- Future-proof: it solves the “final” naming problem you can run into with versions. No more files named “final final complete no-really-its-done.” Within each project, the largest number is always the latest iteration.
The term release candidate (RC) can be used to indicate that a design is complete enough to be built, even if some details still need work and, in turn, more iterations: “with i8 we reached RC,” or “i12 is an RC.”
The analysis
What usually happens during a design critique is an open discussion, a back-and-forth between people that can be very productive. That works especially well for live, synchronous feedback. When we work asynchronously, however, a different approach is more effective: adopting a user-research mindset. Written feedback from teammates, stakeholders, or others can be treated as if it were the result of user interviews and surveys, and analyzed accordingly.
This shift brings some significant advantages that make asynchronous feedback particularly effective, especially around three friction points:
- It lessens the need to respond to everyone.
- It reduces the frustration from swoop-by comments.
- It lessens our personal stakes.
The first friction point is the pressure to respond to every comment. Sometimes we write the iteration post, get replies from our team, and it’s all simple and straightforward. But some questions may require deeper discussion, and the number of replies can rise quickly, creating tension between trying to be a good team player by responding to everyone and trying to move on to the next design iteration. This is especially true when the person replying is a stakeholder, or someone directly involved in the project whom we feel we need to listen to. It’s human nature to try to accommodate the people we care about, and the pressure this creates is completely normal. But when we treat a design critique more like user research, we realize that we don’t have to respond to every comment: there are alternatives.
- One is to let the next iteration speak for itself. The reply happens when the design changes and a follow-up iteration is posted. You might tag everyone who took part in the previous discussion, but even that is a choice, not a requirement.
- Another is to briefly acknowledge each comment with a reply such as “Understood, thank you,” “Good points, I’ll review,” or “Thanks, I’ll include these in the next iteration.” In some cases, this could even be a single top-level comment along the lines of “Thanks for all the feedback, everyone: the next iteration is coming soon!”
- A third is to write a quick summary of the comments before moving on. This can be particularly helpful if your workflow includes a short checklist to refer to when working on the next iteration.
The second friction point is the swoop-by comment: feedback from someone outside the project or team who may not be aware of the context, constraints, decisions, or requirements, or of the discussions in previous iterations. On a good day, these commenters might realize what they’re doing and acknowledge where they’re coming from, but swoop-by comments often prompt the weary thought, “We’ve already discussed this,” and it can be frustrating to keep repeating the same explanation over and over.
Let’s begin by acknowledging, again, that there’s no need to reply to every comment. But if responding to a previously litigated point might be helpful, a brief reply with a link to the earlier discussion is usually enough. And remember that repetition creates alignment, so it’s fine to occasionally repeat yourself!
Swoop-by comments can still be useful in two ways: they might point out something that still isn’t clear, and they can stand in for the point of view of a user who’s seeing the design for the first time. Sure, you’ll still be frustrated, but at least that frustration can be put to good use.
The third friction point is the personal stake we might have in the design, which can make us feel defensive when the review turns into a debate. Treating feedback as user research helps create a healthy distance between our ego (because yes, even if we don’t want to admit it, it’s there) and the people giving us feedback. And in the end, seeing everything in aggregate helps us prioritize our work better.
Always remember that you don’t have to accept every piece of feedback. You need to listen to stakeholders, project owners, and specific advice, then analyze it and make a decision you can justify, but sometimes “no” is the right answer.
As the designer on the project, you’re in charge of that decision. Ultimately, everyone has their own area of expertise, and the designer has the most context and knowledge to make the best design decision. And by listening to the feedback you’ve received, you’re making sure it’s also the most informed and balanced one.
Thanks to Mike Shelton and Brie Anne Demkiw for their initial review of this article.