"Any feedback?" is perhaps one of the worst ways to ask for feedback. It's vague and unreliable, and it doesn't give a clear picture of what we're looking for. Great feedback begins sooner than we might expect: it begins with the request.
It might seem counterintuitive to start the process of receiving feedback with a question, but it makes sense once we realize that getting feedback can be thought of as a form of design research. Just as we wouldn't run a study without the right questions to get the insights we need, the best way to ask for feedback is to write down some insightful questions.
Design critique is never a one-off process. Sure, any good feedback loop continues until the project is finished, but this is especially true for design because design work is iterative, moving from a high level down to the finest details. Each stage requires its own set of questions.
Finally, as with any good research, we need to analyze the feedback we receive, distill its insights, and act on them. The question, the iteration, and the evaluation: let's take a closer look at each.
The question
Being open to feedback is essential, but we need to be precise about what we're looking for. Saying "Any comments?", "What do you think?", or "I'd love to hear your thoughts" at the end of a presentation is likely to elicit a scattered range of opinions or, worse, to make people simply follow the lead of the first person who speaks up. And then we become frustrated, because vague requests like these can lead people to comment on the border radius of buttons during a high-level flows review. That can be a contentious topic in its own right, and at that point it's hard to steer the team back to the subjects you wanted to focus on.
So how do we end up in this situation? A number of factors are involved. One is that we often don't consider asking to be part of the feedback process. Another is that it's easy to assume that everyone sees the problem the same way and leave it at that. Yet another is that in informal conversations there's frequently no need to be that precise. In short, we tend to underestimate the importance of the questions, so we don't work on improving them.
Good questions guide and focus the critique. They're even a form of consent: they specify what kinds of opinions you'd like to receive and signal how open you are to them. They put people in the right frame of mind, especially when they weren't expecting to give feedback.
There isn't a single best way to ask for feedback. Specificity can take countless forms, and it simply needs to be present. A framework for design critique that I've found especially helpful in my practice is the one of stage versus depth.
"Stage" refers to each phase of the design process. The kind of feedback to ask for changes as the work moves from early user research toward the final design. Within a single stage, you might also check whether certain assumptions still hold, and whether the feedback gathered so far has been properly translated into updated designs as the work has evolved. The elements of user experience can serve as a starting point for such questions: the project objectives, user needs, functionality, interaction design, information architecture, user interface design, navigation design, and visual design.
Here are some example questions that are specific and to the point, each referring to a different stage:
- Functionality: Is automating account creation desirable?
- Interaction design: Please review the updated flow and let me know whether there are any steps or error states I may have missed.
- Information architecture: We have two competing pieces of information on this page. Does the structure work to communicate both of them effectively?
- User interface design: What do you think about the top-of-the-page error counter, which ensures you can see the next error even when it's outside the viewport?
- Navigation design: From research, we identified these second-level navigation items, but when you're on the page, the list feels overly long and hard to scan. Are there ways to address this?
- Visual design: Are the sticky notifications in the bottom-right corner of the page visible enough?
The other axis of specificity is depth: how deep you'd like reviewers to go into what's being presented. For instance, you may be presenting a new end-to-end flow, but you especially want feedback on one particular interaction you found challenging. Depth can be particularly helpful from one iteration to the next, when it's crucial to point out the areas that have changed.
There are other things we can consider when we want to ask more specific, and therefore more effective, questions.
A quick fix is to remove generic qualifiers from our questions: words like "good," "well," "nice," "bad," "okay," and "cool." For instance, the question "When the panel opens and the buttons appear, is this interaction good?" may seem precise, but you can unpack the "good" qualifier and turn it into an even better question: "When the panel opens and the buttons appear, is it clear what the next action is?"
Sometimes we genuinely do want broad feedback. That's rare, but it happens. In that case, you can make it explicit that you're looking for a wide range of opinions, whether at a high level or in detail. Or you might simply ask, "At first glance, what do you think?" so that it's clear the question is open ended but focused on someone's impression after their first five seconds of looking at the design.
Sometimes the project is particularly expansive, and some areas may have already been explored in detail. In these circumstances, it can help to state explicitly that certain parts are already locked in and aren't open for feedback. It's not something I'd recommend in general, but I've found it useful for avoiding rabbit holes that could lead to further refinement but aren't what matters most right now.
Asking specific questions can completely change the quality of the feedback you receive. People with less experience in giving critique will be able to provide more actionable feedback, and even expert designers will appreciate the clarity and efficiency gained from concentrating only on what's needed. It can save a lot of time and frustration.
The iteration
Design iterations are probably the most recognizable part of the design process, and they act as a natural checkpoint for feedback. Many design tools include inline commenting, but most of them display changes only as a single fluid stream in the same file: conversations disappear once they're resolved, shared UI components update automatically, and the file always shows the latest version unless those otherwise useful features are manually disabled. The implied goal of these tools seems to be arriving at a single final copy with all discussions closed, probably because they inherited their patterns from the way written documents are collaboratively edited. That's probably not the most effective way to run design critiques, though I don't want to be too prescriptive: it might work for some teams.
The asynchronous design-critique strategy that I believe works best is to create explicit checkpoints for discussion. I'm going to call these iteration posts: a write-up or presentation of a design iteration, followed by a discussion thread of some sort. Any platform that can accommodate this structure will do. And when I say "write-up or presentation," I'm including video recordings and other media too: as long as it's asynchronous, it works.
There are many benefits to using iteration posts:
- The designer can review the feedback from each iteration and prepare for the next one, creating a rhythm in the design work.
- It makes decisions visible for future review, and conversations are likewise always available.
- It keeps track of how the design evolved over time.
- Depending on the tool, it might also make it simpler to collect and act on feedback.
Of course, iteration posts don't mean that no other feedback approach should be used; they're simply the primary rhythm for a remote design team. Additional feedback techniques (such as live critique, pair design, or inline comments) can then develop from there.
There isn’t, in my opinion, a common format for iteration posts. But there are a few high-level elements that make sense to include as a baseline:
- The goal
- The design
- The list of changes
- The questions
Each project likely has a goal that's already been condensed into a single sentence somewhere, such as a request from the project owner, the product manager's brief, or the client's brief. This is something I'd repeat in every iteration post, literally copying and pasting it. The point is to provide context and to repeat what's essential, so that each iteration post is complete in itself and nobody has to hunt through earlier posts for information. If I want to know about the latest design, the most recent iteration post will have everything I need.
This copy-and-paste part introduces another relevant concept: alignment comes from repetition. Therefore, repeating information in posts is actually very effective at ensuring that everyone is on the same page.
The design is then the actual series of information-architecture outlines, diagrams, flows, maps, wireframes, screens, visuals, and any other kind of design work that's been done. In short, it's any design artifact. For the final stages of a project, I prefer the term "blueprint" to signal that I'll be showing complete flows rather than individual screens, making it easier to grasp the bigger picture.
It can also help to put clear labels on the artifacts so that they're easier to refer to. Write the post in a way that helps people understand the work; it's not much different from preparing a strong live presentation.
For an effective discussion, the post should also include a bullet list of the changes made since the previous iteration, so that participants can concentrate on what's new. This is especially useful for larger pieces of work, where keeping track of changes iteration after iteration can prove difficult.
And finally, as noted earlier, it’s essential that you include a list of the questions to drive the design critique in the direction you want. Creating a numbered list of questions can also make it simpler to refer to each one by its number.
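Putting those baseline elements together, a minimal iteration post might look something like the sketch below. The project name, goal, changes, and questions here are entirely hypothetical, just to illustrate the structure:

```
i3: Checkout redesign

Goal: Reduce drop-off in the payment step by simplifying the checkout flow.
(copied verbatim from the project brief, as in every iteration post)

Design: [link to the i3 flows and screens]

Changes since i2:
- Merged the billing and shipping forms into a single step
- Moved the error summary to the top of the page

Questions:
1. Interaction design: In the merged form, is it clear which fields are optional?
2. Visual design: Is the top-of-page error summary visible enough?
```

Note how the goal is repeated verbatim, the changes are a short bullet list, and the questions are numbered so that replies can reference them directly.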
Not every iteration is the same. Earlier iterations don't need to be as tightly focused: they can be more exploratory and experimental, maybe even breaking some of the design-language guidelines to see what's possible. Later iterations then converge on a solution and refine it until the design is complete and the feature is ready.
Even though iteration posts are written and intended as checkpoints, they don't have to be exhaustive. A post might be a draft, just a concept to get a conversation going, or it could be a cumulative list of the features added over the course of several iterations until the full picture is done.
I eventually started using progressive labels for iterations, such as i1, i2, i3, and so on. Although this may seem like a minor labeling tip, it's useful in several ways:
- Unique: It's a clear, unambiguous marker. Everyone knows where to go to review things, and within each project it's simple to say "This was discussed in i4."
- Unassuming: It works like version numbers (v1, v2, v3), but versions give the impression of something large, exhaustive, and complete. Iterations need to be free to be exploratory, incomplete, partial.
- Future proof: It solves the "final" naming issue that versions can have. No more files titled "final final complete no-really-its-done." Within each project, the highest number is always the latest iteration.
The term release candidate (RC) can be used to signal that a design is complete enough to be built, even if some details still need attention and further iterations will be required: for example, "with i8 we reached RC" or "i12 is an RC."
The evaluation
What usually happens during a design critique is an open discussion, a back-and-forth between people that can be very productive. That approach is particularly effective when feedback is given live and synchronously. When we work asynchronously, however, a different strategy works better: adopting a user-research mindset. Written feedback from teammates, stakeholders, or others can be treated as if it were the result of user interviews and surveys, and analyzed accordingly.
This shift brings significant benefits around three friction points that are typical of asynchronous feedback:
- It makes it easier to respond to everyone.
- It reduces the frustration from swoop-by comments.
- It lowers the stakes we have in ourselves.
The first friction point is the pressure to respond to each and every comment. Sometimes we write the iteration post and get a handful of replies from our team: simple, straightforward, no issues. But other times some topics require deeper discussion, and the number of replies can rise quickly, creating tension between being a good team player who responds to everyone and getting on with the next design iteration. This is especially true when the person replying is a stakeholder or someone directly involved in the project whom we feel we need to listen to. We should accept that this pressure is perfectly normal: it's human nature to try to accommodate the people we care about. Responding to every comment can work, but when we treat a design critique more like user research, we realize that we don't need to reply to each one. In asynchronous spaces, there are alternatives:
- One is to let the next iteration speak for itself. When the design changes and we publish a follow-up iteration, that’s the response. You could tag everyone in the previous discussion, but that’s just a choice, not a requirement.
- Another is to briefly acknowledge each comment with replies such as "Understood, thank you," "Good points, I'll review," or "Thanks, I'll incorporate these in the next iteration." In some cases, this could even be a single top-level comment along the lines of "Thanks for all the feedback, everyone! The next iteration is coming soon."
- A third is to briefly summarize the comments before moving on. This can be especially helpful if your workflow turns them into a simple checklist to refer to while working on the next iteration.
The second friction point is the swoop-by comment: feedback from someone outside the project or team who may not be aware of the context, constraints, decisions, or requirements, or of the discussions in previous iterations. The best we can hope for is that such commenters acknowledge they're swooping in and are mindful of where they're coming from. Swoop-by comments frequently prompt the simple thought, "We've already discussed this," and it can be frustrating to repeat the same thing over and over.
Let's begin by acknowledging again that there's no need to reply to every comment. But if it seems helpful to respond to a point that's already been litigated, a brief reply with a link to the earlier discussion is usually enough. Remember that repetition creates alignment, so it's fine to repeat yourself occasionally!
Swoop-by comments can still be useful for two reasons: they might point out something that's still unclear, and they can stand in for the point of view of a user who's seeing the design for the first time. Sure, you might still be frustrated, but this reframing at least makes them easier to value.
The third friction point is the personal stake we might have in the design, which can make us feel defensive when the review turns into a debate. Treating feedback as user research helps us create a healthy distance between our ego (because yes, even if we don't want to admit it, it's there) and the people giving us feedback. And looking at everything in aggregate helps us prioritize the work better.
And remember: you don't have to accept every piece of feedback, even though you do need to listen to stakeholders, project owners, and specific advice. You have to analyze it all and make a decision you can justify, but sometimes "no" is the right answer.
As the designer leading the project, you're in charge of that decision. Ultimately, everyone has their own area of expertise, and the designer has the most context and knowledge to make the best call. And by listening to the feedback you've received, you're making sure it's also the best-informed and most balanced decision.
Thanks to Mike Shelton and Brie Anne Demkiw for their initial review of this article.