Blog

  • Lord of the Rings: The Rings of Power Season 3 Confirms Major Timeline Change

    Lord of the Rings: The Rings of Power Season 3 Confirms Major Timeline Change

This Lord of the Rings post is spoiler-free for The Rings of Power. Lord of the Rings: The Rings of Power is officially returning for a third season on Prime Video, Amazon confirmed in an announcement. The renewal of the blockbuster epic fantasy series doesn’t come as a huge surprise, as season 3 has been […]

The post Lord of the Rings: The Rings of Power Season 3 Confirms Major Timeline Change appeared first on Den of Geek.

  • Captain America Brave New World Post-Credits Scene Just Set Up Avengers: Doomsday

    Captain America Brave New World Post-Credits Scene Just Set Up Avengers: Doomsday

This article contains spoilers for Captain America: Brave New World. Doomsday is coming. You know it. I know it. And today, thanks to the Leader, Captain America knows it, too. In the lone post-credit scene for Captain America: Brave New World, villain Samuel Sterns offers a warning to Sam Wilson, one that points to the […]

The post Captain America Brave New World Post-Credits Scene Just Set Up Avengers: Doomsday appeared first on Den of Geek.

    This article contains spoilers for Captain America: Brave New World.

Doomsday is coming. You know it. I know it. And today, thanks to the Leader, Captain America knows it, too.

In the lone post-credit scene for Captain America: Brave New World, villain Samuel Sterns offers a warning to Sam Wilson, one that points to the next Avengers movie and, possibly, the end of the MCU as we know it.

Captain America 4’s Post-Credit Scene Explained

Sam Wilson is no Steve Rogers, as the marketing for Captain America: Brave New World often reminds us. And that’s a good thing. Brave New World uses Sam’s history as a counselor to turn him into a Captain America who prefers to talk things out with villains over punching them in the face.

So it’s no surprise when Brave New World ends with Sam visiting Thaddeus Ross, no longer in Red Hulk form, imprisoned on the Raft. Nor is it a wonder when he also drops by the cell of Samuel Sterns, the movie’s big bad, a gamma-irradiated supergenius called the Leader.

Yet though he’s been defeated, Sterns cannot help but gloat, getting one up on Sam by showing off his knowledge. Sterns congratulates Sam and his fellow heroes on their efforts to save the world. But then he reveals the results of his equations. “Do you think this is the only world?” Sterns asks. “We’ll see what happens when you have to defend this place … from the others.”

On one hand, Sterns’s reveal is a bit of a letdown. We’ve long known that there are other universes. Pop culture is drowning in multiverses. Heck, this current era of the MCU is part of the Multiverse Saga, a story that includes Spider-Man: No Way Home, Doctor Strange in the Multiverse of Madness, and Deadpool & Wolverine. The Leader’s reveal seems obvious, and Sam seems slow on the uptake here, even if there’s no reason for most people on the prime reality Earth-616 to know about these other worlds.

On the other, Sterns’s warning isn’t just that there are other worlds. It’s that these other worlds are a threat, and that they have heroes who will fight against ours to protect their own. That makes a more specific reference to the 2015 Marvel Comics Secret Wars storyline, which will serve as the inspiration for the next two Avengers movies, Doomsday and Secret Wars.


Unveiling the Secret Wars

Secret Wars, written by Jonathan Hickman, really begins with his Fantastic Four run in 2009, builds through his work on Avengers and New Avengers in 2013, and wraps up with Secret Wars in 2015. Reed Richards describes an “Incursion” as an interplanetary crisis in which Earths from two different realities begin to crash into one another. Each Incursion lasts eight days and, if left unresolved, ends with the destruction of both Earths and their respective universes.

The New Avengers portion of the Secret Wars story deals with attempts by the Illuminati of Earth-616 (a secret group of notable figures that includes Richards, Iron Man, Namor, Black Panther, Doctor Strange, Black Bolt of the Inhumans, and Beast/Professor X of the X-Men) to save their world. Along the way, the group encounters heroes and villains from other realities who want to annihilate our planet to save their own. Although initially opposed to the idea, the desperate Illuminati eventually follow suit and attack other worlds to save Earth-616.

New Avengers is a bleak story, one filled with a creeping dread that’s more upsetting than anything the MCU has done before. But we’ve already heard references to Incursions in the MCU, in Doctor Strange in the Multiverse of Madness. Not only do the Illuminati of Earth-838 featured in that film refer to Incursions, but the post-credit scene finds Clea (Charlize Theron) arriving to recruit Strange for a battle against the Incursions.

Furthermore, the title of the next Avengers movie suggests that the MCU heroes will deal with the Incursions in a manner similar to their Marvel Comics counterparts. Most of the heroes in the final issues of Avengers and New Avengers either accept their inevitable end or die trying to prevent it. Only one hero and one villain see a solution. With the help of the all-powerful Molecule Man, Doctor Strange and Doctor Doom take the fight to the Beyonders, the god-like architects of the multiverse.

The trio battles the Beyonders and ultimately strips them of their incredible power. Strange hesitates to claim such awesome power, but Doom does not. He takes it, makes himself God, and uses it to save our universe.

The main Secret Wars series then follows Reed Richards and other heroes as they battle, under Doom’s reign as God Emperor of a newly remade Marvel Universe, to restore things to how they were.

    Preparing for Doomsday

Given the Leader’s warning, it sounds like Avengers: Doomsday will show Marvel’s heroes fighting with those of other realities, all trying to protect their Earths from the Incursions. And like in the comics, Doctor Doom (played by Robert Downey Jr.) will make an outrageous choice to save his universe while remaking it in his own image.

If that’s the case, Earth’s Mightiest Heroes will face their greatest battle yet in Avengers: Secret Wars. It’s a good thing the Leader gave Sam the warning now, so he can begin assembling his Avengers and maybe set things right.

    Captain America: Brave New World is now in theaters.


  • Beware the Cut ‘n’ Paste Persona

    Beware the Cut ‘n’ Paste Persona

    This Person Does Not Exist is a website that generates human faces with a machine learning algorithm. It takes real portraits and recombines them into fake human faces. We recently scrolled past a LinkedIn post stating that this website could be useful “if you are developing a persona and looking for a photo.” 

    We agree: the computer-generated faces could be a great match for personas—but not for the reason you might think. Ironically, the website highlights the core issue of this very common design method: the person(a) does not exist. Like the pictures, personas are artificially made. Information is taken out of natural context and recombined into an isolated snapshot that’s detached from reality. 

    But strangely enough, designers use personas to inspire their design for the real world. 

    Personas: A step back

    Most designers have created, used, or come across personas at least once in their career. In their article “Personas – A Simple Introduction,” the Interaction Design Foundation defines personas as “fictional characters, which you create based upon your research in order to represent the different user types that might use your service, product, site, or brand.” In their most complete expression, personas typically consist of a name, profile picture, quotes, demographics, goals, needs, behavior in relation to a certain service/product, emotions, and motivations (for example, see Creative Companion’s Persona Core Poster). The purpose of personas, as stated by design agency Designit, is “to make the research relatable, [and] easy to communicate, digest, reference, and apply to product and service development.”

    The decontextualization of personas

    Personas are popular because they make “dry” research data more relatable, more human. However, this method constrains the researcher’s data analysis in such a way that the investigated users are removed from their unique contexts. As a result, personas don’t portray key factors that make you understand their decision-making process or allow you to relate to users’ thoughts and behavior; they lack stories. You understand what the persona did, but you don’t have the background to understand why. You end up with representations of users that are actually less human.

    This “decontextualization” we see in personas happens in four ways, which we’ll explain below. 

    Personas assume people are static 

    Although many companies still try to box in their employees and customers with outdated personality tests (referring to you, Myers-Briggs), here’s a painfully obvious truth: people are not a fixed set of features. You act, think, and feel differently according to the situations you experience. You appear different to different people; you might act friendly to some, rough to others. And you change your mind all the time about decisions you’ve taken. 

    Modern psychologists agree that while people generally behave according to certain patterns, it’s actually a combination of background and environment that determines how people act and take decisions. The context—the environment, the influence of other people, your mood, the entire history that led up to a situation—determines the kind of person you are in each specific moment. 

    In their attempt to simplify reality, personas do not take this variability into account; they present a user as a fixed set of features. Like personality tests, personas snatch people away from real life. Even worse, people are reduced to a label and categorized as “that kind of person” with no means to exercise their innate flexibility. This practice reinforces stereotypes, lowers diversity, and doesn’t reflect reality. 

    Personas focus on individuals, not the environment

    In the real world, you’re designing for a context, not for an individual. Each person lives in a family, a community, an ecosystem, where there are environmental, political, and social factors you need to consider. A design is never meant for a single user. Rather, you design for one or more particular contexts in which many people might use that product. Personas, however, show the user alone rather than describe how the user relates to the environment. 

    Would you always make the same decision over and over again? Maybe you’re a committed vegan but still decide to buy some meat when your relatives are coming over. As they depend on different situations and variables, your decisions—and behavior, opinions, and statements—are not absolute but highly contextual. The persona that “represents” you wouldn’t take into account this dependency, because it doesn’t specify the premises of your decisions. It doesn’t provide a justification of why you act the way you do. Personas enact the well-known bias called fundamental attribution error: explaining others’ behavior too much by their personality and too little by the situation.

    As mentioned by the Interaction Design Foundation, personas are usually placed in a scenario that’s a “specific context with a problem they want to or have to solve”—does that mean context actually is considered? Unfortunately, what often happens is that you take a fictional character and based on that fiction determine how this character might deal with a certain situation. This is made worse by the fact that you haven’t even fully investigated and understood the current context of the people your persona seeks to represent; so how could you possibly understand how they would act in new situations? 

    Personas are meaningless averages

As mentioned in Shlomo Goltz’s introductory article on Smashing Magazine, “a persona is depicted as a specific person but is not a real individual; rather, it is synthesized from observations of many people.” A well-known critique of this aspect of personas is that the average person does not exist, as in the famous example of the U.S. Air Force designing cockpits around the averages of 140 physical dimensions of its pilots, only to find that not a single pilot actually fit that average seat. 

    The same limitation applies to mental aspects of people. Have you ever heard a famous person say, “They took what I said out of context! They used my words, but I didn’t mean it like that.” The celebrity’s statement was reported literally, but the reporter failed to explain the context around the statement and didn’t describe the non-verbal expressions. As a result, the intended meaning was lost. You do the same when you create personas: you collect somebody’s statement (or goal, or need, or emotion), of which the meaning can only be understood if you provide its own specific context, yet report it as an isolated finding. 

    But personas go a step further, extracting a decontextualized finding and joining it with another decontextualized finding from somebody else. The resulting set of findings often does not make sense: it’s unclear, or even contrasting, because it lacks the underlying reasons on why and how that finding has arisen. It lacks meaning. And the persona doesn’t give you the full background of the person(s) to uncover this meaning: you would need to dive into the raw data for each single persona item to find it. What, then, is the usefulness of the persona?

    The relatability of personas is deceiving

    To a certain extent, designers realize that a persona is a lifeless average. To overcome this, designers invent and add “relatable” details to personas to make them resemble real individuals. Nothing captures the absurdity of this better than a sentence by the Interaction Design Foundation: “Add a few fictional personal details to make the persona a realistic character.” In other words, you add non-realism in an attempt to create more realism. You deliberately obscure the fact that “John Doe” is an abstract representation of research findings; but wouldn’t it be much more responsible to emphasize that John is only an abstraction? If something is artificial, let’s present it as such.

    It’s the finishing touch of a persona’s decontextualization: after having assumed that people’s personalities are fixed, dismissed the importance of their environment, and hidden meaning by joining isolated, non-generalizable findings, designers invent new context to create (their own) meaning. In doing so, as with everything they create, they introduce a host of biases. As phrased by Designit, as designers we can “contextualize [the persona] based on our reality and experience. We create connections that are familiar to us.” This practice reinforces stereotypes, doesn’t reflect real-world diversity, and gets further away from people’s actual reality with every detail added. 

    To do good design research, we should report the reality “as-is” and make it relatable for our audience, so everyone can use their own empathy and develop their own interpretation and emotional response.

    Dynamic Selves: The alternative to personas

    If we shouldn’t use personas, what should we do instead? 

    Designit has proposed using Mindsets instead of personas. Each Mindset is a “spectrum of attitudes and emotional responses that different people have within the same context or life experience.” It challenges designers to not get fixated on a single user’s way of being. Unfortunately, while being a step in the right direction, this proposal doesn’t take into account that people are part of an environment that determines their personality, their behavior, and, yes, their mindset. Therefore, Mindsets are also not absolute but change in regard to the situation. The question remains, what determines a certain Mindset?

Another alternative comes from Margaret P., author of the article “Kill Your Personas,” who has argued for replacing personas with persona spectrums that consist of a range of user abilities. For example, a visual impairment could be permanent (blindness), temporary (recovery from eye surgery), or situational (screen glare). Persona spectrums are highly useful for more inclusive and context-based design, as they’re based on the understanding that the context is the pattern, not the personality. Their limitation, however, is that they take a very functional view of users and miss the relatability of a real person drawn from within the spectrum. 

    In developing an alternative to personas, we aim to transform the standard design process to be context-based. Contexts are generalizable and have patterns that we can identify, just like we tried to do previously with people. So how do we identify these patterns? How do we ensure truly context-based design? 

    Understand real individuals in multiple contexts

    Nothing is more relatable and inspiring than reality. Therefore, we have to understand real individuals in their multi-faceted contexts, and use this understanding to fuel our design. We refer to this approach as Dynamic Selves.

    Let’s take a look at what the approach looks like, based on an example of how one of us applied it in a recent project that researched habits of Italians around energy consumption. We drafted a design research plan aimed at investigating people’s attitudes toward energy consumption and sustainable behavior, with a focus on smart thermostats. 

    1. Choose the right sample

    When we argue against personas, we’re often challenged with quotes such as “Where are you going to find a single person that encapsulates all the information from one of these advanced personas[?]” The answer is simple: you don’t have to. You don’t need to have information about many people for your insights to be deep and meaningful. 

    In qualitative research, validity does not derive from quantity but from accurate sampling. You select the people that best represent the “population” you’re designing for. If this sample is chosen well, and you have understood the sampled people in sufficient depth, you’re able to infer how the rest of the population thinks and behaves. There’s no need to study seven Susans and five Yuriys; one of each will do. 

    Similarly, you don’t need to understand Susan in fifteen different contexts. Once you’ve seen her in a couple of diverse situations, you’ve understood the scheme of Susan’s response to different contexts. Not Susan as an atomic being but Susan in relation to the surrounding environment: how she might act, feel, and think in different situations. 

    Given that each person is representative of a part of the total population you’re researching, it becomes clear why each should be represented as an individual, as each already is an abstraction of a larger group of individuals in similar contexts. You don’t want abstractions of abstractions! These selected people need to be understood and shown in their full expression, remaining in their microcosmos—and if you want to identify patterns you can focus on identifying patterns in contexts.

    Yet the question remains: how do you select a representative sample? First of all, you have to consider what’s the target audience of the product or service you are designing: it might be useful to look at the company’s goals and strategy, the current customer base, and/or a possible future target audience. 

    In our example project, we were designing an application for those who own a smart thermostat. In the future, everyone could have a smart thermostat in their house. Right now, though, only early adopters own one. To build a significant sample, we needed to understand the reason why these early adopters became such. We therefore recruited by asking people why they had a smart thermostat and how they got it. There were those who had chosen to buy it, those who had been influenced by others to buy it, and those who had found it in their house. So we selected representatives of these three situations, from different age groups and geographical locations, with an equal balance of tech savvy and non-tech savvy participants. 
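To make the balance concrete, a recruitment grid like the one above can be sketched as a few lines of code. This is only an illustrative sketch: the age bands and situation labels below are hypothetical stand-ins, not the project’s actual screener (geography could be added as a fourth axis in the same way).

```python
from itertools import product

# Hypothetical screening dimensions for the thermostat study described above;
# the exact bands and labels are illustrative, not the project's real screener.
acquisition = ["chose to buy it", "influenced to buy it", "found it in the house"]
age_group = ["18-34", "35-54", "55+"]
tech_savvy = [True, False]

# One candidate slot per combination keeps the three acquisition situations
# balanced across age groups and tech familiarity.
slots = [
    {"acquisition": a, "age_group": g, "tech_savvy": t}
    for a, g, t in product(acquisition, age_group, tech_savvy)
]

print(len(slots))  # 3 * 3 * 2 = 18 slots to fill
```

In practice you would rarely fill every cell; the grid simply makes visible which combinations of situations your sample covers and which it leaves out.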

    2. Conduct your research

    After having chosen and recruited your sample, conduct your research using ethnographic methodologies. This will make your qualitative data rich with anecdotes and examples. In our example project, given COVID-19 restrictions, we converted an in-house ethnographic research effort into remote family interviews, conducted from home and accompanied by diary studies.

    To gain an in-depth understanding of attitudes and decision-making trade-offs, the research focus was not limited to the interviewee alone but deliberately included the whole family. Each interviewee would tell a story that would then become much more lively and precise with the corrections or additional details coming from wives, husbands, children, or sometimes even pets. We also focused on the relationships with other meaningful people (such as colleagues or distant family) and all the behaviors that resulted from those relationships. This wide research focus allowed us to shape a vivid mental image of dynamic situations with multiple actors. 

    It’s essential that the scope of the research remains broad enough to be able to include all possible actors. Therefore, it normally works best to define broad research areas with macro questions. Interviews are best set up in a semi-structured way, where follow-up questions will dive into topics mentioned spontaneously by the interviewee. This open-minded “plan to be surprised” will yield the most insightful findings. When we asked one of our participants how his family regulated the house temperature, he replied, “My wife has not installed the thermostat’s app—she uses WhatsApp instead. If she wants to turn on the heater and she is not home, she will text me. I am her thermostat.”

    3. Analysis: Create the Dynamic Selves

    During the research analysis, you start representing each individual with multiple Dynamic Selves, each “Self” representing one of the contexts you have investigated. The core of each Dynamic Self is a quote, which comes supported by a photo and a few relevant demographics that illustrate the wider context. The research findings themselves will show which demographics are relevant to show. In our case, as our research focused on families and their lifestyle to understand their needs for thermal regulation, the important demographics were family type, number and nature of houses owned, economic status, and technological maturity. (We also included the individual’s name and age, but they’re optional—we included them to ease the stakeholders’ transition from personas and be able to connect multiple actions and contexts to the same person).

    To capture exact quotes, interviews need to be video-recorded and notes need to be taken verbatim as much as possible. This is essential to the truthfulness of the several Selves of each participant. In the case of real-life ethnographic research, photos of the context and anonymized actors are essential to build realistic Selves. Ideally, these photos should come directly from field research, but an evocative and representative image will work, too, as long as it’s realistic and depicts meaningful actions that you associate with your participants. For example, one of our interviewees told us about his mountain home where he used to spend every weekend with his family. Therefore, we portrayed him hiking with his little daughter. 

    At the end of the research analysis, we displayed all of the Selves’ “cards” on a single canvas, categorized by activities. Each card displayed a situation, represented by a quote and a unique photo. All participants had multiple cards about themselves.
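For teams that keep their canvas in a digital tool, the Self cards and their grouping by activity can be modeled with a minimal data structure. The sketch below is only illustrative: the field names, photo paths, and card texts are hypothetical stand-ins, loosely based on the examples above.

```python
from dataclasses import dataclass
from collections import defaultdict

# A minimal "Self card": one context of one real participant, centered on a
# quote and supported by a photo and a few relevant demographics.
@dataclass
class SelfCard:
    participant: str
    activity: str        # context category used to organize the canvas
    quote: str
    photo: str           # path to a field photo or a representative image
    demographics: dict

cards = [
    SelfCard("Davide", "heating the house",
             "If my wife is not home, she texts me. I am her thermostat.",
             "photos/davide_kitchen.jpg", {"family": "couple", "houses": 1}),
    SelfCard("Davide", "weekend trips",
             "We spend every weekend at the mountain home.",
             "photos/davide_hiking.jpg", {"family": "couple", "houses": 2}),
]

# Lay out the canvas: group every participant's cards by activity, so each
# column shows how different people behave within the same context.
canvas = defaultdict(list)
for card in cards:
    canvas[card.activity].append(card)

print(sorted(canvas))  # ['heating the house', 'weekend trips']
```

The key property personas lack is visible here: the same participant appears on several cards, one per context, so patterns are sought across contexts rather than averaged across people.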

    4. Identify design opportunities

    Once you have collected all main quotes from the interview transcripts and diaries, and laid them all down as Self cards, you will see patterns emerge. These patterns will highlight the opportunity areas for new product creation, new functionalities, and new services—for new design. 

    In our example project, there was a particularly interesting insight around the concept of humidity. We realized that people don’t know what humidity is and why it is important to monitor it for health: an environment that’s too dry or too wet can cause respiratory problems or worsen existing ones. This highlighted a big opportunity for our client to educate users on this concept and become a health advisor.

    Benefits of Dynamic Selves

When you use the Dynamic Selves approach in your research, you start to notice unique social relations, the peculiar situations real people face and the actions that follow, and the changing environments that surround them. In our thermostat project, we came to know one of the participants, Davide, as a boyfriend, dog-lover, and tech enthusiast. 

    Davide is an individual we might have once reduced to a persona called “tech enthusiast.” But we can have tech enthusiasts who have families or are single, who are rich or poor. Their motivations and priorities when deciding to purchase a new thermostat can be opposite according to these different frames. 

    Once you have understood Davide in multiple situations, and for each situation have understood in sufficient depth the underlying reasons for his behavior, you’re able to generalize how he would act in another situation. You can use your understanding of him to infer what he would think and do in the contexts (or scenarios) that you design for.

    The Dynamic Selves approach aims to resolve the conflicting dual purpose of personas—to summarize and empathize at the same time—by separating your research summary from the people you’re seeking to empathize with. This is important because our empathy for people is affected by scale: the bigger the group, the harder it is to feel empathy for others. We feel the strongest empathy for individuals we can personally relate to.

    If you take a real person as inspiration for your design, you no longer need to create an artificial character. No more inventing details to make the character more “realistic,” no more unnecessary additional bias. It’s simply how this person is in real life. In fact, in our experience, personas quickly become nothing more than a name in our priority guides and prototype screens, as we all know that these characters don’t really exist. 

    Another powerful benefit of the Dynamic Selves approach is that it raises the stakes of your work: if you mess up your design, someone real, a person you and the team know and have met, is going to feel the consequences. It might stop you from taking shortcuts and will remind you to conduct daily checks on your designs.

    And finally, real people in their specific contexts are a better basis for anecdotal storytelling and therefore are more effective in persuasion. Documentation of real research is essential in achieving this result. It adds weight and urgency behind your design arguments: “When I met Alessandra, the conditions of her workplace struck me. Noise, bad ergonomics, lack of light, you name it. If we go for this functionality, I’m afraid we’re going to add complexity to her life.”

    Conclusion

    Designit mentioned in their article on Mindsets that “design thinking tools offer a shortcut to deal with reality’s complexities, but this process of simplification can sometimes flatten out people’s lives into a few general characteristics.” Unfortunately, personas have been culprits in a crime of oversimplification. They are unsuited to represent the complex nature of our users’ decision-making processes and don’t account for the fact that humans are immersed in contexts. 

    Design needs simplification but not generalization. You have to look at the research elements that stand out: the sentences that captured your attention, the images that struck you, the sounds that linger. Portray those, use them to describe the person in their multiple contexts. Both insights and people come with a context; they cannot be cut from that context because it would remove meaning. 

    It’s high time for design to move away from fiction, and embrace reality—in its messy, surprising, and unquantifiable beauty—as our guide and inspiration.

  • That’s Not My Burnout

    That’s Not My Burnout

    Are you like me, reading about people fading away as they burn out, and feeling unable to relate? Do you feel like your feelings are invisible to the world because you’re experiencing burnout differently? When burnout starts to push down on us, our core comes through more. Beautiful, peaceful souls get quieter and fade into that distant and distracted burnout we’ve all read about. But some of us, those with fires always burning on the edges of our core, get hotter. In my heart I am fire. When I face burnout I double down, triple down, burning hotter and hotter to try to best the challenge. I don’t fade—I am engulfed in a zealous burnout.

    So what on earth is a zealous burnout?

    Imagine a woman determined to do it all. She has two amazing children whom she, along with her husband who is also working remotely, is homeschooling during a pandemic. She has a demanding client load at work—all of whom she loves. She gets up early to get some movement in (or often catch up on work), does dinner prep as the kids are eating breakfast, and gets to work while positioning herself near “fourth grade” to listen in as she juggles clients, tasks, and budgets. Sound like a lot? Even with a supportive team both at home and at work, it is. 

    Sounds like this woman has too much on her plate and needs self-care. But no, she doesn’t have time for that. In fact, she starts to feel like she’s dropping balls. Not accomplishing enough. There’s not enough of her to be here and there; she is trying to divide her mind in two all the time, all day, every day. She starts to doubt herself. And as those feelings creep in more and more, her internal narrative becomes more and more critical.

    Suddenly she KNOWS what she needs to do! She should DO MORE. 

    This is a hard and dangerous cycle. Know why? Because once she doesn’t finish that new goal, that narrative will get worse. Suddenly she’s failing. She isn’t doing enough. SHE is not enough. She might fail, she might fail her family…so she’ll find more she should do. She doesn’t sleep as much, move as much, all in the efforts to do more. Caught in this cycle of trying to prove herself to herself, never reaching any goal. Never feeling “enough.” 

    So, yeah, that’s what zealous burnout looks like for me. It doesn’t happen overnight in some grand gesture but instead slowly builds over weeks and months. My burnout process looks like speeding up, not fading out. I speed up and up and up…and then I just stop.

    I am the one who could

    It’s funny the things that shape us. Through the lens of childhood, I viewed the fears, struggles, and sacrifices of someone who had to make it all work without having enough. I was lucky that my mother was so resourceful and my father supportive; I never went without and even got an extra here or there. 

    Growing up, I did not feel shame when my mother paid with food stamps; in fact, I’d have likely taken on any debate on the topic, verbally eviscerating anyone who dared to criticize the disabled woman trying to make sure all our needs were met with so little. As a child, I watched the way the fear of not making those ends meet impacted people I love. As the non-disabled person in my home, I would take on many of the physical tasks because I was “the one who could” make our lives a little easier. I learned early to associate fears or uncertainty with putting more of myself into it—I am the one who can. I learned early that when something frightens me, I can double down and work harder to make it better. I can own the challenge. When people have seen this in me as an adult, I’ve been told I seem fearless, but make no mistake, I’m not. If I seem fearless, it’s because this behavior was forged from other people’s fears. 

    And here I am, more than 30 years later still feeling the urge to mindlessly push myself forward when faced with overwhelming tasks ahead of me, assuming that I am the one who can and therefore should. I find myself driven to prove that I can make things happen if I work longer hours, take on more responsibility, and do more.

    I do not see people who struggle financially as failures, because I have seen how strong that tide can be—it pulls you along the way. I truly get that I have been privileged to be able to avoid many of the challenges that were present in my youth. That said, I am still “the one who can” who feels she should, so if I were faced with not having enough to make ends meet for my own family, I would see myself as having failed. Though I am supported and educated, most of this is due to good fortune. I will, however, allow myself the arrogance of saying I have been careful with my choices to have encouraged that luck. My identity stems from the idea that I am “the one who can,” so I therefore feel obligated to do the most. I can choose to stop, and with some quite literal cold water splashed in my face, I’ve made that choice before. But choosing to stop is not my go-to; I move forward, driven by a fear that is so a part of me that I barely notice it’s there until I’m feeling utterly worn away.

    So why all the history? You see, burnout is a fickle thing. I have heard and read a lot about burnout over the years. Burnout is real. Especially now, with COVID, many of us are balancing more than we ever have before—all at once! It’s hard, and the procrastinating, the avoidance, the shutting down impacts so many amazing professionals. There are important articles that relate to what I imagine must be the majority of people out there, but not me. That’s not what my burnout looks like.

    The dangerous invisibility of zealous burnout

    A lot of work environments see the extra hours, extra effort, and overall focused commitment as an asset (and sometimes that’s all it is). They see someone trying to rise to challenges, not someone stuck in their fear. Many well-meaning organizations have safeguards in place to protect their teams from burnout. But in cases like this, those alarms are not always tripped, and then when the inevitable stop comes, some members of the organization feel surprised and disappointed. And sometimes maybe even betrayed. 

    Parents—more so mothers, statistically speaking—are praised as being so on top of it all when they can work, be involved in the after-school activities, practice self-care in the form of diet and exercise, and still meet friends for coffee or wine. During COVID many of us have binged countless streaming episodes showing how it’s so hard for the female protagonist, but she is strong and funny and can do it. It’s a “very special episode” when she breaks down, cries in the bathroom, woefully admits she needs help, and just stops for a bit. Truth is, countless people are hiding their tears or are doom-scrolling to escape. We know that the media is a lie to amuse us, but often the perception that it’s what we should strive for has penetrated much of society.

    Women and burnout

    I love men. And though I don’t love every man (heads up, I don’t love every woman or nonbinary person either), I think there is a beautiful spectrum of individuals who represent that particular binary gender. 

    That said, women are still more often at risk of burnout than their male counterparts, especially in these COVID stressed times. Mothers in the workplace feel the pressure to do all the “mom” things while giving 110%. Mothers not in the workplace feel they need to do more to “justify” their lack of traditional employment. Women who are not mothers often feel the need to do even more because they don’t have that extra pressure at home. It’s vicious and systemic and so a part of our culture that we’re often not even aware of the enormity of the pressures we put on ourselves and each other. 

    And there are prices beyond happiness too. Harvard Health Publishing released a study a decade ago that “uncovered strong links between women’s job stress and cardiovascular disease.” The CDC noted, “Heart disease is the leading cause of death for women in the United States, killing 299,578 women in 2017—or about 1 in every 5 female deaths.” 

    This relationship between work stress and health, from what I have read, is more dangerous for women than it is for their non-female counterparts.

    But what if your burnout isn’t like that either?

    That might not be you either. After all, each of us is so different, and how we respond to stressors is too. It’s part of what makes us human. Don’t stress about what burnout looks like; just learn to recognize it in yourself. Here are a few questions I sometimes ask friends if I am concerned about them.

    Are you happy? This simple question should be the first thing you ask yourself. Chances are, even if you’re burning out doing all the things you love, as you approach burnout you’ll just stop taking as much joy from it all.

    Do you feel empowered to say no? I have observed in myself and others that when someone is burning out, they no longer feel they can say no to things. Even those who don’t “speed up” feel pressure to say yes to not disappoint the people around them.

    What are three things you’ve done for yourself? Another observation is that we all tend to stop doing things for ourselves. Anything from skipping showers and eating poorly to avoiding talking to friends. These can be red flags.

    Are you making excuses? Many of us try to disregard feelings of burnout. Over and over I have heard, “It’s just crunch time,” “As soon as I do this one thing, it will all be better,” and “Well I should be able to handle this, so I’ll figure it out.” And it might really be crunch time, a single goal, and/or a skill set you need to learn. That happens—life happens. BUT if this doesn’t stop, be honest with yourself. If you’ve worked more 50-hour weeks since January than not, maybe it’s not crunch time—maybe it’s a bad situation that you’re burning out from.

    Do you have a plan to stop feeling this way? If something is truly temporary and you do need to just push through, then it should have an exit route with a defined end.

    Take the time to listen to yourself as you would a friend. Be honest, allow yourself to be uncomfortable, and break the thought cycles that prevent you from healing. 

    So now what?

    What I just described is a different path to burnout, but it’s still burnout. There are well-established approaches to working through burnout:

    • Get enough sleep.
    • Eat healthy.
    • Work out.
    • Get outside.
    • Take a break.
    • Overall, practice self-care.

    Those are hard for me because they feel like more tasks. If I’m in the burnout cycle, doing any of the above for me feels like a waste. The narrative is that if I’m already failing, why would I take care of myself when I’m dropping all those other balls? People need me, right? 

    If you’re deep in the cycle, your inner voice might be pretty awful by now. If you need to, tell yourself you need to take care of the person your people depend on. If your roles are pushing you toward burnout, use them to help make healing easier by justifying the time spent working on you. 

    To help remind myself of the flight attendant’s announcement about putting on your own oxygen mask first, I have come up with a few things that I do when I start feeling myself going into a zealous burnout.

    Cook an elaborate meal for someone! 

    OK, I am a “food-focused” individual so cooking for someone is always my go-to. There are countless tales in my home of someone walking into the kitchen and turning right around and walking out when they noticed I was “chopping angrily.” But it’s more than that, and you should give it a try. Seriously. It’s the perfect go-to if you don’t feel worthy of taking time for yourself—do it for someone else. Most of us work in a digital world, so cooking can fill all of your senses and force you to be in the moment with all the ways you perceive the world. It can break you out of your head and help you gain a better perspective. In my house, I’ve been known to pick a place on the map and cook food that comes from wherever that is (thank you, Pinterest). I love cooking Indian food, as the smells are warm, the bread needs just enough kneading to keep my hands busy, and the process takes real attention for me because it’s not what I was brought up making. And in the end, we all win!

    Vent like a foul-mouthed fool

    Be careful with this one! 

    I have been making an effort to practice more gratitude over the past few years, and I recognize the true benefits of that. That said, sometimes you just gotta let it all out—even the ugly. Hell, I’m a big fan of not sugarcoating our lives, and that sometimes means that to get past the big pile of poop, you’re gonna wanna complain about it a bit. 

    When that is what’s needed, turn to a trusted friend and allow yourself some pure verbal diarrhea, saying all the things that are bothering you. You need to trust this friend not to judge, to see your pain, and, most importantly, to tell you to remove your cranium from your own rectal cavity. Seriously, it’s about getting a reality check here! One of the things I admire the most about my husband (though often after the fact) is his ability to break things down to their simplest. “We’re spending our lives together, of course you’re going to disappoint me from time to time, so get over it” has been his way of speaking his dedication, love, and acceptance of me—and I could not be more grateful. It also, of course, has meant that I needed to remove my head from that rectal cavity. So, again, usually those moments are appreciated in hindsight.

    Pick up a book! 

    There are many books out there that aren’t so much self-help as they are people just like you sharing their stories and how they’ve come to find greater balance. Maybe you’ll find something that speaks to you. Titles that have stood out to me include:

    • Thrive by Arianna Huffington
    • Tools of Titans by Tim Ferriss
    • Girl, Stop Apologizing by Rachel Hollis
    • Dare to Lead by Brené Brown

    Or, another tactic I love to employ is to read or listen to a book that has NOTHING to do with my work-life balance. I’ve read the following books and found they helped balance me out because my mind was pondering their interesting topics instead of running in circles:

    • The Drunken Botanist by Amy Stewart
    • Superlife by Darin Olien
    • A Brief History of Everyone Who Ever Lived by Adam Rutherford
    • Gaia’s Garden by Toby Hemenway 

    If you’re not into reading, pick up a topic on YouTube or choose a podcast to subscribe to. I’ve watched countless permaculture and gardening topics in addition to how to raise chickens and ducks. For the record, I do not have a particularly large food garden, nor do I own livestock of any kind…yet. I just find the topic interesting, and it has nothing to do with any aspect of my life that needs anything from me.

    Forgive yourself 

    You are never going to be perfect—hell, it would be boring if you were. It’s OK to be broken and flawed. It’s human to be tired and sad and worried. It’s OK to not do it all. It’s scary to be imperfect, but you cannot be brave if nothing is scary.

    This last one is the most important: allow yourself permission to NOT do it all. You never promised to be everything to everyone at all times. We are more powerful than the fears that drive us. 

    This is hard. It is hard for me. It’s what’s driven me to write this—that it’s OK to stop. It’s OK to end an unhealthy habit, even one that might benefit those around you. You can still be successful in life.

    I recently read that we are all writing our eulogy in how we live. Knowing that your professional accomplishments won’t be mentioned in that speech, what will yours say? What do you want it to say? 

    Look, I get that none of these ideas will “fix it,” and that’s not their purpose. None of us are in control of our surroundings, only how we respond to them. These suggestions are to help stop the spiral effect so that you are empowered to address the underlying issues and choose your response. They are things that work for me most of the time. Maybe they’ll work for you.

    Does this sound familiar? 

    If this sounds familiar, it’s not just you. Don’t let your negative self-talk tell you that you “even burn out wrong.” It’s not wrong. Even if rooted in fear like my own drivers, I believe that this need to do more comes from a place of love, determination, motivation, and other wonderful attributes that make you the amazing person you are. We’re going to be OK, ya know. The lives that unfold before us might never look like that story in our head—that idea of “perfect” or “done” we’re looking for, but that’s OK. Really, when we stop and look around, usually the only eyes that judge us are in the mirror. 

    Do you remember that Winnie the Pooh sketch that had Pooh eat so much at Rabbit’s house that his buttocks couldn’t fit through the door? Well, I already associate a lot with Rabbit, so it came as no surprise when he abruptly declared that this was unacceptable. But do you recall what happened next? He put a shelf across poor Pooh’s ankles and decorations on his back, and made the best of the big butt in his kitchen. 

    At the end of the day we are resourceful and know that we are able to push ourselves if we need to—even when we are tired to our core or have a big butt of fluff ‘n’ stuff in our room. None of us has to be afraid, as we can manage any obstacle put in front of us. And maybe that means we will need to redefine success to allow space for being uncomfortably human, but that doesn’t really sound so bad either. 

    So, wherever you are right now, please breathe. Do what you need to do to get out of your head. Forgive and take care.

  • Asynchronous Design Critique: Giving Feedback

    Asynchronous Design Critique: Giving Feedback

    One of the most effective soft skills we have at our disposal is the ability to work together to improve our designs while growing our own skills and perspectives, in whatever form that takes and whatever it may be called.

    Feedback is also one of the most underestimated tools. By assuming that we’re already good at it, we settle, forgetting that it’s a skill that can be trained, grown, and improved. Poor feedback can create conflict in projects and lower morale, and, over the long term, undermine trust and teamwork. Quality feedback can have a transformative effect.

    Practicing our skills is certainly a good way to improve, but the learning gets even faster when it’s paired with a solid foundation that channels and focuses the practice. What are some fundamental components of giving effective feedback? And how can feedback be adapted for remote and distributed workplaces?

    On the web, we can find a long history of asynchronous feedback: from the early days of open source, code was shared and discussed on mailing lists. Today, developers and scrum masters discuss pull requests, designers comment in their favorite design tools, and so on.

    Design critique is the term frequently used for the type of feedback that’s given to collaboratively improve our work. It shares many of the principles of feedback in general, but it also has some differences.

    The content

    The content of the feedback is the foundation of every effective critique, so we need to start there. There are many formulas that you can use to structure your content. This one from Lara Hogan is the one I personally like best because it’s simple and actionable.

    This equation, typically used for feedback directed at people, also fits really well in a design critique because it ultimately answers the main questions we deal with: What? Where? Why? How? Imagine that you’re giving feedback on some design work that spans several screens, like an onboarding flow: there are some screens, a flow diagram, and an outline of the decisions made. You notice something that needs to be improved. If you keep the three elements of the equation in mind, you’ll have a mental model that enables you to be more precise and effective.

    Here is a comment that could be given as part of some feedback, and at first glance it might look reasonable: it seems to superficially fulfill the elements of the equation. Does it, though?

    The buttons’ styles and hierarchy seem off. Can you change them?

    Observation in design feedback doesn’t just mean pointing out which area of the interface your feedback touches; it also means offering a perspective that’s as specific as possible. Are you speaking from the user’s viewpoint? From your expert perspective? From a business perspective? From the project manager’s perspective? From a first-time user’s perspective?

    When I see these two buttons, I expect one to go forward and the other to go back.

    Impact is about the why. Just pointing out a UI element might sometimes be enough if the issue is obvious, but more often than not, you should add an explanation of what you’re pointing out.

    When I see these two buttons, I expect one to go forward and the other to go back. But this is the only screen where this happens, as before we just used a single button and an “×” to close. This seems to be breaking the consistency in the flow.

    The question approach is intended to provide open guidance by stimulating the critical thinking of the designer receiving the feedback. Notably, Lara’s equation includes a second approach: request, which instead provides directions toward a particular solution. While that’s a viable option for feedback in general, for design critiques, in my experience, defaulting to the question approach usually reaches the best solutions, because designers are generally more comfortable being given an open space to explore.

    The difference between the two approaches can be demonstrated with the following examples. First, the question approach:

    When I see these two buttons, I expect one to go forward and the other to go back. But this is the only screen where this happens, as before we just used a single button and an “×” to close. This seems to be breaking the consistency in the flow. Would it make sense to unify them?

    Or, for the request approach:

    When I see these two buttons, I expect one to go forward and the other to go back. But this is the only screen where this happens, as before we just used a single button and an “×” to close. This seems to be breaking the consistency in the flow. Let’s make sure that all screens have the same pair of forward and back buttons.

    In some situations, it might be helpful to include an additional reason why you think the suggestion is better at this point.

    When I see these two buttons, I expect one to go forward and the other to go back. But this is the only screen where this happens, as before we just used a single button and an “×” to close. This seems to be breaking the consistency in the flow. Let’s make sure that all screens have the same pair of forward and back buttons so that users don’t get confused.

    Choosing between the request and question approaches can occasionally be a matter of personal preference. A while back, when I was putting a lot of effort into improving my feedback, I did rounds of anonymous feedback and reviewed my comments with other people. After a year of this work, the responses were positive: my feedback came across as effective and grounded. Then I switched teams. Surprise, surprise: my next round of reviews from one specific person wasn’t very positive. The reason is that I had deliberately avoided being prescriptive in my advice, because the people I had previously been working with preferred the open-ended question format over the request style of suggestions. A member of this new team, however, preferred specific guidance. So I modified my feedback to include requests.

    One comment that I heard come up a few times is that this kind of feedback is quite long, and it doesn’t seem very efficient. Yes, but also no. Let’s look at both sides.

    No, this style of feedback is actually efficient, because the length here is a byproduct of clarity, and spending time giving this kind of feedback can provide exactly enough information for a good fix. Additionally, if we zoom out, it may lessen misunderstandings and back-and-forth conversations in the future, increasing the overall effectiveness and efficiency of collaboration beyond the single comment. Consider the shorter version “Let’s make sure that all screens have the same pair of forward and back buttons” instead. The designer receiving this feedback wouldn’t have much to go by, so they might just apply the change. The interface might change in later iterations, or new features might be introduced, and perhaps the change won’t make sense anymore. Without the explanation, the designer might assume that the change was about consistency, but what if it wasn’t? There could now be an underlying concern that changing the buttons would be perceived as a regression.

    Yes, this type of feedback is not always efficient, because not every comment needs to be this thorough: sometimes the necessary change is small or obvious, and other times the team has extensive shared knowledge that allows some of the whys to be implied.

    Therefore, the above equation serves as a mnemonic to reflect and enhance the practice rather than a strict template for feedback. Even after years of active work on my critiques, I still from time to time go back to this formula and reflect on whether what I just wrote is effective.

    The tone

    The foundation of feedback is well-rounded content, but that alone isn’t enough. The soft skills of the person providing the critique can multiply the likelihood that the feedback will be well received and understood. Tone alone can determine whether content is rejected or welcomed: feedback delivered with care and respect is far more likely to lead to sustained change.

    Tone is crucial to work on because our goal is to be understood and create a positive working environment. Over the years, I’ve tried to summarize the required soft skills in a formula that mirrors the one for content: the receptivity equation.

    Respectful feedback comes across as logical, solid, and constructive. It’s the kind of feedback that is viewed as useful and fair, regardless of whether it’s positive or negative.

    Timing refers to when the feedback happens. Even to-the-point feedback has little chance of a favorable reception when given at the wrong time. Questioning the entire high-level information architecture of a new feature when it’s about to ship might still be relevant if it raises a significant blocker that no one saw, but otherwise those concerns will most likely have to wait for a later revision. So, in general, attune your feedback to the stage of the project. Early iteration? Later iteration? Polishing work in progress? Each of these needs a different kind of feedback. The right timing increases the likelihood that your feedback will be appreciated.

    Attitude is about intent; in the context of person-to-person feedback, it’s akin to what’s called radical candor. Before writing, it’s important to make sure that what we’re about to say will actually benefit the person and improve the overall project. When we reflect, we might not want to admit that we don’t really appreciate that person. Hopefully that’s not the case, but it can happen, and that’s okay; acknowledging it can help us compensate. How would I write this if I really cared about them? How can I avoid being passive-aggressive? How can I be more constructive?

    Form is important, especially in multicultural and cross-cultural workplaces: excellent content, perfect timing, and the right attitude might not be effective if the writing style leads to miscommunication. There could be many reasons for this: some words might trigger particular reactions, some non-native speakers might not understand all the nuances of some sentences, and sometimes our brains simply work differently and we perceive the world differently. Neurodiversity must be taken into account. Whatever the reason, it’s important to review not just what we write but how we write it.

    A while back, I asked for some feedback on how I gave feedback. I received some sound advice, but I also got a surprising comment: they pointed out that when I wrote “Oh, […]”, it made them feel stupid. That wasn’t my intention at all! Then I realized that I had been giving them feedback for months and had been making them feel that way the whole time. I was horrified… but also thankful. I immediately added “oh” to my list of automatically replaced words (your choice of aText, TextExpander, or similar tools), so that it would be instantly deleted whenever I typed it.
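    If you prefer a scriptable safety net over a text expander, the same idea can be sketched in a few lines of Python. This is purely illustrative: the banned-word list, the `clean_feedback` name, and the regex rule are mine, not part of any real tool or the workflow described above.

    ```python
    import re

    # Illustrative list of words you've decided to ban from your own
    # feedback drafts; adjust to whatever habits you want to break.
    BANNED = {"oh", "obviously", "just"}

    def clean_feedback(draft: str) -> str:
        """Strip banned filler words (case-insensitive, whole words only),
        along with a trailing comma and whitespace, from a drafted comment."""
        pattern = re.compile(r"\b(" + "|".join(BANNED) + r")\b,?\s*", re.IGNORECASE)
        return pattern.sub("", draft).strip()

    print(clean_feedback("Oh, just move the button to the left."))
    # → move the button to the left.
    ```

    A rule this blunt will also strip words used meaningfully, so in practice it works better as a draft checker that flags matches than as an automatic rewriter.
    
    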

    People tend to beat around the bush, which is something to emphasize because it happens quite frequently, especially in teams with strong group spirit. It’s important to remember here that a positive attitude doesn’t mean going light on the feedback—it just means that even when you provide hard, difficult, or challenging feedback, you do so in a way that’s respectful and constructive. The best thing you can do for someone is to encourage their growth.

    Feedback given in written form can be reviewed by someone else who isn't directly involved, which can help reduce or eliminate any bias that might be there. The best, most insightful moments for me have happened when I've drafted a comment and asked someone I highly trusted, “How does this sound?”, “How can I say it better?”, and even “How would you have written it?”, and I've learned a lot by seeing the two versions side by side.

    The format

    Asynchronous feedback also has a significant inherent benefit: we can take more time to make sure our suggestions fulfill two main objectives, that they're clearly communicated and that they're actionable.

    Let's imagine that someone shared a design iteration for a project and that you're reviewing and commenting on it. There are many ways to do this, and context of course matters, but let's think through some factors that might be helpful to consider.

    In terms of clarity, start by grounding the critique you're about to give by providing context. That means describing where you're coming from: do you have a thorough understanding of the project, or is this your first time seeing it? Do you have only a high-level view, or do you know the ins and outs? Are there regressions? Which user's point of view are you taking when offering feedback? Is the design at a point where it would be acceptable to ship, or are there important issues that need to be addressed first?

    Providing context is helpful even if you’re sharing feedback within a team that already has some information on the project. And context is a must when providing cross-team feedback. If I were to review a design that might be directly connected to my work, and if I had no idea how the project might have come to that conclusion, I would say so, highlighting my opinion as external.

    We often focus on the negatives, trying to outline all the things that could be done better. That's obviously important, but it's just as crucial to concentrate on the positive aspects, especially when you see improvement from the previous iteration. This may seem superfluous, but remember that every design problem has a number of possible solutions. So pointing out that the chosen solution is good, and explaining why it's good, has two major benefits: it confirms that the approach taken was solid, and it helps ground your negative feedback. Sharing positive feedback can also help prevent regressions on things that are going well, because those things will have been identified as important to keep. As an added bonus, positive feedback can help stave off impostor syndrome.

    There’s one powerful approach that combines both context and a focus on the positives: frame how the design is better than the status quo ( compared to a previous iteration, competitors, or benchmarks ) and why, and then on that foundation, you can add what could be improved. There is a significant difference between a critique of a design that is already in good shape and one that isn’t quite there yet.

    Depersonalizing the feedback is another way to improve it: comments should always be about the work, never about the person who created it. It's “This button isn't well aligned” versus “You haven't aligned this button well.” This can be fixed very quickly by reviewing your writing just before sending.

    One of the best ways to assist the designer who is reading through your feedback in terms of actionability is to divide it into bullet points or paragraphs, which are simpler to review and analyze one by one. For longer pieces of feedback, you might also consider splitting it into sections or even across multiple comments. Of course, it’s also possible to include screenshots or indicators for the specific area of the interface you’re referring to.

    One method that I've personally used to enhance bullet points in some situations is emoji prefixes. A red square 🟥 means something I consider blocking; a yellow diamond 🔶 is something that seems to me like it should be changed, though I can be convinced otherwise; and a green circle 🟢 is positive confirmation of a detail. A blue spiral 🌀 marks something I'm uncertain about, an exploration, an open alternative, or just a note. However, I'd only use this strategy on teams where I've already established a high level of trust, because having to deliver a lot of red squares could be quite demoralizing, and in that case I'd communicate a little differently.

    Let’s see how this would work by reusing the example that we used earlier as the first bullet point in this list:

    • 🔶 Navigation—When I see these two buttons, I anticipate one to go forward and the other to go back. But this is the only screen where this happens, as before we just used a single button and an “×” to close. This seems to be breaking the consistency in the flow. Let’s make sure that all screens have the same two forward and back buttons so that users don’t get confused.
    • 🟢 Overall, I believe the page is strong, and this is a good candidate for a version 1.0 release.
    • 🟢 Metrics—Good improvement in the buttons on the metrics area; the improved contrast and new focus style make them more accessible.
    • 🌀 Button style—Using the green accent in this context might read as a positive action, since green is typically seen as a confirmation color. Do we need to look for a different shade?
    • 🔶 Tiles—Given the number of items on the page and the overall page hierarchy, it seems to me that the tiles shouldn't use the Subtitle 1 style but the Subtitle 2 style. This will maintain consistency in the visual hierarchy.
    • 🌀 Background—The light texture is effective, but I'm not sure whether it creates too much noise on this kind of page. What was the thinking in using it?

    What about using Figma or another design tool that enables in-place feedback to provide feedback directly? These are generally difficult to use because they conceal discussions and are harder to follow, but in the right setting, they can be very effective. Just make sure that each of the comments is separate so that it’s easier to match each discussion to a single task, similar to the idea of splitting mentioned above.

    One more thing: say the obvious. Sometimes we might feel that something is clearly right or wrong, and we don't say it. Or we might have a doubt that we don't express because the question might sound stupid. Say it; that's fine. You might need to adjust the phrasing a little to make the reader feel more at ease, but don't hold it back. Good feedback is transparent, even when it may seem obvious.

    Asynchronous feedback also has the benefit of leaving a written record that documents decisions. Especially in large projects, questions like “Why did we do this?” pop up from time to time, and there's nothing better than open, transparent discussions that can be reviewed at any time. For this reason, I suggest using tools that preserve these discussions rather than hiding them once they're resolved.

    Content, tone, and format. Each of these provides a useful model, but working to improve eight areas—observation, impact, question, timing, attitude, form, clarity, and actionability—is a lot to take on all at once. One effective way to approach them is to start with the area where you're weakest, judged either from your own point of view or from other people's feedback. Then move to the second, the third, and so on. At first you'll have to put in extra time for every piece of feedback you give, but after a while it'll become second nature, and your impact on the work will multiply.

    Thanks to Mike Shelton and Brie Anne Demkiw for their contributions to the initial draft of this article.

  • Asynchronous Design Critique: Getting Feedback

    Asynchronous Design Critique: Getting Feedback

    “Any feedback?” is perhaps one of the worst ways to ask for it. It's vague and unfocused, and it doesn't give reviewers a sense of what we're looking for. Great feedback begins sooner than we might expect: it begins with the request.

    It might seem counterintuitive to start the process of receiving feedback with a question, but that makes sense once we realize that getting feedback can be thought of as a form of user research. Just as we wouldn't do any research without the right questions to get the insights we need, the best way to ask for feedback is to craft strong questions.

    Design critique is never a one-off process. Sure, any good feedback loop continues until the project is finished, but this is especially true for design, because design work continues iteration after iteration, from a high level down to the finest details. Each stage requires its own set of questions.

    Finally, as with any good research, we need to review the feedback we received, distill its findings, and take action. Question, iteration, and evaluation. Let's take a look at each of those.

    The question

    Being open to feedback is important, but we need to be precise about what we're looking for. Asking for “any comments,” “what do you think,” or “I'd love to hear your thoughts” at the end of a presentation is likely to gather a lot of scattered opinions or, worse, to make people follow the lead of the first commenter. And then we get frustrated, because vague requests like those can lead people to comment on whatever catches their eye first—say, the style of the buttons, which might be a touchy subject—making it hard at that point to redirect the team to the topics you had wanted to focus on.

    How do we end up in this situation, though? A number of factors are involved. One is that we don't often consider asking as a part of the feedback process. Another is how natural it feels to keep the question open and assume that everyone is on the same page. Another is that being extremely precise is often unnecessary in casual conversation. In short, we tend to underestimate the importance of the questions, so we don't work on improving them.

    Great questions help guide and focus the critique. They also serve as a form of invitation, signaling your openness to comments and the types of responses you want to receive. They put people in the right mindset, especially in situations where they weren't expecting to give feedback.

    There isn't a single best way to ask for feedback. It simply needs to be specific, and specificity can take a variety of forms. A model for design critique that I've found especially helpful in my coaching is the one of stage versus depth.

    “Stage” refers to each of the steps of the process—in our case, the design process. The type of feedback changes as the work moves from early user research to the final design. But within a single stage, one might also want to check whether some assumptions are correct and whether the feedback gathered so far has been properly translated into updated designs as the work has evolved. The layers of user experience can serve as a starting point for possible questions: What are the project objectives? User needs? Functionality? Content? Interaction design? Information architecture? Interface design? Navigation design? Visual design? Brand?

    Here are a few example questions that are specific and to the point, each referring to a different layer:

    • Functionality: Is automating account creation desirable?
    • Interaction design: Please review the updated flow for any errors or steps I might have missed.
    • Information architecture: We have two competing bits of information on this page. Does the structure work to effectively communicate both of them?
    • Interface design: What do you think of the error summary at the top of the page, which makes sure you see the error even if it's outside the viewport?
    • Navigation design: From research, we identified these second-level navigation items, but once you're on the page, the list feels very long and hard to scan. Are there any ways to address this?
    • Visual design: The alerts in the bottom-right corner are clearly visible, but are they noticeable enough?

    The other axis, depth, is about how deep you'd like reviewers to go on what's being presented. For instance, you may have introduced a new end-to-end flow, but you might want feedback on one particular interaction that you found challenging. This can be particularly helpful from one iteration to the next, when it's important to point out the areas that have changed.

    There are other things we can consider when we want to craft more specific, and therefore more effective, questions.

    A simple strategy is eliminating generic adjectives like “good,” “well,” “nice,” “bad,” “okay,” and “cool” from your questions. For instance, the question “When the panel opens and the buttons appear, is this interaction good?” may seem precise, but you can unpack the word “good” and turn it into an even better question: “When the panel opens and the buttons appear, is it clear what the next action is?”

    Sometimes we do want broad feedback. That's rare, but it can happen. In those cases, you can still make it explicit that you're looking for a wide range of opinions, whether at a high level or in the details. Or you could simply ask, “At first glance, what do you think?” so it's clear that you're asking an open-ended question focused on someone's first five seconds with the design.

    Sometimes the project is particularly expansive, and some areas may have already been explored in detail. In these circumstances, it might be helpful to state explicitly that some parts are already locked in and aren't open for feedback. It's not something I'd recommend in general, but I've found it helpful for avoiding rabbit holes, ones that could lead to further refinement but aren't what matters most right now.

    Asking specific questions can completely change the quality of the feedback that you receive. People who have less refined critique abilities will now be able to provide more useful feedback, and even experienced designers will appreciate the clarity and effectiveness gained from concentrating solely on what is required. It can save a lot of time and frustration.

    The iteration

    Design iterations are probably the most recognizable part of the design process, and they act as a natural checkpoint for feedback. Many design tools include inline commenting, but most of them show changes as a single fluid stream in the same file. Tools like these make conversations disappear once they're resolved, update shared UI components automatically, and always display the most recent version, unless those would-be useful features are manually disabled. The implied goal seems to be arriving at a single final copy with all discussions closed, probably because these tools inherited patterns from the way written documents are collaboratively edited. That's probably not the best approach for design critiques, though I don't want to be too prescriptive, and some teams might work fine with it.

    The asynchronous design-critique approach that I find most effective is to create explicit checkpoints for discussion. I'm going to use the term iteration post for this: a write-up or presentation of a design iteration, followed by a discussion thread of some sort. Any platform that can accommodate this structure will do. By the way, when I say “write-up or presentation,” I'm including video recordings and other media too: as long as it's asynchronous, it works.

    There are many benefits to using iteration posts:

    • It establishes a rhythm in the design process, allowing the designer to review the feedback from each iteration and prepare for the next.
    • It makes decisions visible for future review, and conversations are likewise always available.
    • It keeps a record of how the design evolved over time.
    • Depending on the tool, it might also make it simpler to collect and act on feedback.

    These posts of course don’t mean that no other feedback approach should be used, just that iteration posts could be the primary rhythm for a remote design team to use. And from there, other feedback techniques ( such as live critique, pair designing, or inline comments ) can emerge.

    There isn’t, in my opinion, a universal format for iteration posts. But there are a few high-level elements that make sense to include as a baseline:

    1. The goal
    2. The design
    3. The list of changes
    4. The questions

    Each project is likely to have a goal, and it has probably already been summarized in one sentence somewhere else, such as the client brief, the product manager's outline, or the request of the project owner. This is something I'd repeat in every iteration post, literally copying and pasting it. The aim is to provide context and to repeat whatever is necessary to make each iteration post complete, so that no one needs to hunt for information across different posts. If I want to know about the latest design, the latest iteration post will have everything I need.

    This copy-and-paste part introduces another relevant concept: alignment comes from repetition. Therefore, repeating information in posts is actually very effective at ensuring that everyone is on the same page.

    The design is then the actual series of information-architecture outlines, diagrams, flows, maps, wireframes, screens, visuals, and any other design work that has been done. In short, it's any design artifact. In the later stages of the project, I prefer to show complete flows rather than individual screens, to make it easier to grasp the bigger picture.

    It might also be helpful to give the artifacts clear labels, since that makes them easier to refer to. Write the post in a way that helps people understand the work. It's not much different from crafting a strong live presentation.

    For an effective discussion, also include a bullet list of the changes from the previous iteration, so that reviewers can concentrate on what's new. This can be especially useful for larger pieces of work, where keeping track, iteration after iteration, might prove difficult.

    And finally, as noted earlier, it’s essential that you include a list of the questions to drive the design critique in the direction you want. Creating a numbered list of questions can also help make it simpler to refer to each one by its number.

    Not every iteration is the same. Earlier iterations don't need to be as tightly focused; they can be more exploratory and experimental, maybe even breaking some of the design-language guidelines to see what's possible. Later iterations then start converging on a decision and refining it until the feature is complete.

    Even if these iteration posts are written and intended as checkpoints, I want to point out that they are not by any means exhaustive. A post might be a draft—just a concept to get a conversation going—or it could be a cumulative list of each feature that was added over the course of each iteration until the full picture is done.

    I eventually started using particular labels for incremental iterations, such as i1, i2, i3, and so on. Although this may seem like a minor labeling tip, it can be useful in many ways:

    • Unique—It's a clear, unique marker. Everyone knows where to go to review things, and within each project it's simple to say “This was discussed in i4.”
    • Unassuming—Versions of the same thing ( such as v1, v2, and v3 ) give the impression of something enormous, exhaustive, and complete. Iterations must be able to be exploratory, incomplete, partial.
    • Future proof—It resolves the “final” naming issue that versions can have. No more files with the title “final final complete no-really-its-done” Within each project, the largest number always represents the latest iteration.

    The term release candidate (RC) can be used to indicate when a design is complete enough to be implemented, even if some areas still need refinement and, in turn, more iterations—for example, “with i8 we reached RC” or “i12 is an RC.”

    The evaluation

    What usually happens during a design critique is an open discussion, with a back and forth between people that can be very productive. That approach is particularly successful when feedback happens live and synchronously. When we work asynchronously, however, a different approach is more effective: adopting a user-research mindset. Written feedback from teammates, stakeholders, or others can be treated as if it were the result of user interviews and surveys, and analyzed accordingly.

    This shift has some significant advantages that make asynchronous feedback particularly effective, especially around three friction points:

    1. It makes it easier to respond to everyone.
    2. It reduces the frustration from swoop-by comments.
    3. It lessens our personal stake.

    The first friction point is the pressure to respond to each and every comment. Sometimes we write the iteration post, we get replies from our team, and it's all simple and straightforward. Other times, though, some replies require more in-depth discussion, and responding to everyone piles on top of the work of the next design iteration. This pressure can be especially strong if the person replying is a stakeholder or someone directly involved in the project who we feel we need to answer. We have to accept that this pressure is completely normal, and that it's human nature to try to accommodate the people we care about. Responding to every comment can work, but if we treat a design critique more like user research, we realize that we don't have to, and that asynchronous spaces offer alternatives:

    • One is to let the next iteration speak for itself. When the design changes and we publish a follow-up iteration, that is the response. You could tag everyone from the previous discussion, but even that is a choice, not a requirement.
    • Another is to briefly acknowledge each comment with replies such as “Understood. Thank you,” “Good points—I'll review,” or “Thanks. These will be included in the next iteration.” In some cases, this could even be a single top-level comment along the lines of “Thanks for all the feedback, everyone—the next iteration is coming soon!”
    • Another option is to quickly summarize the comments before moving on. This may be particularly helpful if your workflow uses a simplified checklist to refer to for the following iteration.

    The second friction point is the swoop-by comment: feedback from someone outside the project or team who might not be aware of the context, constraints, decisions, or requirements, or of the discussions in previous iterations. Having to repeat the same response over and over to swoop-by comments can be annoying. On their side, there's something to hope for: the commenters might begin to recognize that they're doing this and become more aware of where they're coming from.

    Let's begin by acknowledging, again, that there's no need to reply to every comment. If a reply to a previously litigated point seems helpful, a brief response with a link to the earlier discussion is usually enough. Remember that repetition creates alignment, so it's fine to repeat things occasionally!

    Swoop-by comments can still be useful for two reasons: they might point out something that still isn't clear, and they have the potential to stand in for the point of view of a user who's seeing the design for the first time. Sure, you'll still be a bit frustrated, but that perspective can at least make the work better.

    The third friction point is the personal stake we might have in the design, which can make us feel defensive if the review turns into more of a discussion. Treating feedback as user research helps us create a healthy distance between the people giving us feedback and our ego (because yes, even if we don't want to admit it, it's there). And in the end, analyzing feedback in aggregate helps us prioritize our work better.

    Also remember that you don't have to accept every piece of feedback. You need to listen to stakeholders, project owners, and specific advice, but you have to analyze it and make a decision that you can justify, and sometimes “no” is the right answer.

    As the designer leading the project, you're in charge of making that choice. Everyone has their own area of expertise, and as the designer, you're the one with the most context and knowledge to make the right call. And by listening to the feedback you've received, you're making sure it's also the best, most balanced decision.

    Thanks to Mike Shelton and Brie Anne Demkiw for their contributions to the initial draft of this article.

  • Designing for the Unexpected

    Designing for the Unexpected

    I'm not sure when I first heard this statement, but it has stuck with me over the years. How do you create designs for situations you can't imagine? Or create products that work on devices that haven't been invented yet?

    Flash, Photoshop, and responsive design

    When I first started designing websites, Photoshop was my go-to program. I'd create a 960px canvas and set about producing a layout that I would later drop content into. The development phase aimed for pixel-perfect accuracy, using fixed widths, fixed heights, and absolute positioning.

    Ethan Marcotte's talk at An Event Apart and subsequent article “Responsive Web Design,” published in A List Apart in 2010, changed all this. I was sold on flexible design as soon as I learned about it, but I was also terrified. The pixel-perfect mockups full of exact numbers that I had previously prided myself on producing were no longer good enough.

    My first encounter with flexible design didn't help my fear. The project was to take an existing fixed-width website and make it responsive. I quickly realized that you can't just bolt responsiveness onto the end of a project. To create fluid designs, you need to plan throughout the design process.

    A new design process

    Developing flexible or fluid sites has always been about removing constraints, producing content that can be viewed on any device. Early on, this relied on percentage-based layouts, which I first built with plain CSS and utility classes:

    .column-span-6 {
      width: 49%;
      float: left;
      margin-right: 0.5%;
      margin-left: 0.5%;
    }

    .column-span-4 {
      width: 32%;
      float: left;
      margin-right: 0.5%;
      margin-left: 0.5%;
    }

    .column-span-3 {
      width: 24%;
      float: left;
      margin-right: 0.5%;
      margin-left: 0.5%;
    }

    Then I moved to Sass, so I could take advantage of @includes to reuse repeated blocks of code and work with more semantic HTML:

    .logo {
      @include colSpan(6);
    }

    .search {
      @include colSpan(3);
    }

    .social-share {
      @include colSpan(3);
    }
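
    The colSpan mixin itself isn't shown here, so the following is a hypothetical reconstruction, assuming a 12-column grid whose percentages mirror the utility classes above (the exact gutter math is an assumption, not the original code):

    ```scss
    // Hypothetical reconstruction of the colSpan mixin.
    // Assumes a 12-column grid; colSpan(6) yields 49%,
    // colSpan(3) yields 24%, matching .column-span-* above.
    // (Newer Sass versions would use math.div for division.)
    @mixin colSpan($columns) {
      width: ($columns * 100% / 12) - 1%;
      float: left;
      margin-right: 0.5%;
      margin-left: 0.5%;
    }
    ```

    The benefit of the mixin is that the grid arithmetic lives in one place, so changing the gutter size doesn't require touching every component class.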

    Media queries

    The next ingredient of responsive design is media queries. Without them, content would shrink to fit the available space regardless of whether it remained readable. (The exact opposite problem arrived with the introduction of a mobile-first approach.)

    Media queries prevented this by allowing us to add breakpoints where the design could adapt. Like most people, I started out with three breakpoints: one for desktop, one for tablets, and one for mobile. Over the years, I added more and more for phablets, wide screens, and so on.
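
    In practice, those breakpoints looked something like the sketch below. The pixel values and the decision to collapse the six-column span to full width are illustrative assumptions, not the original code:

    ```css
    /* Illustrative breakpoints; the exact pixel values are assumptions. */

    /* Tablet: narrower spans widen to half rows */
    @media (max-width: 1024px) {
      .column-span-3,
      .column-span-4 {
        width: 49%;
      }
    }

    /* Mobile: everything stacks full width */
    @media (max-width: 600px) {
      .column-span-3,
      .column-span-4,
      .column-span-6 {
        width: 100%;
        float: none;
      }
    }
    ```

    Each new device class tends to demand another block like these, which is how a design ends up with breakpoints for phablets, wide screens, and so on.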

    For years, I happily worked this way and improved both my design and front-end skills in the process. The only problem I encountered was making changes to content, since with our Sass grid system in place, there was no way for the site owners to add content without amending the markup—something a small business owner might struggle with. This is because each row in the grid was defined using a div as a container. Adding content meant creating new row markup, which requires a level of HTML knowledge.

    Row markup was a mainstay of early responsive design, present in all the frequently used frameworks like Bootstrap and Skeleton.

    [Example: a grid of seven equal columns, labeled “1 of 7” through “7 of 7”, with each row wrapped in its own container markup.]

    Another difficulty arose as I moved from a design agency building websites for small- to medium-sized companies to larger in-house teams, where I worked across a collection of related sites. In those positions, I began to work more frequently with reusable components.

    Our reliance on media queries resulted in components that were tied to common screen sizes. If reuse is the goal of component libraries, this is a real problem: you can only use these components if the devices you're designing for match the viewport sizes in the design library, which prevents you from ever reaching the goal of serving devices that don't already exist.

    Then there's the problem of context. Media queries allow components to adapt based on the viewport size, but what if I put a component into a sidebar, like in the figure below?

    Container queries: our savior or a false dawn?

    Container queries have long been touted as an improvement upon media queries, but at the time of writing they are unsupported in most browsers. There are JavaScript workarounds, but they can create dependency and compatibility issues. The basic theory underlying container queries is that elements should adapt based on the size of their parent container and not the viewport width, as seen in the following illustrations.

    One of the biggest arguments in favor of container queries is that they help us create components or design patterns that are truly reusable because they can be picked up and placed anywhere in a layout. This is a significant step in the direction of a component-based design that can be used on any device of any size.

    In other words, responsive components to replace responsive layouts.

    Container queries will enable us to design components that can be placed in a sidebar or in the main content and respond accordingly rather than designing pages that respond to the browser or device size.
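
    A minimal sketch of that idea, using the @container syntax from the CSS containment specification (the class names here are hypothetical, not from the original article):

    ```css
    /* The sidebar and main area become query containers. */
    .sidebar,
    .main-content {
      container-type: inline-size;
    }

    /* Default: stack the card's image and text vertically. */
    .card {
      display: flex;
      flex-direction: column;
    }

    /* When the card's container (not the viewport) is at least
       400px wide, switch to a horizontal layout. */
    @container (min-width: 400px) {
      .card {
        flex-direction: row;
      }
    }
    ```

    The same .card can then sit in a narrow sidebar or a wide main column and adapt to whichever it lands in, with no knowledge of the overall page layout.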

    My concern is that we are still using layout to determine when a design needs to adapt. This strategy will always be restrictive because we will still require predefined breakpoints. For this reason, my main question about container queries is this: how do we decide when to change the CSS used by a component?

    A component library that is disconnected from context and real content is probably not the best place to make that choice.

    As the diagrams below illustrate, we can use container queries to create designs for specific container widths, but what if I want to change the design based on the image size or ratio?

    In this example, the design should be driven by the image, not by the container's dimensions.

    It’s hard to say for sure whether container queries will be a success story until we have solid cross-browser support for them. Responsive component libraries would undoubtedly change the way we design, enhancing reuse possibilities and scaling. But maybe we will always need to adjust these components to suit our content.

    CSS is evolving.

    Whilst the container query debate rumbles on, there have been numerous advances in CSS that change the way we think about design. The days of fixed-width elements measured in pixels and floated div elements used to cobble layouts together are long gone, consigned to history along with table layouts. Flexbox and CSS Grid have revolutionized layouts for the web. We can now create elements that wrap onto new rows when they run out of space, not when the device changes.

    .wrapper {
      display: grid;
      grid-template-columns: repeat(auto-fit, 450px);
      gap: 10px;
    }

    The repeat() function paired with auto-fit or auto-fill allows us to specify how much space each column should use while leaving it up to the browser to decide when to spill the columns onto a new line. Similar things can be achieved with Flexbox, as elements can wrap over multiple rows and “flex” to fill available space. 

    .wrapper {
      display: flex;
      flex-wrap: wrap;
      justify-content: space-between;
    }

    .child {
      flex-basis: 32%;
      margin-bottom: 20px;
    }

    The biggest benefit of all this is you don’t need to wrap elements in container rows. Without rows, content is not directly related to page markup, allowing for changes or additions to content without further development.

    The real game changer for flexible designs is CSS Subgrid, a significant improvement when it comes to building layouts that accommodate evolving content.

    Remember the days of crafting perfectly aligned interfaces, only for the customer to add an unbelievably long header almost as soon as they’re given CMS access, like the illustration below?

    Subgrid allows elements to respond to adjustments in their own content and in the content of sibling elements, helping us create designs more resilient to change.

    .wrapper {
      display: grid;
      grid-template-columns: repeat(auto-fit, minmax(150px, 1fr));
      grid-template-rows: auto 1fr auto;
      gap: 10px;
    }

    .sub-grid {
      display: grid;
      grid-row: span 3;
      grid-template-rows: subgrid; /* sets rows to parent grid */
    }

    CSS Grid allows us to separate layout and content, thereby enabling flexible designs. Subgrid goes further, enabling designs that adapt to changing content. At the time of writing, Subgrid is only supported in Firefox, but the above code can be implemented behind an @supports feature query.
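    As a sketch, the subgrid declaration from the example above could be guarded like this (the fallback is illustrative):

    ```css
    /* Fallback: the nested grid still spans three rows,
       but its rows won't align with the parent grid */
    .sub-grid {
      display: grid;
      grid-row: span 3;
    }

    /* Only opt in where subgrid is actually supported */
    @supports (grid-template-rows: subgrid) {
      .sub-grid {
        grid-template-rows: subgrid; /* align rows to the parent grid */
      }
    }
    ```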

    Intrinsic layouts

    I’d be remiss not to mention intrinsic layouts, the term created by Jen Simmons to describe a mixture of new and old CSS features used to create layouts that respond to available space.

    Responsive layouts use percentages to create flexible columns. Intrinsic layouts, on the other hand, use the fr unit to create flexible columns that won’t ever shrink so much that they render the content illegible.

    The fr unit is a way of saying, “I want you to distribute the extra space in this way, but… don’t ever make it smaller than the content that is inside of it.”

    —Jen Simmons, “Designing Intrinsic Layouts”

    Additionally, intrinsic layouts can mix and match both fixed and flexible units, letting the content choose how much space is taken up.
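    A sketch of such a mixed grid, under hypothetical class names: a fixed 150px rail, a column sized by its own content, and a flexible column that absorbs the remaining space.

    ```css
    .wrapper {
      display: grid;
      /* fixed rail | content-sized column | flexible column */
      grid-template-columns: 150px min-content 1fr;
      gap: 1rem;
    }
    ```

    Because the middle track uses min-content, the content itself decides how wide that column is, while the fr track flexes around it.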

    An intrinsic approach distinguishes itself because it not only creates designs that can withstand future devices but also helps designs scale without losing flexibility. Components and patterns can be lifted and reused without the prerequisite of having the same breakpoints or the same amount of content as in the previous implementation.

    We can now make designs that work in harmony with the content inside and the content around them. With an intrinsic approach, we can construct responsive components without depending on container queries.

    Another 2010 moment?

    In my view, this intrinsic approach should be every bit as groundbreaking as responsive web design was ten years ago. It’s another “everything changed” moment.

    But it doesn’t seem to be moving quite as fast. I haven’t yet had the same career-changing moment I had with responsive design, despite the widely shared and brilliant talk that brought it to my attention.

    One possible explanation for that is that I now work for a sizable company, which is quite different from the design agency position I held in 2010. In my agency days, every new project was a clean slate, a chance to try something new. Modern projects frequently improve existing websites with an existing codebase and use existing tools and frameworks.

    Another could be that I feel more prepared for change now. I was brand-new to design in general in 2010, and the shift involved a lot of learning. Also, an intrinsic approach isn’t exactly all-new; it’s about using existing skills and existing CSS knowledge in a different way.

    You can’t framework your way out of a content problem.

    Another reason for the slightly slower adoption of intrinsic design could be the lack of quick-fix framework solutions available to kick-start the change.

    Ten years ago, responsive grid systems were everywhere. With a framework like Bootstrap or Skeleton, you had a responsive design template at your fingertips.

    Intrinsic design and frameworks don’t go hand in hand quite as well, because the benefit of having a selection of units becomes a hindrance when creating layout templates. The beauty of intrinsic design lies in combining different units and experimenting with techniques to find what’s best for your content.

    Additionally, there are design tools. We probably all, at some point in our careers, used Photoshop templates for desktop, tablet, and mobile devices to drop designs in and show how the site would look at all three stages.

    How do you do that now, with each component responding to its content and the layout flexing as and when necessary? This type of design has to happen in the browser, which personally I’m a big fan of.

    This feeds into the ongoing debate about whether designers should code. When designing a digital product, we should, at the very least, design for best- and worst-case content scenarios, and that’s not something a graphics-based software package handles well. In code, we can add longer sentences, more radio buttons, and extra tabs, and watch in real time as the design adapts. Does it still work? Is the design too reliant on the current content?

    Personally, I’m anticipating the day when a design component can truly be flexible, adapting to both its space and its content without relying on device or container dimensions.

    Content first

    Content is not fixed. To design for the unknown or unexpected, we need to account for content changes, like our earlier Subgrid card example that allowed the cards to respond to adjustments to their own content and the content of sibling elements.

    Thankfully, there’s more to CSS than layout, and plenty of properties and values can help us put content first. Subgrid and pseudo-elements like ::first-line and ::first-letter help to separate design from markup so we can create designs that allow for changes.

    This is not the same as previous markup hacks like this.

    <p><span class="first-line">First line of text with different styling...</span></p>

    Instead, we can target content based on where it appears.

    .element::first-line {
      font-size: 1.4em;
    }

    .element::first-letter {
      color: red;
    }

    Much bigger additions to CSS include logical properties, which change the way we construct designs by using logical dimensions (start and end) instead of physical ones (left and right), something CSS Grid also does with its start and end lines.

    This flexibility allows for directional changes according to content, a common requirement when we need to present content in multiple languages. In the past this was often accomplished with Sass mixins, but these were generally limited to a left-to-right or right-to-left toggle.

    In the Sass version, directional variables need to be set.

    $direction: rtl;
    $opposite-direction: ltr;
    $start-direction: right;
    $end-direction: left;

    These variables can also be used as values—

    body {
      direction: $direction;
      text-align: $start-direction;
    }

    —or as properties.

    margin-#{$end-direction}: 10px;
    padding-#{$start-direction}: 10px;

    With native logical properties, however, there is no longer a need to rely on Sass (or a similar tool) or on the pre-planning that made using variables throughout a codebase necessary. These properties also start to break apart the tight coupling between a design and strict physical dimensions, creating more flexibility for changes in language and direction.

    margin-inline-end: 10px;
    padding-inline-start: 10px;

    There are also native start and end values for properties like text-align, which means we can replace text-align: right with text-align: start.

    Like the earlier examples, these properties help build designs that aren’t constrained to one language; the design will reflect the content’s needs.

    Fluid and fixed

    We briefly covered the power of combining fixed widths with fluid widths with intrinsic layouts. The min() and max() functions are a similar concept, allowing you to specify a fixed value with a flexible alternative. 

    For min() this means setting a fluid minimum value and a maximum fixed value.

    .element {
      width: min(50%, 300px);
    }

    The element in the figure above will cover 50% of its container, as long as that is no greater than 300px.

    For max() we can set a flexible max value and a minimum fixed value.

    .element {
      width: max(50%, 300px);
    }

    The element will now cover 50% of its container, as long as that is at least 300px. This means we can set limits but allow content to react to the available space.

    The clamp() function builds on this by allowing us to set a preferred value with a third parameter. Now we can allow the element to shrink or grow if it needs to without getting to a point where it becomes unusable.

    .element {
      width: clamp(300px, 50%, 600px);
    }

    This time, the element’s width will be 50% of its container (the preferred value), but it will never shrink below 300px or grow beyond 600px.

    With these techniques, we have a content-first approach to responsive design. By separating content from markup, changes users make to the content will have no impact on the design. We can start to future-proof designs by planning for unexpected changes in language or direction. And we can increase flexibility by pairing desired dimensions with adaptable alternatives, allowing more or less content to be displayed correctly.

    Situation first

    We can address device flexibility by changing our approach, designing around content and space, as we’ve already discussed. But what about the last part of Jeffrey Zeldman’s quote: “…situations you haven’t imagined”?

    Designing for someone using a desktop computer at home is very different from designing for someone on a mobile phone moving through a crowded street in glaring sunshine. Situations and environments are hard to plan for or predict because they change as people react to their own unique challenges and tasks.

    This is why choice is so crucial. One size never fits all, so we need to design for multiple scenarios to create equal experiences for all our users.

    Thankfully, there are many options available to us.

    Responsible design

    “Mobile data is prohibitively expensive in some places around the world, and broadband infrastructure is sparse or absent.”

    —Chris Ashton, “I Used the Web for a Day on a 50 MB Budget”

    One of the biggest assumptions we make is that people interacting with our designs have a good Wi-Fi connection and a widescreen monitor. In the real world, however, our users may be commuters on smaller mobile devices experiencing drops in connectivity on trains or other modes of transport. Nothing is more frustrating than a web page that won’t load, but there are ways we can help users use less data or deal with sporadic connectivity.

    The srcset attribute allows the browser to decide which image to serve. This means we can create smaller, cropped images to display on mobile devices, in turn using less bandwidth and less data.
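    A minimal sketch of the attribute (the file names and widths here are hypothetical):

    ```html
    <!-- The browser picks the smallest candidate that satisfies the sizes hint -->
    <img src="photo-large.jpg"
         srcset="photo-small.jpg 600w, photo-large.jpg 1200w"
         sizes="(max-width: 600px) 100vw, 50vw"
         alt="Description of the photo">
    ```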


    Preload can also help us think about how and when media is downloaded. It can be used to tell a browser about any critical assets that need to be downloaded with high priority, improving perceived performance and the user experience.
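    For example, a critical web font could be flagged for early download like this (the font URL is hypothetical):

    ```html
    <!-- Fetch the heading font early, before the CSS that references it is parsed -->
    <link rel="preload" href="/fonts/heading.woff2" as="font" type="font/woff2" crossorigin>
    ```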


    There’s also native lazy loading, which tells the browser to download files only when they’re needed.
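    A sketch with the native loading attribute (the file name is hypothetical):

    ```html
    <!-- The image is only fetched as it approaches the viewport -->
    <img src="photo.jpg" loading="lazy" alt="Description of the photo" width="800" height="600">
    ```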


    With srcset, preload, and lazy loading, we can start to tailor a user’s experience based on the situation they find themselves in. What none of this does, however, is allow the user themselves to decide what they want downloaded, as the decision is usually the browser’s to make. 

    So how can we put users in control?

    The return of media queries

    Media queries have always been about much more than device sizes. They allow content to adapt to different situations, with screen size being just one of them.

    We’ve long been able to check for media types like print and speech and features such as hover, resolution, and color. Because of these checks, we can offer options that work for more than one situation. It’s less about one-size-fits-all and more about providing adaptable content.
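    For instance, hover effects can be reserved for devices that actually support hovering, and print can get its own styles (the selectors here are hypothetical):

    ```css
    /* Only apply hover-dependent styling where hover is supported */
    @media (hover: hover) and (pointer: fine) {
      .nav-link:hover {
        text-decoration: underline;
      }
    }

    /* Strip navigation when the page is printed */
    @media print {
      .site-nav {
        display: none;
      }
    }
    ```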

    As of this writing, the Media Queries Level 5 spec is still under development. It brings up some really intriguing queries that will eventually enable us to design for a number of other unanticipated situations.

    For example, a light-level feature would allow you to modify styles depending on whether a user is in sunlight or darkness. Combined with custom properties, these features make it simple to create designs or themes for particular environments.

    @media (light-level: normal) {
      :root {
        --background-color: #fff;
        --text-color: #0b0c0c;
      }
    }

    @media (light-level: dim) {
      :root {
        --background-color: #efd226;
        --text-color: #0b0c0c;
      }
    }

    Another key feature of the Level 5 spec is personalization. Instead of creating designs that are the same for everyone, users can choose what works for them. This is achieved by using features like prefers-reduced-data, prefers-color-scheme, and prefers-reduced-motion, the latter two of which already enjoy broad browser support. These features tap into preferences set via the operating system or browser so people don’t have to spend time making each site they visit more usable. 
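    A sketch of how these preference queries might be used (the color values are illustrative):

    ```css
    /* Honor a user's system-level dark mode preference */
    @media (prefers-color-scheme: dark) {
      :root {
        --background-color: #0b0c0c;
        --text-color: #fff;
      }
    }

    /* Tone down animation for users who prefer reduced motion */
    @media (prefers-reduced-motion: reduce) {
      * {
        animation-duration: 0.01ms !important;
        transition-duration: 0.01ms !important;
      }
    }
    ```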

    Media queries like this go beyond choices made by the browser, granting more control to the user.

    Expect the unexpected

    In the end, the one thing we should always expect is change. With foldable screens already on the market, new devices will keep arriving faster than we can design for them.

    We can’t design the same way we have for this ever-changing landscape, but we can design for content. We can create more robust, flexible designs that increase the longevity of our products by putting content first and allowing that content to adapt to whatever space surrounds it.

    A lot of the CSS discussed here is about moving away from layouts and putting content at the heart of design. There is a lot more we can do to adopt a more intrinsic approach, from responsive components to fixed and fluid units. Even better, we can test these techniques during the design phase by designing in-browser and watching how our designs adapt in real-time.

    When it comes to unexpected situations, we must make sure our products are accessible whenever and wherever they’re needed. We can move closer to achieving this by involving users in our design decisions, by creating choice via browsers, and by giving control to our users with user-preference-based media queries.

    Designing for the unexpected should be about giving our users choice and control over how they interact with our products.

  • Voice Content and Usability

    Voice Content and Usability

    We’ve been conversing for a very long time. Whether to convey information, conduct transactions, or simply to check in on one another, people have yammered away, chattering and gesticulating, through spoken conversation for countless generations. Only recently have we begun to commit our conversations to writing, and only more recently still have we outsourced them to the computer, a machine that shows far greater affinity for written correspondence than for the slangy vagaries of spoken language.

    Computers struggle because, between spoken and written language, speech is the messier medium. To have productive conversations with us, machines must grapple with the messiness of human speech: the disfluencies and pauses, the gestures and body language, and the variations in word choice and spoken dialect that can stymie even the most carefully crafted human-computer interaction. In the human-to-human scenario, spoken language also has the advantage of face-to-face contact, where we can read visual social cues.

    In contrast, written language generates its own fossil record of dated terms and phrases, as we commit it to record and retain usages long after they are no longer current in spoken communication (the salutation “To whom it may concern,” for example). Because it tends to be more consistent, polished, and formal, written text is necessarily far easier for machines to parse and understand.

    Spoken language has no such luxury. Besides the nonverbal cues that decorate conversations with emphasis and emotional context, there are also verbal cues and vocal behaviors that modulate conversation in nuanced ways: how something is said, not what. Whether rapid-fire, low-pitched, or high-decibel, whether sarcastic, stilted, or sighing, our spoken language conveys much more than the written word could ever muster. So when it comes to voice interfaces, the machines we converse with, we face exciting challenges as designers and content strategists.

    Voice interactions

    We interact with voice interfaces for a variety of reasons, but according to Michael McTear, Zoraida Callejas, and David Griol in The Conversational Interface, those motivations by and large mirror the reasons we initiate conversations with other people, too. We typically strike up a conversation because:

    • we need something done (a transaction of some sort),
    • we want to know something (information of some sort), or
    • we’re social creatures and want someone to talk to (a prosocial exchange).

    These three categories, which I call transactional, informational, and prosocial, also characterize essentially every voice interaction: a single conversation that begins with the voice interface’s initial greeting and ends with the user exiting the interface. Note here that a conversation in our everyday sense, a chat between people that leads to some result and lasts an arbitrary length of time, could encompass multiple transactional, informational, and prosocial voice interactions in succession. In other words, a voice interaction is a conversation, but a conversation is not necessarily a single voice interaction.

    Purely prosocial conversations are more gimmicky than captivating in most voice interfaces, because machines don’t yet have the capability to truly understand how we’re doing or to engage in the kind of glad-handing humans crave. There’s also ongoing debate as to whether users actually prefer the sort of organic human conversation that begins with a prosocial voice interaction and shifts seamlessly into other types. In Voice User Interface Design, Michael Cohen, James Giangola, and Jennifer Balogh recommend sticking to user expectations by mimicking how users interact with other voice interfaces rather than trying too hard to be human, which could alienate them.

    That leaves two kinds of conversations we can have with one another that a voice interface can also have with us: a transactional voice interaction that realizes some outcome, and an informational voice interaction that teaches us something new (“discuss a musical”).

    Transactional voice interactions

    When you order a Hawaiian pizza with extra pineapple by tapping buttons in a food delivery app, you’re having a transaction, not a conversation. But when we walk up to the counter and place an order, the conversation quickly shifts from a brief smattering of neighborly small talk to ordering a pizza (generously topped with pineapple, as it should be).

    Alison: Hey, how’s it going?

    Burhan: Hello and welcome to Crust Deluxe! It’s chilly outside. How can I help you?

    Alison: Can I get a Hawaiian pizza with extra pineapple?

    Burhan: Sure, what size?

    Alison: Large.

    Burhan: Anything else?

    Alison: No, that’s it.

    Burhan: Something to drink?

    Alison: I’ll have a bottle of Coke.

    Burhan: You got it. That’ll be $13.55 and about fifteen minutes.

    Each progressive disclosure in this transactional conversation reveals more and more of the desired outcome of the transaction: a service rendered or a product delivered. Transactional conversations have certain characteristics: they’re direct, concise, and economical. They quickly dispense with pleasantries.

    Informational voice interactions

    Not all conversations are primarily transactional, however; some are mainly about obtaining information. Though Alison might visit Crust Deluxe with the sole purpose of placing an order, she might not actually want to walk out with a pizza at all. She might be interested in halal or kosher dishes, gluten-free options, or something else entirely. Here, though we again have a prosocial mini-conversation at the beginning to establish politeness, we’re after much more.

    Alison: Hey, how’s it going?

    Burhan: Hello and welcome to Crust Deluxe! It’s chilly outside. How can I help you?

    Alison: Can I ask a few questions?

    Burhan: Of course! Go right ahead.

    Alison: Do you have any halal options on the menu?

    Burhan: Absolutely! We can make any pie halal on request. We also have lots of vegetarian, ovo-lacto, and vegan options. Do you have any other dietary restrictions in mind?

    Alison: What about gluten-free pizzas?

    Burhan: We can definitely do a gluten-free crust for you, no problem, for both our deep-dish and thin-crust pizzas. Anything else I can help you with?

    Alison: Good to know. That’s it for now. Thank you!

    Burhan: Anytime!

    This is a very different dialogue. Here, the goal is to obtain a particular set of facts. Informational conversations are research expeditions that gather information in pursuit of the truth. By necessity, informational voice interactions may be more long-winded than transactional conversations. Responses are typically longer, more in-depth, and carefully communicated to ensure the customer understands the key points.

    Voice Interfaces

    At their core, voice interfaces employ speech to support users in reaching their goals. But just because an interface has a voice component doesn’t mean that every user interaction with it is mediated through voice. Because multimodal voice interfaces can lean on visual components like screens as crutches, we’re most concerned here with pure voice interfaces, which depend entirely on spoken conversation, lack any visual component, and are therefore much more nuanced and challenging to tackle.

    Though voice interfaces have long been integral to the imagined future of humanity in science fiction, only recently have those lofty visions become fully realized in genuine voice interfaces.

    IVR (interactive voice response) systems

    Written conversational interfaces have been used for computing for many years, but voice interfaces first started to appear in the early 1990s with text-to-speech (TTS) dictation programs that recited written text aloud and speech-enabled in-car systems that gave directions to a user-provided address. With the advent of interactive voice response (IVR) systems, intended as an alternative to overburdened customer service representatives, we became acquainted with the first true voice interfaces that engaged in authentic conversation.

    IVR systems allowed businesses to shrink their call centers, but they soon became notorious for their clunkiness. These systems were primarily intended as metaphorical switchboards to direct customers to a real phone agent (“Say Reservations to book a flight or check an itinerary”), an experience you’re likely to have when you call an airline or hotel company. Despite their functional issues and users’ frustration at being unable to speak to an actual human right away, IVR systems proliferated in the early 1990s across a variety of industries.

    It’s no wonder, then, that IVR systems have a reputation for conversations far less scintillating than those we’re used to in real life (or even in science fiction): they’re extremely repetitive, monotonous exchanges that rarely deviate from a single format.

    Screen readers

    Parallel to the evolution of IVR systems was the invention of the screen reader, a tool that transcribes visual content into synthesized speech. For Blind or visually impaired website users, it’s the predominant method of interacting with text, multimedia, or form elements. Screen readers represent perhaps the closest equivalent we have today to an out-of-the-box implementation of content delivered through voice.

    Among the first screen readers known by that name was the Screen Reader for the BBC Micro and NEC Portable, developed by the Research Centre for the Education of the Visually Handicapped (RCEVH) at the University of Birmingham in 1986. That same year, Jim Thatcher created the first IBM Screen Reader for text-based computers, later recreated for computers with graphical user interfaces (GUIs).

    With the rapid expansion of the web in the 1990s, the demand for accessible tools exploded. Thanks to the introduction of semantic HTML and especially ARIA roles beginning in 2008, screen readers began facilitating speedier interactions with web pages, allowing disabled users to traverse the page as an aural and temporal space rather than a visual and physical one. In other words, web screen readers “provide mechanisms that translate visual design constructs—proximity, proportion, etc.—into useful information,” as Aaron Gustafson put it in A List Apart. “At least they do when documents are authored thoughtfully.”

    But there’s a big caveat with screen readers: while incredibly instructive for voice interface designers, they’re challenging to use and relentlessly verbose. Because the visual structures of websites and web navigation don’t translate well to speech, users are often subjected to unwieldy pronouncements that name every manipulable HTML element and announce every formatting change. For many screen reader users, working with web-based interfaces exacts a cognitive toll.

    In Wired, accessibility advocate and voice engineer Chris Maury examines why the screen reader experience is ill-suited to users who rely on voice:

    From the beginning, I hated the way that Screen Readers work. Why are they designed the way they are? It makes no sense to present information visually and only then translate that into audio. All of the time and effort that goes into creating the perfect user experience for an app is wasted, or worse, ends up harming the experience for blind users.

    Indeed, well-designed voice interfaces can often help users move through content faster than lengthy screen reader monologues. After all, users of a visual interface can freely scan the viewport for information, ignoring areas irrelevant to them. Blind users, meanwhile, are obligated to listen to every utterance synthesized into speech, and therefore prize brevity and efficiency. Disabled users who have long had no choice but to use clumsy screen readers may find that voice interfaces, especially more modern voice assistants, offer a more streamlined experience.

    Voice assistants

    When we think of voice assistants (the subset of voice interfaces now commonplace in living rooms, smart homes, and offices), many of us immediately picture HAL from 2001: A Space Odyssey or hear Majel Barrett’s voice as the omniscient computer in Star Trek. Voice assistants are akin to personal concierges that can answer questions, schedule appointments, conduct searches, and perform other common day-to-day tasks. And they’re rapidly gaining more attention from accessibility advocates for their assistive potential.

    Long before the earliest IVR systems found success in the enterprise, Apple published a demonstration video in 1987 depicting the Knowledge Navigator, a voice assistant that could transcribe spoken words and recognize human speech to a great degree of accuracy. Then, in 2001, Tim Berners-Lee and others shared their vision of a “Semantic Web” agent that would carry out routine tasks like “checking calendars, making appointments, and finding locations” (behind paywall). It wasn’t until 2011, when Apple released Siri, that voice assistants became a reality for consumers.

    Thanks to the plethora of voice assistants available today, there is considerable variation in how programmable and customizable some voice assistants are compared to others (Fig 1.1). At one extreme, everything but vendor-provided features is locked down. For instance, at the time of their release, Apple’s Siri and Microsoft’s Cortana couldn’t be extended beyond their existing capabilities. Even today, aside from predefined categories of tasks like sending messages, hailing rideshares, and making restaurant reservations, there’s no way for developers to interact with Siri at a deeper level.

    At the opposite end of the spectrum, voice assistants like Amazon Alexa and Google Home offer a core foundation on which developers can build custom voice interfaces. For this reason, developers who feel constrained by the limitations of Siri and Cortana are increasingly turning to these programmable voice assistants, which are extensible and customizable. Amazon offers the Alexa Skills Kit, a developer framework for creating custom voice interfaces for Amazon Alexa, while Google Home supports the development of arbitrary Google Assistant skills. Today, users can choose from among thousands of custom-built skills within both the Amazon Alexa and Google Assistant ecosystems.

    As companies like Amazon, Apple, Microsoft, and Google continue to stake their claims, they are also releasing and open-sourcing an unprecedented array of tools and frameworks for designers and developers, aiming to make creating voice interfaces as simple as possible, even without code.

    Often by necessity, voice assistants like Amazon Alexa tend to be monochannel—they’re tightly coupled to a device and can’t be accessed on a computer or smartphone instead. In contrast, many development platforms, like Google’s Dialogflow, now support omnichannel features, allowing users to create a single conversational interface that then becomes a voice interface, textual chatbot, and IVR system upon deployment. In Chapter 4, we’ll explore some of the possible effects these variables might have on how you build out your design artifacts, but I don’t recommend any particular implementation strategies in this design-focused book.

    Voice Content

    Simply put, voice content is content delivered through voice. Voice content must be free-flowing and organic, contextless and concise—everything written content isn’t—if it’s to preserve what makes human conversation so compelling in the first place.

    Our world is replete with voice content in various forms: screen readers reciting website content, voice assistants rattling off a weather forecast, and automated phone hotline responses governed by IVR systems. In this book, we’re most concerned with content delivered auditorily, not as an option but as a necessity.

    For many of us, our first foray into informational voice interfaces will be to deliver content to users. There’s only one problem: the content we already have isn’t remotely ready for this new habitat. How can we make the content on our websites more conversational? And how do we write new copy that lends itself to voice interactions?

    Lately, we’ve begun slicing and dicing our content in unprecedented ways. Websites are, in many ways, colossal vaults of what I call macrocontent: lengthy prose that can scroll for miles in a browser window, like the microfilm viewers of newspaper archives. Back in 2002, well before the present-day ubiquity of voice assistants, Anil Dash defined microcontent as permalinked pieces of content that could be consumed in any environment, such as email or text messages.

    A day’s weather forecast, the arrival and departure times for an airplane flight, an abstract from a long publication, or a single instant message can all be examples of microcontent.

    I would update Dash’s definition of microcontent to include all instances of bite-sized content that go beyond written communiqués. After all, today we encounter microcontent in interfaces where a small snippet of copy is displayed alone, unmoored from the browser, like a chatbot confirmation of a restaurant reservation. Microcontent informs delivery channels both established and novel, and it offers the best opportunity to stretch your content to the limits of its potential.

    Voice content is unique in that it’s an example of how content is experienced in time rather than space. We can glance at a digital sign in the subway for an instant and know when the next train is arriving, but voice interfaces hold our attention captive for stretches of time that we can’t easily escape or skip, something screen reader users are all too familiar with.

    Because microcontent is fundamentally made up of isolated blobs with no relationship to the channels where they’ll eventually end up, we need to ensure that our microcontent truly performs well as voice content. That means focusing on the two most important traits of robust voice content: voice content legibility and voice content discoverability.

    Both the legibility and the discoverability of our voice content depend on how it manifests in perceived time and space.


    Sustainable Web Design, An Excerpt

    In the 1950s, some in the elite running community were beginning to think the four-minute mile was impossible. Runners had been attempting it since the late 19th century and were beginning to draw the conclusion that the human body simply wasn’t built for it.

    But on May 6, 1954, Roger Bannister caught the world off guard. It was a cold, damp morning in Oxford, England—conditions no one expected to be conducive to record-setting—but Bannister did exactly that, running a mile in 3:59.4 and becoming the first person in recorded history to run a mile in under four minutes.

    With that shift in the mental model, the world now knew that the four-minute mile could be run. Bannister’s record lasted just forty-six days before it was snatched away by Australian runner John Landy. A year later, three runners broke the four-minute barrier in a single race. Since then, more than 1,400 runners have officially run a mile in under four minutes; the current record, 3:43.13, is held by Moroccan runner Hicham El Guerrouj.

    Much of what we think is possible is shaped by seeing that someone else has already done it. And as with human running speed, so it is with how well a website can perform.

    Establishing standards for a sustainable web

    In most major industries, the key metrics of environmental performance are fairly well established, such as energy per square metre for homes and miles per gallon for cars. The tools and methods for calculating those metrics are standardized as well, which keeps everyone on the same page when conducting environmental assessments. In the world of websites and apps, however, we aren’t held to any particular environmental standards, and we have only recently developed the tools and methods we need to even conduct an environmental assessment.

    The main objective in sustainable web design is to reduce carbon emissions. However, it’s nearly impossible to accurately measure the amount of CO2 that a digital product produces. We can’t measure the emissions coming out of exhaust pipes on our laptops. The emissions of our websites are far away, invisible, and unremarkable, produced by power plants burning coal and gas. We have no way of tracing the electricity used by a website or app back to the power station where it was generated and knowing the exact amount of greenhouse gas produced. What, then, do we do?

    If we can’t measure the actual carbon emissions, then we need to measure what we can. The following are the primary proxies we can use as indicators of carbon emissions:

    1. Data transfer
    2. Carbon intensity of electricity

    Let’s take a look at how we can use these indicators to calculate the energy use, and in turn the carbon footprint, of the sites and web applications we create.

    Data transfer

    When measuring the amount of data transferred over the internet as a website or application is used, most researchers use kilowatt-hours per gigabyte (kWh/GB) as a metric of energy efficiency. This serves as a useful proxy for how much energy is consumed and how much carbon is emitted. As a rule of thumb, the more data transferred, the more electricity used in the data center, the telecoms networks, and end users’ devices.
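As a rough illustration of using kWh/GB as a proxy, here’s a minimal Python sketch. The 0.81 kWh/GB constant and the traffic figures are illustrative assumptions only; published intensity estimates vary widely by year and methodology:

```python
# Back-of-the-envelope energy estimate from data transfer.
# KWH_PER_GB is a placeholder assumption; published estimates vary widely
# depending on year, methodology, and which parts of the system
# (data center, network, device) are included.
KWH_PER_GB = 0.81

def estimate_energy_kwh(transfer_bytes, views):
    """Estimate electricity used to serve `views` page loads."""
    gigabytes = transfer_bytes / 1e9
    return gigabytes * views * KWH_PER_GB

# e.g. a hypothetical 2 MB page served 10,000 times a month
monthly_kwh = estimate_energy_kwh(2_000_000, 10_000)  # roughly 16 kWh
```

The point is not the precision of the constant but that, for a fixed intensity figure, energy scales linearly with page weight and traffic.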

    For web pages, data transfer for a first-time visit can be most readily determined as the page weight, or the page’s transfer size in kilobytes. It’s very easy to measure using the developer tools in any modern web browser. Often, overall data transfer statistics for a web application will also be available in your web hosting account ( Fig. 2.1 ).

    The great thing about page weight as a metric is that it allows us to compare the efficiency of web pages on a level playing field without confusing the issue with constantly changing traffic volumes.

    There is great scope to reduce page weight. By early 2020, the median page weight was 1.97 MB for setups the HTTP Archive classifies as “desktop” and 1.77 MB for “mobile”, with desktop increasing 36 percent since January 2016 and mobile page weights nearly doubling in the same period ( Fig 2.2 ). Image files account for roughly half of this data transfer, making them the single biggest contributor to carbon emissions on the typical website.

    History clearly shows us that our web pages can be smaller, if only we set our minds to it. While most technology, including the web’s underlying infrastructure of data centers and transmission networks, becomes more and more energy efficient, websites themselves are becoming less efficient as time goes on.

    You might be aware of the idea of performance budgeting as a method for directing a project team to deliver faster user experiences. For example, we might specify that the website must load in a maximum of one second on a broadband connection and three seconds on a 3G connection. Performance budgets are upper limits rather than vague suggestions, much like speed limits when driving, so the goal should always be to come in under budget.

    Designing for fast performance does often lead to reduced data transfer and emissions, but it isn’t always the case. Page weight and transfer size are more objective and reliable benchmarks for sustainable web design, whereas web performance is frequently more about the subjective perception of load times than it is about the underlying system’s actual efficiency.

    We can set a page weight budget in reference to a benchmark of industry averages, using data from sources like HTTP Archive. We can also benchmark against competitors’ page weights and the website’s own current page weight. For example, we might set a maximum page weight budget equal to our most efficient competitor’s, or we could set the benchmark lower to ensure we are best in class.
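One way to put a budget like this to work is as a simple check during design reviews or in a build pipeline. A minimal Python sketch, where the 800 KB budget and the resource sizes are hypothetical:

```python
# Hypothetical page weight budget check, e.g. for a CI pipeline.
# The 800 KB budget is an invented figure, imagined as matching
# our most efficient competitor.
PAGE_WEIGHT_BUDGET_KB = 800

def total_page_weight_kb(resource_sizes_kb):
    """Sum the transfer sizes of all resources on a page, in kilobytes."""
    return sum(resource_sizes_kb)

def within_budget(resource_sizes_kb, budget_kb=PAGE_WEIGHT_BUDGET_KB):
    """True if the page's total transfer size comes in under budget."""
    return total_page_weight_kb(resource_sizes_kb) <= budget_kb

# e.g. HTML + CSS + JS + two images, sizes in KB (invented values)
page = [45, 60, 220, 310, 150]  # totals 785 KB
```

A check like this treats the budget the way the text suggests: as a hard upper limit that fails the build, not a vague aspiration.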

    If we want to take it to the next level, we can start to consider the transfer size of our web pages in situations other than first-time visits. Page weight for a first visit is the easiest thing to measure, and easy to compare on a like-for-like basis, but we can learn even more if we start looking at transfer size in other scenarios too. For instance, repeat visitors who load the same page frequently will likely have a high percentage of its files cached in their browser, which means they won’t need to download all of the files again on subsequent visits. Likewise, a visitor who navigates to new pages on the same website will likely not need to load the full page each time, as some global assets from areas like the header and footer may already be cached in their browser. Determining page weight budgets for these scenarios, in addition to first visits, can help us learn even more about how to optimize efficiency for users who regularly visit our pages.

    Page weight budgets are easy to track throughout a design and development process. Although they don’t tell us directly about energy consumption and carbon emissions, they do provide a clear indicator of efficiency in comparison to other websites. And since transfer size is an effective proxy for energy consumption, we can use it to estimate energy consumption too.

    In summary, less data transfer leads to more energy efficiency, which is a crucial component of reducing web product carbon emissions. The more efficient our products, the less electricity they use, and the less fossil fuels need to be burned to produce the electricity to power them. However, as we’ll see next, it’s important to take into account the source of that electricity because all web products require some.

    Carbon intensity of electricity

    Regardless of energy efficiency, the level of pollution caused by digital products depends on the carbon intensity of the energy being used to power them. The term “carbon intensity” (gCO2/kWh) describes how much carbon dioxide is produced for each kilowatt-hour of electricity generated. This varies widely, with renewable energy sources and nuclear having an extremely low carbon intensity of less than 10 gCO2/kWh ( even when factoring in their construction ), whereas fossil fuels have a very high carbon intensity of approximately 200–400 gCO2/kWh.

    The majority of electricity is produced by national or state grids, where energy from a variety of sources, each with a different carbon intensity, is combined. The distributed nature of the internet means that a single user of a website or app might be drawing on energy from multiple grids simultaneously. A website user in Paris, for example, uses electricity from the French national grid to power their home internet and devices, but the website’s data center could be in Dallas, pulling electricity from the Texas grid, while the telecoms networks in between use energy from everywhere along the route from Dallas to Paris.

    Although we have some control over where our projects are hosted, we do not have complete control over the energy supply of web services. Still, with the data center accounting for a significant proportion of any website’s energy use, locating it in an area with low-carbon energy will tangibly reduce its carbon emissions. The carbon intensity of electricity grids around the world is reported and mapped by Danish startup Tomorrow, and a look at their map demonstrates how, for instance, choosing a data center in France will result in significantly lower carbon emissions than choosing one in the Netherlands ( Fig. 2.3 ).

    However, we don’t want to move our servers too far away from our users, because transmitting data through the telecoms networks requires energy, and the further the data travels, the more energy is used. Just like food miles, we can think of the distance from the data center to the website’s core user base as “megabyte miles” —and we want it to be as small as possible.

    We can use website analytics to determine the country, state, or even city where our core user group is located and measure the distance between that location and the data center that our hosting company uses as a benchmark. This will be a somewhat fuzzy metric as we don’t know the precise center of mass of our users or the exact location of a data center, but we can at least get a rough idea.

    For instance, if a website is hosted in London but its main audience is on the United States’ West Coast, we could look up the travel distance between London and San Francisco, which is about 5,300 miles. That’s a long way! Hosting the site somewhere in North America, ideally on the West Coast, would significantly reduce the distance, and thus the energy needed to transmit the data. In addition, locating our servers closer to our visitors reduces latency and delivers a better user experience, so it’s a win-win.
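To put a rough number on megabyte miles, we can compute the great-circle distance between the data center and the approximate center of mass of the audience. A minimal Python sketch using the standard haversine formula (the coordinates below are approximations for London and San Francisco):

```python
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_MILES = 3958.8

def megabyte_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance between a user base and a data center, in miles,
    using the haversine formula."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * EARTH_RADIUS_MILES * asin(sqrt(a))

# London data center vs. a West Coast (San Francisco) audience;
# coordinates are approximate.
distance = megabyte_miles(51.5074, -0.1278, 37.7749, -122.4194)
```

As the text notes, this is a fuzzy metric: neither the precise data center location nor the audience’s true center of mass is usually known, so treat the result as an order-of-magnitude guide.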

    Converting it to carbon emissions

    If we combine a calculation of energy consumption with carbon intensity, we can estimate the carbon emissions of our websites and apps. The methodology my team developed uses the amount of data transferred when a web page loads to estimate the energy consumed, and then converts that energy figure into an estimate of CO2 ( Fig. 2.4 ). It also factors in whether or not the web hosting is powered by renewable energy.
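The general shape of such a methodology can be sketched in a few lines of Python. This is a simplified illustration, not the book’s actual worksheet: the energy intensity, grid intensity, and renewable discount constants are all placeholder assumptions.

```python
# Sketch of converting data transfer into a CO2 estimate.
# All constants are illustrative placeholders, not figures from the
# book's Energy and Emissions Worksheet.
KWH_PER_GB = 0.81          # assumed energy intensity of data transfer
GRID_INTENSITY = 442       # assumed average grid intensity, gCO2 per kWh
RENEWABLE_DISCOUNT = 0.1   # assumed share of total energy (the data
                           # center's) that green hosting takes off grid

def estimate_co2_grams(transfer_bytes, green_hosting=False):
    """Estimate grams of CO2 emitted to serve one page load."""
    gigabytes = transfer_bytes / 1e9
    energy_kwh = gigabytes * KWH_PER_GB
    discount = RENEWABLE_DISCOUNT if green_hosting else 0.0
    return energy_kwh * GRID_INTENSITY * (1 - discount)
```

The structure mirrors the text: data transfer drives the energy estimate, carbon intensity converts energy into CO2, and renewable hosting reduces the share of emissions attributed to the data center.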

    The Energy and Emissions Worksheet that comes with this book shows you how to use this methodology and tailor the calculations to your project’s specific characteristics.

    With the ability to calculate carbon emissions for our projects, we could even take our page weight budgets a step further and establish carbon budgets as well. CO2 is not a metric commonly used in web projects; we’re more familiar with kilobytes and megabytes, and we can fairly easily look at design options and files to assess how big they are. Translating that into carbon adds a layer of abstraction, but carbon budgets do focus our minds on the main thing we’re trying to reduce, which supports the core goal of sustainable web design: reducing carbon emissions.

    Browser Energy

    Data transfer might be the simplest and most universal proxy for energy consumption in our digital projects, but because it gives us a single number to represent the energy used across the data center, the telecoms networks, and the end user’s devices, it can’t offer us insight into the efficiency of any specific part of the system.

    One part of the system we can look at in more detail is the energy used by end users’ devices. Thanks to modern front-end web technologies, the computational burden is increasingly shifting from the data center to users’ devices, whether they are smart TVs, tablets, laptops, or phones. Modern web browsers allow us to implement more complex styling and animation on the fly using CSS and JavaScript. Additionally, JavaScript libraries like Angular and React make it possible to create applications where the “thinking” is done partially or entirely in the browser.

    All of these advances are exciting and open up new possibilities for what the web can do to serve society and create positive experiences. However, the more data processed in the web browser, the more energy is used by the user’s device. This has implications not just environmentally, but also for user experience and inclusivity. Applications that place a heavy processing load on the user’s device will inadvertently exclude users with older, slower devices and drain the batteries of phones and laptops faster. Furthermore, if we build web applications that require up-to-date, powerful devices, we encourage people to discard older devices much more frequently. That places a disproportionate financial burden on the poorest members of society, which is not just bad for the environment.

    Measuring a website’s energy consumption on end users’ devices is difficult, partly because the tools available are limited and partly because there are so many different models of device. One of the tools we currently have is the Energy Impact monitor inside the developer console of the Safari browser ( Fig. 2.5 ).

    You know when you load a website and your computer’s cooling fans start spinning so frantically that you wonder whether it might take off? That’s essentially what this tool is measuring.

    It measures the percentage of CPU used and the duration of CPU usage while the web page loads, and uses these figures to produce an energy impact rating. It doesn’t give us precise data for the amount of electricity used, but the information it does provide can be used to benchmark how efficiently your websites use energy and to set targets for improvement.


    Design for Safety, An Excerpt

    According to anti-racist scholar Kim Crayton, “intention without strategy is chaos.” We’ve discussed how our biases, assumptions, and carelessness toward marginalized and vulnerable groups lead to dangerous and harmful tech—but what, precisely, do we need to do to fix it? We need a strategy, not just the desire to make our software safer.

    This book will provide you with that plan of action. It covers how to incorporate safety principles into your design work in order to create tech that’s safe, how to convince stakeholders that this work is necessary, and how to respond to the argument that all we really need is more diversity. ( Spoiler: we do need it, but diversity alone is not the antidote to fixing unethical, unsafe tech. )

    The Process for Inclusive Safety

    When you are designing for safety, your goals are to:

    • identify ways your product can be used for abuse,
    • design ways to prevent the abuse, and
    • provide support for vulnerable users to regain power and control.

    The Process for Inclusive Safety is a tool to help you reach those goals ( Fig 5.1 ). It’s a method I developed in 2018 to capture the various techniques I was using when designing products with safety in mind. Whether you are creating an entirely new product or adding to an existing feature, the Process can help you make your product safe and inclusive. The Process includes five major areas of action:

    • Conducting research
    • Creating archetypes
    • Brainstorming problems
    • Designing solutions
    • Testing for safety

    The Process is meant to be flexible; in some situations, it won’t make sense for teams to employ every step. Use the parts that are relevant to your unique product and context; this is meant to be something you can fold into your existing design process.

    And please get in touch with me if you’ve used it, if you have ideas for improving it, or if you simply want to share an example of how it helped your team. It’s a living document that I hope will continue to be a helpful and practical tool that technologists can use in their day-to-day work.

    If you’re developing a product specifically for a vulnerable group or for people who have experienced trauma, such as an app for survivors of domestic violence, sexual assault, or addiction, make sure to read Chapter 7, which covers that situation explicitly and should be handled a bit differently. The guidelines below are for building safety into a more general product that will have a wide user base ( which, we know from data, will include people who should be protected from harm ). Chapter 7 focuses on products made especially for those who are most vulnerable and who have experienced trauma.

    Step 1: Conduct research

    Your design research should include a broad analysis of how your tech might be weaponized for abuse, as well as specific insights into the experiences of survivors and perpetrators of that type of abuse. At this stage, you and your team will investigate issues of interpersonal harm and abuse, and examine any other safety, security, or inclusivity issues that might be a concern for your product or service, like data security, biased algorithms, and harassment.

    Broad research

    Your project should begin with broad, general research into similar products and the issues around safety and ethics that have already been reported. For instance, a team building a smart home device would be wise to understand the many ways that existing smart home devices have been misused as tools of abuse. If your product will involve AI, seek to understand the potential for racism and other issues that have been reported in existing AI products. Nearly all forms of technology have potential or actual harms that have been covered in academic writing or in the media. Google Scholar is a useful tool for finding these studies.

    Specific research: Survivors

    When possible and appropriate, include direct research ( surveys and interviews ) with people who are experts in the forms of harm you have uncovered. Interviewing professionals who work in the space first will give you a better understanding of the subject and leave you better positioned to interview survivors without re-traumatizing them. If you’ve uncovered possible domestic violence issues, for example, the experts you’ll want to speak with are survivors themselves, as well as workers at domestic violence hotlines, shelters, other related nonprofits, and lawyers.

    It is crucial to pay people for their knowledge and lived experiences, especially when interviewing survivors of any kind of trauma. Don’t ask survivors to share their trauma for free, as this is exploitative. You should always make the offer in the initial ask, even though some survivors may not want to be paid. An alternative to payment is to donate to an organization working against the type of violence that the interviewee experienced. In Chapter 6, we’ll discuss more about how to appropriately interview survivors.

    Specific research: Abusers

    Teams aiming to design for safety are unlikely to be able to interview self-proclaimed abusers or people who have broken laws in areas like hacking. Rather than making this a goal, try to get at this angle in your general research: seek out reporting and writing that describes how abusers and bad actors use technology to harm and silence others, and how they justify or explain the abuse.

    Step 2: Create archetypes

    After you’ve finished conducting your research, use it to create abuser and survivor archetypes. Archetypes are not personas, as they’re not based on real people that you interviewed and surveyed. Instead, they are based on your research into potential safety problems, much like when we design for accessibility: we don’t need to have blind or deaf people in our interview pool to design with them in mind. Instead, we base those designs on existing research into what this group needs. While archetypes are broad and more generalized, personas typically represent real users and contain many details.

    The abuser archetype is someone who will look at the product as a tool to perform harm ( Fig 5.2 ). They may be attempting to harm someone they don’t know by using surveillance or anonymous harassment, or they may be trying to control, monitor, abuse, or otherwise torment someone they know.

    The survivor archetype describes a person who is being abused with the product. There are various situations to consider in terms of the archetype’s understanding of the abuse and how to put an end to it: Do they need proof of abuse they already suspect is happening, or are they unaware they’ve been targeted in the first place and need to be alerted ( Fig 5.3 )?

    To capture a range of different experiences, you might want to create multiple survivor archetypes. They may know that the abuse is happening but not be able to stop it, like when an abuser locks them out of IoT devices, or they know it’s happening but don’t know how, such as when a stalker keeps figuring out their location ( Fig 5.4). In your survivor archetype, include as many of these scenarios as you need. You’ll use these later on when you design solutions to help your survivor archetypes achieve their goals of preventing and ending abuse.

    It may be useful for you to create persona-like artifacts for your archetypes, such as the three examples shown. Focus on their objectives rather than the demographic details we frequently see in personas. The goals of the abuser will be to carry out the specific abuse you’ve identified, while the goals of the survivor will be to prevent abuse, understand that abuse is happening, make ongoing abuse stop, or regain control over the technology that’s being used for abuse. Later, you’ll think about how to help the survivor’s goals and prevent the abuser’s goals.
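If your team keeps research artifacts in a structured form, an archetype can be as simple as a record of a name, a role, and goals. A small Python sketch; the fields and example goals are invented for illustration, not taken from the book’s figures:

```python
from dataclasses import dataclass, field

# Illustrative persona-like artifact for safety archetypes. The roles and
# example goals below are invented, not taken from the book's figures.
@dataclass
class SafetyArchetype:
    name: str
    role: str                          # "abuser" or "survivor"
    goals: list = field(default_factory=list)

abuser = SafetyArchetype(
    name="Controlling ex-partner",
    role="abuser",
    goals=["monitor the survivor's location",
           "lock them out of shared devices"],
)

survivor = SafetyArchetype(
    name="Targeted user",
    role="survivor",
    goals=["learn whether monitoring is happening",
           "regain control of devices"],
)
```

Keeping the artifact goal-centered, rather than demographic-centered, mirrors the advice above: the abuser’s goals are the harms to design against, and the survivor’s goals are the outcomes to design for.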

    And while the “abuser/survivor” model fits most cases, it doesn’t fit all, so modify it as you need to. For instance, if you found a security flaw, such as the ability for someone to talk to children through a home camera system, the malicious hacker would be the abuser archetype, and the child’s parents would be the survivor archetype.

    Step 3: Brainstorm problems

    After creating archetypes, brainstorm novel abuse cases and safety concerns. “Novel” means things not found in your research; you’re trying to identify completely new safety issues that are unique to your product or service. The objective of this step is to be exhaustive in identifying the potential harms your product could cause. You aren’t worrying about how to prevent the harm yet—that comes in the next step.

    How might your product be used for harm beyond what you’ve already identified in your research? I recommend setting aside at least a few hours with your team for this process.

    If you’re looking for somewhere to start, try conducting a Black Mirror brainstorm. This exercise is based on the show Black Mirror, which features stories about the dark possibilities of technology. Try to come up with the most outrageous, horrible, out-of-control ways your product could be used for harm, as though it were an episode of the show. When I’ve led Black Mirror brainstorms, participants usually end up having a good deal of fun ( which I think is great—it’s okay to have fun when designing for safety! ). I recommend time-boxing a Black Mirror brainstorm to the first half hour, then dialing it back and spending the remaining time brainstorming more realistic forms of harm.

    After you’ve identified as many opportunities for abuse as possible, you may still not feel confident that you’ve uncovered every potential form of harm. A healthy amount of anxiety is normal when you do this type of work. It’s common for teams designing for safety to worry, “Have we really identified every possible harm? What if we’ve missed something?” If you’ve spent at least four hours coming up with ways your product could be used for harm and have run out of ideas, go to the next step.

    It’s impossible to guarantee that you’ve uncovered everything; instead of striving for 100 percent assurance, acknowledge that you’ve thoroughly considered the problem and commit to prioritizing safety going forward. Once your product is released, your users may identify new issues that you missed; aim to receive that feedback graciously and course-correct quickly.

    Step 4: Create solutions

    At this point, you should have a list of ways your product can be used for harm, as well as survivor and abuser archetypes describing opposing user goals. Next, it’s time to figure out how to design in a way that thwarts the abuser’s goals and supports the survivor’s goals. This step is a good one to fold into the existing parts of your design process where you’re proposing solutions for the various problems your research uncovered.

    As you think about how to thwart your abuser archetype and support your survivor archetype, ask yourself the following questions:

    • Can you design your product in such a way that the identified harm cannot happen in the first place? If not, what barriers can you place to stop the harm from occurring?
    • How can you make the victim aware that abuse is happening through your product?
    • How can you assist the victim in understanding what they need to do to stop the problem?
    • Can you identify any types of user activity that would indicate some form of harm or abuse? Could your product provide support for the user?

    In some products, it’s possible to proactively recognize that harm is happening. For instance, a pregnancy app could be designed to allow users to report being the victim of an assault, which could trigger an offer of resources from local and national organizations. This sort of proactiveness is not always possible, but it’s worth taking a half hour to discuss whether any type of user activity would indicate some form of harm or abuse, and how your product could assist the user in receiving help in a safe manner.

    Nevertheless, be careful: you don’t want to do anything that could harm a user if their devices are being watched. If you do offer some kind of proactive help, always make it voluntary, and think through other safety issues, such as the need to keep the user in-app in case an abuser is checking their search history. In the next chapter, we’ll examine a good illustration of this.

    Step 5: Test for safety

    The final step is to test your prototypes from the perspective of your archetypes: the person who wants to weaponize the product for harm and the survivor of that harm who needs to regain control over the technology. Just like any other kind of product testing, at this point you’ll aim to rigorously test out your safety solutions so that you can identify gaps and correct them, validate that your designs will help keep your users safe, and feel more confident releasing your product into the world.

    Safety testing can be performed alongside usability testing. If you’re at a company that doesn’t do usability testing, you might be able to use safety testing to cleverly accomplish both: a user who goes through your design attempting to weaponize the product against someone else can also be encouraged to point out interactions or other elements of the design that don’t make sense to them.

    You can conduct safety testing on your final prototype or on a product that has already been released. There’s nothing wrong with testing an existing product that wasn’t designed with safety goals in mind from the outset—“retrofitting” it for safety is a good thing to do.

    Keep in mind that testing for safety involves testing from the perspective of both an abuser and a survivor, although it might not always make sense to test for both. And if you made multiple survivor archetypes to capture multiple scenarios, you’ll want to test from the perspective of each one.

    As with other usability testing, you as the designer are probably too closely acquainted with the product and its design at this point to do the testing yourself; you know the product too well. Instead, set up safety testing as you would any other usability test: find someone who is not familiar with the product and its design, set the scene, give them a task, encourage them to think out loud, and observe how they attempt to complete it.

    Abuse testing

    The goal of abuse testing is to understand how easy it is for someone to weaponize your product for harm. In contrast to usability testing, here you want the tester to fail: it should be impossible, or at least difficult, for them to accomplish their goal. Reference the goals in the abuser archetype you created earlier, and use your product in an attempt to achieve them.

    For instance, for a fitness app with GPS-enabled location features, we can imagine an abuser archetype whose goal is to discover where his ex-girlfriend currently lives. With this goal in mind, you’d try everything possible to figure out the location of another user who has their privacy settings enabled. You might try to follow her running routes, view any information she has on her profile, view any information she has made private, and check out the profiles of any other users who are connected to her account, such as her followers.

    If by the end of this you’ve managed to uncover some of her location data, despite her having set her profile to private, you now know that your product enables stalking. Your next step is to return to step 4 and figure out how to prevent this. You may need to repeat the cycle of designing solutions and testing them more than once.

    Survivor testing

    The goal of survivor testing is to identify how your product can give information and power to the survivor. Depending on the product or context, it might not always make sense: thwarting the abuser archetype’s attempt to stalk someone also satisfies the survivor archetype’s goal of not being stalked, so separate testing from the survivor’s perspective wouldn’t be needed.

    However, there are instances where it makes sense. For example, for a smart thermostat, one of a survivor archetype’s goals would be to understand who or what is changing the temperature when they aren’t doing it themselves. You could test this by looking for the thermostat’s history log and checking whether it shows usernames, actions, and times. If you can’t find that information, you’d need to return to step 4 and do more work.

    Another goal might be regaining control of the thermostat once the survivor realizes the abuser is remotely changing its settings. For your test, this would involve trying to figure out how to do exactly that: are there instructions explaining how to remove a user and change the password, and are they easy to find? This test might again reveal that more work is needed to make it clear to users how they can regain control of the device or account.

    Stress testing

    To make your product more inclusive and compassionate, consider adding stress testing. This idea comes from Eric Meyer and Sara Wachter-Boettcher’s Design for Real Life. The authors point out that personas typically center people who are having a good day, but real users are often anxious, stressed out, having a bad day, or even experiencing tragedy. They call these “stress cases,” and analyzing how your product responds to users in stressful circumstances can reveal areas where your design lacks compassion. Design for Real Life goes into more detail about what it looks like to incorporate stress cases into your design, as well as many other great tactics for compassionate design.