Blog

  • Beware the Cut ‘n’ Paste Persona

    Beware the Cut ‘n’ Paste Persona

    This Person Does Not Exist is a website where a machine learning algorithm generates faces of people who don’t exist: it takes real photos and recombines them into fake faces. We recently scrolled past a LinkedIn post claiming this website might be helpful “if you are developing a persona and looking for a photo.”

    We agree that computer-generated faces may be excellent candidates for persona photos, but not for the reason you might think. Ironically, the website highlights the core issue with this very common design method: the person does not exist. Like the photos, personas are deliberately fabricated; information is combined into an isolated snapshot that is detached from reality and removed from its original context.

    Yet, strangely enough, designers use personas to inform their designs for the real world.

    A step back: personas

    Most designers have created, used, or encountered personas at least once in their careers. In their article “Personas – A Simple Introduction,” the Interaction Design Foundation defines personas as “fictional characters, which you create based upon your research in order to represent the different user types that might use your service, product, site, or brand.” Personas typically consist of a name, a profile picture, quotes, demographics, goals, needs, behaviors in relation to a particular service or product, emotions, and motivations (for an example, see Creative Companion’s Persona Core Poster). According to design agency Designit, the goal of personas is to “make the research relatable, [and] easy to communicate, digest, reference, and apply to product and service development.”

    The decontextualization of personas

    Personas are popular because they make “dry” research data more relatable and more human. However, this practice caps the depth of the data analysis: the researched users are stripped from their specific contexts. As a result, personas don’t capture the crucial factors that would let you understand users’ decision-making or relate to their thoughts and behavior; they lack stories. You know what the persona does, but you don’t know why. You end up with user representations that are less than human.

    This “decontextualization” of personas happens in four ways, which we’ll discuss below.

    Personas assume that people are static

    Here’s a painfully obvious truth: people are not a fixed set of features, despite the fact that many businesses still try to recruit and retain their employees and customers using outdated personality tests (looking at you, Myers-Briggs). You act, think, and feel differently according to the situations you experience. You may behave helpfully toward some people and harshly toward others, because you come across differently to everyone. And you constantly change your mind about the choices you’ve made.

    Modern psychology agrees that while people generally behave according to certain patterns, it’s actually a combination of background and context that determines how they act and make decisions. The context, including the environment, the influence of other people, your mood, and the whole story that led up to a situation, determines the kind of person you are at each particular moment.

    In their attempt to simplify reality, personas do not account for this variability; instead, they present a user as a fixed set of features. Like personality tests, personas snatch people away from real life. Worse, people are reduced to a label, “that kind of person,” with no room to exercise their innate flexibility. This practice fuels stereotypes, diminishes diversity, and doesn’t reflect reality.

    Personas focus on individuals, not the environment

    In the real world, you design for a context, not for an individual. Each person lives within a family, a community, an ecosystem, where there are environmental, political, and social factors to consider. A design is never used by a single user in isolation; you create a product intended to be used amid an ever-changing, multi-actor environment. Yet personas portray only the user; they don’t explicitly describe how the user relates to that environment.

    Would you always make the same decision over and over again? Maybe you’re a committed vegan but still decide to buy some meat when your relatives visit. Your decisions, behaviors, opinions, and statements are not consistent; they are highly contextual, because they depend on a range of circumstances and variables. The persona that “represents” you wouldn’t take this dependency into account, because it doesn’t specify the premises of your decisions. It doesn’t give a justification for your behavior. Personas embody the well-known fundamental attribution error: we too often attribute other people’s behavior to their personalities rather than to their circumstances.

    As the Interaction Design Foundation notes, personas are usually placed in a scenario, a “specific context with a problem they want to or have to solve.” Does that mean context actually is considered? Unfortunately, the common practice is to invent a fictional context and derive the character’s behavior from that fiction. How could you possibly understand how the people you want to represent would behave in new circumstances when you haven’t even fully investigated and understood their current context?

    Personas are meaningless averages

    As stated in Shlomo Goltz’s introductory article on Smashing Magazine, “a persona is depicted as a specific person but is not a real individual; rather, it is synthesized from observations of many people.” A well-known criticism of this aspect of personas is the US Air Force cockpit study: planes were designed around the average of 140 physical dimensions measured across their pilots, yet not a single pilot actually fit that average seat.

    The same limitation applies to the mental aspects of people. Have you ever heard a famous person complain that something they said was taken out of context? “They quoted my words, but I didn’t mean it that way.” The celebrity’s statement was reported literally, but the reporter failed to explain the context around it and didn’t describe the non-verbal expressions, so the intended meaning was lost. The same thing happens in research: you extract someone’s statement (or goal, need, or emotion) from the specific context in which it was expressed and report it as an isolated finding.

    But personas go a step further, joining one decontextualized finding with another decontextualized finding from somebody else. The resulting set of findings frequently doesn’t make sense: it is unclear or even contradictory, because it lacks the underlying reasons why and how each finding came about. It lacks meaning. And the persona doesn’t give you the full background of the person (or people) needed to uncover that meaning: you would need to dive back into the raw data behind every single persona item to find it. What, then, is the persona’s meaningful contribution?

    The realism of personas is deceptive

    To a certain extent, designers realize that a persona is a lifeless average. To overcome this, they invent and add “relatable” details to make personas resemble real people. Nothing captures the absurdity of this better than a phrase from the Interaction Design Foundation: “Add a few fictional personal details to make the persona a realistic character.” In other words, you add non-realism in an attempt to create more realism. If “John Doe” is an abstract representation of research findings, wouldn’t it be much more responsible to emphasize that John is only an abstraction, rather than purposefully conceal it? Let’s call artificial what is artificial.

    It’s the finishing touch of a persona’s decontextualization: after having assumed that people’s personalities are fixed, dismissed the importance of their environment, and hidden meaning by joining isolated, non-generalizable findings, designers invent new context to create (their own) meaning. And as with everything they create, they do so by introducing a number of biases. As Designit puts it, as designers we can “contextualize [the persona] based on our reality and experience. We create connections that are familiar to us.” With each new detail added, this practice furthers stereotypes, fails to reflect real-world diversity, and drifts even further from people’s actual reality.

    To conduct effective design research, we must report the “as-is” reality and make it relatable for our audience, so that everyone can use their own empathy and form their own interpretation and emotional response.

    Dynamic Selves: The alternative to personas

    What should we do instead if we shouldn’t use personas?

    Designit has proposed using Mindsets instead of personas. Each Mindset is a “spectrum of attitudes and emotional responses that different people have within the same context or life experience.” It challenges designers to avoid getting fixated on just one individual’s way of being. Unfortunately, while this is a step in the right direction, the proposal doesn’t consider that people are part of a system that shapes their behavior, personality, and, yes, mindset. Mindsets, too, are not absolute but change with the situation. What determines a particular Mindset remains unexplained.

    Another option comes from Margaret P., author of the article “Kill Your Personas,” who has argued for replacing personas with persona spectrums: ranges of user abilities. For example, a visual impairment could be permanent (blindness), temporary (recovery from eye surgery), or situational (screen glare). Because they are based on the idea that the context, not the personality, is the pattern, persona spectrums are very useful for more inclusive and context-based design. Their main limitation, however, is a purely functional perspective on users, which lacks the relatability of a real person drawn from within the spectrum.

    In developing an alternative to personas, we aim to make the standard design process context-based. Contexts are generalizable and have patterns that we can identify, just as we previously tried to do with people. So how do we uncover these patterns? How do we ensure truly context-based design?

    Understand real people in a variety of contexts

    Nothing is more relatable and inspiring than reality. Therefore, we have to understand real individuals in their multifaceted contexts, and use this understanding to fuel our design. We call this approach Dynamic Selves.

    Let’s take a look at how this approach works, based on an example from a recent project in which we studied Italians’ habits around energy consumption. We drafted a design research plan aimed at investigating people’s attitudes toward energy consumption and sustainable behavior, with a focus on smart thermostats.

    1. Select the appropriate sample.

    When we argue against personas, we’re often challenged with questions such as “Where are you going to find a single person who encapsulates all the information from one of these advanced personas?” The simple answer is: you don’t have to. You don’t need to know a lot about everyone to gain deep and meaningful insights.

    In qualitative research, validity does not derive from quantity but from accurate sampling: you choose the individuals who best represent the “population” you’re designing for. If this sample is chosen wisely and you gain a deep understanding of the sampled people, you can infer how the rest of the population thinks and acts. There’s no need to study seven Susans and five Yuriys; one of each will do.

    In the same way, you don’t need to understand Susan in fifteen different contexts. Once you have seen her in a few different situations, you have understood how she operates: not Susan as an atomic being, but Susan in relation to her surrounding environment, and how she might act, feel, and think in different circumstances.

    Because each selected person is representative of a part of the total population you’re researching, each already constitutes an abstraction of a larger group of people in similar contexts. That’s exactly why each person should be portrayed as an individual: you want to avoid abstractions of abstractions! These selected people need to be understood and shown in their full expression, remaining within their microcosmos; if you want to identify patterns, focus on identifying patterns in contexts.

    The question remains, though: how do you select a representative sample? First, consider the target audience of the product or service you are designing. It might be helpful to examine the company’s objectives and strategy, the current customer base, and/or a potential future target audience.

    In our example project, we were designing an application for people who own a smart thermostat. In the future, everyone might have a smart thermostat at home; today, however, only early adopters own one. To build a meaningful sample, we needed to understand why these early adopters became early adopters. We therefore recruited by asking people to explain why and how they obtained a smart thermostat. There were those who had decided to purchase one themselves, those who had been influenced by others to do so, and those who had simply found one already installed in their home. So we selected representatives of these three situations, from different age groups and geographical locations, with an equal balance of tech-savvy and non-tech-savvy participants.

    2. Conduct your research

    After having chosen and recruited your sample, conduct your research using ethnographic methodologies. This will enrich your qualitative data with examples and anecdotes. In our example project, given COVID-19 restrictions, we converted planned in-home ethnographic research into remote family interviews followed by diary studies.

    To gain an in-depth understanding of attitudes and decision-making trade-offs, the research focus was deliberately widened beyond the interviewee to include the whole family. Each interviewee would tell a story that became much richer and more precise with the additions and corrections of wives, husbands, children, or occasionally even pets. We also paid attention to relationships with other meaningful people (such as colleagues or distant relatives) and to the behaviors that stemmed from those relationships. This wide research focus allowed us to shape a vivid mental image of dynamic situations with multiple actors.

    It is crucial that the research scope remain broad enough to cover all potential actors. Therefore, it typically works best to define broad research areas with open questions. Interviews are best set up in a semi-structured way, where follow-up questions dive into topics the interviewee mentions spontaneously. This open-minded “plan to be surprised” yields the most insightful findings. When we asked one of our participants how his family controlled the house temperature, he responded: “My wife hasn’t installed the thermostat’s app; she uses WhatsApp instead. If she wants to turn on the heater and she’s not home, she texts me. She uses me as her thermostat.”

    3. Analysis: Create the Dynamic Selves

    During the research analysis, you begin to represent each individual with several Dynamic Selves, each “Self” representing one of the contexts you have investigated. Each Dynamic Self is built around a quote, supported by a photo and a few relevant demographics that help illustrate the larger context. The research findings themselves will show which demographics are relevant to include. In our case, because the research focused on families and their lifestyles to understand their thermal-regulation needs, the important demographics were family type, number and type of houses owned, economic status, and technological maturity. The individuals’ names and ages are optional; we included them to ease the stakeholders’ transition from personas and to let them connect multiple actions and contexts to the same person.

    To capture exact quotes, interviews need to be video-recorded and notes need to be taken verbatim as much as possible. This is crucial to conveying the fullness of each participant’s various Selves. Using real people and real photos of their settings is essential to creating authentic Selves. Ideally, these photos should come directly from field research, but an evocative and representative image will work too, as long as it’s realistic and depicts meaningful actions you associate with your participants. One of our interviewees, for instance, shared stories of the mountain home where he used to spend weekends with his family, so we depicted him taking a hike with his young daughter.

    At the end of the research analysis, we displayed all of the Selves’ “cards” on a single canvas, grouped by activity. Each card represented a situation, illustrated by a quote and a distinctive image. Every participant had several cards about themselves.

    4. Identify potential design challenges

    Once you have extracted the most significant quotes from the interview transcripts and diaries and written them down as Self cards, you will notice patterns beginning to emerge. These patterns highlight the opportunity areas for new products, new functionalities, and new services: in short, for new design.

    In our example project, a particularly intriguing insight emerged around the concept of humidity. We became aware of how important humidity monitoring is for health, and how an environment that is too dry or too humid can cause respiratory problems or worsen existing ones. This highlighted a big opportunity for our client: educating users on this concept and becoming a health advisor.

    Benefits of Dynamic Selves

    When you conduct research with the Dynamic Selves approach, you start to notice the peculiar social relations people have, the specific situations they face and the consequences of their actions, and the ever-changing environments that surround them. In our thermostat project, we came to know one of the participants, Davide, as a boyfriend, a dog lover, and a tech enthusiast.

    Davide is a person we might once have reduced to the label “tech enthusiast.” But people who love technology can have families or be single, be wealthy or poor, and their motivations and priorities when deciding to purchase a new thermostat can be completely opposite depending on these different frames.

    Once you have understood in detail the underlying reasons for Davide’s behavior, you can generalize how he would act in a different context. Using your understanding of him, you can infer what he would think and do in the situations (or scenarios) you design for.

    The Dynamic Selves approach resolves the conflicted dual purpose of personas, to summarize and to empathize at the same time, by separating your research summary from the people you’re seeking to empathize with. This separation matters because empathy is affected by scale: we feel the deepest empathy for individuals we can relate to, and far less for abstract groups.

    If you take a real person as inspiration for your design, you no longer need to create an artificial character. No more inventing details to make the character “realistic,” and no more adding unnecessary bias: this person simply exists in real life. Indeed, in our experience, personas quickly become nothing more than a name in priority guides and prototype screens, because everyone knows these characters don’t really exist.

    Another significant benefit of Dynamic Selves is that it raises the stakes of your work: if you mess up your design, someone whom you and the team know and have met will suffer the consequences. This tends to prompt extra care in everyday design decisions and may keep you from taking shortcuts.

    And finally, real people in their specific contexts are a better basis for anecdotal storytelling and are therefore more persuasive. Documenting real research makes this possible, and it reinforces your design arguments with more urgency and weight: “When I met Alessandra, the conditions of her workplace struck me. Noise, bad ergonomics, lack of light, you name it. I’m worried that her life will become more complicated if we go with this functionality.”

    Conclusion

    In their article on Mindsets, Designit stated that “design thinking tools offer a shortcut to deal with reality’s complexities, but this process of simplification can sometimes flatten out people’s lives into a few general characteristics.” Unfortunately, personas have been the main culprits in this crime of oversimplification. They fail to account for the complex nature of our users’ decision-making processes and ignore the fact that people are immersed in environments.

    Design needs simplification, but not generalization. You have to look at the research elements that stand out: the sentences that captured your attention, the images that struck you, the sounds that linger. Don’t flatten them out; use them to describe people in all of their contexts. Insights, like people, come with a context, and taking them out of that context strips away their meaning.

    It’s high time for design to move away from fiction, and embrace reality—in its messy, surprising, and unquantifiable beauty—as our guide and inspiration.

  • That’s Not My Burnout

    That’s Not My Burnout

    Do you find it hard to relate when you read about people fading away as they experience burnout? Do you feel like your experience is invisible to the world because your burnout looks different? When stress starts to press down on us, our core comes through more. Beautiful, content souls quiet down and fade into the remote, distracted stress we’ve all seen. But some of us, those with fires constantly burning at the edges of our core, get hotter. The blaze in my brain grows. I double down, triple down, burn hotter and hotter in an effort to overcome the fatigue. I don’t fade; I am engulfed in a zealous stress.

    So what on earth is zealous burnout?

    Envision a woman who is determined to accomplish everything. She has two wonderful children whom she, along with her husband who also works full-time, is homeschooling during a pandemic. Her job is demanding, and she loves the people she works with. She wakes up early to get some movement in (or, more often, to catch up on work), makes breakfast while the kids eat theirs, and then works from a spot near her fourth grader so she can keep an eye on schoolwork while balancing clients, tasks, and budgets. Sound like a lot? Even with a supportive crew at home and at work, it is.

    This person clearly has too much going on and could use some self-care. But no, she doesn’t have time for that. In fact, she begins to feel as though she’s dropping balls. Not enough is getting done. There’s not enough of her to go around; she is trying to split her attention in two all the time, all day, every day. She begins to doubt herself. And her internal narrative grows more and more critical as those feelings sink in.

    Suddenly she KNOWS what she needs to do! She should work harder.

    This is a dangerous and slippery slope. See why? Because when she doesn’t reach that new goal, the narrative only gets worse. Now she really is failing. She isn’t getting enough done. She is not enough. Sure, she might be failing her family, but she’ll find more to do. She sleeps less, pauses less, all in an attempt to do more. She is caught in a cycle of trying to prove herself to herself without ever succeeding, without ever feeling “enough.”

    Well, yeah, that’s what zealous burnout looks like for me. It doesn’t happen overnight in some grand gesture; rather, it builds gradually over weeks and months. Not a person losing focus and fading out, but a burning that seems to keep speeding up. I ramp up and up and up… and then I simply stop.

    Being “the one who can”

    The things that shape us are interesting. Through the lens of childhood, I watched the worries, struggles, and sacrifices of someone who had to make it all work without having much. I never truly went without, and even got an extra here or there, because my mother was so capable and my father was so kind.

    Growing up, I didn’t feel shame when my mom paid with food stamps; in fact, I would have gladly sparked debates on the subject, verbally eviscerating anyone who dared to criticize the disabled person who was trying to ensure all of our needs were met with so little. But as a child, I watched how the worry of not making ends meet affected the people I love. Because I was “the one who could,” I took on many of the physical tasks around the house as the non-disabled person, to make our lives a little easier. I soon learned that being the one who does means putting more of yourself in. I learned early that when something frightens me, I can double down and work harder to make it better. I can impact the situation. As an adult, people who have seen this in me have told me I seem brave, but truth be told, I’m not. If I seem brave, it’s because this behavior was forged from someone else’s fears.

    And here I am, more than 30 years later, still feeling the pull to put my mind to work on the overwhelming pile of things to do, still believing that I can. The more there is to do, the more driven I feel to prove that I can make it all happen by working harder, taking on more responsibilities, and doing more.

    I do not judge people who struggle financially, because I have seen how powerful that tide is: it carries you along with it. I fully realize that I have had the opportunity to spare my own children many of those difficulties. That said, I am also “the one who can,” who believes she should, so I would consider it my own failure if I struggled to make ends meet for my own home. Though I am supported and educated, and much of this is due to great privilege, I’ll allow myself the hubris of claiming that my choices were wise and helped spark that success. My sense of self is built on the notion that I am “the one who can,” compelled to accomplish the most. I can choose to stop, and with some pretty pointed cold water splashed in my face, I have made that choice before. But I don’t always choose to stop, so I push on, driven by a fear so present in me that I hardly ever see it until I’m completely worn out.

    Why all this backstory? You see, burnout is a fickle thing. Over the years, I’ve read and heard a lot about it. Burnout is a real phenomenon. Especially today, with COVID, many of us are balancing more than we ever have before, all at once. It’s challenging, and so many wonderful people are affected by the fatigue, the shutting down, the fading away. There are important articles that, in my opinion, speak to the majority of people out there, but not to me. That’s not what my burnout looks like.

    The perilous invisibility of zealous burnout

    In many workplaces, extra work, more energy, and general zealous commitment are seen as an asset (and sometimes that’s all it is). Leaders see someone rising to challenges, not someone stuck in their fear. Some well-intentioned organizations have safeguards in place to protect their teams from burnout. But in situations like this, those alarms don’t always go off, and team members are surprised and saddened when the inevitable stop occurs. They may even feel betrayed.

    This is even more true for parents. Mothers, statistically speaking, are praised for being so on top of it all when they can work, shuttle kids to after-school activities, practice self-care in the form of diet and exercise, and still meet friends for coffee or wine. During COVID, many of us have watched endless streaming episodes in which the female hero is stretched thin but is strong and interesting and can do it all. It’s a “very special episode” when she breaks down, cries in the bathroom, reluctantly admits she needs help, and pauses for just a moment. Meanwhile, in real life, countless people are hiding their tears or doom-scrolling to escape. Although we know the media is fiction meant to entertain us, a large portion of society has been persuaded that this is what we should aim for.

    Women and burnout

    I love men. And though I don’t love every man (heads up: I don’t love every woman or nonbinary person either), I think there is a wonderful range of people who fit that particular binary gender.

    That said, women are still more often at risk of burnout than their male counterparts, especially in these COVID-stressed times. Mothers who work feel the pressure to do everything while giving absolutely everything. Mothers who are not employed feel they need to do more to “justify” their lack of traditional employment. Women who are not mothers often feel the need to do even more because they don’t have that extra pressure at home. It’s so ingrained in our culture, so vicious and systemic, that we frequently don’t realize how much pressure we place on ourselves and others.

    And the costs go beyond happiness. A decade ago, Harvard Health Publishing released a study that “uncovered strong links between women’s job stress and cardiovascular disease.” According to the CDC, “Heart disease is the leading cause of death for women in the United States, killing 299,578 women in 2017—or roughly 1 in every 5 female deaths.”

    From what I’ve read, this link between work stress and health is more dangerous for women than for their non-female counterparts.

    But what if your burnout isn’t like that either?

    That might not be you either. After all, we are all unique, and our responses to stressors are unique too. It’s part of what makes us human. Rather than fixating on what burnout is supposed to look like, learn to recognize it in yourself. Here are a few questions I sometimes ask friends when I’m worried about them.

    Are you happy? This straightforward question ought to be the first one you ask. Even if you’re burning out doing all the things you love, chances are that as you get closer to burnout, you’ll stop getting as much joy from them.

    Do you feel empowered to say no? I’ve observed, in myself and others, that someone heading toward burnout no longer feels able to turn things down. Even people who don’t “ramp up” feel pressured to say yes for fear of disappointing others.

    What are three things you’ve done for yourself lately? Another thing to keep in mind is that we all tend to stop doing things for ourselves as burnout builds: anything from skipping chats with friends to skipping showers or eating poorly. These can be red flags.

    Are you making excuses? Many of us try to explain away feeling worn out. Over and over I have heard, “It’s just crunch time,” “As soon as I do this one thing, it will all be better,” and “Well, I should be able to handle this, so I’ll figure it out.” And it might really be crunch time, a single goal, or a skill you need to master. That happens; life happens. BUT if it never stops, be honest with yourself. If you’ve been working 50-hour weeks since January, maybe it’s not crunch time; maybe it’s a bad situation that’s burning you out.

    Do you have a strategy for overcoming this feeling? If something is truly temporary and you do need to just push through, then it should have an exit route with a defined end.

    Take the time to listen to your friend in the same way. Be honest, allow yourself to be uncomfortable, and break the thought cycles that prevent you from healing.

    So what comes next?

    What I’ve just described may be a different path to burnout, but it’s still burnout. There are well-established approaches to working through it:

    • Get enough sleep.
    • Eat well.
    • Work out.
    • Go outside.
    • Take a break.
    • Overall, practice self-care.

    I find those challenging because they feel like more chores. If I’m in the burnout cycle, doing any of the above feels like a waste. The narrative goes: if I’m already failing, why would I take time for myself when I’m dropping all those other balls? People need me, don’t they?

    If you’re deep in the cycle, your inner voice might already be pretty harsh. If you need to, tell yourself that you’re taking care of the person your people depend on. And if your roles are part of what’s wearing you down, use them to defend the time you spend working on yourself.

    To remind myself of the flight attendant’s advice to put on your own mask first, I’ve come up with a few things I do when I start to feel myself sliding toward burnout.

    Cook an elaborate meal for someone!

    Okay, since I’m a “food-focused” person, this has always been my go-to. In my home, there are countless tales of people coming into the kitchen, turning right around, and leaving when they noticed I was “chopping angrily.” But it’s more than that, and you should give it a try. Seriously. If you don’t feel like making time for yourself, make it a priority for someone else. Most of us work in a digital world, so cooking can fill all of your senses and force you to be in the moment with every way you perceive the world. It can clear your head and give you perspective. I’ve been known to pick a location on a map and cook food from it (thank you, Pinterest). I love cooking Indian food: the smells are warm, the bread needs just enough kneading to keep my hands busy, and the process takes real attention because it’s not what I grew up making. And in the end, everybody wins!

    Vent like a sniveling jerk.

    Be careful with this one!

    Over the past few years, I’ve made an effort to practice more gratitude, and I’m aware of the real benefits of doing so. Having said that, sometimes you just need to let it all out—even the ugly parts. Hell, I’m a big fan of not sugarcoating our lives, and sometimes that means that to get past the big pile of poop, you’re gonna wanna complain about it a bit.

    When that’s what’s needed, turn to a trusted friend and unleash some pure verbal diarrhea, airing all your concerns. You need to trust this friend not to judge you, to feel your pain, and, most importantly, to tell you to extract your head from your own rectal cavity. Seriously, it’s about getting a reality check! One of the things I admire most about my husband is how he can boil things down to the simplest terms—even if I sometimes only appreciate it after the fact. We’re spending our lives together, and he has shown his devotion, love, and acceptance of me this way; I couldn’t be more grateful. It has also, of course, meant telling me to remove my head from that rectal cavity. Again, typically appreciated in retrospect.

    Grab a book!

    There are many books out there that aren’t so much self-help as they are people just like you sharing their stories and how they’ve come to find greater balance. You might discover something that resonates with you. Among the titles that have stood out to me are:

    • Thrive by Arianna Huffington
    • Tools of Titans by Tim Ferriss
    • Girl, Stop Apologizing by Rachel Hollis
    • Dare to Lead by Brené Brown

    Or, a tactic I enjoy: read or listen to a book that is NOT related to work-life balance. I’ve read the following books, and I think they helped balance me out because my mind was engaged with their subjects rather than whizzing around:

    • The Drunken Botanist by Amy Stewart
    • Superlife by Darin Olien
    • A Brief History of Every Person Who Ever Lived by Adam Rutherford
    • Gaia’s Garden by Toby Hemenway

    If reading isn’t your thing, find a topic on YouTube or subscribe to a podcast. I’ve watched a lot of videos on gardening and permaculture, and I’ve even learned about raising chickens and ducks. For the record, I do not have a particularly large food garden, nor do I own livestock of any kind… yet. Nothing in my life requires this knowledge; I just find the subjects interesting.

    Give yourself a break.

    You are never going to be perfect—hell, it would be boring if you were. It’s OK to be imperfect and broken. It’s human to be depressed, anxious, and sad. It’s OK to not do it all. Being imperfect is terrifying, but you can’t be brave without it.

    This is the most crucial part: give yourself permission to NOT do it all. You never promised to be everything to everyone at all times. We are stronger than the anxieties that drive us.

    It’s hard. It’s hard for me. Realizing that it’s okay to stop is part of what inspired me to write this. It’s okay to stop an unhealthy pattern—doing so might even help you and those around you. You can still be successful in life.

    I recently learned that we are all writing our eulogies with how we live our daily lives. Knowing that your eulogy won’t list your professional accomplishments, what will it say? What do you want it to say?

    Look, I understand that none of these ideas will “fix it,” and that’s not their intention. The only thing we control is how we react to what’s around us. These suggestions are meant to stop the spiral so that you’re empowered to address the underlying issues and choose your response. They work for me most of the time. Maybe they can help you too.

    Does this sound familiar?

    If this sounds familiar, it’s not just you. Don’t let your negative self-talk tell you that you “even burn out wrong.” It’s not wrong. If your drivers are anything like mine, this need to do more comes from a place of love, determination, motivation, and other wonderful qualities that make up your amazing personality. You see, we’re going to be fine. The lives that unfold before us might never look like the story in our heads—that idea of “perfect” or “done” we’re chasing—but that’s OK. Really, when we stop and look around, usually the only judging eyes are in the mirror.

    Do you recall the Winnie the Pooh cartoon where Pooh eats so much at Rabbit’s house that he gets stuck in the doorway? I already identify a lot with Rabbit, so it came as no surprise to me when he abruptly declared the situation unacceptable. But do you remember what happened next? Rabbit made the most of the large butt in his kitchen by putting a shelf across poor Pooh’s legs and decorations on his back.

    At the end of the day, we are resourceful, and we know we can push ourselves when necessary—even when we’re exhausted, even when there’s a giant butt stuck in our kitchen. None of us needs to be afraid; we can manage whatever obstacle is put in front of us. And maybe that means we need to redefine success to make room for being comfortably uncomfortable humans, but that doesn’t sound so bad either.

    So, wherever you are right now, take a deep breath. Do what you need to do to get out of your head. And be grateful and kind to yourself.

  • Asynchronous Design Critique: Giving Feedback

    Asynchronous Design Critique: Giving Feedback

    Feedback, in whatever form it takes and whatever it may be called, is one of the most powerful soft skills at our disposal. It helps us collaborate to improve our designs while growing our own skills and perspectives.

    Feedback is also one of the most underestimated tools: by assuming that we’re already great at it, we settle, forgetting that it’s a skill that can be trained, grown, and improved. Bad feedback can cause conflict at work, lower motivation, and damage trust and teamwork over the long term. Good feedback can have a transformative effect.

    Practice is certainly a good way to improve, but learning gets even faster when it’s paired with a solid foundation that frames and focuses the practice. So, what are the fundamentals of giving good feedback? And how do remote and distributed workplaces change feedback?

    On the web, we can find a long history of asynchronous feedback: since the early days of open source, code has been shared and discussed on mailing lists. Developers and scrum masters discuss ideas in tickets, designers comment in their favorite design tools, and so on.

    Design critique is often the name given to the recurring, collaborative feedback that we use to improve our work. It shares many principles with feedback in general, but it also has some differences.

    The content

    The content of the feedback is the foundation of every effective critique, so we need to start there. There are many frameworks you can use to shape your content. The one I personally like best is this one from Lara Hogan, because it’s simple and actionable.

    This equation, originally meant for giving feedback to people, fits surprisingly well in a design critique because it ultimately answers the main questions we deal with: what, where, why, and how. Imagine that you’re giving feedback on a design that spans several screens, like an onboarding flow: there are a few screens, a flow diagram, and an outline of the decisions made. You notice an issue. If you keep the three components of the equation in mind—observation, impact, and question—you’ll have a mental model that helps you be more precise and effective.

    Here’s a comment that could be given as part of some feedback, and it might look reasonable at first glance: it seems to superficially fulfill the elements of the equation. But does it?

    The buttons’ styles and hierarchy seem off. Can you change them?

    Observation in design feedback doesn’t just mean pointing out which area of the interface your feedback touches; it also means offering a perspective that’s as specific as possible. Are you offering the user’s viewpoint? Your expert perspective? A business perspective? The project manager’s? A first-time user’s?

    When I see these two buttons, I expect one to go forward and the other to go back.

    Impact is about the why. Sometimes just pointing out a UI element might be enough if the issue is obvious, but more often than not, you should add an explanation of why you’re pointing it out.

    When I see these two buttons, I expect one to go forward and the other to go back. But this is the only screen where that happens; before, we just used a single button and an “×” to close. This seems to break the consistency of the flow.

    The question approach is intended to give open guidance, encouraging the designer to think critically while receiving the feedback. Notably, Lara’s equation includes a second approach—request—which instead points toward a specific solution. While that’s a viable option for feedback in general, in my experience, defaulting to the question approach in design critiques usually leads to the best solutions, because designers tend to be more comfortable when given an open space to explore.

    As an illustration, here’s the difference between the two. First, the question approach:

    When I see these two buttons, I expect one to go forward and the other to go back. But this is the only screen where that happens; before, we just used a single button and an “×” to close. This seems to break the consistency of the flow. Would it make sense to unify them?

    Or, for the request approach:

    When I see these two buttons, I expect one to go forward and the other to go back. But this is the only screen where that happens; before, we just used a single button and an “×” to close. This seems to break the consistency of the flow. Let’s make sure that all screens have the same pair of forward and back buttons.

    In some situations, it might also help to add the reason why you think the suggestion is better:

    When I see these two buttons, I expect one to go forward and the other to go back. But this is the only screen where that happens; before, we just used a single button and an “×” to close. This seems to break the consistency of the flow. Let’s make sure that all screens have the same two forward and back buttons so that users don’t get confused.

    Choosing between the request and question approaches can occasionally be a matter of personal preference. A while ago, I put a lot of effort into improving my feedback: I did rounds of anonymous feedback and reviewed my comments with other people. After a year of this work, I was getting positive responses—my feedback came across as effective and grounded… until I switched teams. Surprise surprise: on my new team, one person didn’t respond well to my critiques at all. The reason was that I had deliberately tried not to be prescriptive, because the people I had previously worked with preferred the open-ended question format over the request style. This person, however, preferred specific guidance. So for them, I changed my feedback to include requests.

    One comment that I heard come up a few times is that this kind of feedback is quite long, and it doesn’t seem very efficient. Yes, but also no. Let’s look at both sides.

    No, this style of feedback is actually efficient, because the length here is a byproduct of clarity, and spending time giving this kind of feedback can provide exactly enough information for a good fix. Moreover, if we zoom out, it can reduce misunderstandings and back-and-forth conversations down the line, increasing the overall effectiveness and efficiency of collaboration beyond the single comment. Consider the example above if the feedback were simply, “Let’s make sure that all screens have the same two forward and back buttons.” The designer receiving it wouldn’t have much to go by, so they might just apply the change. In later iterations the interface might change, or new features might be added, and perhaps that change would no longer make sense. Without the why, the designer might assume the change was about consistency—but what if it wasn’t? They could now hesitate, worried that changing the buttons again would be perceived as a regression.

    Yes, this style of feedback is not always necessary, because not every comment needs to be thorough—sometimes the change being suggested is minor, and sometimes the team has extensive shared knowledge that lets some of the whys be implied.

    Therefore, the equation above is meant to serve as a mnemonic to reflect on and improve the practice, rather than a strict template for feedback. Even after years of actively working on my critiques, I still go back to this formula from time to time and reflect on whether what I just wrote is effective.

    The tone

    Well-developed content forms the basis of good feedback, but it’s not enough on its own. The soft skills of the person giving the critique can multiply the chances that the feedback will be well received and understood. It has been shown that feedback delivered in a positive way is far more likely to lead to sustained change, and tone alone can determine whether content is rejected or welcomed.

    Tone is crucial to work on because our goal is to be understood and to have a positive working environment. Over the years, I’ve tried to summarize the required soft skills in a formula that mirrors the one for content: the receptivity equation.

    Respectful feedback comes across as constructive, solid, and grounded. It’s the kind of feedback that, whether positive or negative, is perceived as useful and fair.

    Timing refers to when the feedback happens. Even to-the-point feedback has little chance of being well received if it’s given at the wrong time. Questioning the entire high-level information architecture of a new feature that’s about to go live might still be relevant if it raises a significant blocker that no one saw, but more likely those concerns will have to wait for a later revision. So in general, attune your feedback to the stage of the project. Early iteration? Later iteration? Polishing work in progress? Each of these has different needs. Choosing the right timing gives your feedback a much better chance of being received favorably.

    Attitude is about intent; in the context of person-to-person feedback, think radical candor. Before writing, it’s important to make sure that what we’re about to say will actually benefit the person and improve the overall project. When we reflect, perhaps we don’t want to admit that we don’t really appreciate that person. Hopefully that’s not the case, but it can happen, and that’s okay—acknowledging it can help us compensate for it. How would I write this if I really cared about them? How can I avoid being passive-aggressive? How can I be more constructive?

    Form is important, especially in multicultural and cross-cultural workplaces: excellent writing, perfect timing, and the right attitude might not be enough if the writing style leads to miscommunication. There can be many reasons for this: certain words may trigger specific reactions, non-native speakers may not catch all the nuances of some sentences, and our brains may work differently and perceive the world differently—neurodiversity must be taken into account. Whatever the reason, it’s important to review not just what we write but how.

    A while back, I asked for feedback on how I give feedback. I received some sound advice, but also a surprising comment: they pointed out that when I wrote “Oh, […],” it made them feel stupid. That wasn’t my intention at all! Then I realized I had been giving them feedback for months, and I might have made them feel that way every time. I was horrified… but also thankful. I quickly fixed the habit by adding “oh” to my list of auto-replaced words (your choice of aText, TextExpander, or others) so that whenever I typed it, it was immediately deleted.

    Something to keep in mind is that people frequently beat around the bush, especially in teams with a strong group spirit. It’s important to remember here that a positive attitude doesn’t mean going light on the feedback; it means that even when you give hard, difficult, or challenging feedback, you do so in a way that’s respectful and constructive—helping the other person grow the best way you can.

    A benefit of giving feedback in written form is that it can be reviewed by someone else who isn’t directly involved, which can help reduce or remove bias. The best, most insightful moments for me have come when I’ve drafted a comment and asked someone I highly trusted, “How does this sound?”, “How can I do it better?”, and even “How would you have written it?”—and I’ve learned a lot by seeing the two versions side by side.

    The format

    Asynchronous feedback also has a significant inherent advantage: we can take more time to make sure our comments fulfill two main objectives—clarity of communication and actionability.

    Let’s imagine that someone has shared a design iteration for a project. You’re reviewing it and about to leave a comment. There are many ways to do this, and context of course matters, but let’s think through some factors that might be helpful to consider.

    In terms of clarity, start by grounding the critique that you’re about to give by providing context. This includes specifically describing where you’re coming from: do you have a thorough understanding of the project, or is this your first time seeing it? Do you have a high-level perspective, or are you just learning the details? Are there regressions? Which user’s point of view are you addressing when offering feedback? Is the design iteration at the point where it would be acceptable to ship this, or are there important issues that need to be addressed first?

    Providing context is helpful even when you’re sharing feedback within a team that already knows the project, and it’s a must when giving cross-team feedback. For example, if I were to review a design that isn’t directly connected to my work, I’d say so, underlining that my opinion is external to the project, and noting if I have no visibility into how the design reached its conclusions.

    We often focus on the negatives, trying to outline everything that could be done better. That’s obviously important, but it’s just as crucial to call out the positive aspects, especially improvements from the previous iteration. This may seem superfluous, but keep in mind that design is a field with hundreds of possible solutions to every problem. Pointing out that the chosen solution is good—and explaining why—has two major benefits: it confirms that the approach taken was solid, and it grounds your negative feedback. Sharing positive feedback can also help prevent regressions on things that are going well, because they’ve been identified as important. And as an added bonus, positive feedback can help counter impostor syndrome.

    There’s one powerful approach that combines context and a focus on the positives: frame how the design is better than the status quo (compared to a previous iteration, competitors, or benchmarks) and why; then, on that foundation, add what could be improved. There’s a significant difference between critiquing a design that’s already in good shape and one that isn’t quite there yet.

    Another way to improve your feedback is to depersonalize it: feedback should never be about the person who created the work. It’s “This button isn’t well aligned” versus “You haven’t aligned this button well.” Review your writing just before sending to catch these.

    In terms of actionability, one of the best ways to help the designer reading your feedback is to split it into bullet points or paragraphs, which are easier to review and address one by one. For longer feedback, you might also consider splitting it into sections or even across multiple comments. And of course, adding screenshots or markers identifying the specific area of the interface you’re referring to can be very helpful.

    One method I’ve personally used in some situations to enhance the bullet points is emojis: a red square 🟥 marks something I consider blocking; a yellow diamond 🔶 is something I could be convinced otherwise about, but that I think should be changed; and a green circle 🟢 is a positive note. I also use a blue spiral 🌀 for explorations, open alternatives, or notes where I’m not sure myself. However, I’d only use this strategy on teams where I’ve already established a high level of trust, because delivering a lot of red squares could be quite demoralizing, and I’d have to reframe how I communicate them.

    Let’s see how this would work by reusing the example that we used earlier as the first bullet point in this list:

    • 🔶 Navigation—When I see these two buttons, I expect one to go forward and the other to go back. But this is the only screen where that happens; before, we just used a single button and an “×” to close. This seems to break the consistency of the flow. Let’s make sure that all screens have the same two forward and back buttons so that users don’t get confused.
    • 🟢 Overall—I think the page is strong, and this is a good candidate for our 1.1.0 release.
    • 🟢 Metrics—Good improvement to the buttons in the metrics area; the improved contrast and new focus style make them more accessible.
    • 🌀 Button style—Using the green accent here conveys a positive action, since green is typically read as a confirmation color. Is that intended, or do we need to look for a different shade?
    • 🔶 Tiles—Given the number of items on the page and the overall page hierarchy, it seems to me that the tiles shouldn’t use the Subtitle 1 style but the Subtitle 2 style. This will maintain consistency in the visual hierarchy.
    • 🌀 Background—A light texture can work, but I’m not sure whether it adds too much noise on this kind of page. What was the thinking behind using it?

    What about giving feedback directly in Figma or another design tool that allows in-place comments? In general, I find these hard to use because they hide discussions and make them harder to follow, but they can be very useful in the right context. Similar to the idea of splitting mentioned above, just make sure each comment stands on its own, so that it’s easier to match each discussion to a single task.

    One more thing: say the obvious. Sometimes we assume something is clear, so we don’t say it. Or we hold back a doubt because the question might sound stupid. Say it—that’s fine. You might reword it a little to make the reader feel more at ease, but don’t hold back. Good feedback is transparent, even when it may seem obvious.

    Another advantage of asynchronous feedback is that writing automatically documents decisions. Especially in large projects, “Why did we do this?” is a question that pops up from time to time, and there’s nothing better than an open, transparent discussion that can be reviewed at any time. For this reason, I suggest using tools that preserve these discussions rather than hiding them once they’re resolved.

    Content, tone, and format. Each of these subjects provides a useful model, but working on eight areas at once—observation, impact, question, timing, attitude, form, clarity, and actionability—is a lot. One effective approach is to start with the area you’re weakest in, either in your own view or according to other people’s feedback. Then move to the second, the third, and so on. At first you’ll have to put in extra time for every piece of feedback you give, but after a while it’ll become second nature, and your impact on the work will multiply.

    Thanks to Mike Shelton and Brie Anne Demkiw for their contributions to the initial draft of this article.

  • Asynchronous Design Critique: Getting Feedback

    Asynchronous Design Critique: Getting Feedback

    “Any comments?” is perhaps one of the worst ways to ask for feedback. It’s vague and unspecific, and it doesn’t give a clear picture of what we’re looking for. Great feedback begins earlier than we might expect: it begins with the request.

    It might seem counterintuitive to start the process of receiving feedback with a question, but it makes sense once we realize that getting feedback can be thought of as a form of design research. Just as we wouldn’t do any research without the right questions to get the insights we need, the best way to ask for feedback is to write down some insightful questions.

    Design critique is never a one-shot process. Sure, any good feedback loop continues until the project is finished, but this is especially true for design, where the work proceeds iteration after iteration, from a high level down to the finest details. Each stage requires its own set of questions.

    And finally, as with any good research, we need to review what we received, distill its insights, and take action. The request, the iteration, and the review: let’s take a closer look at each.

    The request

    Being open to feedback is important, but we need to be precise about what we’re looking for. “Any comments?”, “What do you think?”, or “I’d love to hear your thoughts” at the end of a presentation are likely to elicit a scattered range of reactions—or worse, to make people follow the lead of the first person who speaks. And then we get frustrated, because vague questions like these can produce feedback that doesn’t even touch the key points. That can be a sore subject, and at that point it may be hard to redirect the team to the topics you wanted to focus on.

    So how do we end up in this situation? A few factors are at play. One is that we don’t usually consider the request as part of the feedback process. Another is how natural it is to assume that everyone else sees the problem the same way we do. Yet another is that in nonprofessional conversations there’s rarely a need to be that specific. In short, we tend to underestimate the importance of the request, so we don’t work on improving it.

    The work of asking insightful questions guides and focuses the critique. It also acts as a form of consent, outlining your openness to feedback and the kinds of comments you want to receive. And it puts people in the right mental state, especially in situations where they weren’t expecting to give feedback.

    There isn’t a single best way to ask for feedback. It just needs to be specific, and that can take many forms. One framing for design critique that I’ve found especially helpful in my practice is stage versus depth.

    “Stage” refers to each phase of the process—in this case, the design process. The kind of feedback needed changes as the work moves from early user research to the final design. Within a single stage, you might also ask whether earlier assumptions still hold, and whether the feedback gathered so far has been properly translated into the updated designs as the work evolved. The layers of the user experience can be a starting point for questions: Business goals? User needs? Functionality? Content? Interaction design? Information architecture? UI design? Navigation design? Visual design? Brand?

    Here are some example questions that are specific and to the point, each referring to a different level:

    • Functionality: Is automating account creation something we should pursue?
    • Interaction design: Please review the updated flow and let me know if there are any steps or error states I may have missed.
    • Information architecture: We have two competing pieces of information on this page. Does the structure communicate the relationship between them well?
    • User interface design: What do you think of the error banner at the top of the page, which makes sure you see the error even if it’s outside the viewport?
    • Navigation design: From research, we identified these second-level navigation items, but on the page the list feels overly long and hard to scan. Are there ways to address this?
    • Visual design: Are the toast alerts in the bottom-right corner of the page noticeable enough?

    The other axis, depth, is about how deep you’d like reviewers to go on what’s being presented. For instance, you may have just introduced a new end-to-end flow, but you want attention on one particular view you found especially challenging. Specifying depth can be especially helpful when moving between iterations, because it identifies the changes that were made.

    There are other things we can consider when we want to ask more specific, and therefore more effective, questions.

    A quick win is to remove generic qualifiers like “good,” “well,” “nice,” “bad,” “okay,” and “cool” from our questions. For example, the question “When the modal opens and the buttons appear, is this interaction good?” may seem specific, but you can spot the “good” qualifier and turn it into an even better question: “When the modal opens and the buttons appear, is it clear what the next action is?”

    Sometimes we do want broad feedback. That’s rare, but it happens. In that case, make it explicit that you’re looking for a wide range of opinions, whether at a high level or in the details. Or just ask, “At first glance, what do you think?” so that it’s clear you’re asking an open-ended question focused on the person’s impression after their first five seconds of looking at the design.

    Sometimes the project is particularly expansive, and some areas may have already been explored in detail. In these situations, it can be helpful to say explicitly that some parts are already locked in and aren’t open for feedback. It’s not something I’d recommend in general, but I’ve found it useful to avoid falling back into rabbit holes of the kind that might lead to further refinement but aren’t what’s most important right now.

    Asking specific questions can completely change the quality of the feedback you receive. People with less refined critique skills will now be able to provide more actionable feedback, and even expert designers will appreciate the clarity and efficiency gained from focusing only on what’s needed. It can save a lot of time and frustration.

    The iteration

    Design iterations are probably the most recognizable part of the design process, and they create a natural feedback loop. Many design tools offer inline commenting, but they typically display changes as a single fluid stream in the same file: conversations vanish once they’re resolved, shared UI components update automatically, and designs always show the most recent version unless these would-be useful features are manually turned off. The implied goal of these tools seems to be a single final copy with all discussions closed, probably because they inherited patterns from how written documents are collaboratively edited. That’s probably not the most effective way to run design critiques, but, without wanting to be too prescriptive, it might work for some teams.

    The asynchronous design-critique approach that I find most effective is to create explicit checkpoints for discussion. I’m going to use the term iteration post for this. It refers to a write-up or presentation of the design iteration, followed by a discussion thread of some sort. Any platform that can accommodate this structure will work. By the way, when I refer to a “write-up or presentation,” I’m including video recordings or other media too: as long as it’s asynchronous, it works.

    There are many benefits to using iteration posts:

    • It establishes a rhythm in the design process, allowing the designer to review the feedback from each iteration and prepare for the next.
    • It makes decisions visible for future review, and conversations are likewise always available.
    • It keeps track of how the design evolved over time.
    • It might also make it simpler to collect and act on feedback depending on the tool.

    These posts of course don’t mean that no other feedback approach should be used, just that iteration posts could be the primary rhythm for a remote design team. From there, other feedback techniques can be layered on (such as live critiques, pair designing, or inline comments).

    There isn’t, in my opinion, a universal format for iteration posts. But there are a few high-level elements that make sense to include as a baseline:

    1. The goal
    2. The design
    3. The list of changes
    4. The questions

    Each project is likely to have a goal, and it has most likely already been summarized in a single sentence somewhere else, such as the client brief, the product manager’s outline, or the project owner’s request. So this is something that I’d repeat in every iteration post—literally copying and pasting it. The point is to provide context and to repeat what’s essential so that each iteration post is complete and there’s no need to hunt for information across different posts. If I want to know about the latest design, the most recent iteration post will have everything I need.

    This copy-and-paste part introduces another relevant concept: alignment comes from repetition. Therefore, repeating information in posts is actually very effective at ensuring that everyone is on the same page.

    Then comes the design itself: the actual series of information-architecture outlines, diagrams, flows, maps, wireframes, screens, visuals, and any other design work that has been done. In short, it’s any design artifact. For the final stages of work, I prefer the term “blueprint” to emphasize that I’ll be showing full flows instead of individual screens, making it easier to understand the bigger picture.

    It might also be helpful to have clear names on the artifacts so that it is easier to refer to them. Write the post in a way that helps people understand the work. It’s not much different from creating a strong live presentation.

    For a successful discussion, you should also include a bullet list of the changes made in the previous iteration to help people concentrate on what’s changed. This can be especially useful for larger pieces of work where keeping track, iteration after iteration, may prove difficult.

    And finally, as noted earlier, it’s essential that you include a list of the questions to drive the design critique in the direction you want. Creating a numbered list of questions can also help make it simpler to refer to each one by its number.

    Not every iteration is the same. Earlier iterations don’t need to be as tightly focused—they can be more exploratory and experimental, maybe even breaking some of the design-language guidelines to see what’s possible. Then, later, the iterations begin coming to a decision and improving it until the design process is complete and the feature is ready.

    Even if these iteration posts are written and intended as checkpoints, I want to point out that they are not by any means required to be exhaustive. A post might be a draft—just a concept to get a conversation going—or it could be a cumulative list of each feature that was added over the course of each iteration until the full picture is done.

    I also started using particular labels for incremental iterations over time, such as i1, i2, i3, and so on. Although this may seem like a minor labeling tip, it can be useful in many ways:

    • Unique—It’s a clear, unique marker. Everyone knows where to go to review things, and within each project it’s easy to say “This was discussed in i4.”
    • Unassuming—It works like versions (such as v1, v2, and v3), but versions give the impression of something that is large, exhaustive, and complete. Iterations need to be able to be exploratory, incomplete, and partial.
    • Future proof—It solves the “final” naming problem that versions run into. No more files titled “final final complete no-really-its-done.” Within each project, the largest number always represents the latest iteration.

    The term release candidate (RC) can be used to indicate when a design is complete enough to be developed, even if some areas still need refinement and, in turn, more iterations. For example: “with i8 we reached RC” or “i12 is an RC.”

    The evaluation

    What usually happens during a design critique is an open discussion, with a back and forth between people that can be very productive. This strategy is particularly successful when synchronous feedback is being received live. However, when we work asynchronously, using a different approach is more effective: we can adopt a user-research mindset. Written feedback from teammates, stakeholders, or others can be treated as if it were the result of user interviews and surveys, and we can analyze it accordingly.

    This shift has significant benefits that make asynchronous feedback particularly effective around three friction points:

    1. It removes the pressure to reply to everyone.
    2. It reduces the frustration from swoop-by comments.
    3. It lessens our personal stake.

    The first friction point is the pressure to reply to each and every comment. Sometimes we write the iteration post and get replies from our team: just a few of them, it’s easy, and there isn’t much to worry about. But other times, some replies require more in-depth discussion, and the number of replies can rise quickly, creating tension between trying to be a good team player by responding to everyone and getting on with the next design iteration. This is especially true if the person replying is a stakeholder or someone directly involved in the project whom we feel we need to listen to. It’s human nature to try to accommodate the people we care about, and we should accept that this pressure is completely normal. Responding to every comment can sometimes work, but when we treat a design critique more like user research, we realize that we don’t need to reply to every comment, and in asynchronous spaces there are alternatives:

    • One is to let the next iteration speak for itself. When the design changes and we publish a follow-up iteration, that is the response. You could tag everyone who joined the previous discussion, but that’s optional, not a requirement.
    • Another is to briefly reply to acknowledge each comment, such as “Understood. Thank you,” “Good points, I’ll review,” or “Thanks. These will be included in the next iteration.” In some cases, this could also be a single top-level comment along the lines of “Thanks for all the feedback, everyone. The next iteration is coming soon!”
    • Another option is to summarize the comments before moving on. This can be particularly useful if your workflow lets you turn them into a simplified checklist for the next iteration.

    The second friction point is the swoop-by comment: feedback from someone outside the project or team who might not be aware of the context, constraints, decisions, or requirements, or of the previous iterations’ discussions. The best we can hope for is that such commenters recognize what they’re doing and become more aware of the context they might be missing. Swoop-by comments often trigger the simple thought, “We’ve already discussed this,” and it can be frustrating to keep going back and forth over old ground.

    Let’s begin by acknowledging again that there’s no need to reply to every comment. However, if responding to a previously litigated point is useful, a brief response with a link to the previous discussion for additional information is typically sufficient. Remember that repetition results in alignment, so it’s acceptable to repeat things occasionally!

    Swoop-by comments can still be useful for two reasons: they might point out something that still isn’t clear, and they can stand in for the point of view of a user who’s seeing the design for the first time. Yes, you might still be frustrated, but at least the comment can be put to productive use.

    The third friction point is the personal stake we might have in the design, which can make us feel defensive if the critique turns into a debate. Treating feedback as user research helps create a healthy distance between the people giving us feedback and our ego (because yes, even if we don’t want to admit it, it’s there). And analyzing everything in aggregate helps us prioritize the work better.

    You don’t have to accept every piece of feedback, but you do have to listen, even to stakeholders and project owners. Analyze it and make a decision that you can justify; sometimes “no” is the right answer.

    As the designer leading the project, you’re the one in charge of making that choice. Everyone has their own area of expertise, and the designer is the one with the most context and knowledge to make the right call. And by listening to the feedback you’ve received, you’re making sure it’s also the best and most balanced decision.

    Thanks to Mike Shelton and Brie Anne Demkiw for their initial review of this article.

  • Designing for the Unexpected

    Designing for the Unexpected

    I’m not sure when I first heard this quote, but it’s something that has stayed with me over the years. How do you create services for situations you can’t imagine? Or design products that work on devices yet to be invented?

    Flash, Photoshop, and responsive design

    When I first started designing websites, my go-to software was Photoshop. I would create a 960px canvas and set about building a layout that I would later drop content into. The development phase was about attaining pixel-perfect accuracy using fixed widths, fixed heights, and absolute positioning.

    Ethan Marcotte’s talk at An Event Apart and subsequent article “Responsive Web Design” in A List Apart in 2010 changed all this. I was sold on responsive design as soon as I heard about it, but I was also terrified. The pixel-perfect designs full of magic numbers that I had previously prided myself on producing were no longer good enough.

    The fear wasn’t helped by my first experience with responsive design. My first project was to take an existing fixed-width website and make it responsive. What I learned the hard way was that you can’t just add responsiveness at the end of a project. To create fluid layouts, you need to plan throughout the design phase.

    A new way to design

    Designing responsive or fluid sites has always been about removing limitations, producing content that can be viewed on any device. It relies on the use of percentage-based layouts, which I initially achieved with native CSS and utility classes:

    .column-span-6 {
      width: 49%;
      float: left;
      margin-right: 0.5%;
      margin-left: 0.5%;
    }
    
    
    .column-span-4 {
      width: 32%;
      float: left;
      margin-right: 0.5%;
      margin-left: 0.5%;
    }
    
    .column-span-3 {
      width: 24%;
      float: left;
      margin-right: 0.5%;
      margin-left: 0.5%;
    }

    Then with Sass so I could take advantage of @includes to re-use repeated blocks of code and move back to more semantic markup:

    .logo {
      @include colSpan(6);
    }
    
    .search {
      @include colSpan(3);
    }
    
    .social-share {
      @include colSpan(3);
    }

    Media queries

    The second ingredient for responsive design is media queries. Without them, content would shrink to fit the available space regardless of whether that content remained readable (The exact opposite problem occurred with the introduction of a mobile-first approach).

    Media queries prevented this by allowing us to add breakpoints where the design could adapt. Like most people, I started out with three breakpoints: one for desktop, one for tablets, and one for mobile. Over the years, I added more and more for phablets, wide screens, and so on. 
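    As a rough sketch of how those breakpoints interacted with the percentage-based columns shown earlier (the exact pixel values and overrides here are illustrative, not taken from the original code):

```css
/* desktop: two columns side by side */
.column-span-6 {
  width: 49%;
  float: left;
}

/* tablet: stack the columns */
@media (max-width: 768px) {
  .column-span-6 {
    width: 100%;
    float: none;
  }
}

/* mobile: tighten surrounding spacing */
@media (max-width: 480px) {
  body {
    padding: 10px;
  }
}
```

    Every new device class meant another block like these, which is how the breakpoint list tends to grow over time.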

    For years, I happily worked this way and improved both my design and front-end skills in the process. The only problem I encountered was making changes to content, since with our Sass grid system in place, there was no way for the site owners to add content without amending the markup—something a small business owner might struggle with. This is because each row in the grid was defined using a div as a container. Adding content meant creating new row markup, which requires a level of HTML knowledge.

    Row markup was a staple of early responsive design, present in all the widely used frameworks like Bootstrap and Skeleton.

    <div class="row">
      <div class="column">1 of 7</div>
      <div class="column">2 of 7</div>
      <div class="column">3 of 7</div>
      <div class="column">4 of 7</div>
      <div class="column">5 of 7</div>
      <div class="column">6 of 7</div>
      <div class="column">7 of 7</div>
    </div>

    Another problem arose as I moved from a design agency building websites for small- to medium-sized businesses, to larger in-house teams where I worked across a suite of related sites. In those roles I started to work much more with reusable components. 

    Our reliance on media queries resulted in components that were tied to common viewport sizes. If the goal of component libraries is reuse, then this is a real problem because you can only use these components if the devices you’re designing for correspond to the viewport sizes used in the pattern library—in the process not really hitting that “devices that don’t yet exist”  goal.

    Then there’s the problem of space. Media queries allow components to adapt based on the viewport size, but what if I put a component into a sidebar, like in the figure below?

    Container queries: our savior or a false dawn?

    Container queries have long been touted as an improvement upon media queries, but at the time of writing are unsupported in most browsers. There are JavaScript workarounds, but they can create dependency and compatibility issues. The basic theory underlying container queries is that elements should change based on the size of their parent container and not the viewport width, as seen in the following illustrations.

    One of the biggest arguments in favor of container queries is that they help us create components or design patterns that are truly reusable because they can be picked up and placed anywhere in a layout. This is an important step in moving toward a form of component-based design that works at any size on any device.

    In other words, responsive components to replace responsive layouts.

    Container queries will help us move from designing pages that respond to the browser or device size to designing components that can be placed in a sidebar or in the main content, and respond accordingly.
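    To illustrate the idea, here’s a sketch using the draft container-query syntax as it stands at the time of writing (the selectors are illustrative, and the syntax may still change as the spec evolves):

```css
/* the sidebar establishes a containment context */
.sidebar {
  container-type: inline-size;
}

/* the card responds to its container's width, not the viewport's */
@container (min-width: 400px) {
  .card {
    display: flex;
    gap: 10px;
  }
}
```

    The same `.card` could sit in the sidebar or in the main content area and adapt to each without any viewport-based breakpoints.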

    My concern is that we are still using layout to determine when a design needs to adapt. This approach will always be restrictive, as we will still need pre-defined breakpoints. For this reason, my main question with container queries is, How would we decide when to change the CSS used by a component? 

    A component library removed from context and real content is probably not the best place for that decision. 

    As the diagrams below illustrate, we can use container queries to create designs for specific container widths, but what if I want to change the design based on the image size or ratio?

    In this example, the dimensions of the container are not what should dictate the design; rather, the image is.

    It’s hard to say for sure whether container queries will be a success story until we have solid cross-browser support for them. Responsive component libraries would definitely evolve how we design and would improve the possibilities for reuse and design at scale. But maybe we will always need to adjust these components to suit our content.

    CSS is changing

    Whilst the container query debate rumbles on, there have been numerous advances in CSS that change the way we think about design. The days of fixed-width elements measured in pixels and floated div elements used to cobble layouts together are long gone, consigned to history along with table layouts. Flexbox and CSS Grid have revolutionized layouts for the web. We can now create elements that wrap onto new rows when they run out of space, not when the device changes.

    .wrapper {
      display: grid;
      grid-template-columns: repeat(auto-fit, 450px);
      gap: 10px;
    }

    The repeat() function paired with auto-fit or auto-fill allows us to specify how much space each column should use while leaving it up to the browser to decide when to spill the columns onto a new line. Similar things can be achieved with Flexbox, as elements can wrap over multiple rows and “flex” to fill available space. 

    .wrapper {
      display: flex;
      flex-wrap: wrap;
      justify-content: space-between;
    }
    
    .child {
      flex-basis: 32%;
      margin-bottom: 20px;
    }

    The biggest benefit of all this is you don’t need to wrap elements in container rows. Without rows, content isn’t tied to page markup in quite the same way, allowing for removals or additions of content without additional development.

    This is a big step forward when it comes to creating designs that allow for evolving content, but the real game changer for flexible designs is CSS Subgrid. 

    Remember the days of crafting perfectly aligned interfaces, only for the customer to add an unbelievably long header almost as soon as they’re given CMS access, like the illustration below?

    Subgrid allows elements to respond to adjustments in their own content and in the content of sibling elements, helping us create designs more resilient to change.

    .wrapper {
      display: grid;
      grid-template-columns: repeat(auto-fit, minmax(150px, 1fr));
         grid-template-rows: auto 1fr auto;
      gap: 10px;
    }
    
    .sub-grid {
      display: grid;
      grid-row: span 3;
      grid-template-rows: subgrid; /* sets rows to parent grid */
    }

    CSS Grid allows us to separate layout and content, thereby enabling flexible designs, while Subgrid allows us to create designs that can adapt to suit morphing content. At the time of writing, Subgrid is only supported in Firefox, but the above code can be implemented behind an @supports feature query. 

    Intrinsic layouts 

    I’d be remiss not to mention intrinsic layouts, the term created by Jen Simmons to describe a mixture of new and old CSS features used to create layouts that respond to available space. 

    Responsive layouts have flexible columns using percentages. Intrinsic layouts, on the other hand, use the fr unit to create flexible columns that won’t ever shrink so much that they render the content illegible.

    fr units is a way to say I want you to distribute the extra space in this way, but…don’t ever make it smaller than the content that’s inside of it.

    —Jen Simmons, “Designing Intrinsic Layouts”

    Intrinsic layouts can also utilize a mixture of fixed and flexible units, allowing the content to dictate the space it takes up.
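    For example, a sketch mixing fixed and flexible tracks (the values and track choices are illustrative):

```css
.wrapper {
  display: grid;
  /* fixed sidebar, a flexible main column (an fr track won't shrink
     below its content's minimum size by default), and a third column
     that grows from its content up to a 300px cap */
  grid-template-columns: 200px 1fr minmax(min-content, 300px);
  gap: 10px;
}
```

    Here the content, not a device breakpoint, decides how much space each column actually takes up.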

    What makes intrinsic design stand out is that it not only creates designs that can withstand future devices but also helps scale design without losing flexibility. Components and patterns can be lifted and reused without the prerequisite of having the same breakpoints or the same amount of content as in the previous implementation. 

    We can now create designs that adapt to the space they have, the content within them, and the content around them. With an intrinsic approach, we can construct responsive components without depending on container queries.

    Another 2010 moment?

    This intrinsic approach should in my view be every bit as groundbreaking as responsive web design was ten years ago. For me, it’s another “everything changed” moment. 

    But it doesn’t seem to be moving quite as fast; I haven’t yet had that same career-changing moment I had with responsive design, despite the widely shared and brilliant talk that brought it to my attention. 

    One reason for that could be that I now work in a large organization, which is quite different from the design agency role I had in 2010. In my agency days, every new project was a clean slate, a chance to try something new. Nowadays, projects use existing tools and frameworks and are often improvements to existing websites with an existing codebase. 

    Another could be that I feel more prepared for change now. In 2010 I was new to design in general; the shift was frightening and required a lot of learning. Also, an intrinsic approach isn’t exactly all-new; it’s about using existing skills and existing CSS knowledge in a different way. 

    You can’t framework your way out of a content problem

    Another reason for the slightly slower adoption of intrinsic design could be the lack of quick-fix framework solutions available to kick-start the change. 

    Responsive grid systems were all over the place ten years ago. With a framework like Bootstrap or Skeleton, you had a responsive design template at your fingertips.

    Intrinsic design and frameworks do not go hand in hand quite so well because the benefit of having a selection of units is a hindrance when it comes to creating layout templates. The beauty of intrinsic design is combining different units and experimenting with techniques to get the best for your content.

    And then there are design tools. We probably all, at some point in our careers, used Photoshop templates for desktop, tablet, and mobile devices to drop designs in and show how the site would look at all three stages.

    How do you do that now, with each component responding to content and layouts flexing as and when they need to? This type of design must happen in the browser, which personally I’m a big fan of. 

    The debate about “whether designers should code” is another that has rumbled on for years. When designing a digital product, we should, at the very least, design for a best- and worst-case scenario when it comes to content. To do this in a graphics-based software package is far from ideal. In code, we can add longer sentences, more radio buttons, and extra tabs, and watch in real time as the design adapts. Does it still work? Is the design too reliant on the current content?

    Personally, I look forward to the day intrinsic design is the standard for design, when a design component can be truly flexible and adapt to both its space and content with no reliance on device or container dimensions.

    Content first 

    Content is not constant. After all, to design for the unknown or unexpected we need to account for content changes like our earlier Subgrid card example that allowed the cards to respond to adjustments to their own content and the content of sibling elements.

    Thankfully, there’s more to CSS than layout, and plenty of properties and values can help us put content first. Subgrid and pseudo-elements like ::first-line and ::first-letter help to separate design from markup so we can create designs that allow for changes.

    Instead of old markup hacks like this—

    <p><span class="first-line">First line of text with different styling...</span></p>

    —we can target content based on where it appears.

    .element::first-line {
      font-size: 1.4em;
    }
    
    .element::first-letter {
      color: red;
    }

    Much bigger additions to CSS include logical properties, which change the way we construct designs using logical dimensions (start and end) instead of physical ones (left and right), something CSS Grid also does with functions like min(), max(), and clamp().

    This flexibility allows for directional changes according to content, a common requirement when we need to present content in multiple languages. In the past, this was often achieved with Sass mixins but was often limited to switching from left-to-right to right-to-left orientation.

    In the Sass version, directional variables need to be set.

    $direction: rtl;
    $opposite-direction: ltr;
    
    $start-direction: right;
    $end-direction: left;

    These variables can be used as values—

    body {
      direction: $direction;
      text-align: $start-direction;
    }

    —or as properties.

    margin-#{$end-direction}: 10px;
    padding-#{$start-direction}: 10px;

    However, now we have native logical properties, removing the reliance on both Sass (or a similar tool) and pre-planning that necessitated using variables throughout a codebase. These properties also start to break apart the tight coupling between a design and strict physical dimensions, creating more flexibility for changes in language and in direction.

    margin-inline-end: 10px;
    padding-inline-start: 10px;

    There are also native start and end values for properties like text-align, which means we can replace text-align: right with text-align: start.

    Like the earlier examples, these properties help to build out designs that aren’t constrained to one language; the design will reflect the content’s needs.

    Fixed and fluid 

    We briefly covered the power of combining fixed widths with fluid widths with intrinsic layouts. The min() and max() functions are a similar concept, allowing you to specify a fixed value with a flexible alternative. 

    For min() this means setting a fluid minimum value and a maximum fixed value.

    .element {
      width: min(50%, 300px);
    }

    The element in the figure above will be 50% of its container as long as the element’s width doesn’t exceed 300px.

    For max() we can set a flexible max value and a minimum fixed value.

    .element {
      width: max(50%, 300px);
    }

    Now the element will be 50% of its container as long as the element’s width is at least 300px. This means we can set limits but allow content to react to the available space. 

    The clamp() function builds on this by allowing us to set a preferred value with a third parameter. Now we can allow the element to shrink or grow if it needs to without getting to a point where it becomes unusable.

    .element {
      width: clamp(300px, 50%, 600px);
    }

    This time, the element’s width will be 50% (the preferred value) of its container but never less than 300px and never more than 600px.

    With these techniques, we have a content-first approach to responsive design. We can separate content from markup, meaning the changes users make will not affect the design. We can start to future-proof designs by planning for unexpected changes in language or direction. And we can increase flexibility by setting desired dimensions alongside flexible alternatives, allowing for more or less content to be displayed correctly.

    Situation first

    Thanks to what we’ve discussed so far, we can cover device flexibility by changing our approach, designing around content and space instead of catering to devices. But what about that last bit of Jeffrey Zeldman’s quote, “…situations you haven’t imagined”?

    It’s a very different thing to design for someone seated at a desktop computer as opposed to someone using a mobile phone and moving through a crowded street in glaring sunshine. Situations and environments are hard to plan for or predict because they change as people react to their own unique challenges and tasks.

    This is why choice is so important. One size never fits all, so we need to design for multiple scenarios to create equal experiences for all our users.

    Thankfully, there is a lot we can do to provide choice.

    Responsible design 

    “There are parts of the world where mobile data is prohibitively expensive, and where there is little or no broadband infrastructure.”

    —Chris Ashton, “I Used the Web for a Day on a 50 MB Budget”

    One of the biggest assumptions we make is that people interacting with our designs have a good wifi connection and a wide screen monitor. But in the real world, our users may be commuters traveling on trains or other forms of transport using smaller mobile devices that can experience drops in connectivity. There is nothing more frustrating than a web page that won’t load, but there are ways we can help users use less data or deal with sporadic connectivity.

    The srcset attribute allows the browser to decide which image to serve. This means we can create smaller, cropped images to display on mobile devices, in turn using less bandwidth and less data.
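As a sketch of how this might look in markup (the filenames and widths here are hypothetical), the browser is offered several candidate files and picks the most appropriate one for the current viewport and pixel density:

```html
<img
  src="hero-large.jpg"
  srcset="hero-small.jpg 480w,
          hero-medium.jpg 960w,
          hero-large.jpg 1920w"
  sizes="(max-width: 600px) 100vw, 50vw"
  alt="A description of the image" />
```

On a narrow screen, the browser can choose the 480w candidate and save the bandwidth the full-size image would have cost.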


    The preload attribute can also help us to think about how and when media is downloaded. It can be used to tell a browser about any critical assets that need to be downloaded with high priority, improving perceived performance and the user experience. 
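One common way to express this kind of hint is a rel="preload" link in the document head, which tells the browser to fetch a critical resource early (the font path below is hypothetical):

```html
<link rel="preload" href="/fonts/body-font.woff2" as="font" type="font/woff2" crossorigin>
```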


    There’s also native lazy loading, which indicates assets that should only be downloaded when they are needed.
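Native lazy loading is a single attribute on images and iframes; below-the-fold media is then fetched only as the user scrolls near it:

```html
<img src="comments-photo.jpg" loading="lazy" alt="A description of the photo" />
```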

    …

    With srcset, preload, and lazy loading, we can start to tailor a user’s experience based on the situation they find themselves in. What none of this does, however, is allow the user themselves to decide what they want downloaded, as the decision is usually the browser’s to make. 

    So how can we put users in control?

    The return of media queries 

    Media queries have always been about much more than device sizes. They allow content to adapt to different situations, with screen size being just one of them.

    We’ve long been able to check for media types like print and speech and features such as hover, resolution, and color. These checks allow us to provide options that suit more than one scenario; it’s less about one-size-fits-all and more about serving adaptable content. 
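For example (the selectors here are illustrative), a print stylesheet and a hover check each serve a scenario that has nothing to do with screen width:

```css
/* Hide navigation chrome when the page is printed */
@media print {
  nav,
  footer {
    display: none;
  }
}

/* Only add hover affordances on devices that can actually hover */
@media (hover: hover) {
  a:hover {
    text-decoration-thickness: 2px;
  }
}
```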

    As of this writing, the Media Queries Level 5 spec is still under development. It introduces some really exciting queries that in the future will help us design for multiple other unexpected situations.

    For example, there’s a light-level feature that allows you to modify styles if a user is in sunlight or darkness. Paired with custom properties, these features allow us to quickly create designs or themes for specific environments.

    @media (light-level: normal) {
      :root {
        --background-color: #fff;
        --text-color: #0b0c0c;
      }
    }
    
    @media (light-level: dim) {
      :root {
        --background-color: #efd226;
        --text-color: #0b0c0c;
      }
    }

    Another key feature of the Level 5 spec is personalization. Instead of creating designs that are the same for everyone, users can choose what works for them. This is achieved by using features like prefers-reduced-data, prefers-color-scheme, and prefers-reduced-motion, the latter two of which already enjoy broad browser support. These features tap into preferences set via the operating system or browser so people don’t have to spend time making each site they visit more usable. 
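A minimal sketch of how these preference queries might be honored, reusing the custom-property approach from the light-level example:

```css
/* Offer a dark theme to users who have asked their OS for one */
@media (prefers-color-scheme: dark) {
  :root {
    --background-color: #0b0c0c;
    --text-color: #fff;
  }
}

/* Tone down animation for users who prefer reduced motion */
@media (prefers-reduced-motion: reduce) {
  * {
    animation-duration: 0.01ms !important;
    transition-duration: 0.01ms !important;
  }
}
```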

    Media queries like this go beyond choices made by a browser to grant more control to the user.

    Expect the unexpected

    In the end, the one thing we should always expect is for things to change. Devices in particular change faster than we can keep up, with foldable screens already on the market.

    We can’t design the same way we have for this ever-changing landscape, but we can design for content. By putting content first and allowing that content to adapt to whatever space surrounds it, we can create more robust, flexible designs that increase the longevity of our products. 

    A lot of the CSS discussed here is about moving away from layouts and putting content at the heart of design. From responsive components to fixed and fluid units, there is so much more we can do to take a more intrinsic approach. Even better, we can test these techniques during the design phase by designing in-browser and watching how our designs adapt in real-time.

    When it comes to unexpected situations, we need to make sure our products are usable when people need them, whenever and wherever that might be. We can move closer to achieving this by involving users in our design decisions, by creating choice via browsers, and by giving control to our users with user-preference-based media queries. 

    Good design for the unexpected should allow for change, provide choice, and give control to those we serve: our users themselves.

  • Voice Content and Usability

    Voice Content and Usability

    We’ve been conversing for a very long time. Whether to convey information, conduct transactions, or simply to check in on one another, people have yammered away, chattering and gesticulating, through spoken conversation for countless generations. Only recently have we begun committing our conversations to writing, and more recently still have we outsourced them to the computer, a machine that shows far more affinity for written correspondence than for the unruly vagaries of spoken language.

    Computers struggle with speech because, between spoken and written language, speech is the more primordial. To have productive conversations with us, machines must grapple with the messiness of human speech: the disfluencies and pauses, the gestures and body language, and the variations in word choice and spoken dialect that can stymie even the most carefully crafted human-computer interaction. In the human-to-human scenario, spoken language also has the privilege of face-to-face contact, where we can readily interpret nonverbal social cues.

    In contrast, written language concretizes as we commit it to record, retaining usages long after they become obsolete in spoken communication ( for example, the salutation “To whom it may concern” ) and generating its own fossil record of dated terms and phrases. Because it tends to be more consistent, polished, and formal, written text is fundamentally much easier for machines to parse and understand.

    Spoken language has no such luxury. Besides the nonverbal cues that decorate conversations with emphasis and emotional context, there are verbal cues and vocal behaviors that modulate conversation in nuanced ways: how something is said, not what is said. Whether rapid-fire, low-pitched, or high-decibel, whether sarcastic, stilted, or sighing, our spoken language conveys much more than the written word could ever muster. So when it comes to voice interfaces, the machines we converse with, we face significant challenges as designers and content strategists.

    Voice-to-voice interactions

    We interact with voice interfaces for a variety of reasons, but according to Michael McTear, Zoraida Callejas, and David Griol in The Conversational Interface, those motivations by and large mirror the reasons we initiate conversations with other people, too ( ). We typically strike up a conversation because:

    • we require something ( such as a transaction ),
    • we want to know something ( information of some sort ), or
    • we are social creatures and need a conversation partner.

    A single conversation from beginning to end that realizes some outcome for the user, starting with the voice interface’s first greeting and ending when the user exits the interface, also fits into these three categories, which I refer to as transactional, informational, and prosocial. Note here that a conversation in our human sense—a chat between people that leads to some result and lasts an arbitrary length of time—could encompass multiple transactional, informational, and prosocial voice interactions in succession. In other words, a voice interaction can comprise a conversation, but a conversation is not always a single voice interaction.

    In purely prosocial conversations, most voice interfaces are more gimmicky than captivating, because machines can’t yet be truly interested in how we’re doing or engage in the kind of glad-handing that people crave. There’s also ongoing debate as to whether users actually prefer the sort of organic human conversation that begins with a prosocial voice interaction and shifts seamlessly into other types. In Voice User Interface Design, Michael Cohen, James Giangola, and Jennifer Balogh recommend sticking to users’ expectations by mimicking how they interact with other voice interfaces, since straying from the patterns users already know might alienate them ( ).

    That leaves two genres of conversation we can have with one another that a voice interface can easily have with us, too: a transactional voice interaction realizing some outcome, and an informational voice interaction teaching us something new ( “discuss a musical” ).

    Transactional voice interactions

    When you order a Hawaiian pizza with extra pineapple by tapping buttons on a food delivery app, you’re having a transaction, not a conversation. But when we walk up to the counter and place an order in person, the conversation quickly shifts from a brief smattering of neighborly small talk to the business at hand: ordering a pizza ( generously topped with pineapple, as it should be ).

    Alison: Hey, how’s it going?

    Burhan: Hello and welcome to Crust Deluxe! It’s chilly outside. How can I help you?

    Alison: Can I get a Hawaiian pizza with extra pineapple?

    Burhan: Sure, what size?

    Alison: Large.

    Burhan: Anything else?

    Alison: No thanks, that’s it.

    Burhan: Something to drink?

    Alison: I’ll have a bottle of Coke.

    Burhan: You know it. That’ll be $13.55 and about fifteen minutes.

    Each incremental disclosure in this transactional conversation reveals more and more of the desired outcome of the transaction: a service rendered or a product delivered. Transactional conversations have certain key traits: they’re direct, to the point, and economical. They quickly dispense with pleasantries.

    Informational voice interactions

    In the meantime, some conversations are primarily about obtaining information. Though Alison might visit Crust Deluxe with the sole purpose of placing an order, she might not actually want to walk out with a pizza at all. She might be interested in whether they serve kosher or halal dishes, gluten-free options, or something else entirely. Once again, there’s a prosocial mini-conversation at the outset to establish politeness, but this time we’re after much more.

    Alison: Hey, how’s it going?

    Burhan: Hello and welcome to Crust Deluxe! It’s chilly outside. How can I help you?

    Alison: Can I ask a few questions?

    Burhan: Of course! Go right ahead.

    Alison: Do you have any halal options on the menu?

    Burhan: Totally! On request, we can make any pie halal. We also have lots of vegetarian, ovo-lacto, and vegan options. Do you have any other dietary restrictions in mind?

    Alison: What about gluten-free pizzas?

    Burhan: We can definitely do a gluten-free crust for you, no problem, for both our deep-dish and thin-crust pizzas. Anything else I can answer for you?

    Alison: That’s it for the moment. Good to know. Thank you!

    Burhan: Anytime, come back soon!

    This is a very different dialogue. Here, the goal is to obtain a particular set of facts. Informational conversations are fact-finding missions: expeditions to gather data, news, or knowledge. Informational voice interactions might, by necessity, be more long-winded than transactional conversations. Responses tend to be longer, more in-depth, and more carefully communicated so that the customer understands the key takeaways.

    Voice-to-text interfaces

    At their core, voice interfaces employ speech to support users in reaching their goals. However, just because an interface has a voice component doesn’t mean that every user interacts with it through voice. In this book, we’re most concerned with pure voice interfaces, which depend entirely on spoken conversation and lack any visual component; unlike multimodal voice interfaces, which can lean on visual components like screens as crutches, pure voice interfaces are much more nuanced and challenging to tackle.

    Though voice interfaces have long been integral to the imagined future of humanity in science fiction, only recently have those lofty visions become fully realized in genuine voice interfaces.

    IVR ( interactive voice response ) systems

    Written conversational interfaces have been a part of computing for many decades, but voice interfaces first started to appear in the early 1990s with text-to-speech ( TTS ) dictation programs that recited written text aloud as well as speech-enabled in-car systems that gave directions to a user-provided address. With the advent of interactive voice response ( IVR ) systems, intended as an alternative to overburdened customer service representatives, we became acquainted with the first true voice interfaces that engaged in authentic conversation.

    IVR systems allowed businesses to shrink their call centers, but they soon gained notoriety for their clunkiness. Commonplace in the corporate world, these systems were primarily intended as metaphorical switchboards to direct customers to a real phone agent (” Say Reservations to book a flight or check an itinerary” ); chances are you’ll enter a conversation with one when you call an airline or hotel company. Despite their functional issues and users’ frustration with their inability to speak to an actual human right away, IVR systems proliferated in the early 1990s across a variety of industries (, PDF).

    IVR systems have a reputation for having less scintillating conversation than we’re used to in real life ( or even in science fiction ), but they are great for highly repetitive, monotonous conversations that typically don’t veer from a single format.

    Screen readers

    Parallel to the evolution of IVR systems was the invention of the screen reader, a tool that transcribes visual content into synthesized speech. For Blind or visually impaired website users, it’s the predominant method of interacting with text, multimedia, or form elements. Screen readers represent perhaps the closest equivalent we have today to an out-of-the-box implementation of content delivered through voice.

    Among the first screen readers known by that moniker was the Screen Reader for the BBC Micro and NEC Portable developed by the Research Centre for the Education of the Visually Handicapped (RCEVH) at the University of Birmingham in 1986 ( ). In the same year, Jim Thatcher created the first IBM Screen Reader for text-based computers, which was later reworked for computers with graphical user interfaces ( GUIs ) ( ).

    The web’s explosive growth in the 1990s caused demand for accessible browsing tools to explode as well. Thanks to the introduction of semantic HTML and especially ARIA roles beginning in 2008, screen readers started facilitating speedier interactions with web pages, which disabled users could traverse as an aural and temporal space rather than a visual and physical one. In other words, web screen readers “provide mechanisms that translate visual design constructs—proximity, proportion, etc.—into useful information,” writes Aaron Gustafson in A List Apart. ”At least they do when documents are authored thoughtfully” ( ).

    But there’s a problem with screen readers: despite being incredibly instructive for voice interface designers, they’re difficult to use and relentlessly verbose. The visual structures of websites often don’t translate well to screen readers, which can lead to unwieldy pronouncements that name every manipulable HTML element and announce every formatting change. For many screen reader users, working with web-based interfaces exacts a cognitive toll.

    In Wired, accessibility advocate and voice engineer Chris Maury examines why the screen reader experience falls short for users who rely on audio:

    From the beginning, I hated the way that screen readers work. Why are they designed the way they are? It makes no sense to present information visually and only then translate that into audio. All of the time and effort that goes into creating the perfect user experience for an app is wasted, or worse, actively harms the experience for blind users. ( )

    In many cases, well-designed voice interfaces can satisfy users’ requests more quickly than rambling screen reader monologues. After all, sighted users of visual interfaces have the luxury of scurrying freely around the viewport to find information, skipping whatever doesn’t interest them. Blind users, meanwhile, are obligated to listen to every utterance synthesized into speech and therefore prize brevity and efficiency. Disabled users who have long had no choice but to use clunky screen readers stand to benefit from more streamlined voice interfaces, especially voice assistants.

    Voice assistants

    When we think of voice assistants (the subset of voice interfaces now commonplace in living rooms, smart homes, and offices), many of us immediately picture HAL from 2001: A Space Odyssey or hear Majel Barrett’s voice as the omniscient computer in Star Trek. Voice assistants are akin to personal concierges that can answer questions, schedule appointments, conduct searches, and perform other common day-to-day tasks. And they’re rapidly gaining more attention from accessibility advocates for their assistive potential.

    Before the earliest IVR systems found success in the enterprise, Apple published a demonstration video in 1987 depicting the Knowledge Navigator, a voice assistant that could transcribe spoken words and recognize human speech to a great degree of accuracy. Then, in 2001, Tim Berners-Lee and others outlined their vision of a “semantic web agent” that would carry out routine tasks like “checking calendars, making appointments, and finding locations” ( behind a paywall ). But it wasn’t until 2011, when Apple’s Siri debuted, that voice assistants became a reality for consumers.

    Among the plethora of voice assistants available today, there is considerable variation in how programmable and customizable some are compared to others ( Fig 1.1 ). At one extreme, everything except vendor-provided features is locked down. For instance, when Apple’s Siri and Microsoft’s Cortana were first released, developers couldn’t extend their capabilities. Even today, aside from predefined categories of tasks like sending messages, hailing rideshares, and making restaurant reservations, there is no way for developers to interact with Siri at a lower level.

    At the opposite end of the spectrum, voice assistants like Amazon Alexa and Google Home offer a core foundation on which developers can build custom voice interfaces. For this reason, developers who feel stifled by the limitations of Siri and Cortana are increasingly turning to programmable voice assistants that allow for customization and extensibility. Amazon offers the Alexa Skills Kit, a developer framework for creating custom voice interfaces for Amazon Alexa, while Google Home lets developers program arbitrary Google Assistant skills. Today, users can choose from among thousands of custom-built skills within both the Amazon Alexa and Google Assistant ecosystems.

    As companies like Amazon, Apple, Microsoft, and Google jockey for position, they are also selling and open-sourcing an unprecedented array of tools and frameworks for designers and developers that aim to make building voice interfaces as easy as possible, even without code.

    Often by necessity, voice assistants like Amazon Alexa tend to be monochannel—they’re tightly coupled to a device and can’t be accessed on a computer or smartphone instead. In contrast, many development platforms, such as Google’s Dialogflow, have omnichannel capabilities that allow users to create a single conversational interface that then becomes a voice interface, textual chatbot, and IVR system upon deployment. In this design-focused book, I don’t recommend any particular implementation strategies, but in Chapter 4 we’ll discuss some of the possible effects that these variables might have on how you construct your design artifacts.

    Voice Content

    Simply put, voice content is content delivered through voice. To preserve what makes human conversation so compelling in the first place, voice content needs to be free-flowing and organic, contextless and concise: everything written content isn’t.

    Our world is replete with voice content in various forms: screen readers reciting website content, voice assistants rattling off a weather forecast, and automated phone hotline responses governed by IVR systems. In this book, we’re most concerned with content delivered aurally, not as an option but as a necessity.

    For many of us, our first foray into informational voice interfaces will be to deliver content to users. There’s only one problem: any content we already have isn’t in any way ready for this new habitat. So how do we make the content trapped on our websites more conversational? And how do we write new copy that lends itself to voice interactions?

    Lately, we’ve begun slicing and dicing our content in unprecedented ways. Websites are, in many ways, colossal vaults of what I call macrocontent: lengthy prose that can scroll for miles in a browser window, like the microfilm viewers of newspaper archives. Back in 2002, well before the present-day ubiquity of voice assistants, Anil Dash defined microcontent as permalinked pieces of content that could be consumed in any context, such as email or text messages.

    A day’s weather forcast [sic], the arrival and departure times for an airplane flight, an abstract from a long publication, or a single instant message can all be examples of microcontent. ( )

    I would update Dash’s definition of microcontent to include all examples of bite-sized content that transcend written communiqués. After all, today we encounter microcontent in interfaces where a small snippet of copy is displayed alone, unmoored from the browser, like a textbot confirmation of a restaurant reservation. Microcontent offers the best way to stretch our content to the limits of its potential, informing delivery channels both established and new.

    What sets voice content apart is that it’s experienced in time rather than in space. We can glance at a digital sign in the underground for an instant and know when the next train is arriving, but voice interfaces hold our attention captive for stretches of time that we can’t easily escape or skip, something screen reader users are all too familiar with.

    Because microcontent is fundamentally made up of individual blobs with no ties to the channels where they’ll eventually end up, we need to ensure that our microcontent truly performs well as voice content. This means focusing on the two most important traits of robust voice content: voice content legibility and voice content discoverability.

    Fundamentally, both the legibility and the discoverability of our voice content are shaped by how it manifests in perceived time and space.

  • Sustainable Web Design, An Excerpt

    Sustainable Web Design, An Excerpt

    By the early 1950s, many in the elite running community had come to accept the idea that it was impossible to run a mile in less than four minutes. Runners had been attempting it since the late 19th century and were beginning to conclude that the human body simply wasn’t built for the job.

    But on May 6, 1954, Roger Bannister caught everyone off guard. It was a cold, damp morning in Oxford, England, with conditions no one expected to lend themselves to record-setting, but Bannister did just that, running a mile in 3:59.4 and becoming the first person in the history books to run a mile in under four minutes.

    This shift in the benchmark mattered: the world now knew that the four-minute mile could be done. Bannister’s record lasted just forty-six days before it was snatched away by Australian runner John Landy. Soon after, three runners broke the four-minute barrier together in a single race. Since then, over 1,400 athletes have officially run a mile in under four minutes; the current record of 3:43.13 is held by Moroccan runner Hicham El Guerrouj.

    We are capable of far more when we believe something is possible, and often we only believe it’s possible once we’ve seen someone else do it. Just as with human running speed, we carry assumptions about how well a website can perform.

    Establishing standards for a green website

    In most major industries, the key metrics of environmental performance are fairly well established, such as energy per square meter for homes and miles per gallon for cars. The tools and methods for calculating those metrics are standardized as well, which keeps everyone on the same page when doing environmental assessments. In the world of websites and apps, however, we aren’t held to any specific environmental standards, and only recently have we gained access to the tools and methods we need to measure them.

    The main objective in green web design is to reduce carbon emissions. However, it’s nearly impossible to accurately measure the CO2 output of a digital product. We can’t measure fumes coming out of exhaust pipes on our laptops. The pollution coming from power plants that burn coal and oil is far away, out of sight, and out of mind. We have no way to trace the electrons from a website or app back to the power station where the electricity is generated and know the exact amount of greenhouse gas produced. So what do we do?

    If we can’t measure the actual carbon emissions, then we need to find what we can measure. The following are the primary factors that can serve as indicators of carbon emissions:

    1. Data transfer
    2. Carbon intensity of electricity

    Let’s take a look at how we can use these indicators to calculate the energy use, and in turn the carbon footprint, of the sites and web applications we create.

    Data transfer

    When measuring the amount of data transferred over the internet as a website or application is used, most researchers use kilowatt-hours per gigabyte (kWh/GB) as a metric of energy efficiency. This serves as a good indicator of how much energy is consumed and, in turn, how much carbon is emitted. As a rule of thumb, the more data transferred, the more electricity used in data centers, telecoms networks, and end users’ devices.

    For web pages, the easiest way to estimate data transfer for a single visit is to measure the page weight: the page’s transfer size in kilobytes when someone first visits it. It’s very easy to measure using the developer tools in any modern web browser. Often, your web hosting account will also report overall data transfer statistics for any web application ( Fig. 2.1 ).

    The great thing about page weight as a metric is that it allows us to compare the efficiency of web pages on a level playing field without confusing the issue with frequently changing traffic volumes.

    There is considerable scope to reduce page weight. By early 2020, the median page weight was 1.97 MB for setups the HTTP Archive classifies as “desktop” and 1.77 MB for “mobile”, with desktop increasing 36 percent since January 2016 and mobile page weights nearly doubling in the same period ( Fig 2.2 ). Image files account for roughly half of this data transfer, making them the single biggest contributor to carbon emissions on the typical website.

    History clearly shows us that our web pages can be smaller, if only we set our minds to it. While the underlying technologies of the web, such as data centers and transmission networks, become more energy efficient year after year, websites themselves are becoming less efficient as time goes on.

    You may be aware of the idea of performance budgeting as a method for directing a project team to deliver faster user experiences. For example, we might specify that the website must load in a maximum of one second on a broadband connection and three seconds on a 3G connection. Performance budgets are upper limits rather than vague suggestions, much like speed limits while driving, so the goal should always be to come in within budget.

    Designing for fast performance does often lead to reduced data transfer and emissions, but it isn’t always the case. Page weight and transfer size are more objective and reliable benchmarks for sustainable web design, whereas web performance often depends more on the user’s perception of load times than on how efficient the underlying system is.

    We can set a page weight budget in reference to a benchmark of industry averages, using data from sources like HTTP Archive. We can also benchmark against competitor websites or against the website’s own current design. For example, we might set a maximum page weight budget equal to our most efficient competitor, or we could set the benchmark lower to guarantee we are best in class.

    If we want to take it one step further, we could start looking at the transfer size of our web pages for repeat visitors. Although page weight for a first visit is the easiest thing to measure, and easy to compare on a like-for-like basis, we can learn even more if we start looking at transfer size in other scenarios too. For instance, visitors who load the same page repeatedly are likely to have a high percentage of the files cached in their browser, which means they don’t need to download all the files on subsequent visits. Likewise, a visitor who navigates to new pages on the same website will likely not need to load the full page each time, as some global assets from areas like the header and footer may already be cached. Measuring page weight budgets for these scenarios beyond the first visit can teach us even more about how to optimize efficiency for users who regularly visit our pages.

    Page weight budgets are easy to track throughout a design and development process. Although they don’t directly disclose carbon emissions and energy consumption data, they do provide a clear indicator of efficiency in comparison to other websites. And as transfer size is an effective analog for energy consumption, we can actually use it to estimate energy consumption too.

    In summary, less data transfer leads to more energy efficiency, which is a crucial component of lowering web product carbon emissions. The more efficient our products, the less electricity they use, and the less fossil fuels need to be burned to produce the electricity to power them. However, as we’ll see next, it’s important to take into account the source of that electricity because all web products require some.

    Carbon intensity of electricity

    Regardless of energy efficiency, the level of pollution caused by digital products depends on the carbon intensity of the energy being used to power them. Carbon intensity (gCO2/kWh) describes how much carbon dioxide is emitted for each kilowatt-hour of electricity produced. This varies widely, with renewable energy sources and nuclear having an extremely low carbon intensity of less than 10 gCO2/kWh ( even when factoring in their construction ), whereas fossil fuels have a very high carbon intensity of approximately 200–400 gCO2/kWh.

    The majority of electricity is produced by national or state grids, which mix energy from a variety of sources with different levels of carbon intensity. The distributed nature of the internet means that a single user of a website or app might be drawing on energy from multiple grids simultaneously: a website user in Paris uses electricity from the French national grid to power their home internet and devices, but the website’s data center could be in Dallas, USA, pulling electricity from the Texas grid, while the telecoms networks use energy from everywhere between Dallas and Paris.

    Although we don’t have complete control over the energy supply of web services, we do have some control over where our projects are hosted. With a data center consuming a significant proportion of any website’s energy, locating the data center in an area with low-carbon energy will tangibly reduce its carbon emissions. Danish startup Tomorrow reports and maps this data, and a look at their map demonstrates how, for instance, choosing a data center in France will result in significantly lower carbon emissions than choosing a data center in the Netherlands ( Fig. 2.3 ).

    However, we don’t want to move our servers too far away from our users, because transmitting data through the telecoms networks takes a lot of energy: the further the data travels, the more energy is used. Just like food miles, we can think of the distance from the data center to the website’s core user base as “megabyte miles”—and we want it to be as small as possible.

    We can use website analytics to determine the country, state, or even city where our core user group is located, then measure the distance from that location to the data center used by our hosting company and use that distance as a benchmark. This will be a somewhat fuzzy metric, as we don’t know the precise center of mass of our users or the exact location of a data center, but we can at least get a rough idea.

    For instance, if a website is hosted in London but the main audience is on the United States’ West Coast, we could look up the travel distance between London and San Francisco, which is 5,300 miles. That’s a long way! Hosting it somewhere in North America, ideally on the West Coast, would significantly shorten the distance and reduce the amount of energy needed to transmit the data. In addition, locating our servers closer to our visitors reduces latency and delivers a better user experience, so it’s a win-win.
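    As a sketch of how you might benchmark those “megabyte miles” yourself (the coordinates below are approximate city centers, since neither the users’ center of mass nor the data center’s exact location is known):

```python
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_MILES = 3959  # Earth's mean radius

def haversine_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance in miles between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 \
        + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * EARTH_RADIUS_MILES * asin(sqrt(a))

# Approximate city-center coordinates stand in for the fuzzy real locations.
london = (51.5074, -0.1278)
san_francisco = (37.7749, -122.4194)

# Roughly 5,300 "megabyte miles" from data center to core user base.
print(round(haversine_miles(*london, *san_francisco)))
```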

    Converting it to carbon emissions

    If we combine carbon intensity with a calculation for energy consumption, we can calculate the carbon emissions of our websites and apps. A tool my team created accomplishes this by measuring the data transfer over the wire when a web page is loaded, calculating the associated electricity consumption, and then converting that into a CO2 figure (Fig. 2.4). It also factors in whether or not the web hosting is powered by renewable energy.
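    A minimal sketch of that pipeline, under loudly labeled assumptions (the energy-per-gigabyte, grid-intensity, and green-hosting factors below are placeholders for illustration, not the tool’s published figures):

```python
# ASSUMED conversion factors, for illustration only:
KWH_PER_GB = 0.8             # assumed energy intensity of transferred data
GRID_G_CO2_PER_KWH = 400     # assumed grid carbon intensity
GREEN_HOSTING_SAVING = 0.15  # assumed share of energy decarbonized by a green host

def page_co2_grams(transfer_mb, green_hosting=False):
    """Rough CO2 estimate for a page load, derived from its data transfer."""
    energy_kwh = (transfer_mb / 1024) * KWH_PER_GB
    grams = energy_kwh * GRID_G_CO2_PER_KWH
    if green_hosting:
        grams *= 1 - GREEN_HOSTING_SAVING
    return grams

print(round(page_co2_grams(2.0), 3))                      # a 2 MB page load
print(round(page_co2_grams(2.0, green_hosting=True), 3))  # same page, green host
```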

    The Energy and Emissions Worksheet that comes with this book shows you how to refine this calculation and tailor the data more closely to your project’s unique characteristics.

    With the ability to calculate carbon emissions for our projects, we can go beyond page weight budgets and establish carbon budgets as well. CO2 is not a metric commonly used in web projects; we’re more familiar with kilobytes and megabytes, and can fairly easily look at design options and files to assess how big they are. Translating that into carbon adds a layer of abstraction that isn’t as intuitive, but carbon budgets do focus our minds on the main thing we’re trying to reduce, in line with the central goal of sustainable web design: reducing carbon emissions.

    Browser Energy

    Data transfer might be the simplest and most holistic proxy for energy consumption in our digital projects, but because it gives us one number to represent the energy used in the data center, the telecoms networks, and the end user’s devices, it can’t offer insights into the efficiency of any specific part of the system.

    One part of the system we can look at in more detail is the energy used by end users’ devices. As front-end web technologies advance, computational load is increasingly shifting from the data center to users’ devices, whether they are phones, tablets, laptops, desktops, or even smart TVs. Modern web browsers allow us to implement complex styling and animation on the fly using CSS and JavaScript, and JavaScript libraries like Angular and React make it possible to create applications where the “thinking” is performed partially or completely in the browser.

    All of these advances are exciting and open up new possibilities for what the web can do to serve society and create positive experiences. However, the more computation we push into the user’s web browser, the more energy the user’s device consumes. This has implications not just environmentally, but also for user experience and inclusivity. Applications that place heavy processing demands on a user’s device make older, slower devices frustrating to use and drain the batteries of phones and laptops more quickly. Furthermore, if we build web applications that require up-to-date, powerful devices, we encourage people to throw away old devices much more frequently. That is not just bad for the environment; it also places a disproportionate financial burden on the poorest members of society.

    Partly because the tools are limited, and partly because there are so many different models of devices, it’s difficult to measure website energy consumption on end users’ devices. One of the tools we currently have is the Energy Impact monitor inside the developer console of the Safari browser (Fig. 2.5).

    You know that moment when you load a website and your computer’s cooling fans start spinning so frantically that you think it might take off? That’s essentially what this tool is measuring.

    It shows the percentage of CPU used and the duration of CPU usage when loading the web page, and it uses these figures to create an energy impact rating. It doesn’t give us precise figures for the amount of electricity used in kilowatt-hours, but the information it does provide can be used to benchmark how efficiently your websites use energy and to set targets for improvement.

  • Design for Safety, An Excerpt

    Design for Safety, An Excerpt

    According to antiracist scholar Kim Crayton, “intention without strategy is chaos.” We’ve discussed how our biases, assumptions, and inattention toward marginalized and vulnerable groups lead to dangerous and unethical tech—but what, precisely, do we need to do to fix it? We need a strategy, not just the desire to make our technology safer.

    This book will provide you with that strategy. It covers how to incorporate safety principles into your design work in order to create technology that’s safe, how to convince stakeholders that this work is necessary, and how to respond to the critique that what we really need is more diversity. (Spoiler: we do, but diversity alone is not the antidote to unethical, unsafe tech.)

    The Process for Inclusive Safety

    When you are designing for safety, your goals are to:

    • identify the ways your product can be used for abuse,
    • design ways to prevent the abuse, and
    • provide support for survivors to regain power and control.

    The Process for Inclusive Safety is a tool to help you reach those goals (Fig 5.1). I developed this process in 2018 to document the various techniques I was using when designing products with safety in mind. Whether you are creating an entirely new product or adding to an existing one, the Process can help you make your product safe and inclusive. It includes five general areas of action:

    • Conducting research
    • Creating archetypes
    • Brainstorming problems
    • Designing solutions
    • Testing for safety

    It is intended to be flexible—you won’t need to use every step in every case. Use the parts that are relevant to your specific product and context; this is meant to be something you can fold into your existing design process.

    And if you’ve used it, have suggestions for improving it, or just want to share how it helped your team, please get in touch with me. It’s a living document that I hope will continue to be a helpful, practical tool that technologists can use in their day-to-day work.

    If you’re developing a product specifically for a vulnerable group or for survivors of some kind of trauma—such as an app for survivors of domestic violence, sexual assault, or drug addiction—be sure to read Chapter 7, which addresses that situation specifically and should be handled somewhat differently. The guidelines below are for evaluating safety when designing a more general product with a wide user base (which, we know from data, will include vulnerable groups that should be protected from harm). Chapter 7 focuses on products made specifically for vulnerable people and trauma survivors.

    Step 1: Conduct research

    Design research should involve a thorough evaluation of how your technology might be used for abuse, as well as specific insight into the experiences of people who have experienced or perpetrated that kind of abuse. At this stage, you and your team will research issues of interpersonal harm and abuse, and examine any other safety, security, or inclusivity issues that might be a concern for your product or service, such as data security, biased algorithms, and harassment.

    Broad research

    Your project should begin with broad, general research into similar products and into safety and ethical issues that have already been reported. For instance, a team building a smart home device would be wise to understand the many ways that existing smart home devices have been misused as tools of abuse. If your product will involve AI, seek to understand the potential for racism and other problems that have been reported in existing AI products. Nearly all forms of technology have potential or actual harms that have been covered in academic writing or in the media. Google Scholar is a useful tool for finding these studies.

    Specific research: Survivors

    When possible and appropriate, include direct research (surveys and interviews) with people who are experts in the forms of harm you have uncovered. Interview advocates working in the related fields first, so that you have a better understanding of the subject and are better positioned to avoid retraumatizing survivors. If you’ve uncovered possible domestic violence issues, for example, the experts you’ll want to speak with are survivors themselves, as well as workers at domestic violence hotlines and shelters, other related nonprofits, and lawyers.

    It is crucial to pay people for their knowledge and lived experiences, especially when interviewing survivors of any kind of trauma. Don’t ask survivors to share their trauma for free, as this is exploitative. You should always make the offer in the first interview, even though some survivors might not want to be paid. An alternative to payment is to donate to an organization working against the type of violence that the interviewee experienced. In Chapter 6, we’ll discuss how to appropriately interview survivors.

    Specific research: Abusers

    Teams aiming to design for safety are unlikely to be able to interview self-declared abusers or people who have broken laws in areas like hacking. Don’t make this a goal; rather, try to get at this angle in your general research. Identify the ways that abusers or bad actors use technology to harm others, how they use it to silence their victims, and how they justify or explain the abuse.

    Step 2: Create archetypes

    Once you’ve finished conducting your research, use the findings to create abuser and survivor archetypes. Archetypes are not personas—they’re not based on real people you interviewed and surveyed. They are based on your research into potential safety issues, much like when we design for accessibility: we don’t need blind or deaf users in our interview pool to come up with a design that includes them. Instead, we base those designs on existing research into what this group needs. Personas typically include a lot of detail and represent real users; archetypes are more generalized.

    The abuser archetype is someone who will look at the product as a tool to perform harm ( Fig 5.2 ). They may be attempting to harm someone they don’t know by using surveillance or anonymous harassment, or they may be trying to control, monitor, abuse, or otherwise torment someone they know.

    The survivor archetype refers to a person who is being abused with the product. There are various situations to consider in terms of the archetype’s understanding of the abuse and how to put an end to it: Do they need proof of abuse they already suspect is happening, or are they unaware they’ve been targeted in the first place and need to be alerted ( Fig 5.3 )?

    To capture a range of experiences, you might want to create several survivor archetypes. They may know that the abuse is happening but not be able to stop it, like when an abuser locks them out of IoT devices, or they know it’s happening but don’t know how, such as when a stalker keeps figuring out their location ( Fig 5.4). In your survivor archetype, include as many of these scenarios as you need. You’ll use these later on when you design solutions to help your survivor archetypes achieve their goals of preventing and ending abuse.

    It may be useful for you to create persona-like artifacts for your archetypes, such as the three examples shown. Focus on their objectives rather than the demographic information we frequently see in personas. The goals of the abuser will be to carry out the specific abuse you’ve identified, while the goals of the survivor will be to prevent abuse, understand that abuse is happening, make ongoing abuse stop, or regain control over the technology that’s being used for abuse. Later, you’ll think about how to help the survivor’s goals and prevent the abuser’s goals.

    And while the “abuser/survivor” model fits most cases, it doesn’t fit all, so modify it as you need to. For instance, if you found a security flaw, such as the ability for someone to talk to children through a home camera system, the abuser archetype would be the malicious hacker and the survivor archetype would be the child’s parents.

    Step 3: Brainstorm problems

    After creating archetypes, brainstorm novel abuse cases and safety issues. “Novel” means things not found in your research; you’re trying to identify completely new safety issues that are unique to your product or service. The point of this step is to make an exhaustive effort to identify potential harms your product might cause. You aren’t worrying about how to prevent the harm yet—that comes in the next step.

    What other abuses could your product be used for besides what you’ve already discovered through your research? I recommend setting aside at least a few hours with your team for this process.

    If you’re looking for a place to start, try conducting a Black Mirror brainstorm. This exercise is based on the show Black Mirror, which features stories about the dark possibilities of technology. Imagine the most outrageous, terrible, and out-of-control ways your product could be used for harm, as if it were an episode of the show. When I’ve led Black Mirror brainstorms, participants usually end up having a good deal of fun (which I think is great—it’s okay to have fun when designing for safety!). I suggest time-boxing the Black Mirror brainstorm to the first half hour, then dialing it back and using the remaining time to consider more plausible forms of harm.

    After you’ve identified as many opportunities for abuse as possible, you may still not feel confident that you’ve uncovered every potential form of harm. A healthy amount of anxiety comes with this type of work. It’s common for teams designing for safety to worry, “Have we really identified every possible harm? What if we missed something?” If you’ve spent at least four hours coming up with ways your product could be used for harm and have run out of ideas, move on to the next step.

    It’s impossible to be 100 percent sure that you’ve identified everything; instead of aiming for certainty, acknowledge that you’ve done your best and commit to continuing to prioritize safety in the future. Once your product is released, your users may identify new issues that you missed; aim to receive that feedback graciously and course-correct quickly.

    Step 4: Create solutions

    At this point, you should have a list of ways your product can be used for harm as well as survivor and abuser archetypes describing opposing user goals. Next, it is important to figure out how to design in opposition to the identified abuser’s objectives and to support the survivor’s objectives. This step is a good one to insert alongside existing parts of your design process where you’re proposing solutions for the various problems your research uncovered.

    To design against the abuser archetype’s goals and in support of the survivor archetype’s goals, ask yourself:

    • Can you design your product so that the identified harm cannot happen in the first place? If not, what barriers can you put in place to make the harm harder to carry out?
    • How can you make the victim aware that abuse is happening through your product?
    • How can you explain to the victim what they must do to stop the problem?
    • Can you identify any types of user activity that would indicate some form of harm or abuse? Could your product aid in the user’s access to support?

    In some products, it’s possible to proactively recognize that harm is happening. For instance, a pregnancy app might allow users to report that they were the victim of an assault, which could trigger an offer of resources from local and national organizations. This sort of proactiveness isn’t always possible, but it’s worth taking a half hour to discuss whether any type of user activity would indicate some form of harm or abuse, and how your product could assist the user in receiving help in a safe manner.

    However, be careful when doing anything that could harm a user if their devices are being monitored. If you do offer some kind of proactive help, always make it voluntary, and think through other safety issues, such as the need to keep the user in-app in case an abuser is checking their search history. In the next chapter, we’ll walk through a good illustration of this.

    Step 5: Test for safety

    The final step is to evaluate your prototypes from the perspectives of your archetypes: the abuser who wants to use the product for harm, and the survivor who needs to regain control over the technology. Just like any other kind of product testing, at this point you’ll aim to rigorously test your safety solutions so that you can identify gaps and correct them, validate that your designs will help keep your users safe, and feel more confident releasing your product into the world.

    Safety testing should be performed in addition to usability testing. If you’re at a company that doesn’t do usability testing, you might be able to use safety testing to cleverly do both at once: a user who goes through your design attempting to weaponize the product against someone else can also be encouraged to point out interactions or other elements of the design that don’t make sense to them.

    You can conduct safety testing on a final prototype or on a finished product that has already been released. There’s nothing wrong with testing an existing product that wasn’t designed with safety goals in mind from the outset—“retrofitting” it for safety is a good thing to do.

    Keep in mind that testing for safety involves both an abuser’s and a survivor’s perspective, though in some cases it may not make sense to test from both. And if you made multiple survivor archetypes to capture multiple scenarios, you’ll want to test from the perspective of each one.

    Just as with other usability testing, you as the designer are too closely acquainted with the product and its design at this point—you know it too well. Instead of doing the testing yourself, set it up as you would any usability test: find someone who is not familiar with the product, set the scene, give them a task, encourage them to think out loud, and observe how they attempt to complete it.

    Abuse testing

    The goal of this testing is to understand how easy it is for someone to weaponize your product for harm. You want to make it impossible, or at least difficult, for them to accomplish their goal, in contrast to usability testing. Reference the goals in the abuser archetype you created earlier, and use your product in an attempt to achieve them.

    For instance, for a fitness app with GPS-enabled location features, we can imagine that the abuser archetype’s goal is to determine the current location of his ex-girlfriend. With this goal in mind, you’d try everything possible to figure out the location of another user who has their privacy settings enabled. You might try to follow her running routes, view any information she has on her profile, attempt to view information she has made private, and check out other users’ profiles, such as those of her followers.

    If by the end of this you’ve managed to uncover some of her location data, despite her having set her profile to private, you know that your product enables stalking. Your next step is to return to step 4 and figure out how to prevent this. You may need to repeat the process of designing solutions and testing them more than once.

    Survivor testing

    Survivor testing involves identifying how to give information and power to the survivor. Depending on the product or context, it might not always make sense. Thwarting the attempt of an abuser archetype to stalk someone also satisfies the goal of the survivor archetype to not be stalked, so separate testing wouldn’t be needed from the survivor’s perspective.

    However, there are instances where it makes sense. For example, for a smart thermostat, a survivor archetype’s goal would be to understand who or what is changing the temperature when they aren’t doing it themselves. You could test this by looking for the thermostat’s history log and checking for usernames, actions, and times. If you couldn’t find that information, you’d know that more work is needed in step 4.

    Another goal might be regaining control of the thermostat once the survivor realizes the abuser is remotely changing its settings. For your test, this would involve trying to figure out how to do that: Are there instructions that explain how to remove a user and change the password, and are they easy to find? This might again reveal that more work is needed to make it clear to the user how they can regain control of the device or account.

    Stress testing

    To make your product more inclusive and compassionate, consider adding stress testing. This idea comes from Design for Real Life by Eric Meyer and Sara Wachter-Boettcher. The authors point out that personas typically center people who are having a good day—but real users are often anxious, stressed out, having a bad day, or even experiencing tragedy. These are known as “stress cases,” and testing your products for users in stress-case scenarios can reveal areas where your design lacks compassion. Design for Real Life has more details about what it looks like to incorporate stress cases into your design, as well as many other great tactics for compassionate design.

  • A Content Model Is Not a Design System

    A Content Model Is Not a Design System

    Do you remember when having a great website was enough? Today, people get answers from Siri, Google search snippets, and mobile apps, not just our websites. Forward-thinking organizations have adopted an omnichannel content strategy, whose goal is to reach audiences across multiple digital channels and platforms.

    How can you set up a content management system (CMS) to reach your audience now and in the future? I learned the hard way that creating a content model—a definition of content types, attributes, and relationships that lets people and systems understand content—with my familiar design-system thinking would undermine my client’s omnichannel content strategy. You can avoid that outcome by creating content models that are semantic and that also connect related content.

    I recently had the opportunity to lead a CMS implementation for a Fortune 500 company. The client was excited by the benefits of an omnichannel content strategy, including content reuse, multichannel marketing, and robot delivery—designing content to be understandable by bots, Google knowledge panels, snippets, and voice user interfaces.

    A content model is essential to an omnichannel content strategy, and ours needed semantic types: types defined by their meaning rather than their presentation. Our goal was to let authors create content once and reuse it wherever it was relevant. But as the project progressed, I realized that the whole team needed to shift its mindset in order to support content reuse at the level my client needed.

    Despite our best intentions, we kept drawing from what we were more familiar with: design systems. Unlike web-focused content strategies, an omnichannel content strategy doesn’t rely on WYSIWYG tools for design and layout. Our tendency to approach the content model with familiar design-system thinking kept pulling us away from one of the main objectives of a content model: delivering content to audiences across multiple channels.

    Two principles for a successful content model

    Our designers, developers, and stakeholders had learned from past web projects to treat content as physical building blocks that fit into layouts. That approach felt more comfortable and intuitive—at first, at least. The team came to understand how a content model differs from the design systems we were familiar with by discovering two principles:

    1. Content models should define semantics instead of design.
    2. Content models should connect content that belongs together.

    Semantic content models

    A semantic content model uses type and attribute names that reflect the content’s meaning, not how it will be displayed. For instance, in a nonsemantic model, teams may create types like teasers, media blocks, and cards. These types might make it easy to lay out content, but they don’t help delivery channels understand the content’s meaning, which would let each channel present the content appropriately. Instead, a semantic content model uses type names like product, service, and testimonial so that each delivery channel can understand the content and use it as it sees fit.

    A great place to start when creating a semantic content model is to review the types and properties defined by Schema.org, a community-driven resource for type definitions that are understandable by platforms like Google search.

    A semantic content model has many benefits:

    • A semantic content model decouples content from its presentation, so teams can change the website’s design without having to restructure its content—even if your team doesn’t care about omnichannel content. In this way, content can withstand disruptive website redesigns.
    • A semantic content model can also provide a competitive advantage. By including structured data based on Schema.org’s types and properties, a website can provide hints to help Google understand the content, display it in search snippets or knowledge panels, and use it to answer voice-interface user questions. Potential customers could encounter your content without ever visiting your website.
    • Beyond those practical advantages, a semantic content model is necessary if you want to deliver omnichannel content. To use the same content across multiple marketing channels, each delivery channel must be able to understand it. For instance, if your content model provided a list of questions and answers, it could easily be displayed on a frequently asked questions (FAQ) page—and also be used by a bot that answers common questions.

    For example, using a semantic content model for articles, events, people, and locations lets A List Apart provide cleanly structured data for search engines so that users can read the content on the website, in Google knowledge panels, and even with hypothetical voice interfaces in the future.
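    As a sketch of what that looks like in practice, the snippet below builds Schema.org JSON-LD from a semantic article item (the article’s field values are invented; the Schema.org type and property names—Article, headline, datePublished—are real):

```python
import json

# A semantic "article" content item; the field values are hypothetical.
article = {
    "title": "An Example Headline",
    "author": "A. Writer",
    "published": "2021-07-01",
}

def to_json_ld(item):
    """Render an article item as Schema.org JSON-LD for search engines."""
    return json.dumps({
        "@context": "https://schema.org",
        "@type": "Article",
        "headline": item["title"],
        "author": {"@type": "Person", "name": item["author"]},
        "datePublished": item["published"],
    }, indent=2)

# Embedded in a <script type="application/ld+json"> tag, this helps
# Google and voice interfaces understand the content.
print(to_json_ld(article))
```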

    Content models that connect

    I’ve come to realize that the best content models are semantic and also connect related content (such as a FAQ item’s question-and-answer pair) instead of slicing it into disparate components. A good content model keeps together the pieces of content that belong together, so that multiple delivery channels can use them without having to assemble them first.
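    To make the idea concrete, here is a hedged sketch (the type and field names are invented for illustration) of a connected FAQ model that keeps each question with its answer:

```python
from dataclasses import dataclass

# A connected content model: question and answer live in one type,
# so any channel can consume the pair without reassembling fragments.
@dataclass
class FaqItem:
    question: str
    answer: str

faq = [
    FaqItem("How do I cancel?", "Open Settings and choose Cancel plan."),
    FaqItem("Is there a free tier?", "Yes, up to three projects."),
]

# The website renders the pairs on an FAQ page...
page = "\n".join(f"Q: {i.question}\nA: {i.answer}" for i in faq)

# ...and a bot answers the same questions from the same content.
answers = {i.question.lower(): i.answer for i in faq}
print(answers["how do i cancel?"])  # Open Settings and choose Cancel plan.
```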

    Consider an essay or article. An article’s meaning and usefulness depend upon its parts being kept together. Would one of the headings or paragraphs have any significance on its own, outside the article? On our project, our familiar design-system thinking frequently led us to want content models that divided content into distinct chunks to fit a web-centric layout. This had an effect similar to removing an article’s headline: because we were cutting content into separate pieces based on layout, content that belonged together became challenging to manage and nearly impossible for multiple delivery channels to understand.

    To illustrate, let’s look at how connecting related content applies in a real-world scenario. The client’s design team presented a complex layout for a software product page that included multiple tabs and sections. Our design-system instincts kicked in: shouldn’t we make it as simple and flexible as possible to add any number of tabs in the future?

    Steeped in design-system thinking, we felt we needed a “tab section” content type, so that any number of tab sections could be added to a page. Each tab section would display a different kind of content. One tab might provide the software’s overview or its specifications. Another might provide a list of resources.

    Dividing the content model into “tab section” pieces would have resulted in a cumbersome editing process and unnecessarily complex content that other delivery channels couldn’t digest. How would another system identify a product’s specifications or resource list, for instance? It would have had to resort to counting tab sections and content blocks. That would have prevented the tabs from ever being rearranged, and it would have required every other delivery channel to add logic to interpret the design system’s layout. Additionally, if the client later decided against displaying this content in a tab layout, migrating to a new content model for the redesigned page would have been difficult.

    We had a breakthrough when we realized that our client had a specific purpose for each tab: it would reveal specific information, such as the software product’s overview, specifications, related resources, and pricing. Once implementation began, our focus on the visually appealing and familiar had obscured that purpose. With a little digging, it didn’t take long to see that the concept of tabs wasn’t relevant to the content model. What mattered was the meaning of the information intended to be displayed in the tabs.

    In fact, the client could later choose to present this content in another format entirely—or use tabs elsewhere. Based on the meaningful attributes the client wanted to display on the web, we created content types for the software product. There were obvious semantic attributes like name and description, as well as rich attributes like screenshots, software requirements, and feature lists. The software’s product information stayed together because it wasn’t sliced across separate components like “tab sections” derived from the content’s presentation. Any delivery channel—including future ones—could understand and present this content.
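    A minimal sketch of the resulting model (attribute names follow the ones mentioned above; the product data is invented). Note that no “tab section” appears—tabs remain a per-channel presentation choice:

```python
from dataclasses import dataclass, field

@dataclass
class SoftwareProduct:
    # Semantic attributes: they describe meaning, not layout.
    name: str
    description: str
    screenshots: list = field(default_factory=list)
    requirements: list = field(default_factory=list)
    features: list = field(default_factory=list)

product = SoftwareProduct(
    name="ExampleApp",  # hypothetical product
    description="Does example things.",
    requirements=["4 GB RAM"],
    features=["Sync", "Offline mode"],
)

# One channel may still choose to render attributes as tabs...
tabs = {"Overview": product.description, "Specifications": product.requirements}
# ...while a voice interface reads the description directly.
spoken = f"{product.name}: {product.description}"
print(spoken)
```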

    Conclusion

    In this omnichannel marketing project, we discovered that the key to a flexible content model was keeping it semantic ( with type and attribute names that reflected the content’s meaning ) and keeping content that belonged together intact ( as opposed to fragmenting it ). These two principles guided our decisions whenever the design tempted us to reshape the content model. If you’re developing a content model to support an omnichannel content strategy, or even if you just want to make sure Google and other interfaces understand your content, keep in mind:

    • A design system isn’t a content model. Team members may be tempted to conflate the two and force your content model to mirror your design system, so maintain the semantic value and contextual structure of the content throughout the entire implementation process. That way, every delivery channel will be able to consume the content without a magic decoder ring.
    • If your team is having trouble making this transition, Schema.org–based structured data in your website can still offer some of the advantages. The search engine optimization benefit is a compelling reason on its own, even if additional delivery channels aren’t on the horizon in the near future.
    • Remind the team that decoupling the content model from the design will let them update the designs more quickly, because they won’t be weighed down by the cost of content migrations. They’ll be ready for the next big thing, able to roll out new designs without reworking the content to fit them.
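    To make the Schema.org suggestion above concrete, here is a minimal sketch of structured data for a software product using schema.org’s real SoftwareApplication type; the product details are invented placeholders:

    ```python
    import json

    # Minimal JSON-LD sketch using schema.org's SoftwareApplication type.
    # The product details are invented placeholders.
    structured_data = {
        "@context": "https://schema.org",
        "@type": "SoftwareApplication",
        "name": "Example CMS",
        "description": "A headless content management system.",
        "operatingSystem": "Linux",
        "applicationCategory": "DeveloperApplication",
    }

    # This JSON would be embedded in the page inside a
    # <script type="application/ld+json"> tag for search engines to read.
    snippet = json.dumps(structured_data, indent=2)
    print(snippet)
    ```

    Notice that the keys mirror the semantic attributes of the content model, not the layout of any particular page.
    
    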

    By firmly defending these ideas, you’ll help your team treat content as it deserves: as the most important component of your user experience and your best means of engaging with your audience.

  • How to Sell UX Research with Two Simple Questions

    How to Sell UX Research with Two Simple Questions

    Do you find yourself designing screens with only a vague idea of how the things on the screen relate to the things elsewhere in the system? Do you leave stakeholder meetings with unclear directives that often seem to contradict previous conversations? You know a better understanding of user needs would help the team get clear on what you are actually trying to accomplish, but time and budget for research is tight. When it comes to asking for more direct contact with your users, you might feel like poor Oliver Twist, timidly asking, “Please, sir, I want some more.” 

    Here’s the trick. You need to get stakeholders themselves to identify high-risk assumptions and hidden complexity, so that they become just as motivated as you to get answers from users. Basically, you need to make them think it’s their idea. 

    In this article, I’ll show you how to collaboratively expose misalignment and gaps in the team’s shared understanding by bringing the team together around two simple questions:

    1. What are the objects?
    2. What are the relationships between those objects?

    A gauntlet between research and screen design

    These two questions align to the first two steps of the ORCA process, which might become your new best friend when it comes to reducing guesswork. Wait, what’s ORCA?! Glad you asked.

    ORCA stands for Objects, Relationships, CTAs, and Attributes, and it outlines a process for creating solid object-oriented user experiences. Object-oriented UX is my design philosophy. ORCA is an iterative methodology for synthesizing user research into an elegant structural foundation to support screen and interaction design. OOUX and ORCA have made my work as a UX designer more collaborative, effective, efficient, fun, strategic, and meaningful.

    The ORCA process has four iterative rounds and a whopping fifteen steps. In each round we get more clarity on our Os, Rs, Cs, and As.

    I sometimes say that ORCA is a “garbage in, garbage out” process. To ensure that the testable prototype produced in the final round actually tests well, the process needs to be fed by good research. But if you don’t have a ton of research, the beginning of the ORCA process serves another purpose: it helps you sell the need for research.

    In other words, the ORCA process serves as a gauntlet between research and design. With good research, you can gracefully ride the killer whale from research into design. But without good research, the process effectively spits you back into research, this time with a cache of specific open questions.

    Getting in the same curiosity-boat

    What gets us into trouble is not what we don’t know. It’s what we know for sure that just ain’t so.

    Mark Twain

    The first two steps of the ORCA process—Object Discovery and Relationship Discovery—shine a spotlight on the dark, dusty corners of your team’s misalignments and any inherent complexity that’s been swept under the rug. It begins to expose what this classic comic so beautifully illustrates:

    This is one reason why so many UX designers are frustrated in their job and why many projects fail. And this is also why we often can’t sell research: every decision-maker is confident in their own mental picture. 

    Once we expose hidden fuzzy patches in each picture and the differences between them all, the case for user research makes itself.

    But how we do this is important. However much we might want to, we can’t just tell everyone, “YOU ARE WRONG!” Instead, we need to facilitate and guide our team members to self-identify holes in their picture. When stakeholders take ownership of assumptions and gaps in understanding, BAM! Suddenly, UX research is not such a hard sell, and everyone is aboard the same curiosity-boat.

    Say your users are doctors. And you have no idea how doctors use the system you are tasked with redesigning.

    You might try to sell research by honestly saying: “We need to understand doctors better! What are their pain points? How do they use the current app?” But here’s the problem with that. Those questions are vague, and the answers to them don’t feel acutely actionable.

    Instead, you want your stakeholders themselves to ask super-specific questions. This is more like the kind of conversation you need to facilitate. Let’s listen in:

    “Wait a sec, how often do doctors share patients? Does a patient in this system have primary and secondary doctors?”

    “Can a patient even have more than one primary doctor?”

    “Is it a ‘primary doctor’ or just a ‘primary caregiver’… Can’t that role be a nurse practitioner?”

    “No, caregivers are something else… That’s the patient’s family contacts, right?”

    “So are caregivers in scope for this redesign?”

    “Yeah, because if a caregiver is present at an appointment, the doctor needs to note that. Like, tag the caregiver on the note… Or on the appointment?”

    Now we are getting somewhere. Do you see how powerful it can be getting stakeholders to debate these questions themselves? The diabolical goal here is to shake their confidence—gently and diplomatically.

    When these kinds of questions bubble up collaboratively and come directly from the mouths of your stakeholders and decision-makers, suddenly, designing screens without knowing the answers to these questions seems incredibly risky, even silly.

    If we create software without understanding the real-world information environment of our users, we will likely create software that does not align to the real-world information environment of our users. And this will, hands down, result in a more confusing, more complex, and less intuitive software product.

    The two questions

    But how do we get to these kinds of meaty questions diplomatically, efficiently, collaboratively, and reliably?

    We can do this by starting with those two big questions that align to the first two steps of the ORCA process:

    1. What are the objects?
    2. What are the relationships between those objects?

    In practice, getting to these answers is easier said than done. I’m going to show you how these two simple questions can provide the outline for an Object Definition Workshop. During this workshop, these “seed” questions will blossom into dozens of specific questions and shine a spotlight on the need for more user research.

    Prep work: Noun foraging

    In the next section, I’ll show you how to run an Object Definition Workshop with your stakeholders (and entire cross-functional team, hopefully). But first, you need to do some prep work.

    Basically, look for nouns that are particular to the business or industry of your project, and do it across at least a few sources. I call this noun foraging.

    Here are just a few great noun foraging sources:

    • the product’s marketing site
    • the product’s competitors’ marketing sites (competitive analysis, anyone?)
    • the existing product (look at labels!)
    • user interview transcripts
    • notes from stakeholder interviews or vision docs from stakeholders

    Put your detective hat on, my dear Watson. Get resourceful and leverage what you have. If all you have is a marketing website, some screenshots of the existing legacy system, and access to customer service chat logs, then use those.

    As you peruse these sources, watch for the nouns that are used over and over again, and start listing them (preferably on blue sticky notes if you’ll be creating an object map later!).
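    If some of your sources are plain text (transcripts, chat logs, marketing copy), a quick frequency count can help surface the nouns that keep coming up. This is a toy sketch, not a substitute for reading the sources yourself; real noun extraction would need part-of-speech tagging, and here we just count words against a tiny stopword list:

    ```python
    from collections import Counter
    import re

    # Toy corpus standing in for marketing copy, transcripts, chat logs, etc.
    sources = [
        "Create a rule to move every thread from a contact into a folder.",
        "Each contact can belong to contact groups; a rule can tag a thread.",
        "Attach a file to a thread, or save a template with an attachment.",
    ]

    STOPWORDS = {"a", "an", "the", "to", "or", "can", "each", "every",
                 "into", "from", "with", "and"}

    words = re.findall(r"[a-z]+", " ".join(sources).lower())
    candidates = Counter(w for w in words if w not in STOPWORDS)

    # The most repeated words become candidate objects to test further.
    print(candidates.most_common(5))
    ```

    In this tiny corpus, “thread” and “contact” bubble up immediately; those are the sticky notes you’d write first.
    
    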

    You’ll want to focus on nouns that might represent objects in your system. If you are having trouble determining if a noun might be object-worthy, remember the acronym SIP and test for:

    1. Structure
    2. Instances
    3. Purpose

    Think of a library app, for example. Is “book” an object?

    Structure: can you think of a few attributes for this potential object? Title, author, publish date… Yep, it has structure. Check!

    Instances: what are some examples of this potential “book” object? Can you name a few? The Alchemist, Ready Player One, Everybody Poops… OK, check!

    Purpose: why is this object important to the users and business? Well, “book” is what our library client is providing to people and books are why people come to the library… Check, check, check!

    As you are noun foraging, focus on capturing the nouns that have SIP. Avoid capturing components like dropdowns, checkboxes, and calendar pickers—your UX system is not your design system! Components are just the packaging for objects—they are a means to an end. No one is coming to your digital place to play with your dropdown! They are coming for the VALUABLE THINGS and what they can do with them. Those things, or objects, are what we are trying to identify.
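    The SIP test can even be written down as a checklist. This hypothetical helper is purely illustrative, with the library example’s details filled in:

    ```python
    # A hypothetical SIP check for a candidate object; purely illustrative.
    def has_sip(structure: list[str], instances: list[str], purpose: str) -> bool:
        """Object-worthy nouns have attributes, nameable examples, and a purpose."""
        return bool(structure) and bool(instances) and bool(purpose)

    # "Book" passes all three checks.
    book_is_object = has_sip(
        structure=["title", "author", "publish date"],
        instances=["The Alchemist", "Ready Player One", "Everybody Poops"],
        purpose="Books are what the library provides and why people come.",
    )

    # A UI component like a dropdown fails: it has no instances users would
    # name and no domain purpose of its own. It's packaging, not an object.
    dropdown_is_object = has_sip(structure=["options"], instances=[], purpose="")
    ```

    The point isn’t the code itself but the discipline: every blue sticky note should survive all three checks.
    
    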

    Let’s say we work for a startup disrupting the email experience. This is how I’d start my noun foraging.

    First I’d look at my own email client, which happens to be Gmail. I’d then look at Outlook and the new HEY email. I’d look at Yahoo, Hotmail…I’d even look at Slack and Basecamp and other so-called “email replacers.” I’d read some articles, reviews, and forum threads where people are complaining about email. While doing all this, I would look for and write down the nouns.

    (Before moving on, feel free to go noun foraging for this hypothetical product, too, and then scroll down to see how much our lists match up. Just don’t get lost in your own emails! Come back to me!)

    Drumroll, please…

    Here are a few nouns I came up with during my noun foraging:

    • email message
    • thread
    • contact
    • client
    • rule/automation
    • email address that is not a contact?
    • contact groups
    • attachment
    • Google doc file / other integrated file
    • newsletter? (HEY treats this differently)
    • saved responses and templates

    Scan your list of nouns and pick out words that you are completely clueless about. In our email example, it might be client or automation. Do as much homework as you can before your session with stakeholders: google what’s googleable. But other terms might be so specific to the product or domain that you need to have a conversation about them.

    Aside: here are some real nouns foraged during my own past project work that I needed my stakeholders to help me understand:

    • Record Locator
    • Incentive Home
    • Augmented Line Item
    • Curriculum-Based Measurement Probe

    This is really all you need to prepare for the workshop session: a list of nouns that represent potential objects and a short list of nouns that need to be defined further.

    Facilitate an Object Definition Workshop

    You could actually start your workshop with noun foraging—this activity can be done collaboratively. If you have five people in the room, pick five sources, assign one to every person, and give everyone ten minutes to find the objects within their source. When the time’s up, come together and find the overlap. Affinity mapping is your friend here!

    If your team is short on time and might be reluctant to do this kind of grunt work (which is usually the case) do your own noun foraging beforehand, but be prepared to show your work. I love presenting screenshots of documents and screens with all the nouns already highlighted. Bring the artifacts of your process, and start the workshop with a five-minute overview of your noun foraging journey.

    HOT TIP: before jumping into the workshop, frame the conversation as a requirements-gathering session to help you better understand the scope and details of the system. You don’t need to let them know that you’re looking for gaps in the team’s understanding so that you can prove the need for more user research—that will be our little secret. Instead, go into the session optimistically, as if your knowledgeable stakeholders and PMs and biz folks already have all the answers. 

    Then, let the question whack-a-mole commence.

    1. What is this thing?

    Want to have some real fun? At the beginning of your session, ask stakeholders to privately write definitions for the handful of obscure nouns you might be uncertain about. Then, have everyone show their cards at the same time and see if you get different definitions (you will). This is gold for exposing misalignment and starting great conversations.

    As your discussion unfolds, capture any agreed-upon definitions. And when uncertainty emerges, quietly (but visibly) start an “open questions” parking lot. 😉

    After definitions solidify, here’s a great follow-up:

    2. Do our users know what these things are? What do users call this thing?

    Stakeholder 1: They probably call email clients “apps.” But I’m not sure.

    Stakeholder 2: Automations are often called “workflows,” I think. Or, maybe users think workflows are something different.

    If a more user-friendly term emerges, ask the group if they can agree to use only that term moving forward. This way, the team can better align to the users’ language and mindset.

    OK, moving on. 

    If you have two or more objects that seem to overlap in purpose, ask one of these questions:

    3. Are these the same thing? Or are these different? If they are not the same, how are they different?

    You: Is a saved response the same as a template?

    Stakeholder 1: Yes! Definitely.

    Stakeholder 2: I don’t think so… A saved response is text with links and variables, but a template is more about the look and feel, like default fonts, colors, and placeholder images. 

    Continue to build out your growing glossary of objects. And continue to capture areas of uncertainty in your “open questions” parking lot.

    If you successfully determine that two similar things are, in fact, different, here’s your next follow-up question:

    4. What’s the relationship between these objects?

    You: Are saved responses and templates related in any way?

    Stakeholder 3:  Yeah, a template can be applied to a saved response.

    You, always with the follow-ups: When is the template applied to a saved response? Does that happen when the user is constructing the saved response? Or when they apply the saved response to an email? How does that actually work?

    Listen. Capture uncertainty. Once the list of “open questions” grows to a critical mass, pause to start assigning questions to groups or individuals. Some questions might be for the dev team (hopefully at least one developer is in the room with you). One question might be specifically for someone who couldn’t make it to the workshop. And many questions will need to be labeled “user.” 

    Do you see how we are building up to our UXR sales pitch?

    5. Is this object in scope?

    Your next question narrows the team’s focus toward what’s most important to your users. You can simply ask, “Are saved responses in scope for our first release?” but I’ve got a better, more devious strategy.

    By now, you should have a list of clearly defined objects. Ask participants to sort these objects from most to least important, either in small breakout groups or individually. Then, like you did with the definitions, have everyone reveal their sort order at once. Surprisingly—or not so surprisingly—it’s not unusual for the VP to rank something like “saved responses” as #2 while everyone else puts it at the bottom of the list. Try not to look too smug as you inevitably expose more misalignment.

    I did this for a startup a few years ago. We posted the three groups’ wildly different sort orders on the whiteboard.

    The CEO stood back, looked at it, and said, “This is why we haven’t been able to move forward in two years.”

    Admittedly, it’s tragic to hear that, but as a professional, it feels pretty awesome to be the one who facilitated a watershed realization.

    Once you have a good idea of in-scope, clearly defined things, this is when you move on to doing more relationship mapping.

    6. Create a visual representation of the objects’ relationships

    We’ve already done a bit of this while trying to determine if two things are different, but this time, ask the team about every potential relationship. For each object, ask how it relates to all the other objects. In what ways are the objects connected? To visualize all the connections, pull out your trusty boxes-and-arrows technique. Here, we are connecting our objects with verbs. I like to keep my verbs to simple “has a” and “has many” statements.
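    The boxes-and-arrows map can be captured as plain data, too, which makes it easy to keep alongside your glossary. The email-domain relationships below are assumptions for illustration, exactly the kind of arrows you’d want users to confirm:

    ```python
    # Boxes-and-arrows as data: object -> list of (verb, related object).
    # These email-domain relationships are assumptions for illustration.
    relationships = {
        "contact": [("has many", "email message")],
        "thread": [("has many", "email message"), ("has many", "attachment")],
        "email message": [("has a", "contact"), ("has many", "attachment")],
        "saved response": [("has a", "template"), ("has many", "attachment")],
    }

    # Walking the map reads out every arrow; each one is something to verify
    # with users ("can a saved response really have attachments?").
    for obj, links in relationships.items():
        for verb, target in links:
            print(f"{obj} {verb} {target}")
    ```

    Keeping the verbs to “has a” and “has many” keeps the map legible; anything fancier usually hides an open question.
    
    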

    This system modeling activity brings up all sorts of new questions:

    • Can a saved response have attachments?
    • Can a saved response use a template? If so, if an email uses a saved response with a template, can the user override that template?
    • Do users want to see all the emails they sent that included a particular attachment? For example, “show me all the emails I sent with ProfessionalImage.jpg attached. I’ve changed my professional photo and I want to alert everyone to update it.” 

    Solid answers might emerge directly from the workshop participants. Great! Capture that new shared understanding. But when uncertainty surfaces, continue to add questions to your growing parking lot.

    Light the fuse

    You’ve positioned the explosives all along the floodgates. Now you simply have to light the fuse and BOOM. Watch the buy-in for user research flooooow.

    Before your workshop wraps up, have the group reflect on the list of open questions. Make plans for getting answers internally, then focus on the questions that need to be brought before users.

    Here’s your final step. Take those questions you’ve compiled for user research and discuss the level of risk associated with NOT answering them. Ask, “if we design without an answer to this question, if we make up our own answer and we are wrong, how bad might that turn out?” 

    With this methodology, we are cornering our decision-makers into advocating for user research as they themselves label questions as high-risk. Sorry, not sorry. 

    Now is your moment of truth. With everyone in the room, ask for a reasonable budget of time and money to conduct 6–8 user interviews focused specifically on these questions. 

    HOT TIP: if you are new to UX research, please note that you’ll likely need to rephrase the questions that came up during the workshop before you present them to users. Make sure your questions are open-ended and don’t lead the user into any default answers.

    Final words: Hold the screen design!

    Seriously, if at all possible, do not ever design screens again without first answering these fundamental questions: what are the objects and how do they relate?

    I promise you this: if you can secure a shared understanding between the business, design, and development teams before you start designing screens, you will have less heartache and save more time and money, and (it almost feels like a bonus at this point!) users will be more receptive to what you put out into the world. 

    I sincerely hope this helps you win time and budget to go talk to your users and gain clarity on what you are designing before you start building screens. If you find success using noun foraging and the Object Definition Workshop, there’s more where that came from in the rest of the ORCA process, which will help prevent even more late-in-the-game scope tugs-of-war and strategy pivots. 

    All the best of luck! Now go sell research!