
    Everything We Saw at SXSW 2026: I Love Boosters, Wishful Thinking, The Comeback, and More


    The braintrusts that have presided over SXSW since nearly its inception have always smiled on the festival for its contrasts. It’s a fest where south meets west, and music meets film, meets technology, meets comedy, meets plain ol’ innovation. And in no other year has that felt more like a lived-in ethos than 2026, which saw everything happening everywhere and all at once (heh).

    With the film, TV, music, activations, and a cornucopia of other moments all occurring simultaneously in Austin, it was a whirlwind that could nearly overwhelm. Yet the Den of Geek team was there for almost all of it, covering what we could and bringing it to you in this handy, dandy round-up.


    Movies

    Daniel Kwan and Tristan Harris for The AI Doc at SXSW

    The AI Doc: Or How I Became an Apocaloptimist

    At the turn of the century, the world was excited by the prospect of new technology. The internet was a shiny toy, the dot com bubble hadn’t burst, and the iPod (never mind the iPhone) was still a glint in Steve Jobs’ eye. Filmmaker Daniel Kwan and technology ethicist Tristan Harris remember those days as a bit like a lost kingdom. After all, the world that tech has since wrought has left the public with very little good cheer for new technology, especially with the AI revolution that is now commencing.

    “With social media, we were not great stewards of that technology and how it rolled out,” Harris observes in the Den of Geek studio. “We created the most anxious and depressed generation of our lifetime, even though some of the people who were building it—the people who started Instagram, they were my dorm mates at Stanford—they didn’t intend for that to happen.” Which raises the question of why folks should have any confidence that many of the same companies and technology leaders will do better with the far more powerful prospect of artificial intelligence.

    That conundrum is something which both men are confronting in the new Daniel Roher documentary, The AI Doc: Or How I Became an Apocaloptimist, which Kwan is also a producer on. As the title suggests, the film examines both the most pessimistic and rosiest predictions for AI’s future, and everything in between. All possibilities are running rampant in Silicon Valley while billion-dollar companies dash madly toward being the first to achieve artificial general intelligence (AGI).

    Yet the film also interrogates what tools can be put in place to create better stewards in the next generation. As Oscar-winner Kwan notes, “I think Big Tech has broken our social contract that we have as a society with technology. They have used our world as a playground to basically consolidate more power and more resources. The technology that they’re building—even though a lot of the architects and the technicians building this stuff, they have the greatest intentions and the greatest ideals of what this technology can do—the fact that it is being deployed within this current system, within this current incentive structure, it is taking a neutral technology and turning it into an extractive one.” – David Crow

    The Amazing Live Sea Monkeys

    As the large, colorful gates open to Yolanda Signorelli’s Maryland estate, she’s without electricity, feeding woodland creatures tiny crackers – an unexpectedly intimate introduction to the woman at the center of The Amazing Live Sea Monkeys. Directed by Mark Becker and Aaron Schock, the documentary uses the nostalgia of comic books and the ‘60s to showcase an oddly complex legal saga and the history of the beloved “instant life” toy while uncovering lesser-known details about its creator, Harold von Braunhut. At its core is a David-and-Goliath-style battle between Signorelli, often called the “mother of Sea-Monkeys,” and Big Time Toys, which began licensing and distributing the product in 2007.

    Through interviews with Signorelli, her lawyer, journalists, former collaborators, and illustrators behind the brand’s imagery, the film tells the story while keeping the people behind it at the center. Signorelli’s love for animals and the sea monkeys shines through all aspects of the film; Schock even told us that revealing the “sea monkey secret formula” was always a concern of Signorelli’s while filming. Throughout the documentary, Becker and Schock never shy away from examining von Braunhut’s various controversies in vivid detail. At the same time, the filmmakers didn’t want to lump Signorelli and her passion into the worst of what her former husband had done. It was complex, captivating, and perfectly fit for a SXSW premiere. – Darcie Zudell

    American Dollhouse

    John Valley doesn’t want to reinvent cinema with his movie, American Dollhouse. The story of a struggling woman named Sarah (Hailley Laurèn), who inherits her childhood home only to be beset by unstable neighbor Kelly (Kelsey Pribilski), follows in the tradition of horror greats, including Psycho, Peeping Tom, and Black Christmas. But that doesn’t mean it doesn’t have bigger things on its mind.

    “I’m obsessed with how a slasher can be minimalist, but yet a container for huge, modern ideas,” Valley tells Den of Geek. “I stuck to the conventions and tropes, and kept telling the cast that it’s just a meat and potatoes slasher film, but we also tried to find some new, modern life in it.”

    Part of that work fell on Pribilski, who had to embody an adult who has reverted to a childlike state in frightening ways. “I thought about how an eight-year-old acts. They’re a little bit animated because we as adults have learned to contain our emotions,” Pribilski reveals. “It was about knowing when to go bigger. We had to choose very carefully the moments for me to go into ‘grizzly bear’ mode, as John would call it.”

    As Pribilski got to play big moments, it fell on her co-star Laurèn to keep things grounded in reality. “Sarah is a full person, and it was important to me that, when she makes a decision, it doesn’t come out of nowhere,” she says. That approach makes Sarah a formidable opponent, even for a killer as strange as Kelly. “Sarah’s a loyal fighter, for herself and the people she loves. She’s not going to just go down.”

    Thanks to the work of Pribilski and Laurèn, American Dollhouse adds a great new killer and final girl to the slasher tradition. – Joe George

    Black Zombie

    In a pop culture saturated with zombie shows like The Walking Dead and The Last of Us, many have forgotten the roots of Z-culture. Writer-director Maya Annik Bedward is not one of them. Right down to the title of Black Zombie conjuring the racist shadow of the nearly century-old Hollywood film White Zombie (1932), Bedward’s haunting documentary looks at more than a hundred years of appropriation, reinvention, and evolution of a concept that’s rooted in the vast sweep of African diaspora, and the Black Haitian experience of revolution in particular.

    “In Haiti, everyone knows about the zombie, and in Haiti, stories of zombies and zombification are regularly talked about,” says Bedward. “It’s adjacent to vodou, but it’s not an everyday practice. Zombification is these stories of ‘I saw zombies in the field,’ and very connected to these ideas of enslavement.” These are the roots which inspired the white fiction that in turn gave the world their beloved modern flesh-eaters. Black Zombie observes, critiques, and even at times celebrates this transition—within limits, as the othered monster has increasingly become a symbol for tearing down sinister systems. Or just a fear of the status quo breaking… – DC

    Brian 

    Multiple castmates in Will Ropp’s directorial debut, Brian, were student body presidents in high school, a connection they didn’t make until SXSW. Ben Wang, who plays the titular character running for student body president to impress his crush, says he drew from his own awkward quirks when portraying the role. Edi Patterson and Randall Park play Wang’s parents, who care a little too much that their son finally made a friend. William H. Macy, who plays Brian’s therapist, talks Brian through his various panic attacks and struggles with mental health that cause him to lose roles in drama club, freak out his classmates, and annoy his friends. Meanwhile, his cool English teacher, portrayed by Natalie Morales, just wants to help Brian develop his party platform without indulging his inappropriate attraction to her.

    It’s a coming of age movie about mental health and the messy aspects of high school, but even more than that, the script by Mike Scollins is effortlessly funny in a way that will have you quoting Patterson’s and Park’s one-liners long after the first watch. Speaking of Patterson and Park, they got to have a lot of fun improvising during some dining room table scenes. Brian’s a good time, but one that will yank its audience right back to high school in a visceral and emotional way. – DZ

    Chasing Summer

    Directed by Josephine Decker and written by its leading lady, Iliza Shlesinger, Chasing Summer is a romantic comedy that is sexy, realistic, and overwhelmingly nostalgic.

    Jamie (Shlesinger) returns to her Texas hometown after a breakup, shocked to find that while the small hamlet may look the same, the people within it have changed. She picks up her old job at the roller rink, now owned by her sister (Cassidy Freeman), and meets Harper (Lola Tung), her young co-worker. Harper convinces Jamie to attend a party, where she meets a much younger Colby (Garrett Wareing). The two pursue a questionable fling while Jamie rekindles relationships with old high school classmates, including her ex-boyfriend, Chase (Tom Welling). The rest of the story is a sticky recreation of summer love that invokes the refreshing feeling of stepping into AC.

    At times predictable, the film also has moments of shock, including a soap-opera-level reveal that will leave middle-aged women everywhere swooning. It is a retrospective on summers of days past, and the inevitability of moving on. It’s conventional, easy, and delivers the promise of a classic romcom. – Alexandra Hopkins

    Crash Land 

    What happens when a group of amateur stuntmen try to make a “real movie” so they can prove their worth to a town that hates them? Crash Land, directed by Dempsey Bryk, answers that very question. 

    “It came out of COVID,” Bryk says. “I was stuck with my brother and my entire family… living behind the couches in the living room, as you do, and I was watching this Jackass marathon on loop and the idea blossomed out of there.” The film evokes classics of the genre – think Napoleon Dynamite, Superbad, and Bottle Rocket – against a Canadian backdrop of charming chaos. 

    “There was a process of trying to make it not like anything else after being so inspired your entire life by the things you love, and then you have to try to find your own voice,” Bryk says. 

    As Crash Land follows this group of boys becoming men one wipeout at a time, it simultaneously tells another coming of age story. The spirit behind Crash Land is the much more successful story of Dempsey Bryk and his brother Billy, who plays the emotional catalyst of the movie. Through his influence, the characters crash and land in a world that is fun, endearing, and unexpectedly touching. – Sophia Rooksberry 

    Drag

    Horror movies often tend to revolve around similar, oftentimes outlandish, concepts: A deranged killer, a disturbing monster, a haunted setting. But Drag, the debut feature film from Raviv Ullman and Greg Yagolnitzer, finds a scary story in an everyday, all-too-human problem. Yes, it’s a movie that’s really about the terrifying specter of lower back pain. 

    The film stars Lizzy Caplan and Lucy DeVito as a pair of frequently-at-odds siblings who find themselves in increasing trouble during a house robbery gone wrong. When one of the pair throws their back out mid-heist, the other must help her escape by dragging her from the house’s upstairs bathroom down to the waiting car. Painfully uncomfortable (and often downright grisly) acts of body horror ensue, with plenty of gross-out details in full close-up even as they discover that the man they’ve come to rob is not everything he appears to be. – Lacy Baugher

    Drift 

    SXSW needs an Action Documentary screening section solely for Drift to exist in. Described by director Deon Taylor as a real-life Catch Me If You Can, the documentary stars real-life action hero Isaac Wright as he recounts a career of going to unprecedented heights for his art.

    “The documentary is a lot more than just my artwork,” Wright says. “It has to do with my life and what I feel like my artwork really represents and a full portrait of what I think the goal of life is.” 

    Under the artist name Drift, he climbs to the very top of the country’s tallest buildings capturing the most breathtaking and adrenaline-spiking photos, slowly working toward his dream of summiting the Empire State Building. 

    “We went to the Instagram and these beautiful photos are just so captivating that it’s really overwhelming, and wondering why the hell he’s up there and what he’s doing was my initial reaction,” Roxanne Avent Taylor, the film’s producer, says. 

    The documentary answers those questions in an exploration of the human spirit, as the occasional illegality of Drift’s climbs resulted in a multi-year battle with law enforcement across multiple states. 

    “I believed that everyone could connect to the human story based on the fact that we’ve all been through something,” Taylor says. “We’ve all been misrepresented or someone has tried to tear down your character in some way.” 

    On its surface, Drift is an action thriller come to life. At its core, Drift is a tear-jerking testament to a freedom that only the birds and the film’s namesake have experienced. – SR

    Family Movie

    When watching Family Movie, a chipper-oddball indie directed by Kevin Bacon and starring Kevin Bacon, as well as wife Kyra Sedgwick, daughter Sosie Bacon, and son Travis Bacon, one gets the sense that this is not what a real family flick with the Bacons would look like. At least you’d hope so! But given how mirthful this splatter-comedy is when their fictional doppelgängers are forced to deal with a murderer on the set of their horror movie-within-a-horror movie, you nonetheless feel like you’re sitting across from the gang during a lively game night.

    “I think the movie definitely reflects on all our personalities individually and how we relate to each other,” Travis explains. “There’s definitely some little moments like that, but we’re still acting.”

    Kevin later adds that screenwriter Dan Beers was able to collect some surprisingly accurate depictions of interpersonal dynamics after interviewing them each separately. Says the director: “When we got the first draft of the script, we were like, ‘Holy shit, how did you know that?!’ And it’s not like it’s pieces of dirt… it’s just ways of being.” – DC

    Forbidden Fruits

    A title like Forbidden Fruits suggests a heavy drama, thick with Biblical overtones and meditations on the nature of sin. Yet, one look at the new movie starring Lili Reinhart as the leader of a witch cult of mall girls promises the exact opposite, presenting itself as a spunky horror comedy.

    However, according to director Meredith Alloway, religion weighs heavily on the story. “I love the idea that women are told that we’re quite literally the origin of evil and sin,” Alloway tells Den of Geek, a theme that goes all the way to the source material, the play “Of the woman came the beginning of sin, and through her we all die” by Lily Houghton, who cowrote the screenplay with Alloway. 

    Alloway continues, “When I read the play, I see all these women reclaiming that. Witchcraft and being in a coven are ways to make that narrative ours. I think when women get together and set an intention, whether it’s magic or literally just talking at a sleepover, that is really powerful.”

    As Apple, Reinhart gathers a team of women to join her coven, including Lola Tung as Pumpkin, Victoria Pedretti as Cherry, and Alexandra Shipp as Fig, all characters who are more than they initially appear.

    “No one’s playing a stereotype,” says Pedretti. “There are a lot of cues that might lead you to misjudge these women before you get to know them. Each character ends up surprising the audience with their humanity.”

    “I think we all play very complex human individuals who just so happen to be born in a female body in this lifetime, who are trying to navigate the structures of our world,” adds Shipp.

    In other words, the characters are richer and more complicated than one might think, just like Forbidden Fruits itself. – JG

    The Fox 

    Ever wanted to get in bed with a fox voiced by Olivia Colman? Well, Jai Courtney got to in Dario Russo’s The Fox, a magical realism comedy about what happens when people try to change themselves for their partners. Emily Browning reluctantly agrees to marry Courtney’s character, Nick, while secretly having an affair with her boss (Damon Herriman), who is also cheating on his wife (Claudia Doumit). In an effort to win their partner back, Nick listens to a fox he was moments away from killing in the woods, leading all the characters down a hole where they literally lose themselves. 

    Russo directed, wrote, and scored The Fox, so naturally he had a very specific vision for the fantasy and comedy elements of the film. Russo says he finds it really annoying when characters in films deliver an over-exaggerated reaction to talking animals, so in this universe, talking foxes are the least out-of-the-ordinary thing. Though they didn’t really act alongside Colman, Doumit and Browning were obsessed with her narration; Browning says she listened to a recording of Colman’s voice acting over and over again on a run one time. The film is as ambitious as it is mystical, but it never loses its ability to make fun of the chaos and camp of it all. – DZ

    He Bled Neon

    With all its neon lights and gleeful indulgence of vice, Las Vegas has always made for a popular movie setting. Still, Vegas-native and He Bled Neon producer Nate Bolotin felt there was a certain kind of movie that Sin City had not yet played host to. 

    “Twenty years ago, my stepbrother and best friend passed away. I got a text message from a mutual friend, just like how it happens in the movie actually, and had to go back and bury him and reconnect with the people that I had lost touch with. Someone came up to me at the funeral and said ‘hey, I think there’s some foul play here’ but we never went too deep. Fifteen years later or so it just clicked: we haven’t seen a Vegas revenge noir thriller in a world outside the Vegas strip.”

    True to Bolotin’s real life experiences, He Bled Neon picks up with successful businessman Ethan (Joe Cole) receiving the fateful text and returning home to Las Vegas where he reconnects with his old crew (which includes characters played by Marshawn Lynch, Rita Ora, and Ismael Cruz Cordova) and begins to unpack the mystery of his brother Darren’s (Paul Wesley) death.

    He Bled Neon aces the set-up, largely thanks to the colorful sensibilities of director Drew Kirsch and electronic soundscapes of composers Joe Trapanese and DJ Zhu. Once the revenge plot moves into the desert and grimy environs outside the strip, however, the film loses its distinctive sense of place and settles into a disappointingly conventional crime story. – Alec Bojalad

    Hokum

    Irish filmmaker Damian McCarthy became a SXSW Midnighter hero when 2024’s Oddity took home the festival’s Audience Award. Now he’s returned to Austin with some extra star power in tow. The horror auteur’s next spooky effort, Hokum, stars Severance’s Adam Scott as Ohm Bauman, an American novelist who absconds to a remote Irish inn to scatter his parents’ ashes and finish his latest book. Of course, it just so happens that this particular inn contains a honeymoon suite that is said to be haunted by an ancient witch…

    Speaking of hauntings, Hokum’s first act is haunted by a weak characterization of Scott’s Ohm, whose lifetime of unaddressed grief has apparently manifested itself as a need to be a real dick. Once the movie gives way to the charismatic actor’s charm and McCarthy begins to flex his horror muscles, the back half blossoms into a pleasingly macabre experience. It’s also quite dark. Literally. 

    “It’s dark in the film, and it was literally dark when we were in there making it,” Scott told Den of Geek. “It was pretty clear that this was going to be unsettling. As an actor, since your control is limited, you never know really how something is going to turn out at the end of the day. Having so much faith in Damian and seeing all the components they put together on set, I knew there was a chance that this could work.” And indeed Hokum does work. – AB

    I Love Boosters

    Three projects into his burgeoning film career, it’s fair to say that Boots Riley has developed a house style. For a lesser creative, that level of one-note fixation might begin to grow stale. Thankfully, the marriage of the surreal with leftist politics is a note that this rapper, songwriter, and record-producer-turned-filmmaker knows how to play quite well. And he continues to do so in I Love Boosters.

    Keke Palmer stars as Corvette, an aspiring fashion designer who ekes out a living as a “booster,” pinching high-end textiles and selling them to her neighborhood at a discount. Together with her friends Sade (Naomi Ackie) and Mariah (Taylour Paige), Corvette sets her booster sights on high-end fashion entrepreneur Christie Smith (Demi Moore) to close the fashion gap between the haves and have-nots.

    Like Sorry to Bother You before it, I Love Boosters’ premise is merely a jumping off point for all the vivid imagery and offbeat twists to come. The movie that a ticket-buyer expects to see at minute 0 is very much not the movie they experience by minute 60 or so. Unlike Sorry to Bother You, however, Boosters’ absurdist twist isn’t a completely out-of-left-field human-animal hybrid situation but a far more mundane science fiction tool that we won’t spoil. Despite the relatively conventional sci-fi trappings of its back half, I Love Boosters feels satisfyingly anarchic and bizarre all the way through. – AB

    Imposters 

    The idea of changelings, body snatchers, or other false creatures that are somehow swapped into existing families without any of their members being the wiser is nightmare fuel from time immemorial. Imposters takes this idea and runs with it, crafting a story that wrestles with ideas of parenthood, commitment, and fear. 

    The film stars Jessica Rothe and Charlie Barnett as Paul and Marie, a couple forced to contend with any parent’s worst nightmare when their baby mysteriously disappears. While everyone is convinced the resident Town Creep is responsible, Marie’s not so sure, and when she turns to the mysterious Orson (Bates Wilder) for help, he sends her to a mysterious cave in the woods. But the child Marie brings back from the forest may not be the son she lost.

    It’s a genre-bending film full of the sort of twists even seasoned moviegoers likely won’t see coming, one that explores the all-too-human pain of love and loss through a filter covered with no small amount of blood. – LB

    Kill Me

    At first glance, Kill Me seems to ask the question, “What if Charlie Kelly woke up in a bathtub of his own blood?” But with a deeply emotional and revelatory performance from Charlie Day, the horror-dramedy becomes so much more. Sure, our main character, Jimmy, lives in a dingy apartment a la Charlie and Frank, and his frenetic energy while trying to solve his own murder attempt harkens back to a Pepe Silvia obsession… but that’s not a bad thing. The audience is immediately comforted by the familiarity of Day’s impeccable comedic timing right before adjusting to the sobering reality that his character has a history of trying to take his own life—making him the perfect choice to carry the emotional weight of this film. 

    Jimmy meets his match in Allison Williams’ 911 operator, Margot. She grounds both him and a film that turns into an unlikely whodunit. Williams has also become something of a scream queen over the last decade (starring in Get Out, M3GAN, The Perfection), but pure horror this film is not. Similarly to Day, she brings the authenticity and depth needed to be Jimmy’s “straight man” as she navigates the hijinks of solving a maybe-murder while healing from her own trauma. 

    It’s a feat to address subject matter as serious as suicide while ensuring the audience knows they’re allowed to laugh—and a lot for that matter. Director Peter Warren tactfully displays both ends of the spectrum that people with mental illness can experience: the dramatics of suicide attempts but also the mundanity of ordering the same meal every day for months. In his own words, “Depression and mental illness are incredibly dangerous but can also be dumb and annoying. It’s like stepping on a rake a million times in your head.” – Britt Migs

    Lainey Wilson: Keepin’ Country Cool

    Amy Scott is the director behind Counting Crows: Have You Seen Me Lately? and Sheryl, proving an affinity for documentary storytelling about some of the music industry’s most iconic names. Now, she is bringing the story of modern country idol Lainey Wilson to Netflix.

    Lainey Wilson: Keepin’ Country Cool is set to premiere globally on April 22. It follows the country star on her ascent to major stardom while traveling the country on the “Country’s Cool Again Tour” in 2024. 

    “After a while, it was apparent that we weren’t chasing the tour, we were chasing Lainey,” Scott says. “Her life is all over the place. Her life is not a straight line, so we just tried to hold onto that mechanical bull ride.” 

    While being jostled around the country and holding on for dear life, Scott and her team discovered the heart of Wilson’s music and her musical persona. Not only are her powerful vocals and charisma emblematic of industry titans like Dolly Parton, but her compassion and grit are what give her music, and this documentary, their spirit. 

    “Vulnerability can come in many different flavors,” Scott says. “It can be vulnerability about struggles, but vulnerability also is when you can be funny and have a really unguarded, self-deprecating nature, and we realized early on that [Lainey] is really, really funny.”

    Throughout the documentary, Wilson is seen constantly songwriting when she isn’t rehearsing or performing. She has spent her career banking songs for upcoming records, using her determination and open heart as fodder for the future of her skyrocketing career; Scott’s latest project delivers all of these details in a way that is careful, exciting, and, of course, incredibly cool. – SR

    Leviticus

    The land down under has long been a reliable source of uniquely upsetting films, from Picnic at Hanging Rock (1975) to Lake Mungo (2008), but Australian horror has enjoyed a renaissance as of late, thanks to hits such as The Babadook (2014) and Talk to Me (2022). 

    Director Adrian Chiarella continues that tradition with his debut, Leviticus, which stars Joe Bird of Talk to Me and Stacy Clausen as two gay teens forced to participate in a conversion ritual by a fundamentalist pastor. The ritual releases a malevolent entity that takes the form of the person the victim most desires, which, for the boys, is one another.

    “When I started thinking about things that were personal to me as a gay man, I knew that homophobia was something that I wanted to tackle in a film,” Chiarella tells Den of Geek. “There’s a clue in the word: homophobia is a fear. So I started digging deeper into what that might look like through the lens of this genre.”

    The stars of Leviticus followed Chiarella’s lead by playing the reality of their characters’ plight. “I think that horror films are, in a way, dramas with horror elements because the emotions are so real and raw,” observes Bird. “It’s not like I was filming a horror film in one scene and a romance film in others. These were just real, natural people going through this experience.”

    Chiarella adds, “I knew the horror element wouldn’t work unless you were really invested in the connection between these two lead characters.” To that end, he sent Bird and Clausen out on various field trips to build their characters. Those trips included visits to the country because, like any other good piece of Australian horror, the terror in Leviticus comes from the landscape. – JG

    Mam

    Part of the appeal of film festivals like SXSW is the chance for audiences to take in both mid-budget blockbusters bound for big theaters and the indie-est indies that ever indie-d. Projects like Over Your Dead Body and I Love Boosters occupy the former space on 2026’s SXSW roster. Mam is very much part of the latter. 

    Directed by Nan Feix, Mam is an unabashed love letter to Vietnamese cuisine, New York’s Chinatown, and love itself. It’s also a novel blend of fictional narrative and documentary. While Mam is fully scripted, it recounts the real life story of chef Jerald Head as he moves to New York from Texas and tries to make it in the culinary world. Playing Head and his wife and business partner Nhung Dao Head are Head and Nhung themselves, who now own and operate Mắm on Forsyth Street. 

    Shot in a scant 16 days (usually after a shift at the restaurant, Head revealed in a post-screening Q&A), Mam wears its lowkey indie status on its sleeve. The rough-around-the-edges film is unlikely to have a second life outside of Austin. But for 81 pleasant minutes, it made festival-goers in Alamo Drafthouse’s auditorium 5 very, very hungry. – AB

    Manhood

    What is likely to earn points as the most unusual and eyebrow-raising documentary at this year’s festival (or perhaps almost any other) is Daniel Lombroso’s Manhood, a sober look at the growing popularity of “male enhancement” cosmetic procedures (read: penis enlargement). It’s a subject ripe for ridicule or late night comedy, and yet the film takes a clinical lens that flits between aloof and sympathetic, depending on the interviewee. All are part of a long line, though, of dudes willing to sell their houses or risk their mortgages to increase their girth.

    Astutely setting the film primarily in the Dallas and Miami areas of the South, Lombroso pulls at a thread from a previous documentary—his depressingly prescient study of the then fringe elements of the alt-right in the Atlantic’s White Noise (2020)—to draw a link between the manosphere culture of supplemental pills and Joe Rogan-like podcasts, and a lot of the guys desperate to add a few inches at any cost. Yet there’s perhaps a bit more empathy for the younger and more vulnerable parties who get taken for a ride and permanently disfigured by grifters with white coats and needles stuffed with filler.

    The film could draw a stronger link in its thesis between the modern culture that its title obviously evokes and the guys on the table, but it finds both pathos and condemnation to varying degrees for alleged “medical” predators, and the type of souls who end up thinking they need to have this procedure done. One wishes to spend more time with the partners who often shrug that they don’t even care. – DC

    Mike & Nick & Nick & Alice

    Writer-director BenDavid Grabinski might have been too young to make movies in the ‘90s, but he was definitely watching them. And nowadays he appears to be determined to bring a flavor of them back, complete with a hard-swaggering, high-concept genre exercise starring Vince Vaughn at his most confident. A movie about gangsters, parties, and time travel, Mike & Nick & Nick & Alice is a bit like if Swingers, Go, and Back to the Future had a half-forgotten love child who we’re now only meeting as an adult. And that’s meant as a compliment.

    As the title suggests, this is a love triangle served in four parts after gangster Nick (Vaughn) time travels about half a year into the past in order to stop his slightly younger self (also Vaughn) from murdering their best friend Mike (James Marsden) for the affair he’s carrying on with Nick’s unhappy wife, Alice (Eiza González). It’s a gonzo premise that is treated with just enough seriousness to give meat to the idea of considering second chances—whether through the magic of a proverbial “undo” button, or where you’d even want to hit it to fix a bad mistake. As González notes in our studio, “I always connect with moving forward. I think there’s something beautiful about the chaos. Some of the craziest, most beautiful things that happened in my life have come from real terrible circumstances and bad moments.”

    Still, in its heart, this movie is all about the vibes, as indicated by its structure being based around the “PARTY,” “AFTERPARTY,” and “AFTER-AFTER-AFTERPARTY” which its main quartet crashes while trading barbs in a screenplay with more wisecracks than there are bullets. And trust us, this movie has a whole lotta bullets. It’s bravado and muscular mischief and suggests Grabinski is one to watch. – DC

    My Brother’s Killer

    This documentary solves a murder case gone cold. My Brother’s Killer, directed by Rachel Mason, traces the 36 years since the brutal murder of 25-year-old William “Billy London” Arnold Newton in West Hollywood. 

    This film captures the violence, trauma, and grief gay men experienced during the AIDS epidemic, as well as their resilience. Through archival material and dozens of interviews with those connected to the case, including her own mother, Mason uncovers shocking information.

    “It was a terribly violent time and I think that’s another undocumented part of gay history,” Mason says. “Sadly, it is hard to always focus on the negativity and sadness, and the resilience of gay culture is the most amazing thing. In the sea of death, you also have this vibrancy, and I really wanted to showcase that. It doesn’t always have to be dark, the fight can be joyful in a strange way.”

    Billy was an adult filmmaker, poet, and illustrative artist—he was also deeply loved by the people around him, a feeling evident throughout the documentary. This documentary is more than a true crime film; it showcases the struggle of representation and provides recognition and closure for those involved. – AH

    Bob Odenkirk in Den of Geek Studio SXSW

    Normal

    Full disclosure: We were not able to actually see director Ben Wheatley and screenwriter Derek Kolstad’s new action movie starring Bob Odenkirk—the deceptively titled Normal—in Austin. However, we were able to speak with all three men, who have described the movie as a kind of inversion of High Noon. In that classic Western, Gary Cooper stood alone as a sheriff willing to face up to bad men while all the people who loved him turned their backs in fear.

    “It’s ultimately taking those themes, the sense and the appeal, and wrapping it around small-town Americana,” Kolstad observes about their new film. But it’s also doing so in a modern context with the America of today, setting the stage for an action spectacle apropos of the scribe behind John Wick.

    For his part, director Wheatley appreciates that he has brought a British sensibility to the proceedings, saying, “An outsider’s perspective is always interesting. Not to say it’s better than worse from any other point-of-view, but I think there’s a long history of people coming from the outside to film the States and to see it with different eyes.” It also might be befitting of the neo-Western. While genre icon John Wayne famously detested High Noon back in the day, refusing to believe a small town wouldn’t support a good man in need, Normal’s viewers might be much more open to the idea. As Odenkirk quips, “He should meet some small towns.”

    Over Your Dead Body

    When it comes to filmmaking, the concept of “escalation” can be just as important as acting, scripting, or even turning the damn cameras on in the first place. Few movies at the 2026 SXSW Film Festival understand the importance of raising the stakes better than action comedy thriller Over Your Dead Body.

    Based on the 2021 Norwegian film The Trip and directed by The Lonely Island’s Jorma Taccone, all Over Your Dead Body knows how to do is escalate. Things start relatively simple with husband and wife Dan and Lisa (Shrinking’s Jason Segel and Ready or Not’s Samara Weaving) repairing to a remote cabin upstate to save their dying marriage… and also to kill each other. Dan and Lisa’s murderous plans are complicated by a cascade of interlopers and extenuating circumstances, leading to mass amounts of blood, gore, and perhaps even some rekindled romance. 

    Over Your Dead Body’s commitment to ratcheting absurdity means that its first act runs a bit dry. But once two prison escapees (Timothy Olyphant, Keith Jardine) and their guard conspirator (Juliette Lewis) enter the narrative, the movie really gets rolling and never looks back. And like any good partnership, Segel and Weaving excel in dabbling in the other’s home turf of horror and comedy, respectively. 

    “I’m just so proud to have made a remake that I feel like has teeth,” Taccone says. “It’s dark, it’s fucked up, and it’s more gory than the original, weirdly. It has its own tone, and I just feel very proud that we could make something that I like equally to the original.” – AB

    Pizza Movie

    In a shameless throwback to the stoner comedies of the 2000s, Pizza Movie is the type of slouched and underachieving good time that is destined to best be seen in a crowded theater or even rowdier dorm room. Which isn’t to say it’s dumb. Writer-directors Nick Kocher and Brian McEllhaney take some boldly clever swings in their high-concept (ahem) premise, where a couple of college screw-ups (Stranger Things’ Gaten Matarazzo and Sean Giambrone) indulge in an experimental drug they find in their dorm room. However, the thing has such potent magical properties, it not only gets them high but causes them to break the space-time continuum with time-loops and fourth-wall breaks. They’re like a pair of blitzed Punxsutawney co-eds.

    The only cure? Pizza, of course, which is down in the lobby if they can get down there in time—or face severe consequences. It’s a deliberately, heavily baked concept, which Kocher describes as based on a true story. “In college, we had the idea. Everyone’s ordered food when they’re not fully sober and it’s difficult.” You can say that again, dude. – DC

    Power Ballad

    It is said that success is the child of many fathers while failure is an orphan. But that doesn’t mean every papa gets the credit they deserve, particularly in fields as competitive (and lucrative) as songwriting. Such is the appealing conceit of John Carney’s latest bittersweet laugher that looks into the music business with as much affection as there is contempt. They, in fact, walk hand-in-hand when Paul Rudd’s washed-up wedding singer Rick meets Nick Jonas’ former boy band heartthrob searching for reinvention, Danny. The two jam and jive during a joyous night over drinks in Ireland, including when Rick shows a few of the pieces he’s working on, particularly a poignant ballad that’s only missing a bridge.

    Six months later, the song has it, as does Danny, who’s introduced it to the world as an instant sensation—and as a piece of musical magic he wrote solo. What follows is a bit of a music industry “Book of Job,” in which Rudd’s hurt and aggrieved Rick must deal with the eye-rolls and second-guessing of nearly everyone in his life, from his bandmates to even his wife and daughter. There’s a lot of humor in the scenario, but plenty of pathos as Rudd gives one of the finest performances of his career. He’s a man losing his sanity and his even keel, to the point where he must travel from the Emerald Isle to the City of Angels. – DC

    Pretty Lethal

    The loftiness of chasing perfection, and the physical demand of what many consider the highest performing art, have always made ballet a compelling subject for filmmakers. Storytellers often wish to track the psychic or physiological toll of achieving révérence—or at least contrasting it with gonzo, blood-splattered spectacle. Pretty Lethal attempts both in a daffy B-crowdpleaser that essentially Die Hards five prima ballerinas when they’re trapped in an eastern European den of iniquity run by a vamping Uma Thurman.

    The movie gets a lot of mileage out of its balletic heroines being decidedly not John McClane (or Ana de Armas in a John Wick movie, for that matter). Instead, they nervously use their on pointe routine to “Waltz of the Flowers” for a blood-soaked defense in a bar involving shattered wine bottles and knives in the slippers. Says star Maddie Ziegler, “We sort of came up with a style we’re calling ballet-fu, which was really fun. Because we referenced if you put a bunch of feral cats in a box, that was what we were doing to survive. But I think we used our strengths to our advantage,” completed by combining the input of stunt coordinators and ballet choreographers. Grace is harmony. – DC

    Ready or Not 2: Here I Come

    In an age of so-called “elevated” and sober-minded horror cinema, it is a blessing from Mr. Le Bail that we have Radio Silence ready to turn up the gore and fun. The filmmaking collective behind Ready or Not, Abigail, and the best Scream movies made in this century return to their own blood-red haunts and splash new buckets of crimson in the delightfully sinister Ready or Not 2. Like its predecessor, this is a grinning romp suffused with eat-the-rich gallows humor as we revisit the Bride in the splattered dress (Samara Weaving) mere moments after she parted brutally with her groom for good. (He was in the process of trying to sacrifice her to Satan. As apparently one does in country estates.)

    Unfortunately for Grace, there are plenty of other Devil-worshipping billionaires out there. It turns out to be a blessing for the audience, though, as she inevitably slaughters them alongside sister Faith (Kathryn Newton) during a new hide-and-seek game at a country club that looks suspiciously like Mar-a-Lago. As director Tyler Gillett acknowledges, “All of the institutions that we engage with, if you follow them far enough, you’re probably going to find some form of corruption.”

    The movie doesn’t quite reach the highs of its predecessor since we know the punchline this time, but the climax at an elite Satanic altar is every bit as giddy as the combustible billionaires last time, and Samara Weaving still knows how to deliver a killer parting shot. – DC

    The Saviors

    Filmmakers don’t get to choose whether their films will be “timely” or not. Movies take a long time to make and time itself, as you might have noticed, has a tendency to Inexorably March On. Rarely has there ever been a better case study of that phenomenon than the unusually (and completely accidentally) timely comedy thriller The Saviors.

    Adam Scott and Danielle Deadwyler star as Sean and Kim, a couple in a failing marriage who look to supplement their income by renting their shed to Amir and Jahan – a brother and sister from an undisclosed Middle Eastern nation played by Theo Rossi and Nazanin Boniadi. While Amir and Jahan seem nice, they’re also suspiciously interested in the president of the United States’ whereabouts and appear to be building some sort of dangerous device. But this can’t be what Sean and Kim think it is, right? They’re not bigots and this isn’t a mediocre season of 24… right?

    Due to a confluence of events like a pandemic, two Hollywood work stoppages, and the general improbability of getting a movie produced at all, The Saviors took 10 years to make from conception to premiere. And in that 10 years, the world shifted away from Obama-era progressive optimism to a more overt return to Islamophobia, with the United States even entering a war with Iran just two weeks before the film’s premiere.

    “You know, there was a period in those 10 years when I thought the world had changed a bit, and maybe we should focus on a different project,” director and co-writer Kevin Hamedani says. “And then the world changed again, and suddenly The Saviors is even more timely, unfortunately.”

    The Saviors’ incidental resonance to current events only enhances what is already a compelling narrative. Scott and Deadwyler shine as two ostensibly progressive individuals who need answers but don’t want to seem like Bradley Whitford’s “I’d have voted for Obama a third time if I could” character in Get Out. Hamedani and the script deftly guide the audience through those choppy waters, always leaving enough breadcrumbs so that the viewer doesn’t fully feel like Bradley Whitford either. The end result is a nifty little thriller that feels like The ‘Burbs for the Airbnb age. – AB

    Seekers of Infinite Love

    Though Seekers of Infinite Love may one day be a cult comedy, right now it’s literally a comedy about a cult… but also about family.

    Hannah Einbinder (Hacks), John Reynolds (Search Party), and Griffin Gluck (American Vandal) star as a trio of siblings who must rescue their sister from the clutches of the Peoples Temple-esque the Seekers of Infinite Love. Helping them on their mission is former cult member turned cult deprogrammer Rick (Justin Theroux) and his wardrobe of tactical vests.

    According to writer/director Victoria Strouse, Seekers’ cult angle emerged unexpectedly late in the writing process of the script, which was featured on 2008’s Black List.

    “I’m utterly fascinated by siblings and I think some of the complexities in sibling relationships, [it] kind of ends up talking so much about all human relationships,” she says. “As I was working on it, I became really interested in cults, this idea of a secondary but corrupt family.”

    That fascination with family shines through with Einbinder, Reynolds, and Gluck evoking a believably agitated sibling unit, if not a believably genetic one with their diverse mix of heights and hair colors. While the end result could have used a little more cult wackiness to fully live up to its comedic potential, it’s hard to be disappointed with time spent on a road trip to oblivion with four very funny actors. – AB

    See You When I See You

    Before 2025’s SXSW title The Baltimorons, indie filmmaking titan Jay Duplass had not directed a movie since 2012’s The Do-Deca-Pentathlon, opting to help shepherd other storytellers’ visions alongside his brother and producing partner Mark Duplass. But when fellow producing family Kumail Nanjiani and Emily V. Gordon brought the script for See You When I See You his way, he knew he had to get behind the camera once again. “It just felt big and scary and like I couldn’t say no,” he says. 

    It’s easy to see why the project appealed to Duplass. Written by comedian Adam Cayton-Holland, and based on his memoir Tragedy Plus Time, See You When I See You is an intensely personal story about Cayton-Holland’s PTSD following the death of his sister by suicide. Cooper Raiff (director and star of Cha Cha Real Smooth) steps in as the film’s Cayton-Holland analogue, Aaron, and does marvelous work unpacking the young man’s confused journey through grief.

    Outside of some creative visual choices representing Aaron’s struggle to reclaim happy memories of his sister, See You When I See You doesn’t have much new to say about the grieving process. Ultimately though, that’s a feature, not a bug, as the rhythms of pain should resonate with anyone who has experienced real tragedy. Even if those experiences involve significantly less Third Eye Blind and Sum 41 than Aaron’s. – AB

    Sender

    What’s the strangest thing you’ve ever been sent in the mail? The answer to that question can range from the mundane—a gardening hat you didn’t order—to the truly bizarre. Actor David Dastmalchian, for one, tells us he was mailed dirty underwear more than once by an anonymous… fan? “It was accompanied by a really bizarre letter,” the actor grimaces.

    That’s obviously immediately creepy, yet writer-director Russell Goldman’s Sender takes an initially more innocuous stance before turning the screws. And according to Goldman, this too is based on real life. “[Folks will] send you cheap objects that are related, most likely, to your search history online and any cookies or data [they] can take from what you’re looking [at]. They send it to your home so they can write reviews in your name that are five stars, so those products can then get a boost on the algorithms.”

    Sender takes that conceit to its most ominous, Hitchcockian extreme when Britt Lower’s Julia receives a mysterious package from an even more mysterious, and threatening, source. – DC

    Sinner Supper Club

    Described as a “gay mumblecore ghost story,” Sinner Supper Club delivers exactly what its directors, Daisy Rosato and Nora Kaye, promise. Shot on an iPhone in six days and rooted in improvisation, the film is a scrappy documentation of an NYC-based friend group on the fritz during a heat wave.

    As the group gathers in a small apartment for Genevieve’s (Genevieve Simon) “eviction funeral,” tensions arise over things big and small. On top of navigating a shared traumatic experience, the death of a best friend, nothing seems to go right — a melted ice cream cake, the power going out, and worst of all, an uninvited partner brought to the gathering. The night culminates in an unexpected, yet restorative, paranormal experience.

    Sinner Supper Club explores the surrealness of grief from an intrinsically queer perspective. It delivers comedic beats and moments of grief with the fluidity of a high-budget film. While there are moments of hesitance from the actors, the ensemble cast delivers a performance where you feel dropped in the middle of their hangout. – AH

    They Will Kill You

    The idea for They Will Kill You blossomed after director Kirill Sokolov stayed in an eerie hotel that he believed to be inhabited by a cult of old women. The fictionalized and much gorier version stars Zazie Beetz as the new housekeeper at a decadent hotel with a history of mysterious disappearances. As Sokolov brings her violent journey through the mysterious building to the big screen, the company explores countless genres, from mystery to slapstick to fantasy. 

    “Kirill was always reminding us that yes, there’s action, and yes, there’s comedy, but also, at least for me, the most important thing was the truth at the moment,” Myha’la, who plays Beetz’s sister, says. “Then, if it feels truthful and honest and real to me and us in this moment, the comedy is going to come in the edit.” 

    The highlight of the movie is the performances from actors like Patricia Arquette, Tom Felton, Heather Graham, and the aforementioned sister duo. They expertly balance multiple styles and tropes, giving the movie an edge in an arguably oversaturated genre.  

    “It is genre-defying because it is a love story about two sisters, and that’s really at the core of everything, and then you mix in the brilliant Kung Fu and gore and martial arts and heroism,” Felton says. “It’s a unique blend. I don’t think a film has ever been made quite like this.” – SR 

    Time and Water

    “The future we were warned about is no longer distant, it is here.” This is the message that Oscar-nominated director Sara Dosa shares in her newest documentary Time and Water. Through archival material and the writings of Icelandic author Andri Snær Magnason, Dosa puts together an expansive story focused on generational memory and humanity’s relationship with nature.

    Centered on Magnason’s own family ties, Time and Water captures the vast existence of Icelandic glaciers and the tremendous loss felt by the author as he witnesses the disappearance of these titans, and the passing of his grandparents. The audience is transported through the passing of time and experiences the indelible impression humans make on the world and people around them.  

    “There is something radical about love, especially in a time that is so polarizing,” Dosa says. “Wherever we can center love and joy amid the doom and the apocalyptic stories abound, I think it could inspire hope…I think it can give a sense of a light in the dark to keep people working toward the change that we so badly need.” 

    Time and Water is a stark wake-up call, not only to protect the planet we call home, but to cherish our time with loved ones. The future is now, and Dosa captures the course we took to get here. – AH

    Wishful Thinking

    It sometimes feels impossible to be happy—even with someone you love—when there’s so much bad news in the world. So imagine the pressure Julia and Charlie (Maya Hawke and Lewis Pullman) are under in Wishful Thinking, a supremely clever and wholesome romantic comedy where the fate of Portland, Oregon, if not the world, rests on the straining romance of two young people at a crossroads in their lives. As it slowly dawns on them, when things are good in their domestic life, Julia is suddenly up for a promotion at work, and Charlie’s crypto investments are skyrocketing. When they’re unhappy with each other, literal earthquakes can occur.

    It’s a shrewd use of magical realism to entertain wish fulfillment—like literally getting rich off crypto after a particularly sexy date night—but also comment on the pressures we put on each other in the modern world, particularly for those who are as socially entangled and plugged in as the film’s Gen-Z antiheroes. It’s an indie rom-com about the challenge of early adult romances, complete with a big swing ending. But it finds an innovative way to engage these elements, especially when Hawke and Pullman are simpatico—and perhaps even more so when they’re not. – DC

    Woodstockers

    We were delighted to welcome film and TV mainstay Corbin Bernsen back to the Den of Geek Studio at SXSW to chat about his indie TV pilot, Woodstockers. This time, Bernsen, who is the showrunner, writer, and star, was joined by his son, Oliver, who directed the pilot episode (he also had a feature-length directorial debut, Bagworm, play at the festival).

    The delightfully funny dramedy puts the audience in the headspace of an aging hippie confronting life, death, and a bygone era and its legacy set against scenic upstate New York. Our conversation was introspective as the Bernsens grappled with deep conversations on set, Corbin’s own career journey, which launched during that period in 1967, and their excitement for independent filmmaking in the television space. Their commitment to the form paid off: Woodstockers took home an Audience Award. Now that’s Flower Power. – Chris Longo

    Television

    Are We Still Married?

    Indie TV pilot Are We Still Married? stars Dustin Milligan as Jack, a husband who has been turned into a vampire via a bite from a mysterious bat, and Taylor Misiak as Laura, his wife who isn’t sure whether she should let him back in the house. While that is undoubtedly a bold genre concept, the inspiration for the story came from a real life experience for writer/director Kit Steinkellner (who also created the Facebook Watch series Sorry For Your Loss).

    “My husband did get bit by a bat,” she tells Den of Geek. “It was that kind of crazy thing that doesn’t happen except when it does. He got a rabies shot and was OK. I don’t know how you process trauma in your marriage but comedic bits are our go-to. So we just started cracking vampire jokes. At a certain point, he was like ‘but if I were a vampire, you would let me back in the house, right?’ I paused and he didn’t like that pause.”

    Through the safety of her closed kitchen window, Laura peppers Jack with questions about vampirism that he doesn’t have the answers to (the bat didn’t exactly explain all the rules of this whole thing). Steinkellner and the actors make beautiful work of the premise, both having fun with the genre silliness of it all while also delving into the pathos of a loving marriage interrupted by a truly unforeseeable calamity. Coming in at just 15 minutes long, Are We Still Married? serves as a compelling proof of concept for whatever direction, and medium, Steinkellner wants to take the story from here.

    “I did write a feature inspired by this that was on this past year’s Blacklist. At the same time, in having this conversation with South by, a part of the independent pilot requirement is to submit a series bible. I’ve actually not done this with other ideas before but I have pretty thoroughly explored both options. Ultimately I just want to keep telling this story.” – AB

    The Audacity

    Have you ever wondered where the wunderkind techbros of Silicon Valley get the audacity? Thankfully so has Jonathan Glatzer, a former writer on Succession and Better Call Saul and now the creator and showrunner of AMC’s fittingly named The Audacity.

    “For years, I thought of audacity as a kind of superpower that we all have, but few of us actually employ because it involves crashing through norms of behavior. Most of us are not bulls in china shops, but in Silicon Valley, it is kind of regarded as an attribute. There’s a lot of broken dishes around there, but that’s what they like: move fast and break things,” he says.

    Through its first three episodes, The Audacity doesn’t move fast, but it does break some things. Billy Magnussen (a compelling character actor probably best known for Game Night and Made For Love) steps into the megalomaniacal shoes of Hypergnosis CEO Duncan Park. Eager to prove that he’s much more than the product of some well-timed good luck, Duncan leverages his relationship with his therapist Joanne Felder to gain some (flagrantly illegal) advantages over his competition. 

    The Audacity excels as a slice of life look at the excesses (and yes, audacity) of Silicon Valley’s elite in the rapidly expanding AI era. While its early episodes come off as more of a vibe in search of a story, the level of talent both behind and in front of the camera suggests that it has plenty of room to grow. – AB

    The Comeback Season 3

    Since The Comeback first premiered in 2005, Valerie Cherish has always returned to TV when it needs her the most. The first season of the HBO comedy found the aging sitcom diva played by Lisa Kudrow trying to navigate the brave new world of reality TV, recording her comeback as Aunt Sassy on new sitcom “Room and Bored.” In 2014’s season 2, Valerie attempted to get in on the Bravo-fication of the medium by pitching a pilot to Andy Cohen. Now, with The Comeback’s third and final season, Valerie is set to tackle television’s gravest challenge yet: artificial intelligence.

    “[The Comeback] began with what everyone thought was the first extinction event [of TV], which was reality TV, eliminating scripted television for the more economical – no rules, no unions,” Kudrow says. “Happily that wasn’t the end. [Co-creator Michael Patrick King and I] were having lunch and he was like ‘What about this: Valerie is finally offered the lead in a multi-camera sitcom but it’s written about AI.’”

    The Comeback season 3’s eight episodes present yet another comedic masterclass of industry satire and character-building. There’s never been someone quite like Valerie Cherish on television and there is unlikely to be ever again. The fading superstar is desperate for fame yet uniquely ill-equipped to handle it, spending much of her adult life flourishing in front of sitcom studio multicams while putting her foot in her mouth in front of documentary single cams. 

    The Comeback’s satire is so subtle as to be barely visible. Really it’s the story of a singular character who refuses to let her story end regardless of how many times the industry tries to close the book… and all the humiliation she endures with a grin because of it. Lisa Kudrow and Michael Patrick King give Valerie Cherish the ending she so richly deserves, but we’ll miss her all the same. – AB

    The Dark Wizard

    The history of rock climbing is rife with larger-than-life characters and adventure sport trailblazers, but few loom as large as Dean Potter. A climber, high-liner, BASE jumper, and all-around Yosemite Valley Renaissance man, Potter set speed records and free soloed daunting walls at an unprecedented caliber for the duration of his career. Now, Peter Mortimer and Nick Rosen – documentary filmmakers and Potter’s old friends – are bringing his story to HBO Max with their new docuseries, The Dark Wizard.

    “His aura and myth dominated the sport, both because he was pioneering all these crazy things … but there was also a much broader story there, the behind the scenes and what was going on in his personal world that was really compelling that no one had really heard about,” Rosen says. 

    The Dark Wizard not only details Potter’s Herculean feats and the impact he had on the climbing community, but also his mental health journey that took place behind the closed doors of an alpha facade.

    “It’s unbelievable seeing all these Olympic athletes talk about their mental health and the struggles,” Mortimer says. “That just was not happening back in the time.” 

    The only documentation of Potter’s internal dialogues was his journals, which his sister donated to the filmmaking duo so they could platform the realities of his life. Using stylistic images and animations from these diaries, alongside interviews with Potter and his inner circle, Mortimer and Rosen crafted a chilling recapitulation of the climber’s life. – SR

    Margo’s Got Money Troubles

    Margo’s got money troubles, sure. But she’s also got some big expectations to meet. The Apple TV dramedy entered the 2026 SXSW Film & TV Festival as the undisputed TV headliner, thanks to the involvement of two prestige studios (A24 and the aforementioned Apple TV), a legendary TV showrunner (David E. Kelley), and a high-powered cast that would fit right in at this year’s Oscars ceremony (Elle Fanning, Michelle Pfeiffer, Nick Offerman, Nicole Kidman, and more). Still, it’s one thing to have a lot of expensive toys; it’s another thing entirely to know how to play with them. Thankfully, Margo’s Got Money Troubles puts forward an eight-episode experience well worthy of its creative firepower. 

    Based on a novel of the same name by Rufi Thorpe, Margo stars Elle Fanning as the titular young woman with money problems due to an unexpected pregnancy following a tryst with her douchey literature professor. Anticipating little help from her ex-Hooters waitress mother (Pfeiffer) or professional wrestler estranged father (Offerman), Margo gets creative (and sometimes nekkid) to pay the bills. Margo’s Got Money Troubles is Juno for the OnlyFans generation. It’s also the rare “prestige” episodic experience that doesn’t feel like a two-hour movie script that got out of hand. That’s all thanks to a preposterously charming lead performance from Fanning and her equally likable supporting cast. – AB


  • Designing for the Unexpected

    Designing for the Unexpected

I’m not sure when I first heard Jeffrey Zeldman’s call to design for “devices that don’t yet exist” and “situations you haven’t imagined,” but it’s something that has stayed with me over the years. How do you create services for situations you can’t imagine? Or design products that work on devices yet to be invented?

    Flash, Photoshop, and responsive design

    When I first started designing websites, my go-to software was Photoshop. I created a 960px canvas and set about creating a layout that I would later drop content in. The development phase was about attaining pixel-perfect accuracy using fixed widths, fixed heights, and absolute positioning.

    Ethan Marcotte’s talk at An Event Apart and subsequent article “Responsive Web Design” in A List Apart in 2010 changed all this. I was sold on responsive design as soon as I heard about it, but I was also terrified. The pixel-perfect designs full of magic numbers that I had previously prided myself on producing were no longer good enough.

    The fear wasn’t helped by my first experience with responsive design. My first project was to take an existing fixed-width website and make it responsive. What I learned the hard way was that you can’t just add responsiveness at the end of a project. To create fluid layouts, you need to plan throughout the design phase.

    A new way to design

    Designing responsive or fluid sites has always been about removing limitations, producing content that can be viewed on any device. It relies on the use of percentage-based layouts, which I initially achieved with native CSS and utility classes:

    .column-span-6 {
      width: 49%;
      float: left;
      margin-right: 0.5%;
      margin-left: 0.5%;
    }
    
    
    .column-span-4 {
      width: 32%;
      float: left;
      margin-right: 0.5%;
      margin-left: 0.5%;
    }
    
    .column-span-3 {
      width: 24%;
      float: left;
      margin-right: 0.5%;
      margin-left: 0.5%;
    }

Then I moved to Sass so I could take advantage of @include to reuse repeated blocks of code and move back to more semantic markup:

    .logo {
      @include colSpan(6);
    }
    
    .search {
      @include colSpan(3);
    }
    
    .social-share {
      @include colSpan(3);
    }
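The colSpan mixin itself isn’t shown here; a hypothetical definition consistent with the utility classes above (a 12-column fluid grid with 0.5% gutters on each side) might look like this:

```scss
// Hypothetical mixin (not from the original code): a 12-column fluid grid.
// colSpan(6) yields 49% and colSpan(3) yields 24%, matching the classes above.
@mixin colSpan($columns) {
  width: ($columns / 12) * 100% - 1%;
  float: left;
  margin-right: 0.5%;
  margin-left: 0.5%;
}
```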

    Media queries

    The second ingredient for responsive design is media queries. Without them, content would shrink to fit the available space regardless of whether that content remained readable (The exact opposite problem occurred with the introduction of a mobile-first approach).

    Media queries prevented this by allowing us to add breakpoints where the design could adapt. Like most people, I started out with three breakpoints: one for desktop, one for tablets, and one for mobile. Over the years, I added more and more for phablets, wide screens, and so on. 
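That progression might be sketched as follows; the breakpoint values are illustrative rather than taken from a real project:

```css
/* Desktop-first: fluid columns collapse as the viewport narrows */
@media (max-width: 1024px) {
  .column-span-3 { width: 49%; } /* tablets: two columns per row */
}

@media (max-width: 640px) {
  .column-span-3 { width: 99%; } /* mobile: full width */
}
```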

For years, I happily worked this way and improved both my design and front-end skills in the process. The only problem I encountered was making changes to content: with our Sass grid system in place, there was no way for site owners to add content without amending the markup, something a small business owner might struggle with. Because each row in the grid was defined using a div as a container, adding content meant creating new row markup, which required a level of HTML knowledge.

    Row markup was a staple of early responsive design, present in all the widely used frameworks like Bootstrap and Skeleton.

<div class="row">
  <div class="col">1 of 7</div>
  <div class="col">2 of 7</div>
  <div class="col">3 of 7</div>
  <div class="col">4 of 7</div>
  <div class="col">5 of 7</div>
  <div class="col">6 of 7</div>
  <div class="col">7 of 7</div>
</div>

    Another problem arose as I moved from a design agency building websites for small- to medium-sized businesses, to larger in-house teams where I worked across a suite of related sites. In those roles I started to work much more with reusable components. 

Our reliance on media queries resulted in components that were tied to common viewport sizes. If the goal of component libraries is reuse, this is a real problem: these components can only be used if the devices you’re designing for correspond to the viewport sizes used in the pattern library, which falls well short of that “devices that don’t yet exist” goal.

    Then there’s the problem of space. Media queries allow components to adapt based on the viewport size, but what if I put a component into a sidebar, like in the figure below?

    Container queries: our savior or a false dawn?

    Container queries have long been touted as an improvement upon media queries, but at the time of writing are unsupported in most browsers. There are JavaScript workarounds, but they can create dependency and compatibility issues. The basic theory underlying container queries is that elements should change based on the size of their parent container and not the viewport width, as seen in the following illustrations.

    One of the biggest arguments in favor of container queries is that they help us create components or design patterns that are truly reusable because they can be picked up and placed anywhere in a layout. This is an important step in moving toward a form of component-based design that works at any size on any device.

    In other words, responsive components to replace responsive layouts.

    Container queries will help us move from designing pages that respond to the browser or device size to designing components that can be placed in a sidebar or in the main content, and respond accordingly.
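As a rough sketch, and assuming the @container syntax that eventually shipped in browsers (which postdates the proposals discussed here), a responsive component might look like this:

```css
/* The sidebar becomes a queryable container */
.sidebar {
  container-type: inline-size;
}

/* The card adapts to its container's width, not the viewport's */
@container (min-width: 400px) {
  .card {
    display: flex; /* switch to a side-by-side layout when space allows */
  }
}
```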

    My concern is that we are still using layout to determine when a design needs to adapt. This approach will always be restrictive, as we will still need pre-defined breakpoints. For this reason, my main question with container queries is, How would we decide when to change the CSS used by a component? 

    A component library removed from context and real content is probably not the best place for that decision. 

    As the diagrams below illustrate, we can use container queries to create designs for specific container widths, but what if I want to change the design based on the image size or ratio?

    In this example, the dimensions of the container are not what should dictate the design; rather, the image is.

    It’s hard to say for sure whether container queries will be a success story until we have solid cross-browser support for them. Responsive component libraries would definitely evolve how we design and would improve the possibilities for reuse and design at scale. But maybe we will always need to adjust these components to suit our content.

    CSS is changing

    Whilst the container query debate rumbles on, there have been numerous advances in CSS that change the way we think about design. The days of fixed-width elements measured in pixels and floated div elements used to cobble layouts together are long gone, consigned to history along with table layouts. Flexbox and CSS Grid have revolutionized layouts for the web. We can now create elements that wrap onto new rows when they run out of space, not when the device changes.

    .wrapper {
      display: grid;
      grid-template-columns: repeat(auto-fit, 450px);
      gap: 10px;
    }

    The repeat() function paired with auto-fit or auto-fill allows us to specify how much space each column should use while leaving it up to the browser to decide when to spill the columns onto a new line. Similar things can be achieved with Flexbox, as elements can wrap over multiple rows and “flex” to fill available space. 

    .wrapper {
      display: flex;
      flex-wrap: wrap;
      justify-content: space-between;
    }
    
    .child {
      flex-basis: 32%;
      margin-bottom: 20px;
    }

    The biggest benefit of all this is you don’t need to wrap elements in container rows. Without rows, content isn’t tied to page markup in quite the same way, allowing for removals or additions of content without additional development.

    This is a big step forward when it comes to creating designs that allow for evolving content, but the real game changer for flexible designs is CSS Subgrid. 

    Remember the days of crafting perfectly aligned interfaces, only for the customer to add an unbelievably long header almost as soon as they’re given CMS access, like the illustration below?

    Subgrid allows elements to respond to adjustments in their own content and in the content of sibling elements, helping us create designs more resilient to change.

    .wrapper {
      display: grid;
      grid-template-columns: repeat(auto-fit, minmax(150px, 1fr));
  grid-template-rows: auto 1fr auto;
      gap: 10px;
    }
    
    .sub-grid {
      display: grid;
      grid-row: span 3;
      grid-template-rows: subgrid; /* sets rows to parent grid */
    }

CSS Grid allows us to separate layout and content, thereby enabling flexible designs. Meanwhile, Subgrid allows us to create designs that adapt to morphing content. At the time of writing, Subgrid is supported only in Firefox, but the above code can be implemented behind an @supports feature query.
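Such a feature query might look like the following minimal sketch, reusing the .sub-grid class from the example above:

```css
/* Fallback: a plain nested grid where subgrid is unavailable */
.sub-grid {
  display: grid;
  grid-row: span 3;
}

/* Enhance with shared row tracks where subgrid is supported */
@supports (grid-template-rows: subgrid) {
  .sub-grid {
    grid-template-rows: subgrid;
  }
}
```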

    Intrinsic layouts 

    I’d be remiss not to mention intrinsic layouts, the term created by Jen Simmons to describe a mixture of new and old CSS features used to create layouts that respond to available space. 

    Responsive layouts have flexible columns using percentages. Intrinsic layouts, on the other hand, use the fr unit to create flexible columns that won’t ever shrink so much that they render the content illegible.

    fr units is a way to say I want you to distribute the extra space in this way, but…don’t ever make it smaller than the content that’s inside of it.

    —Jen Simmons, “Designing Intrinsic Layouts”

    Intrinsic layouts can also utilize a mixture of fixed and flexible units, allowing the content to dictate the space it takes up.
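For illustration, one hypothetical track list mixing fixed, flexible, and content-based units:

```css
/* A fixed sidebar, a flexible main column, and an aside sized by its content */
.layout {
  display: grid;
  grid-template-columns: 200px 1fr min-content;
  gap: 10px;
}
```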

    What makes intrinsic design stand out is that it not only creates designs that can withstand future devices but also helps scale design without losing flexibility. Components and patterns can be lifted and reused without the prerequisite of having the same breakpoints or the same amount of content as in the previous implementation. 

    We can now create designs that adapt to the space they have, the content within them, and the content around them. With an intrinsic approach, we can construct responsive components without depending on container queries.

    Another 2010 moment?

    This intrinsic approach should in my view be every bit as groundbreaking as responsive web design was ten years ago. For me, it’s another “everything changed” moment. 

    But it doesn’t seem to be moving quite as fast; I haven’t yet had that same career-changing moment I had with responsive design, despite the widely shared and brilliant talk that brought it to my attention. 

    One reason for that could be that I now work in a large organization, which is quite different from the design agency role I had in 2010. In my agency days, every new project was a clean slate, a chance to try something new. Nowadays, projects use existing tools and frameworks and are often improvements to existing websites with an existing codebase. 

    Another could be that I feel more prepared for change now. In 2010 I was new to design in general; the shift was frightening and required a lot of learning. Also, an intrinsic approach isn’t exactly all-new; it’s about using existing skills and existing CSS knowledge in a different way. 

    You can’t framework your way out of a content problem

    Another reason for the slightly slower adoption of intrinsic design could be the lack of quick-fix framework solutions available to kick-start the change. 

    Responsive grid systems were all over the place ten years ago. With a framework like Bootstrap or Skeleton, you had a responsive design template at your fingertips.

    Intrinsic design and frameworks do not go hand in hand quite so well because the benefit of having a selection of units is a hindrance when it comes to creating layout templates. The beauty of intrinsic design is combining different units and experimenting with techniques to get the best for your content.

    And then there are design tools. We probably all, at some point in our careers, used Photoshop templates for desktop, tablet, and mobile devices to drop designs in and show how the site would look at all three stages.

    How do you do that now, with each component responding to content and layouts flexing as and when they need to? This type of design must happen in the browser, which personally I’m a big fan of. 

    The debate about “whether designers should code” is another that has rumbled on for years. When designing a digital product, we should, at the very least, design for a best- and worst-case scenario when it comes to content. To do this in a graphics-based software package is far from ideal. In code, we can add longer sentences, more radio buttons, and extra tabs, and watch in real time as the design adapts. Does it still work? Is the design too reliant on the current content?

    Personally, I look forward to the day intrinsic design is the standard for design, when a design component can be truly flexible and adapt to both its space and content with no reliance on device or container dimensions.

    Content first 

Content is not constant. After all, to design for the unknown or unexpected, we need to account for changes in content, as in our earlier Subgrid card example, which allowed the cards to respond to adjustments both to their own content and to the content of sibling elements.

    Thankfully, there’s more to CSS than layout, and plenty of properties and values can help us put content first. Subgrid and pseudo-elements like ::first-line and ::first-letter help to separate design from markup so we can create designs that allow for changes.

    Instead of old markup hacks like this—

<p>
  <span class="first-line">First line of text with different styling...</span>
</p>

    —we can target content based on where it appears.

    .element::first-line {
      font-size: 1.4em;
    }
    
    .element::first-letter {
      color: red;
    }

Much bigger additions to CSS include logical properties, which change the way we construct designs using logical dimensions (start and end) instead of physical ones (left and right), along with math functions like min(), max(), and clamp().

This flexibility allows for directional changes according to content, a common requirement when we need to present content in multiple languages. In the past, this was often achieved with Sass mixins, though usually limited to switching from left-to-right to right-to-left orientation.

    In the Sass version, directional variables need to be set.

    $direction: rtl;
    $opposite-direction: ltr;
    
    $start-direction: right;
    $end-direction: left;

    These variables can be used as values—

    body {
      direction: $direction;
      text-align: $start-direction;
    }

    —or as properties.

    margin-#{$end-direction}: 10px;
    padding-#{$start-direction}: 10px;

    However, now we have native logical properties, removing the reliance on both Sass (or a similar tool) and pre-planning that necessitated using variables throughout a codebase. These properties also start to break apart the tight coupling between a design and strict physical dimensions, creating more flexibility for changes in language and in direction.

margin-inline-end: 10px;
padding-inline-start: 10px;

    There are also native start and end values for properties like text-align, which means we can replace text-align: right with text-align: start.

    Like the earlier examples, these properties help to build out designs that aren’t constrained to one language; the design will reflect the content’s needs.

    Fixed and fluid 

We briefly covered the power of combining fixed and fluid widths when discussing intrinsic layouts. The min() and max() functions build on a similar concept, allowing you to pair a fixed value with a flexible alternative.

    For min() this means setting a fluid minimum value and a maximum fixed value.

    .element {
      width: min(50%, 300px);
    }

    The element in the figure above will be 50% of its container as long as the element’s width doesn’t exceed 300px.

    For max() we can set a flexible max value and a minimum fixed value.

    .element {
      width: max(50%, 300px);
    }

    Now the element will be 50% of its container as long as the element’s width is at least 300px. This means we can set limits but allow content to react to the available space. 

    The clamp() function builds on this by allowing us to set a preferred value with a third parameter. Now we can allow the element to shrink or grow if it needs to without getting to a point where it becomes unusable.

    .element {
      width: clamp(300px, 50%, 600px);
    }

    This time, the element’s width will be 50% (the preferred value) of its container but never less than 300px and never more than 600px.

    With these techniques, we have a content-first approach to responsive design. We can separate content from markup, meaning the changes users make will not affect the design. We can start to future-proof designs by planning for unexpected changes in language or direction. And we can increase flexibility by setting desired dimensions alongside flexible alternatives, allowing for more or less content to be displayed correctly.

    Situation first

    Thanks to what we’ve discussed so far, we can cover device flexibility by changing our approach, designing around content and space instead of catering to devices. But what about that last bit of Jeffrey Zeldman’s quote, “…situations you haven’t imagined”?

    It’s a very different thing to design for someone seated at a desktop computer as opposed to someone using a mobile phone and moving through a crowded street in glaring sunshine. Situations and environments are hard to plan for or predict because they change as people react to their own unique challenges and tasks.

    This is why choice is so important. One size never fits all, so we need to design for multiple scenarios to create equal experiences for all our users.

    Thankfully, there is a lot we can do to provide choice.

    Responsible design 

    “There are parts of the world where mobile data is prohibitively expensive, and where there is little or no broadband infrastructure.”

—Chris Ashton, “I Used the Web for a Day on a 50 MB Budget”

    One of the biggest assumptions we make is that people interacting with our designs have a good wifi connection and a wide screen monitor. But in the real world, our users may be commuters traveling on trains or other forms of transport using smaller mobile devices that can experience drops in connectivity. There is nothing more frustrating than a web page that won’t load, but there are ways we can help users use less data or deal with sporadic connectivity.

The srcset attribute allows the browser to decide which image file to serve. This means we can create smaller, cropped images to display on mobile devices, which in turn use less bandwidth and less data.

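A sketch of how that might look; the filenames and widths are illustrative:

```html
<!-- The browser picks the best match for the viewport and screen density -->
<img src="image-large.jpg"
     srcset="image-small.jpg 480w, image-large.jpg 1080w"
     sizes="(max-width: 600px) 480px, 1080px"
     alt="Image alt text">
```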

    The preload attribute can also help us to think about how and when media is downloaded. It can be used to tell a browser about any critical assets that need to be downloaded with high priority, improving perceived performance and the user experience. 
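In markup, preloading is expressed via the link element’s rel attribute; the asset names here are illustrative:

```html
<!-- Flag critical assets for early, high-priority download -->
<link rel="preload" href="critical.css" as="style">
<link rel="preload" href="hero.jpg" as="image">
```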


    There’s also native lazy loading, which indicates assets that should only be downloaded when they are needed.
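A minimal sketch using the standard loading attribute (the filename is illustrative):

```html
<!-- Deferred until the image approaches the viewport -->
<img src="image.jpg" loading="lazy" alt="A description of the image">
```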


    With srcset, preload, and lazy loading, we can start to tailor a user’s experience based on the situation they find themselves in. What none of this does, however, is allow the user themselves to decide what they want downloaded, as the decision is usually the browser’s to make. 

    So how can we put users in control?

    The return of media queries 

    Media queries have always been about much more than device sizes. They allow content to adapt to different situations, with screen size being just one of them.

    We’ve long been able to check for media types like print and speech and features such as hover, resolution, and color. These checks allow us to provide options that suit more than one scenario; it’s less about one-size-fits-all and more about serving adaptable content. 
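For illustration, two long-supported checks that have nothing to do with screen size:

```css
/* Only show hover affordances on devices that can actually hover */
@media (hover: hover) {
  .card:hover {
    box-shadow: 0 2px 8px rgba(0, 0, 0, 0.2);
  }
}

/* Strip navigation chrome from the printed page */
@media print {
  nav {
    display: none;
  }
}
```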

    As of this writing, the Media Queries Level 5 spec is still under development. It introduces some really exciting queries that in the future will help us design for multiple other unexpected situations.

    For example, there’s a light-level feature that allows you to modify styles if a user is in sunlight or darkness. Paired with custom properties, these features allow us to quickly create designs or themes for specific environments.

@media (light-level: normal) {
  :root {
    --background-color: #fff;
    --text-color: #0b0c0c;
  }
}

@media (light-level: dim) {
  :root {
    --background-color: #efd226;
    --text-color: #0b0c0c;
  }
}

    Another key feature of the Level 5 spec is personalization. Instead of creating designs that are the same for everyone, users can choose what works for them. This is achieved by using features like prefers-reduced-data, prefers-color-scheme, and prefers-reduced-motion, the latter two of which already enjoy broad browser support. These features tap into preferences set via the operating system or browser so people don’t have to spend time making each site they visit more usable. 
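For instance, these preference queries might be used like this, reusing the custom properties from the light-level example:

```css
/* Honor the user's OS-level color-scheme preference */
@media (prefers-color-scheme: dark) {
  :root {
    --background-color: #0b0c0c;
    --text-color: #fff;
  }
}

/* Tone down movement for users who request reduced motion */
@media (prefers-reduced-motion: reduce) {
  * {
    animation-duration: 0.01ms;
    transition-duration: 0.01ms;
  }
}
```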

    Media queries like this go beyond choices made by a browser to grant more control to the user.

    Expect the unexpected

    In the end, the one thing we should always expect is for things to change. Devices in particular change faster than we can keep up, with foldable screens already on the market.

    We can’t design the same way we have for this ever-changing landscape, but we can design for content. By putting content first and allowing that content to adapt to whatever space surrounds it, we can create more robust, flexible designs that increase the longevity of our products. 

    A lot of the CSS discussed here is about moving away from layouts and putting content at the heart of design. From responsive components to fixed and fluid units, there is so much more we can do to take a more intrinsic approach. Even better, we can test these techniques during the design phase by designing in-browser and watching how our designs adapt in real-time.

    When it comes to unexpected situations, we need to make sure our products are usable when people need them, whenever and wherever that might be. We can move closer to achieving this by involving users in our design decisions, by creating choice via browsers, and by giving control to our users with user-preference-based media queries. 

    Good design for the unexpected should allow for change, provide choice, and give control to those we serve: our users themselves.

  • Voice Content and Usability

    Voice Content and Usability

    We’ve been having conversations for thousands of years. Whether to convey information, conduct transactions, or simply to check in on one another, people have yammered away, chattering and gesticulating, through spoken conversation for countless generations. Only in the last few millennia have we begun to commit our conversations to writing, and only in the last few decades have we begun to outsource them to the computer, a machine that shows much more affinity for written correspondence than for the slangy vagaries of spoken language.

Computers have trouble because, between spoken and written language, speech is the more primordial. To have successful conversations with us, machines must grapple with the messiness of human speech: the disfluencies and pauses, the gestures and body language, and the variations in word choice and spoken dialect that can stymie even the most carefully crafted human-computer interaction. In the human-to-human scenario, spoken language also has the privilege of face-to-face contact, where we can readily interpret nonverbal social cues.

    In contrast, written language immediately concretizes as we commit it to record and retains usages long after they become obsolete in spoken communication (the salutation “To whom it may concern,” for example), generating its own fossil record of outdated terms and phrases. Because it tends to be more consistent, polished, and formal, written text is fundamentally much easier for machines to parse and understand.

    Spoken language has no such luxury. Besides the nonverbal cues that decorate conversations with emphasis and emotional context, there are also verbal cues and vocal behaviors that modulate conversation in nuanced ways: how something is said, not what. Whether rapid-fire, low-pitched, or high-decibel, whether sarcastic, stilted, or sighing, our spoken language conveys much more than the written word could ever muster. So when it comes to voice interfaces—the machines we conduct spoken conversations with—we face exciting challenges as designers and content strategists.

    Voice Interactions

We interact with voice interfaces for a variety of reasons, but according to Michael McTear, Zoraida Callejas, and David Griol in The Conversational Interface, those motivations by and large mirror the reasons we initiate conversations with other people, too. Generally, we start up a conversation because:

    • we need something done (such as a transaction),
    • we want to know something (information of some sort), or
    • we are social beings and want someone to talk to (conversation for conversation’s sake).

    These three categories—which I call transactional, informational, and prosocial—also characterize essentially every voice interaction: a single conversation from beginning to end that realizes some outcome for the user, starting with the voice interface’s first greeting and ending with the user exiting the interface. Note here that a conversation in our human sense—a chat between people that leads to some result and lasts an arbitrary length of time—could encompass multiple transactional, informational, and prosocial voice interactions in succession. In other words, a voice interaction is a conversation, but a conversation is not necessarily a single voice interaction.

Purely prosocial conversations are more gimmicky than captivating in most voice interfaces, because machines don’t yet have the capacity to really want to know how we’re doing and to do the sort of glad-handing humans crave. There’s also ongoing debate as to whether users actually prefer the sort of organic human conversation that begins with a prosocial voice interaction and shifts seamlessly into other types. In fact, in Voice User Interface Design, Michael Cohen, James Giangola, and Jennifer Balogh recommend sticking to users’ expectations by mimicking how they interact with other voice interfaces rather than trying too hard to be human—potentially alienating them in the process.

    That leaves two genres of conversations we can have with one another that a voice interface can easily have with us, too: a transactional voice interaction realizing some outcome (“buy iced tea”) and an informational voice interaction teaching us something new (“discuss a musical”).

    Transactional voice interactions

    Unless you’re tapping buttons on a food delivery app, you’re generally having a conversation—and therefore a voice interaction—when you order a Hawaiian pizza with extra pineapple. Even when we walk up to the counter and place an order, the conversation quickly pivots from an initial smattering of neighborly small talk to the real mission at hand: ordering a pizza (generously topped with pineapple, as it should be).

    Alison: Hey, how’s it going?

    Burhan: Hi, welcome to Crust Deluxe! It’s cold out there. How can I help you?

    Alison: Can I get a Hawaiian pizza with extra pineapple?

    Burhan: Sure, what size?

    Alison: Large.

    Burhan: Anything else?

    Alison: No thanks, that’s it.

    Burhan: Something to drink?

    Alison: I’ll have a bottle of Coke.

    Burhan: You got it. That’ll be $13.55 and about fifteen minutes.

    Each progressive disclosure in this transactional conversation reveals more and more of the desired outcome of the transaction: a service rendered or a product delivered. Transactional conversations have certain key traits: they’re direct, to the point, and economical. They quickly dispense with pleasantries.

    Informational voice interactions

    Meanwhile, some conversations are primarily about obtaining information. Though Alison might visit Crust Deluxe with the sole purpose of placing an order, she might not actually want to walk out with a pizza at all. She might be just as interested in whether they serve halal or kosher dishes, gluten-free options, or something else. Here, though we again have a prosocial mini-conversation at the beginning to establish politeness, we’re after much more.

    Alison: Hey, how’s it going?

    Burhan: Hi, welcome to Crust Deluxe! It’s cold out there. How can I help you?

    Alison: Can I ask a few questions?

    Burhan: Of course! Go right ahead.

    Alison: Do you have any halal options on the menu?

    Burhan: Absolutely! We can make any pie halal by request. We also have lots of vegetarian, ovo-lacto, and vegan options. Are you thinking about any other dietary restrictions?

    Alison: What about gluten-free pizzas?

    Burhan: We can definitely do a gluten-free crust for you, no problem, for both our deep-dish and thin-crust pizzas. Anything else I can answer for you?

    Alison: That’s it for now. Good to know. Thanks!

    Burhan: Anytime, come back soon!

    This is a very different dialogue. Here, the goal is to get a certain set of facts. Informational conversations are investigative quests for the truth—research expeditions to gather data, news, or facts. Voice interactions that are informational might be more long-winded than transactional conversations by necessity. Responses tend to be lengthier, more informative, and carefully communicated so the customer understands the key takeaways.

    Voice Interfaces

    At their core, voice interfaces employ speech to support users in reaching their goals. But simply because an interface has a voice component doesn’t mean that every user interaction with it is mediated through voice. Because multimodal voice interfaces can lean on visual components like screens as crutches, we’re most concerned in this book with pure voice interfaces, which depend entirely on spoken conversation, lack any visual component whatsoever, and are therefore much more nuanced and challenging to tackle.

    Though voice interfaces have long been integral to the imagined future of humanity in science fiction, only recently have those lofty visions become fully realized in genuine voice interfaces.

    Interactive voice response (IVR) systems

    Though written conversational interfaces have been fixtures of computing for many decades, voice interfaces first emerged in the early 1990s with text-to-speech (TTS) dictation programs that recited written text aloud, as well as speech-enabled in-car systems that gave directions to a user-provided address. With the advent of interactive voice response (IVR) systems, intended as an alternative to overburdened customer service representatives, we became acquainted with the first true voice interfaces that engaged in authentic conversation.

IVR systems allowed organizations to reduce their reliance on call centers but soon became notorious for their clunkiness. Commonplace in the corporate world, these systems were primarily designed as metaphorical switchboards to guide customers to a real phone agent (“Say Reservations to book a flight or check an itinerary”); chances are you will enter a conversation with one when you call an airline or hotel conglomerate. Despite their functional issues and users’ frustration with their inability to speak to an actual human right away, IVR systems proliferated in the early 1990s across a variety of industries.

    While IVR systems are great for highly repetitive, monotonous conversations that generally don’t veer from a single format, they have a reputation for less scintillating conversation than we’re used to in real life (or even in science fiction).

    Screen readers

    Parallel to the evolution of IVR systems was the invention of the screen reader, a tool that transcribes visual content into synthesized speech. For Blind or visually impaired website users, it’s the predominant method of interacting with text, multimedia, or form elements. Screen readers represent perhaps the closest equivalent we have today to an out-of-the-box implementation of content delivered through voice.

    Among the first screen readers known by that moniker was the Screen Reader for the BBC Micro and NEC Portable, developed by the Research Centre for the Education of the Visually Handicapped (RCEVH) at the University of Birmingham in 1986. That same year, Jim Thatcher created the first IBM Screen Reader for text-based computers, later recreated for computers with graphical user interfaces (GUIs).

    With the rapid growth of the web in the 1990s, the demand for accessible tools for websites exploded. Thanks to the introduction of semantic HTML and especially ARIA roles beginning in 2008, screen readers started facilitating speedy interactions with web pages that ostensibly allow disabled users to traverse the page as an aural and temporal space rather than a visual and physical one. In other words, screen readers for the web “provide mechanisms that translate visual design constructs—proximity, proportion, etc.—into useful information,” writes Aaron Gustafson in A List Apart. “At least they do when documents are authored thoughtfully.”

    Though deeply instructive for voice interface designers, there’s one significant problem with screen readers: they’re difficult to use and unremittingly verbose. The visual structures of websites and web navigation don’t translate well to screen readers, sometimes resulting in unwieldy pronouncements that name every manipulable HTML element and announce every formatting change. For many screen reader users, working with web-based interfaces exacts a cognitive toll.

    In Wired, accessibility advocate and voice engineer Chris Maury considers why the screen reader experience is ill-suited to users relying on voice:

    From the beginning, I hated the way that Screen Readers work. Why are they designed the way they are? It makes no sense to present information visually and then, and only then, translate that into audio. All of the time and energy that goes into creating the perfect user experience for an app is wasted, or even worse, adversely impacting the experience for blind users.

    In many cases, well-designed voice interfaces can speed users to their destination better than long-winded screen reader monologues. After all, visual interface users have the benefit of darting around the viewport freely to find information, ignoring areas irrelevant to them. Blind users, meanwhile, are obligated to listen to every utterance synthesized into speech and therefore prize brevity and efficiency. Disabled users who have long had no choice but to employ clunky screen readers may find that voice interfaces, particularly more modern voice assistants, offer a more streamlined experience.

    Voice assistants

    When we think of voice assistants (the subset of voice interfaces now commonplace in living rooms, smart homes, and offices), many of us immediately picture HAL from 2001: A Space Odyssey or hear Majel Barrett’s voice as the omniscient computer in Star Trek. Voice assistants are akin to personal concierges that can answer questions, schedule appointments, conduct searches, and perform other common day-to-day tasks. And they’re rapidly gaining more attention from accessibility advocates for their assistive potential.

    Before the earliest IVR systems found success in the enterprise, Apple published a demonstration video in 1987 depicting the Knowledge Navigator, a voice assistant that could transcribe spoken words and recognize human speech to a great degree of accuracy. Then, in 2001, Tim Berners-Lee and others formulated their vision for a Semantic Web “agent” that would perform typical errands like “checking calendars, making appointments, and finding locations.” It wasn’t until 2011 that Apple’s Siri finally entered the picture, making voice assistants a tangible reality for consumers.

    Thanks to the plethora of voice assistants available today, there is considerable variation in how programmable and customizable different voice assistants are (Fig 1.1). At one extreme, everything except vendor-provided features is locked down; for example, at the time of their release, the core functionality of Apple’s Siri and Microsoft’s Cortana couldn’t be extended beyond their existing capabilities. Even today, it isn’t possible to program Siri to perform arbitrary functions, because there’s no means by which developers can interact with Siri at a low level, apart from predefined categories of tasks like sending messages, hailing rideshares, making restaurant reservations, and certain others.

    At the opposite end of the spectrum, voice assistants like Amazon Alexa and Google Home offer a core foundation on which developers can build custom voice interfaces. For this reason, programmable voice assistants that lend themselves to customization and extensibility are becoming increasingly popular for developers who feel stifled by the limitations of Siri and Cortana. Amazon offers the Alexa Skills Kit, a developer framework for building custom voice interfaces for Amazon Alexa, while Google Home offers the ability to program arbitrary Google Assistant skills. Today, users can choose from among thousands of custom-built skills within both the Amazon Alexa and Google Assistant ecosystems.
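    To give a sense of what such custom skills look like in practice, here is a minimal, hedged sketch in the spirit of the JSON responses Alexa custom skills return. The intent name and forecast text are invented placeholders, and the sketch omits the request parsing, session handling, and deployment plumbing a real skill needs.

```python
# Minimal sketch of a custom voice-skill handler. The response dict
# loosely follows the shape Alexa custom skills return; the intent name
# and the spoken text are invented for this example.
def handle_intent(intent_name: str) -> dict:
    if intent_name == "GetForecastIntent":  # hypothetical intent
        speech = "Today will be sunny with a high of 75."
    else:
        speech = "Sorry, I can't help with that yet."
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech},
            "shouldEndSession": True,
        },
    }
```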

    As corporations like Amazon, Apple, Microsoft, and Google continue to stake their territory, they’re also selling and open-sourcing an unprecedented array of tools and frameworks for designers and developers that aim to make building voice interfaces as easy as possible, even without code.

    Often by necessity, voice assistants like Amazon Alexa tend to be monochannel—they’re tightly coupled to a device and can’t be accessed on a computer or smartphone instead. By contrast, many development platforms like Google’s Dialogflow have introduced omnichannel capabilities so users can build a single conversational interface that then manifests as a voice interface, textual chatbot, and IVR system upon deployment. I don’t prescribe any specific implementation approaches in this design-focused book, but in Chapter 4 we’ll get into some of the implications these variables might have on the way you build out your design artifacts.

    Voice Content

    Simply put, voice content is content delivered through voice. To preserve what makes human conversation so compelling in the first place, voice content needs to be free-flowing and organic, contextless and concise—everything written content isn’t.

    Our world is replete with voice content in various forms: screen readers reciting website content, voice assistants rattling off a weather forecast, and automated phone hotline responses governed by IVR systems. In this book, we’re most concerned with content delivered auditorily—not as an option, but as a necessity.

    For many of us, our first foray into informational voice interfaces will be to deliver content to users. There’s only one problem: any content we already have isn’t in any way ready for this new habitat. So how do we make the content trapped on our websites more conversational? And how do we write new copy that lends itself to voice interactions?

    Lately, we’ve begun slicing and dicing our content in unprecedented ways. Websites are, in many respects, colossal vaults of what I call macrocontent: lengthy prose that can extend for infinitely scrollable miles in a browser window, like microfilm viewers of newspaper archives. Back in 2002, well before the present-day ubiquity of voice assistants, technologist Anil Dash defined microcontent as permalinked pieces of content that stay legible regardless of environment, such as email or text messages:

    A day’s weather forcast [sic], the arrival and departure times for an airplane flight, an abstract from a long publication, or a single instant message can all be examples of microcontent.

    I’d update Dash’s definition of microcontent to include all examples of bite-sized content that go well beyond written communiqués. After all, today we encounter microcontent in interfaces where a small snippet of copy is displayed alone, unmoored from the browser, like a textbot confirmation of a restaurant reservation. Microcontent offers the best opportunity to gauge how your content can be stretched to the very edges of its capabilities, informing delivery channels both established and novel.

    As microcontent, voice content is unique because it’s an example of how content is experienced in time rather than in space. We can glance at a digital sign underground for an instant and know when the next train is arriving, but voice interfaces hold our attention captive for periods of time that we can’t easily escape or skip, something screen reader users are all too familiar with.

    Because microcontent is fundamentally made up of isolated blobs with no relation to the channels where they’ll eventually end up, we need to ensure that our microcontent truly performs well as voice content—and that means focusing on the two most important traits of robust voice content: voice content legibility and voice content discoverability.

    Fundamentally, the legibility and discoverability of our voice content both have to do with how voice content manifests in perceived time and space.

  • Sustainable Web Design, An Excerpt

    Sustainable Web Design, An Excerpt

    In the 1950s, many in the elite running community had begun to believe it wasn’t possible to run a mile in less than four minutes. Runners had been attempting it since the late 19th century and were beginning to draw the conclusion that the human body simply wasn’t built for the task. 

    But on May 6, 1954, Roger Bannister took everyone by surprise. It was a cold, wet day in Oxford, England—conditions no one expected to lend themselves to record-setting—and yet Bannister did just that, running a mile in 3:59.4 and becoming the first person in the record books to run a mile in under four minutes.

    This shift in the benchmark had profound effects; the world now knew that the four-minute mile was possible. Bannister’s record lasted only forty-six days before it was snatched away by Australian runner John Landy. Then a year later, three runners all beat the four-minute barrier together in the same race. Since then, over 1,400 runners have officially run a mile in under four minutes; the current record is 3:43.13, held by Moroccan athlete Hicham El Guerrouj.

    We achieve far more when we believe that something is possible, and we believe it’s possible only when we see that someone else has already done it. As with human running speed, so it is with what we believe to be the hard limits of website performance.

    Establishing standards for a sustainable web

    In most major industries, the key metrics of environmental performance are fairly well established, such as miles per gallon for cars or energy per square meter for homes. The tools and methods for calculating those metrics are standardized as well, which keeps everyone on the same page when doing environmental assessments. In the world of websites and apps, however, we aren’t held to any particular environmental standards, and only recently have gained the tools and methods we need to even make an environmental assessment.

    The primary goal in sustainable web design is to reduce carbon emissions. However, it’s almost impossible to actually measure the amount of CO2 produced by a web product. We can’t measure the fumes coming out of the exhaust pipes on our laptops. The emissions of our websites are far away, out of sight and out of mind, coming out of power stations burning coal and gas. We have no way to trace the electrons from a website or app back to the power station where the electricity is being generated and actually know the exact amount of greenhouse gas produced. So what do we do? 

    If we can’t measure the actual carbon emissions, then we need to find what we can measure. The primary factors that could be used as indicators of carbon emissions are:

    1. Data transfer 
    2. Carbon intensity of electricity

    Let’s take a look at how we can use these metrics to quantify the energy consumption, and in turn the carbon footprint, of the websites and web apps we create.

    Data transfer

    Most researchers use kilowatt-hours per gigabyte (kWh/GB) as a metric of energy efficiency when measuring the amount of data transferred over the internet when a website or application is used. This provides a great reference point for energy consumption and carbon emissions. As a rule of thumb, the more data transferred, the more energy used in the data center, telecoms networks, and end user devices.
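    That rule of thumb translates directly into arithmetic: multiply the data transferred by an energy-intensity figure. In this sketch the kWh/GB value is deliberately left as a parameter; published estimates vary widely, so treat any absolute number as an assumption and the result as a relative indicator.

```python
def estimate_energy_kwh(page_weight_mb: float, kwh_per_gb: float) -> float:
    """Estimate the energy associated with one page load.

    page_weight_mb: transfer size of the page in megabytes.
    kwh_per_gb: assumed energy intensity of data transfer; published
    figures vary widely, so this is a parameter rather than a constant.
    """
    return (page_weight_mb / 1024) * kwh_per_gb  # MB -> GB, then kWh
```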

    For web pages, data transfer for a single visit can be most easily estimated by measuring the page weight, meaning the transfer size of the page in kilobytes the first time someone visits the page. It’s fairly easy to measure using the developer tools in any modern web browser. Often your web hosting account will include statistics for the total data transfer of any web application (Fig 2.1).

    The nice thing about page weight as a metric is that it allows us to compare the efficiency of web pages on a level playing field without confusing the issue with constantly changing traffic volumes. 

    There is plenty of scope for reducing page weight. By early 2020, the median page weight was 1.97 MB for setups the HTTP Archive classifies as “desktop” and 1.77 MB for “mobile,” with desktop increasing 36 percent since January 2016 and mobile page weights nearly doubling in the same period (Fig 2.2). Roughly half of this data transfer is image files, making images the single biggest source of carbon emissions on the average website.

    History clearly shows us that our web pages can be smaller, if only we set our minds to it. While most technologies become ever more energy efficient, including the underlying technology of the web such as data centers and transmission networks, websites themselves are a technology that becomes less efficient as time goes on.

    You might be familiar with the concept of performance budgeting as a way of focusing a project team on creating faster user experiences. For example, we might specify that the website must load in a maximum of one second on a broadband connection and three seconds on a 3G connection. Much like speed limits while driving, performance budgets are upper limits rather than vague suggestions, so the goal should always be to come in under budget.

    Designing for fast performance does often lead to reduced data transfer and emissions, but it isn’t always the case. Web performance is often more about the subjective perception of load times than it is about the true efficiency of the underlying system, whereas page weight and transfer size are more objective measures and more reliable benchmarks for sustainable web design. 

    We can set a page weight budget in reference to a benchmark of industry averages, using data from sources like HTTP Archive. We can also benchmark page weight against competitors or the old version of the website we’re replacing. For example, we might set a maximum page weight budget as equal to our most efficient competitor, or we could set the benchmark lower to guarantee we are best in class. 
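    Setting and checking a budget like this is simple enough to express in code. In this sketch, the budget-setting rule (match your most efficient competitor) follows the example above; the weights themselves are placeholders you would replace with real measurements.

```python
def set_page_weight_budget(competitor_weights_kb: list[float]) -> float:
    """Set the budget equal to the lightest (most efficient) competitor."""
    return min(competitor_weights_kb)

def under_budget(page_weight_kb: float, budget_kb: float) -> bool:
    """Budgets are upper limits, like speed limits: come in at or under."""
    return page_weight_kb <= budget_kb
```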

    If we want to take it to the next level, then we could also start looking at the transfer size of our web pages for repeat visitors. Although page weight for the first time someone visits is the easiest thing to measure, and easy to compare on a like-for-like basis, we can learn even more if we start looking at transfer size in other scenarios too. For example, visitors who load the same page multiple times will likely have a high percentage of the files cached in their browser, meaning they don’t need to transfer all of the files on subsequent visits. Likewise, a visitor who navigates to new pages on the same website will likely not need to load the full page each time, as some global assets from areas like the header and footer may already be cached in their browser. Measuring transfer size at this next level of detail can help us learn even more about how we can optimize efficiency for users who regularly visit our pages, and enable us to set page weight budgets for additional scenarios beyond the first visit.

    Page weight budgets are easy to track throughout a design and development process. Although they don’t actually tell us carbon emission and energy consumption analytics directly, they give us a clear indication of efficiency relative to other websites. And as transfer size is an effective analog for energy consumption, we can actually use it to estimate energy consumption too.

    In summary, reduced data transfer translates to energy efficiency, a key factor to reducing carbon emissions of web products. The more efficient our products, the less electricity they use, and the less fossil fuels need to be burned to produce the electricity to power them. But as we’ll see next, since all web products demand some power, it’s important to consider the source of that electricity, too.

    Carbon intensity of electricity

    Regardless of energy efficiency, the level of pollution caused by digital products depends on the carbon intensity of the energy being used to power them. Carbon intensity is a term used to define the grams of CO2 produced for every kilowatt-hour of electricity (gCO2/kWh). This varies widely, with renewable energy sources and nuclear having an extremely low carbon intensity of less than 10 gCO2/kWh (even when factoring in their construction); whereas fossil fuels have very high carbon intensity of approximately 200–400 gCO2/kWh. 

    Most electricity comes from national or state grids, where energy from a variety of different sources is mixed together with varying levels of carbon intensity. The distributed nature of the internet means that a single user of a website or app might be using energy from multiple different grids simultaneously; a website user in Paris uses electricity from the French national grid to power their home internet and devices, but the website’s data center could be in Dallas, USA, pulling electricity from the Texas grid, while the telecoms networks use energy from everywhere between Dallas and Paris.

    We don’t have control over the full energy supply of web services, but we do have some control over where we host our projects. With a data center using a significant proportion of the energy of any website, locating the data center in an area with low carbon energy will tangibly reduce its carbon emissions. Danish startup Tomorrow reports and maps this user-contributed data, and a glance at their map shows how, for example, choosing a data center in France will have significantly lower carbon emissions than a data center in the Netherlands (Fig 2.3).

    That said, we don’t want to locate our servers too far away from our users; it takes energy to transmit data through the telecoms networks, and the further the data travels, the more energy is consumed. Just like food miles, we can think of the distance from the data center to the website’s core user base as “megabyte miles”—and we want it to be as small as possible.

    Using the distance itself as a benchmark, we can use website analytics to identify the country, state, or even city where our core user group is located and measure the distance from that location to the data center used by our hosting company. This will be a somewhat fuzzy metric as we don’t know the precise center of mass of our users or the exact location of a data center, but we can at least get a rough idea. 

    For example, if a website is hosted in London but the primary user base is on the West Coast of the USA, then we could look up the distance from London to San Francisco, which is 5,300 miles. That’s a long way! We can see that hosting it somewhere in North America, ideally on the West Coast, would significantly reduce the distance and thus the energy used to transmit the data. In addition, locating our servers closer to our visitors helps reduce latency and delivers better user experience, so it’s a win-win.
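    The “megabyte miles” estimate only requires a great-circle distance between two rough coordinates. The sketch below uses the standard haversine formula; the latitude and longitude values in the comment are approximate city centers, and as noted above, the result is a fuzzy indicator rather than a precise measurement.

```python
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_MILES = 3959  # mean Earth radius, approximate

def megabyte_miles(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Haversine great-circle distance in miles between two points,
    e.g. a user base's center of mass and a data center location."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return EARTH_RADIUS_MILES * 2 * asin(sqrt(a))

# Approximate coordinates for London and San Francisco give a distance
# on the order of 5,300 miles, in line with the figure quoted above.
```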

    Converting it back to carbon emissions

    If we combine carbon intensity with a calculation for energy consumption, we can calculate the carbon emissions of our websites and apps. A tool my team created does this by measuring the data transfer over the wire when loading a web page, calculating the amount of electricity associated, and then converting that into a figure for CO2 (Fig 2.4). It also factors in whether or not the web hosting is powered by renewable energy.

    If you want to take it to the next level and tailor the data more accurately to the unique aspects of your project, the Energy and Emissions Worksheet accompanying this book shows you how.

    With the ability to calculate carbon emissions for our projects, we could actually take a page weight budget one step further and set carbon budgets as well. CO2 is not a metric commonly used in web projects; we’re more familiar with kilobytes and megabytes, and can fairly easily look at design options and files to assess how big they are. Translating that into carbon adds a layer of abstraction that isn’t as intuitive—but carbon budgets do focus our minds on the primary thing we’re trying to reduce, and support the core objective of sustainable web design: reducing carbon emissions.
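    Chaining the two metrics together gives the full conversion from kilobytes to grams of CO2. Every parameter in this sketch (the energy intensity, the grid carbon intensity, and the budget itself) is an assumed input you would replace with figures appropriate to your project.

```python
def page_co2_grams(page_weight_mb: float, kwh_per_gb: float,
                   grid_gco2_per_kwh: float) -> float:
    """Convert one page load's data transfer into grams of CO2.

    Both intensity values are assumed inputs: kwh_per_gb is energy used
    per gigabyte transferred, and grid_gco2_per_kwh is the carbon
    intensity of the electricity powering the system.
    """
    energy_kwh = (page_weight_mb / 1024) * kwh_per_gb
    return energy_kwh * grid_gco2_per_kwh

def within_carbon_budget(co2_grams: float, budget_grams: float) -> bool:
    """A carbon budget is an upper limit, just like a page weight budget."""
    return co2_grams <= budget_grams
```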

    Browser Energy

    Data transfer might be the simplest and most complete analog for energy consumption in our digital projects, but by giving us one number to represent the energy used in the data center, the telecoms networks, and the end user’s devices, it can’t offer us insights into the efficiency of any specific part of the system.

    One part of the system we can look at in more detail is the energy used by end users’ devices. As front-end web technologies become more advanced, the computational load is increasingly moving from the data center to users’ devices, whether they be phones, tablets, laptops, desktops, or even smart TVs. Modern web browsers allow us to implement more complex styling and animation on the fly using CSS and JavaScript. Furthermore, JavaScript libraries such as Angular and React allow us to create applications where the “thinking” work is done partly or entirely in the browser. 

    All of these advances are exciting and open up new possibilities for what the web can do to serve society and create positive experiences. However, more computation in the user’s web browser means more energy used by their devices. This has implications not just environmentally, but also for user experience and inclusivity. Applications that put a heavy processing load on the user’s device can inadvertently exclude users with older, slower devices and cause batteries on phones and laptops to drain faster. Furthermore, if we build web applications that require the user to have up-to-date, powerful devices, people throw away old devices much more frequently. This isn’t just bad for the environment, but it puts a disproportionate financial burden on the poorest in society.

    Partly because the tools are limited and partly because there are so many different models of devices, it’s difficult to measure website energy consumption on end users’ devices. One tool we do currently have is the Energy Impact monitor inside the developer console of the Safari browser (Fig 2.5).

    You know when you load a website and your computer’s cooling fans start spinning so frantically you think it might actually take off? That’s essentially what this tool is measuring. 

    It shows us the percentage of CPU used and the duration of CPU usage when loading the web page, and uses these figures to generate an energy impact rating. It doesn’t give us precise data for the amount of electricity used in kilowatts, but the information it does provide can be used to benchmark how efficiently your websites use energy and set targets for improvement.

  • Design for Safety, An Excerpt

    Design for Safety, An Excerpt

    Antiracist economist Kim Crayton says that “intention without strategy is chaos.” We’ve discussed how our biases, assumptions, and inattention toward marginalized and vulnerable groups lead to dangerous and unethical tech—but what, specifically, do we need to do to fix it? The intention to make our tech safer is not enough; we need a strategy.

    This chapter will equip you with that plan of action. It covers how to integrate safety principles into your design work in order to create tech that’s safe, how to convince your stakeholders that this work is necessary, and how to respond to the critique that what we actually need is more diversity. (Spoiler: we do, but diversity alone is not the antidote to fixing unethical, unsafe tech.)

    The process for inclusive safety

    When you are designing for safety, your goals are to:

    • identify ways your product can be used for abuse,
    • design ways to prevent the abuse, and
    • provide support for vulnerable users to reclaim power and control.

    The Process for Inclusive Safety is a tool to help you reach those goals (Fig 5.1). It’s a methodology I created in 2018 to capture the various techniques I was using when designing products with safety in mind. Whether you are creating an entirely new product or adding to an existing feature, the Process can help you make your product safe and inclusive. The Process includes five general areas of action:

    • Conducting research
    • Creating archetypes
    • Brainstorming problems
    • Designing solutions
    • Testing for safety

    The Process is meant to be flexible—it won’t make sense for teams to implement every step in some situations. Use the parts that are relevant to your unique work and context; this is meant to be something you can insert into your existing design practice.

    And once you use it, if you have an idea for making it better or simply want to provide context of how it helped your team, please get in touch with me. It’s a living document that I hope will continue to be a useful and realistic tool that technologists can use in their day-to-day work.

    If you’re working on a product specifically for a vulnerable group or for survivors of some form of trauma, such as an app for survivors of domestic violence, sexual assault, or drug addiction, be sure to read Chapter 7, which covers that situation explicitly; it needs to be handled a bit differently. The guidelines in this chapter are for prioritizing safety when designing a more general product with a wide user base (which, we already know from statistics, will include certain groups that should be protected from harm), while Chapter 7 focuses on products built specifically for vulnerable groups and people who have experienced trauma.

    Step 1: Conduct research

    Design research should include a broad analysis of how your tech might be weaponized for abuse as well as specific insights into the experiences of survivors and perpetrators of that type of abuse. At this stage, you and your team will investigate issues of interpersonal harm and abuse, and explore any other safety, security, or inclusivity issues that might be a concern for your product or service, like data security, racist algorithms, and harassment.

    Broad research

    Your project should begin with broad, general research into similar products and issues around safety and ethical concerns that have already been reported. For example, a team building a smart home device would do well to understand the multitude of ways that existing smart home devices have been used as tools of abuse. If your product will involve AI, seek to understand the potentials for racism and other issues that have been reported in existing AI products. Nearly all types of technology have some kind of potential or actual harm that’s been reported on in the news or written about by academics. Google Scholar is a useful tool for finding these studies.

    Specific research: Survivors

    When possible and appropriate, include direct research (surveys and interviews) with people who are experts in the forms of harm you have uncovered. Ideally, you’ll want to interview advocates working in the space of your research first so that you have a more solid understanding of the topic and are better equipped to not retraumatize survivors. If you’ve uncovered possible domestic violence issues, for example, the experts you’ll want to speak with are survivors themselves, as well as workers at domestic violence hotlines, shelters, other related nonprofits, and lawyers.

    Especially when interviewing survivors of any kind of trauma, it is important to pay people for their knowledge and lived experiences. Don’t ask survivors to share their trauma for free, as this is exploitative. While some survivors may not want to be paid, you should always make the offer in the initial ask. An alternative to payment is to donate to an organization working against the type of violence that the interviewee experienced. We’ll talk more about how to appropriately interview survivors in Chapter 6.

    Specific research: Abusers

    It’s unlikely that teams aiming to design for safety will be able to interview self-proclaimed abusers or people who have broken laws around things like hacking. Don’t make this a goal; rather, try to get at this angle in your general research. Aim to understand how abusers or bad actors weaponize technology to use against others, how they cover their tracks, and how they explain or rationalize the abuse.

    Step 2: Create archetypes

    Once you’ve finished conducting your research, use your insights to create abuser and survivor archetypes. Archetypes are not personas, as they’re not based on real people that you interviewed and surveyed. Instead, they’re based on your research into likely safety issues, much like when we design for accessibility: we don’t need to have found a group of blind or low-vision users in our interview pool to create a design that’s inclusive of them. Instead, we base those designs on existing research into what this group needs. Personas typically represent real users and include many details, while archetypes are broader and can be more generalized.

    The abuser archetype is someone who will look at the product as a tool to perform harm (Fig 5.2). They may be trying to harm someone they don’t know through surveillance or anonymous harassment, or they may be trying to control, monitor, abuse, or torment someone they know personally.

    The survivor archetype is someone who is being abused with the product. There are various situations to consider in terms of the archetype’s understanding of the abuse and how to put an end to it: Do they need proof of abuse they already suspect is happening, or are they unaware they’ve been targeted in the first place and need to be alerted (Fig 5.3)?

    You may want to make multiple survivor archetypes to capture a range of different experiences. They may know that the abuse is happening but not be able to stop it, like when an abuser locks them out of IoT devices; or they know it’s happening but don’t know how, such as when a stalker keeps figuring out their location (Fig 5.4). Include as many of these scenarios as you need to in your survivor archetype. You’ll use these later on when you design solutions to help your survivor archetypes achieve their goals of preventing and ending abuse.

    It may be useful for you to create persona-like artifacts for your archetypes, such as the three examples shown. Instead of focusing on the demographic information we often see in personas, focus on their goals. The goals of the abuser will be to carry out the specific abuse you’ve identified, while the goals of the survivor will be to prevent abuse, understand that abuse is happening, make ongoing abuse stop, or regain control over the technology that’s being used for abuse. Later, you’ll brainstorm how to prevent the abuser’s goals and assist the survivor’s goals.

    And while the “abuser/survivor” model fits most cases, it doesn’t fit all, so modify it as you need to. For example, if you uncovered a security issue, such as the ability for someone to hack into a home camera system and talk to children, the malicious hacker would get the abuser archetype and the child’s parents would get the survivor archetype.

    Step 3: Brainstorm problems

    After creating archetypes, brainstorm novel abuse cases and safety issues. “Novel” means things not found in your research; you’re trying to identify completely new safety issues that are unique to your product or service. The goal with this step is to exhaust every effort of identifying harms your product could cause. You aren’t worrying about how to prevent the harm yet—that comes in the next step.

    How could your product be used for any kind of abuse, outside of what you’ve already identified in your research? I recommend setting aside at least a few hours with your team for this process.

    If you’re looking for somewhere to start, try doing a Black Mirror brainstorm. This exercise is based on the show Black Mirror, which features stories about the dark possibilities of technology. Try to figure out how your product would be used in an episode of the show—the most wild, awful, out-of-control ways it could be used for harm. When I’ve led Black Mirror brainstorms, participants usually end up having a good deal of fun (which I think is great—it’s okay to have fun when designing for safety!). I recommend time-boxing a Black Mirror brainstorm to half an hour, and then dialing it back and using the rest of the time thinking of more realistic forms of harm.

    After you’ve identified as many opportunities for abuse as possible, you may still not feel confident that you’ve uncovered every potential form of harm. A healthy amount of anxiety is normal when you’re doing this kind of work. It’s common for teams designing for safety to worry, “Have we really identified every possible harm? What if we’ve missed something?” If you’ve spent at least four hours coming up with ways your product could be used for harm and have run out of ideas, go to the next step.

    It’s impossible to guarantee you’ve thought of everything; instead of aiming for 100 percent assurance, recognize that you’ve taken this time and have done the best you can, and commit to continuing to prioritize safety in the future. Once your product is released, your users may identify new issues that you missed; aim to receive that feedback graciously and course-correct quickly.

    Step 4: Design solutions

    At this point, you should have a list of ways your product can be used for harm as well as survivor and abuser archetypes describing opposing user goals. The next step is to identify ways to design against the identified abuser’s goals and to support the survivor’s goals. This step is a good one to insert alongside existing parts of your design process where you’re proposing solutions for the various problems your research uncovered.

    Some questions to ask yourself to help prevent harm and support your archetypes include:

    • Can you design your product in such a way that the identified harm cannot happen in the first place? If not, what roadblocks can you put up to prevent the harm from happening?
    • How can you make the victim aware that abuse is happening through your product?
    • How can you help the victim understand what they need to do to make the problem stop?
    • Can you identify any types of user activity that would indicate some form of harm or abuse? Could your product help the user access support?

    In some products, it’s possible to proactively recognize that harm is happening. For example, a pregnancy app might be modified to allow the user to report that they were the victim of an assault, which could trigger an offer of resources from local and national organizations. This sort of proactiveness is not always possible, but it’s worth taking a half hour to discuss whether any type of user activity would indicate some form of harm or abuse, and how your product could assist the user in receiving help in a safe manner.

    That said, use caution: you don’t want to do anything that could put a user in harm’s way if their devices are being monitored. If you do offer some kind of proactive help, always make it voluntary, and think through other safety issues, such as the need to keep the user in-app in case an abuser is checking their search history. We’ll walk through a good example of this in the next chapter.

    Step 5: Test for safety

    The final step is to test your prototypes from the point of view of your archetypes: the person who wants to weaponize the product for harm and the victim of the harm who needs to regain control over the technology. Just like any other kind of product testing, at this point you’ll aim to rigorously test out your safety solutions so that you can identify gaps and correct them, validate that your designs will help keep your users safe, and feel more confident releasing your product into the world.

    Ideally, safety testing happens along with usability testing. If you’re at a company that doesn’t do usability testing, you might be able to use safety testing to cleverly perform both; a user who goes through your design attempting to weaponize the product against someone else can also be encouraged to point out interactions or other elements of the design that don’t make sense to them.

    You’ll want to conduct safety testing on either your final prototype or the actual product if it’s already been released. There’s nothing wrong with testing an existing product that wasn’t designed with safety goals in mind from the outset—“retrofitting” it for safety is a good thing to do.

    Remember that testing for safety involves testing from the perspective of both an abuser and a survivor, though it may not always make sense to do both. And if you made multiple survivor archetypes to capture multiple scenarios, you’ll want to test from the perspective of each one.

    As with other sorts of usability testing, you as the designer are most likely too close to the product and its design by this point to be a valuable tester; you know the product too well. Instead of doing it yourself, set up testing as you would with other usability testing: find someone who is not familiar with the product and its design, set the scene, give them a task, encourage them to think out loud, and observe how they attempt to complete it.

    Abuser testing

    The goal of this testing is to understand how easy it is for someone to weaponize your product for harm. Unlike with usability testing, you want to make it impossible, or at least difficult, for them to achieve their goal. Reference the goals in the abuser archetype you created earlier, and use your product in an attempt to achieve them.

    For example, for a fitness app with GPS-enabled location features, we can imagine that the abuser archetype would have the goal of figuring out where his ex-girlfriend now lives. With this goal in mind, you’d try everything possible to figure out the location of another user who has their privacy settings enabled. You might try to see her running routes, view any available information on her profile, view anything available about her location (which she has set to private), and investigate the profiles of any other users somehow connected with her account, such as her followers.

    If by the end of this you’ve managed to uncover some of her location data, despite her having set her profile to private, you know now that your product enables stalking. Your next step is to go back to step 4 and figure out how to prevent this from happening. You may need to repeat the process of designing solutions and testing them more than once.

    Survivor testing

    Survivor testing involves identifying how to give information and power to the survivor. It might not always make sense based on the product or context. Thwarting the attempt of an abuser archetype to stalk someone also satisfies the goal of the survivor archetype to not be stalked, so separate testing wouldn’t be needed from the survivor’s perspective.

    However, there are cases where it makes sense. For example, for a smart thermostat, a survivor archetype’s goals would be to understand who or what is making the temperature change when they aren’t doing it themselves. You could test this by looking for the thermostat’s history log and checking for usernames, actions, and times; if you couldn’t find that information, you would have more work to do in step 4.

    Another goal might be regaining control of the thermostat once the survivor realizes the abuser is remotely changing its settings. Your test would involve attempting to figure out how to do this: are there instructions that explain how to remove another user and change the password, and are they easy to find? This might again reveal that more work is needed to make it clear to the user how they can regain control of the device or account.

    Stress testing

    To make your product more inclusive and compassionate, consider adding stress testing. This concept comes from Design for Real Life by Eric Meyer and Sara Wachter-Boettcher. The authors pointed out that personas typically center people who are having a good day—but real users are often anxious, stressed out, having a bad day, or even experiencing tragedy. These are called “stress cases,” and testing your products for users in stress-case situations can help you identify places where your design lacks compassion. Design for Real Life has more details about what it looks like to incorporate stress cases into your design as well as many other great tactics for compassionate design.

  • A Content Model Is Not a Design System


    Do you remember when having a great website was enough? Now, people are getting answers from Siri, Google search snippets, and mobile apps, not just our websites. Forward-thinking organizations have adopted an omnichannel content strategy, whose mission is to reach audiences across multiple digital channels and platforms.

    But how do you set up a content management system (CMS) to reach your audience now and in the future? I learned the hard way that creating a content model—a definition of content types, attributes, and relationships that let people and systems understand content—with my more familiar design-system thinking would capsize my customer’s omnichannel content strategy. You can avoid that outcome by creating content models that are semantic and that also connect related content. 

    I recently had the opportunity to lead the CMS implementation for a Fortune 500 company. The client was excited by the benefits of an omnichannel content strategy, including content reuse, multichannel marketing, and robot delivery—designing content to be intelligible to bots, Google knowledge panels, snippets, and voice user interfaces. 

    A content model is a critical foundation for an omnichannel content strategy, and for our content to be understood by multiple systems, the model needed semantic types—types named according to their meaning instead of their presentation. Our goal was to let authors create content and reuse it wherever it was relevant. But as the project proceeded, I realized that supporting content reuse at the scale that my customer needed required the whole team to recognize a new pattern.

    Despite our best intentions, we kept drawing from what we were more familiar with: design systems. Unlike web-focused content strategies, an omnichannel content strategy can’t rely on WYSIWYG tools for design and layout. Our tendency to approach the content model with our familiar design-system thinking constantly led us to veer away from one of the primary purposes of a content model: delivering content to audiences on multiple marketing channels.

    Two essential principles for an effective content model

    We needed to help our designers, developers, and stakeholders understand that we were doing something very different from their prior web projects, where it was natural for everyone to think about content as visual building blocks fitting into layouts. The previous approach was not only more familiar but also more intuitive—at least at first—because it made the designs feel more tangible. We discovered two principles that helped the team understand how a content model differs from the design systems that we were used to:

    1. Content models must define semantics instead of layout.
    2. Content models should connect content that belongs together.

    Semantic content models

    A semantic content model uses type and attribute names that reflect the meaning of the content, not how it will be displayed. For example, in a nonsemantic model, teams might create types like teasers, media blocks, and cards. Although these types might make it easy to lay out content, they don’t help delivery channels understand the content’s meaning, and it’s that understanding that opens the door to presenting the content in each marketing channel. In contrast, a semantic content model uses type names like product, service, and testimonial so that each delivery channel can understand the content and use it as it sees fit.
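    To make the contrast concrete, here’s a minimal sketch in Python. The type and attribute names are illustrative, not taken from any particular CMS:

    ```python
    from dataclasses import dataclass

    # Nonsemantic: named for how it looks. A voice interface or bot
    # can't tell what a "card" actually contains.
    @dataclass
    class Card:
        image_url: str
        heading: str
        body: str

    # Semantic: named for what it means. Any delivery channel can
    # decide for itself how to present a testimonial.
    @dataclass
    class Testimonial:
        quote: str
        author_name: str
        author_title: str

    praise = Testimonial(
        quote="This product cut our onboarding time in half.",
        author_name="Jane Doe",
        author_title="Head of Operations",
    )
    ```

    The two types can hold the exact same words; only the semantic one tells a delivery channel what those words mean.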

    When you’re creating a semantic content model, a great place to start is to look over the types and properties defined by Schema.org, a community-driven resource for type definitions that are intelligible to platforms like Google search.

    A semantic content model has several benefits:

    • Even if your team doesn’t care about omnichannel content, a semantic content model decouples content from its presentation so that teams can evolve the website’s design without needing to refactor its content. In this way, content can withstand disruptive website redesigns. 
    • A semantic content model also provides a competitive edge. By adding structured data based on Schema.org’s types and properties, a website can provide hints to help Google understand the content, display it in search snippets or knowledge panels, and use it to answer voice-interface user questions. Potential visitors could discover your content without ever setting foot in your website.
    • Beyond those practical benefits, you’ll also need a semantic content model if you want to deliver omnichannel content. To use the same content in multiple marketing channels, delivery channels need to be able to understand it. For example, if your content model were to provide a list of questions and answers, it could easily be rendered on a frequently asked questions (FAQ) page, but it could also be used in a voice interface or by a bot that answers common questions.

    For example, using a semantic content model for articles, events, people, and locations lets A List Apart provide cleanly structured data for search engines so that users can read the content on the website, in Google knowledge panels, and even with hypothetical voice interfaces in the future.
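    As a sketch of what that structured data might look like, here’s some Python that maps a semantic article type onto Schema.org’s Article vocabulary as JSON-LD. The article values are placeholders, and how your CMS emits the script tag will vary:

    ```python
    import json

    # Placeholder values standing in for a semantic "article" content
    # type fetched from your CMS.
    article = {
        "headline": "Example Headline",
        "author_name": "Jane Author",
        "date_published": "2024-01-15",
    }

    # Map the semantic type onto Schema.org's Article vocabulary.
    json_ld = {
        "@context": "https://schema.org",
        "@type": "Article",
        "headline": article["headline"],
        "author": {"@type": "Person", "name": article["author_name"]},
        "datePublished": article["date_published"],
    }

    # A page would embed this inside <script type="application/ld+json">.
    print(json.dumps(json_ld, indent=2))
    ```

    Because the content model is semantic, this mapping is mechanical; a layout-based model of teasers and cards would give you nothing meaningful to map.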

    Content models that connect

    After struggling to describe what makes a good content model, I’ve come to realize that the best models are those that are semantic and that also connect related content components (such as a FAQ item’s question and answer pair), instead of slicing up related content across disparate content components. A good content model connects content that should remain together so that multiple delivery channels can use it without needing to first put those pieces back together.

    Think about writing an article or essay. An article’s meaning and usefulness depend upon its parts being kept together. Would one of the headings or paragraphs be meaningful on its own without the context of the full article? On our project, our familiar design-system thinking often led us to create content models that sliced content into disparate chunks to fit the web-centric layout. This had an impact similar to separating an article from its headline. Because we were slicing content into standalone pieces based on layout, content that belonged together became difficult to manage and nearly impossible for multiple delivery channels to understand.

    To illustrate, let’s look at how connecting related content applies in a real-world scenario. The design team for our customer presented a complex layout for a software product page that included multiple tabs and sections. Our instincts were to follow suit with the content model. Shouldn’t we make it as easy and as flexible as possible to add any number of tabs in the future?

    Because our design-system instincts were so familiar, it felt like we needed a content type called “tab section” so that multiple tab sections could be added to a page. Each tab section would display various types of content. One tab might provide the software’s overview or its specifications. Another tab might provide a list of resources.

    Our inclination to break the content model into “tab section” pieces would have led to an unnecessarily complex model and a cumbersome editing experience, and it would have created content that other delivery channels couldn’t understand. For example, how would another system tell which “tab section” held a product’s specifications and which held its resource list? Would it have to resort to counting tab sections and content blocks? That would have prevented the tabs from ever being reordered, and it would have required adding logic to every other delivery channel just to interpret the design system’s layout. Furthermore, if the customer later decided to stop displaying this content in tabs, migrating to a new content model to match the redesign would have been tedious.

    We had a breakthrough when we discovered that our customer had a specific purpose in mind for each tab: it would reveal specific information such as the software product’s overview, specifications, related resources, and pricing. Once implementation began, our focus on the visual and familiar had obscured the intent of the designs. With a little digging, it didn’t take long to realize that the concept of tabs wasn’t relevant to the content model. What mattered was the meaning of the content that the customer planned to display in the tabs.

    In fact, the customer could have decided to display this content in a different way—without tabs—somewhere else. This realization prompted us to define content types for the software product based on the meaningful attributes that the customer had wanted to render on the web. There were obvious semantic attributes like name and description as well as rich attributes like screenshots, software requirements, and feature lists. The software’s product information stayed together because it wasn’t sliced across separate components like “tab sections” that were derived from the content’s presentation. Any delivery channel—including future ones—could understand and present this content.
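    Here’s a rough sketch, again in Python with illustrative attribute names (not the actual project’s schema), of the kind of semantic type this approach produces instead of “tab sections”:

    ```python
    from dataclasses import dataclass, field

    # A semantic content type named for what the content means, not for
    # the tabbed layout it happens to get on the web.
    @dataclass
    class SoftwareProduct:
        name: str
        description: str
        screenshots: list = field(default_factory=list)
        software_requirements: list = field(default_factory=list)
        features: list = field(default_factory=list)

    product = SoftwareProduct(
        name="ExampleSoft",
        description="A placeholder software product.",
        features=["Offline mode", "Team sharing"],
    )
    ```

    The web channel can still render requirements and screenshots as tabs, while a bot or voice interface can use the same attributes with no tab logic at all.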

    Conclusion

    In this omnichannel marketing project, we discovered that the best way to keep our content model on track was to ensure that it was semantic (with type and attribute names that reflected the meaning of the content) and that it kept content together that belonged together (instead of fragmenting it). These two concepts curtailed our temptation to shape the content model based on the design. So if you’re working on a content model to support an omnichannel content strategy—or even if you just want to make sure that Google and other interfaces understand your content—remember:

    • A design system isn’t a content model. Team members may be tempted to conflate them and to make your content model mirror your design system, so you should protect the semantic value and contextual structure of the content strategy during the entire implementation process. This will let every delivery channel consume the content without needing a magic decoder ring.
    • If your team is struggling to make this transition, you can still reap some of the benefits by using Schema.org–based structured data in your website. Even if additional delivery channels aren’t on the immediate horizon, the benefit to search engine optimization is a compelling reason on its own.
    • Additionally, remind the team that decoupling the content model from the design will let them update the designs more easily because they won’t be held back by the cost of content migrations. They’ll be able to create new designs without the obstacle of compatibility between the design and the content, and they’ll be ready for the next big thing.

    By rigorously advocating for these principles, you’ll help your team treat content the way that it deserves—as the most critical asset in your user experience and the best way to connect with your audience.

  • How to Sell UX Research with Two Simple Questions


    Do you find yourself designing screens with only a vague idea of how the things on the screen relate to the things elsewhere in the system? Do you leave stakeholder meetings with unclear directives that often seem to contradict previous conversations? You know a better understanding of user needs would help the team get clear on what you are actually trying to accomplish, but time and budget for research is tight. When it comes to asking for more direct contact with your users, you might feel like poor Oliver Twist, timidly asking, “Please, sir, I want some more.” 

    Here’s the trick. You need to get stakeholders themselves to identify high-risk assumptions and hidden complexity, so that they become just as motivated as you to get answers from users. Basically, you need to make them think it’s their idea. 

    In this article, I’ll show you how to collaboratively expose misalignment and gaps in the team’s shared understanding by bringing the team together around two simple questions:

    1. What are the objects?
    2. What are the relationships between those objects?

    A gauntlet between research and screen design

    These two questions align to the first two steps of the ORCA process, which might become your new best friend when it comes to reducing guesswork. Wait, what’s ORCA?! Glad you asked.

    ORCA stands for Objects, Relationships, CTAs, and Attributes, and it outlines a process for creating solid object-oriented user experiences. Object-oriented UX is my design philosophy. ORCA is an iterative methodology for synthesizing user research into an elegant structural foundation to support screen and interaction design. OOUX and ORCA have made my work as a UX designer more collaborative, effective, efficient, fun, strategic, and meaningful.

    The ORCA process has four iterative rounds and a whopping fifteen steps. In each round we get more clarity on our Os, Rs, Cs, and As.

    I sometimes say that ORCA is a “garbage in, garbage out” process. To ensure that the testable prototype produced in the final round actually tests well, the process needs to be fed by good research. But if you don’t have a ton of research, the beginning of the ORCA process serves another purpose: it helps you sell the need for research.

    In other words, the ORCA process serves as a gauntlet between research and design. With good research, you can gracefully ride the killer whale from research into design. But without good research, the process effectively spits you back into research with a cache of specific open questions.

    Getting in the same curiosity-boat

    What gets us into trouble is not what we don’t know. It’s what we know for sure that just ain’t so.

    Mark Twain

    The first two steps of the ORCA process—Object Discovery and Relationship Discovery—shine a spotlight on the dark, dusty corners of your team’s misalignments and any inherent complexity that’s been swept under the rug. It begins to expose what this classic comic so beautifully illustrates:

    This is one reason why so many UX designers are frustrated in their job and why many projects fail. And this is also why we often can’t sell research: every decision-maker is confident in their own mental picture. 

    Once we expose hidden fuzzy patches in each picture and the differences between them all, the case for user research makes itself.

    But how we do this is important. However much we might want to, we can’t just tell everyone, “YOU ARE WRONG!” Instead, we need to facilitate and guide our team members to self-identify holes in their picture. When stakeholders take ownership of assumptions and gaps in understanding, BAM! Suddenly, UX research is not such a hard sell, and everyone is aboard the same curiosity-boat.

    Say your users are doctors. And you have no idea how doctors use the system you are tasked with redesigning.

    You might try to sell research by honestly saying: “We need to understand doctors better! What are their pain points? How do they use the current app?” But here’s the problem with that. Those questions are vague, and the answers to them don’t feel acutely actionable.

    Instead, you want your stakeholders themselves to ask super-specific questions. This is more like the kind of conversation you need to facilitate. Let’s listen in:

    “Wait a sec, how often do doctors share patients? Does a patient in this system have primary and secondary doctors?”

    “Can a patient even have more than one primary doctor?”

    “Is it a ‘primary doctor’ or just a ‘primary caregiver’… Can’t that role be a nurse practitioner?”

    “No, caregivers are something else… That’s the patient’s family contacts, right?”

    “So are caregivers in scope for this redesign?”

    “Yeah, because if a caregiver is present at an appointment, the doctor needs to note that. Like, tag the caregiver on the note… Or on the appointment?”

    Now we are getting somewhere. Do you see how powerful it can be getting stakeholders to debate these questions themselves? The diabolical goal here is to shake their confidence—gently and diplomatically.

    When these kinds of questions bubble up collaboratively and come directly from the mouths of your stakeholders and decision-makers, suddenly, designing screens without knowing the answers to these questions seems incredibly risky, even silly.

    If we create software without understanding the real-world information environment of our users, we will likely create software that does not align to the real-world information environment of our users. And this will, hands down, result in a more confusing, more complex, and less intuitive software product.

    The two questions

    But how do we get to these kinds of meaty questions diplomatically, efficiently, collaboratively, and reliably?

    We can do this by starting with those two big questions that align to the first two steps of the ORCA process:

    1. What are the objects?
    2. What are the relationships between those objects?

    In practice, getting to these answers is easier said than done. I’m going to show you how these two simple questions can provide the outline for an Object Definition Workshop. During this workshop, these “seed” questions will blossom into dozens of specific questions and shine a spotlight on the need for more user research.

    Prep work: Noun foraging

    In the next section, I’ll show you how to run an Object Definition Workshop with your stakeholders (and entire cross-functional team, hopefully). But first, you need to do some prep work.

    Basically, look for nouns that are particular to the business or industry of your project, and do it across at least a few sources. I call this noun foraging.

    Here are just a few great noun foraging sources:

    • the product’s marketing site
    • the product’s competitors’ marketing sites (competitive analysis, anyone?)
    • the existing product (look at labels!)
    • user interview transcripts
    • notes from stakeholder interviews or vision docs from stakeholders

    Put your detective hat on, my dear Watson. Get resourceful and leverage what you have. If all you have is a marketing website, some screenshots of the existing legacy system, and access to customer service chat logs, then use those.

    As you peruse these sources, watch for the nouns that are used over and over again, and start listing them (preferably on blue sticky notes if you’ll be creating an object map later!).

    You’ll want to focus on nouns that might represent objects in your system. If you are having trouble determining if a noun might be object-worthy, remember the acronym SIP and test for:

    1. Structure
    2. Instances
    3. Purpose

    Think of a library app, for example. Is “book” an object?

    Structure: can you think of a few attributes for this potential object? Title, author, publish date… Yep, it has structure. Check!

    Instances: what are some examples of this potential “book” object? Can you name a few? The Alchemist, Ready Player One, Everybody Poops… OK, check!

    Purpose: why is this object important to the users and business? Well, “book” is what our library client is providing to people and books are why people come to the library… Check, check, check!
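    The SIP test is human judgment, not code, but as a toy sketch it amounts to three checks on each candidate noun. The data below just restates the “book” walkthrough:

    ```python
    # Toy sketch of the SIP test: a noun is object-worthy if it has
    # Structure (attributes), Instances (examples), and Purpose.
    candidate = {
        "noun": "book",
        "structure": ["title", "author", "publish date"],
        "instances": ["The Alchemist", "Ready Player One", "Everybody Poops"],
        "purpose": "what the library provides and why people visit",
    }

    def has_sip(noun):
        # All three checks must come back non-empty.
        return all([noun["structure"], noun["instances"], noun["purpose"]])

    print(has_sip(candidate))  # "book" passes all three checks
    ```

    A noun like “dropdown” would fail the Purpose check: it has structure and instances, but no one comes to your product for it.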

    As you are noun foraging, focus on capturing the nouns that have SIP. Avoid capturing components like dropdowns, checkboxes, and calendar pickers—your UX system is not your design system! Components are just the packaging for objects—they are a means to an end. No one is coming to your digital place to play with your dropdown! They are coming for the VALUABLE THINGS and what they can do with them. Those things, or objects, are what we are trying to identify.

    Let’s say we work for a startup disrupting the email experience. This is how I’d start my noun foraging.

    First I’d look at my own email client, which happens to be Gmail. I’d then look at Outlook and the new HEY email. I’d look at Yahoo, Hotmail…I’d even look at Slack and Basecamp and other so-called “email replacers.” I’d read some articles, reviews, and forum threads where people are complaining about email. While doing all this, I would look for and write down the nouns.

    (Before moving on, feel free to go noun foraging for this hypothetical product, too, and then scroll down to see how much our lists match up. Just don’t get lost in your own emails! Come back to me!)

    Drumroll, please…

    Here are a few nouns I came up with during my noun foraging:

    • email message
    • thread
    • contact
    • client
    • rule/automation
    • email address that is not a contact?
    • contact groups
    • attachment
    • Google doc file / other integrated file
    • newsletter? (HEY treats this differently)
    • saved responses and templates

    Scan your list of nouns and pick out words that you are completely clueless about. In our email example, it might be client or automation. Do as much homework as you can before your session with stakeholders: google what’s googleable. But other terms might be so specific to the product or domain that you need to have a conversation about them.

    Aside: here are some real nouns foraged during my own past project work that I needed my stakeholders to help me understand:

    • Record Locator
    • Incentive Home
    • Augmented Line Item
    • Curriculum-Based Measurement Probe

    This is really all you need to prepare for the workshop session: a list of nouns that represent potential objects and a short list of nouns that need to be defined further.

    Facilitate an Object Definition Workshop

    You could actually start your workshop with noun foraging—this activity can be done collaboratively. If you have five people in the room, pick five sources, assign one to every person, and give everyone ten minutes to find the objects within their source. When the time’s up, come together and find the overlap. Affinity mapping is your friend here!

If your team is short on time and might be reluctant to do this kind of grunt work (which is usually the case), do your own noun foraging beforehand, but be prepared to show your work. I love presenting screenshots of documents and screens with all the nouns already highlighted. Bring the artifacts of your process, and start the workshop with a five-minute overview of your noun foraging journey.

    HOT TIP: before jumping into the workshop, frame the conversation as a requirements-gathering session to help you better understand the scope and details of the system. You don’t need to let them know that you’re looking for gaps in the team’s understanding so that you can prove the need for more user research—that will be our little secret. Instead, go into the session optimistically, as if your knowledgeable stakeholders and PMs and biz folks already have all the answers. 

    Then, let the question whack-a-mole commence.

    1. What is this thing?

    Want to have some real fun? At the beginning of your session, ask stakeholders to privately write definitions for the handful of obscure nouns you might be uncertain about. Then, have everyone show their cards at the same time and see if you get different definitions (you will). This is gold for exposing misalignment and starting great conversations.

    As your discussion unfolds, capture any agreed-upon definitions. And when uncertainty emerges, quietly (but visibly) start an “open questions” parking lot. 😉

    After definitions solidify, here’s a great follow-up:

    2. Do our users know what these things are? What do users call this thing?

    Stakeholder 1: They probably call email clients “apps.” But I’m not sure.

    Stakeholder 2: Automations are often called “workflows,” I think. Or, maybe users think workflows are something different.

    If a more user-friendly term emerges, ask the group if they can agree to use only that term moving forward. This way, the team can better align to the users’ language and mindset.

    OK, moving on. 

    If you have two or more objects that seem to overlap in purpose, ask one of these questions:

    3. Are these the same thing? Or are these different? If they are not the same, how are they different?

    You: Is a saved response the same as a template?

    Stakeholder 1: Yes! Definitely.

    Stakeholder 2: I don’t think so… A saved response is text with links and variables, but a template is more about the look and feel, like default fonts, colors, and placeholder images. 

    Continue to build out your growing glossary of objects. And continue to capture areas of uncertainty in your “open questions” parking lot.

    If you successfully determine that two similar things are, in fact, different, here’s your next follow-up question:

    4. What’s the relationship between these objects?

    You: Are saved responses and templates related in any way?

Stakeholder 3: Yeah, a template can be applied to a saved response.

    You, always with the follow-ups: When is the template applied to a saved response? Does that happen when the user is constructing the saved response? Or when they apply the saved response to an email? How does that actually work?

    Listen. Capture uncertainty. Once the list of “open questions” grows to a critical mass, pause to start assigning questions to groups or individuals. Some questions might be for the dev team (hopefully at least one developer is in the room with you). One question might be specifically for someone who couldn’t make it to the workshop. And many questions will need to be labeled “user.” 

    Do you see how we are building up to our UXR sales pitch?

    5. Is this object in scope?

Your next question narrows the team’s focus toward what’s most important to your users. You can simply ask, “Are saved responses in scope for our first release?” But I’ve got a better, more devious strategy.

    By now, you should have a list of clearly defined objects. Ask participants to sort these objects from most to least important, either in small breakout groups or individually. Then, like you did with the definitions, have everyone reveal their sort order at once. Surprisingly—or not so surprisingly—it’s not unusual for the VP to rank something like “saved responses” as #2 while everyone else puts it at the bottom of the list. Try not to look too smug as you inevitably expose more misalignment.

    I did this for a startup a few years ago. We posted the three groups’ wildly different sort orders on the whiteboard.

    The CEO stood back, looked at it, and said, “This is why we haven’t been able to move forward in two years.”

    Admittedly, it’s tragic to hear that, but as a professional, it feels pretty awesome to be the one who facilitated a watershed realization.

    Once you have a good idea of in-scope, clearly defined things, this is when you move on to doing more relationship mapping.

    6. Create a visual representation of the objects’ relationships

    We’ve already done a bit of this while trying to determine if two things are different, but this time, ask the team about every potential relationship. For each object, ask how it relates to all the other objects. In what ways are the objects connected? To visualize all the connections, pull out your trusty boxes-and-arrows technique. Here, we are connecting our objects with verbs. I like to keep my verbs to simple “has a” and “has many” statements.
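As a concrete (and entirely hypothetical) sketch, the “has a”/“has many” relationships for our email example could even be captured as plain data once the workshop wraps up; none of these names come from a real product, they just mirror the verbs on our whiteboard:

```javascript
// Hypothetical object map for the email example.
// "hasA" and "hasMany" mirror the verbs we use on the whiteboard.
const objectMap = {
  emailMessage:  { hasA: ["thread"], hasMany: ["attachment"] },
  thread:        { hasMany: ["emailMessage"] },
  contact:       { hasMany: ["emailMessage"] },
  template:      {},
  savedResponse: { hasA: ["template"], hasMany: ["attachment"] },
};

// List every object a given object is connected to.
function relatedObjects(name) {
  const { hasA = [], hasMany = [] } = objectMap[name];
  return [...hasA, ...hasMany];
}

console.log(relatedObjects("savedResponse")); // ["template", "attachment"]
```

Even a tiny artifact like this can travel with your glossary and keep the team honest about which relationships have actually been agreed on.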

    This system modeling activity brings up all sorts of new questions:

    • Can a saved response have attachments?
    • Can a saved response use a template? If so, if an email uses a saved response with a template, can the user override that template?
    • Do users want to see all the emails they sent that included a particular attachment? For example, “show me all the emails I sent with ProfessionalImage.jpg attached. I’ve changed my professional photo and I want to alert everyone to update it.” 

    Solid answers might emerge directly from the workshop participants. Great! Capture that new shared understanding. But when uncertainty surfaces, continue to add questions to your growing parking lot.

    Light the fuse

    You’ve positioned the explosives all along the floodgates. Now you simply have to light the fuse and BOOM. Watch the buy-in for user research flooooow.

    Before your workshop wraps up, have the group reflect on the list of open questions. Make plans for getting answers internally, then focus on the questions that need to be brought before users.

Here’s your final step. Take the questions you’ve compiled for user research and discuss the level of risk associated with NOT answering them. Ask, “If we design without an answer to this question, and the answer we make up turns out to be wrong, how bad might that be?”

    With this methodology, we are cornering our decision-makers into advocating for user research as they themselves label questions as high-risk. Sorry, not sorry. 

    Now is your moment of truth. With everyone in the room, ask for a reasonable budget of time and money to conduct 6–8 user interviews focused specifically on these questions. 

    HOT TIP: if you are new to UX research, please note that you’ll likely need to rephrase the questions that came up during the workshop before you present them to users. Make sure your questions are open-ended and don’t lead the user into any default answers.

    Final words: Hold the screen design!

    Seriously, if at all possible, do not ever design screens again without first answering these fundamental questions: what are the objects and how do they relate?

    I promise you this: if you can secure a shared understanding between the business, design, and development teams before you start designing screens, you will have less heartache and save more time and money, and (it almost feels like a bonus at this point!) users will be more receptive to what you put out into the world. 

    I sincerely hope this helps you win time and budget to go talk to your users and gain clarity on what you are designing before you start building screens. If you find success using noun foraging and the Object Definition Workshop, there’s more where that came from in the rest of the ORCA process, which will help prevent even more late-in-the-game scope tugs-of-war and strategy pivots. 

    All the best of luck! Now go sell research!

  • Breaking Out of the Box

    Breaking Out of the Box

    CSS is about styling boxes. In fact, the whole web is made of boxes, from the browser viewport to elements on a page. But every once in a while a new feature comes along that makes us rethink our design approach.

Round displays, for example, make it fun to play with circular clip areas. Mobile screen notches and virtual keyboards challenge us to organize content so it stays clear of them. And dual-screen or foldable devices make us rethink how to best use the available space across a number of different device postures.

These recent evolutions of the web platform have made designing products both more challenging and more interesting. They’re great opportunities for us to break out of our rectangular boxes.

    I’d like to talk about a new feature similar to the above: the Window Controls Overlay for Progressive Web Apps (PWAs).

    Progressive Web Apps are blurring the lines between apps and websites. They combine the best of both worlds. On one hand, they’re stable, linkable, searchable, and responsive just like websites. On the other hand, they provide additional powerful capabilities, work offline, and read files just like native apps.

    As a design surface, PWAs are really interesting because they challenge us to think about what mixing web and device-native user interfaces can be. On desktop devices in particular, we have more than 40 years of history telling us what applications should look like, and it can be hard to break out of this mental model.

    At the end of the day though, PWAs on desktop are constrained to the window they appear in: a rectangle with a title bar at the top.

    Here’s what a typical desktop PWA app looks like:

    Sure, as the author of a PWA, you get to choose the color of the title bar (using the Web Application Manifest theme_color property), but that’s about it.

    What if we could think outside this box, and reclaim the real estate of the app’s entire window? Doing so would give us a chance to make our apps more beautiful and feel more integrated in the operating system.

    This is exactly what the Window Controls Overlay offers. This new PWA functionality makes it possible to take advantage of the full surface area of the app, including where the title bar normally appears.

    About the title bar and window controls

    Let’s start with an explanation of what the title bar and window controls are.

    The title bar is the area displayed at the top of an app window, which usually contains the app’s name. Window controls are the affordances, or buttons, that make it possible to minimize, maximize, or close the app’s window, and are also displayed at the top.

    Window Controls Overlay removes the physical constraint of the title bar and window controls areas. It frees up the full height of the app window, enabling the title bar and window control buttons to be overlaid on top of the application’s web content. 

    If you are reading this article on a desktop computer, take a quick look at other apps. Chances are they’re already doing something similar to this. In fact, the very web browser you are using to read this uses the top area to display tabs.

    Spotify displays album artwork all the way to the top edge of the application window.

    Microsoft Word uses the available title bar space to display the auto-save and search functionalities, and more.

    The whole point of this feature is to allow you to make use of this space with your own content while providing a way to account for the window control buttons. And it enables you to offer this modified experience on a range of platforms while not adversely affecting the experience on browsers or devices that don’t support Window Controls Overlay. After all, PWAs are all about progressive enhancement, so this feature is a chance to enhance your app to use this extra space when it’s available.

    Let’s use the feature

    For the rest of this article, we’ll be working on a demo app to learn more about using the feature.

    The demo app is called 1DIV. It’s a simple CSS playground where users can create designs using CSS and a single HTML element.

    The app has two pages. The first lists the existing CSS designs you’ve created:

    The second page enables you to create and edit CSS designs:

    Since I’ve added a simple web manifest and service worker, we can install the app as a PWA on desktop. Here is what it looks like on macOS:

    And on Windows:

    Our app is looking good, but the white title bar in the first page is wasted space. In the second page, it would be really nice if the design area went all the way to the top of the app window.

    Let’s use the Window Controls Overlay feature to improve this.

    Enabling Window Controls Overlay

    The feature is still experimental at the moment. To try it, you need to enable it in one of the supported browsers.

    As of now, it has been implemented in Chromium, as a collaboration between Microsoft and Google. We can therefore use it in Chrome or Edge by going to the internal about://flags page, and enabling the Desktop PWA Window Controls Overlay flag.

    Using Window Controls Overlay

    To use the feature, we need to add the following display_override member to our web app’s manifest file:

    {
      "name": "1DIV",
      "description": "1DIV is a mini CSS playground",
      "lang": "en-US",
      "start_url": "/",
      "theme_color": "#ffffff",
      "background_color": "#ffffff",
      "display_override": [
        "window-controls-overlay"
      ],
      "icons": [
        ...
      ]
    }
    

    On the surface, the feature is really simple to use. This manifest change is the only thing we need to make the title bar disappear and turn the window controls into an overlay.

    However, to provide a great experience for all users regardless of what device or browser they use, and to make the most of the title bar area in our design, we’ll need a bit of CSS and JavaScript code.

    Here is what the app looks like now:

    The title bar is gone, which is what we wanted, but our logo, search field, and NEW button are partially covered by the window controls because now our layout starts at the top of the window.

    It’s similar on Windows, with the difference that the close, maximize, and minimize buttons appear on the right side, grouped together with the PWA control buttons:

    Using CSS to keep clear of the window controls

    Along with the feature, new CSS environment variables have been introduced:

    • titlebar-area-x
    • titlebar-area-y
    • titlebar-area-width
    • titlebar-area-height

    You use these variables with the CSS env() function to position your content where the title bar would have been while ensuring it won’t overlap with the window controls. In our case, we’ll use two of the variables to position our header, which contains the logo, search bar, and NEW button. 

    header {
      position: absolute;
      left: env(titlebar-area-x, 0);
      width: env(titlebar-area-width, 100%);
      height: var(--toolbar-height);
    }
    

    The titlebar-area-x variable gives us the distance from the left of the viewport to where the title bar would appear, and titlebar-area-width is its width. (Remember, this is not equivalent to the width of the entire viewport, just the title bar portion, which as noted earlier, doesn’t include the window controls.)

By doing this, we make sure our content remains fully visible. We’re also defining fallback values (the second parameter in the env() function) for when the variables are not defined (such as on non-supporting browsers, or when the Window Controls Overlay feature is disabled).

    Now our header adapts to its surroundings, and it doesn’t feel like the window control buttons have been added as an afterthought. The app looks a lot more like a native app.

    Changing the window controls background color so it blends in

    Now let’s take a closer look at our second page: the CSS playground editor.

    Not great. Our CSS demo area does go all the way to the top, which is what we wanted, but the way the window controls appear as white rectangles on top of it is quite jarring.

    We can fix this by changing the app’s theme color. There are a couple of ways to define it:

    • PWAs can define a theme color in the web app manifest file using the theme_color manifest member. This color is then used by the OS in different ways. On desktop platforms, it is used to provide a background color to the title bar and window controls.
    • Websites can use the theme-color meta tag as well. It’s used by browsers to customize the color of the UI around the web page. For PWAs, this color can override the manifest theme_color.
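For reference, the meta tag version is a one-liner in the page’s head (the color value here is just an example):

```html
<!-- Overrides the manifest theme_color when present -->
<meta name="theme-color" content="#ffffff">
```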

    In our case, we can set the manifest theme_color to white to provide the right default color for our app. The OS will read this color value when the app is installed and use it to make the window controls background color white. This color works great for our main page with the list of demos.

    The theme-color meta tag can be changed at runtime, using JavaScript. So we can do that to override the white with the right demo background color when one is opened.

    Here is the function we’ll use:

    function themeWindow(bgColor) {
      document.querySelector("meta[name=theme-color]").setAttribute('content', bgColor);
    }

    With this in place, we can imagine how using color and CSS transitions can produce a smooth change from the list page to the demo page, and enable the window control buttons to blend in with the rest of the app’s interface.
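As a sketch of the CSS side (the selector here is hypothetical, not from the actual 1DIV code), a transition on the app’s own background could look like this. Note that the OS-drawn controls strip follows the theme-color value directly; the transition below applies to the app’s own content:

```css
/* Hypothetical: animate the app background so the color
   change from list page to demo page feels smooth. */
.app {
  transition: background-color 0.3s ease;
}
```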

    Dragging the window

    Now, getting rid of the title bar entirely does have an important accessibility consequence: it’s much more difficult to move the application window around.

    The title bar provides a sizable area for users to click and drag, but by using the Window Controls Overlay feature, this area becomes limited to where the control buttons are, and users have to very precisely aim between these buttons to move the window.

    Fortunately, this can be fixed using CSS with the app-region property. This property is, for now, only supported in Chromium-based browsers and needs the -webkit- vendor prefix. 

    To make any element of the app become a dragging target for the window, we can use the following: 

    -webkit-app-region: drag;

    It is also possible to explicitly make an element non-draggable: 

    -webkit-app-region: no-drag; 

    These options can be useful for us. We can make the entire header a dragging target, but make the search field and NEW button within it non-draggable so they can still be used as normal.

    However, because the editor page doesn’t display the header, users wouldn’t be able to drag the window while editing code. So let’s use a different approach. We’ll create another element before our header, also absolutely positioned, and dedicated to dragging the window.

    ...
    .drag {
      position: absolute;
      top: 0;
      width: 100%;
      height: env(titlebar-area-height, 0);
      -webkit-app-region: drag;
    }

    With the above code, we’re making the draggable area span the entire viewport width, and using the titlebar-area-height variable to make it as tall as what the title bar would have been. This way, our draggable area is aligned with the window control buttons as shown below.

    And, now, to make sure our search field and button remain usable:

    header .search,
    header .new {
      -webkit-app-region: no-drag;
    }

    With the above code, users can click and drag where the title bar used to be. It is an area that users expect to be able to use to move windows on desktop, and we’re not breaking this expectation, which is good.

    Adapting to window resize

    It may be useful for an app to know both whether the window controls overlay is visible and when its size changes. In our case, if the user made the window very narrow, there wouldn’t be enough space for the search field, logo, and button to fit, so we’d want to push them down a bit.

    The Window Controls Overlay feature comes with a JavaScript API we can use to do this: navigator.windowControlsOverlay.

    The API provides three interesting things:

    • navigator.windowControlsOverlay.visible lets us know whether the overlay is visible.
    • navigator.windowControlsOverlay.getBoundingClientRect() lets us know the position and size of the title bar area.
    • navigator.windowControlsOverlay.ongeometrychange lets us know when the size or visibility changes.

    Let’s use this to be aware of the size of the title bar area and move the header down if it’s too narrow.

    if (navigator.windowControlsOverlay) {
      navigator.windowControlsOverlay.addEventListener('geometrychange', () => {
        const { width } = navigator.windowControlsOverlay.getBoundingClientRect();
        document.body.classList.toggle('narrow', width < 250);
      });
    }

    In the example above, we set the narrow class on the body of the app if the title bar area is narrower than 250px. We could do something similar with a media query, but using the windowControlsOverlay API has two advantages for our use case:

    • It’s only fired when the feature is supported and used; we don’t want to adapt the design otherwise.
    • We get the size of the title bar area across operating systems, which is great because the size of the window controls is different on Mac and Windows. Using a media query wouldn’t make it possible for us to know exactly how much space remains.

    And here’s the CSS that the narrow class triggers:

    .narrow header {
      top: env(titlebar-area-height, 0);
      left: 0;
      width: 100%;
    }

    Using the above CSS code, we can move our header down to stay clear of the window control buttons when the window is too narrow, and move the thumbnails down accordingly.

    Thirty pixels of exciting design opportunities
    Using the Window Controls Overlay feature, we were able to take our simple demo app and turn it into something that feels so much more integrated on desktop devices. Something that reaches out of the usual window constraints and provides a custom experience for its users.

    In reality, this feature only gives us about 30 pixels of extra room and comes with challenges on how to deal with the window controls. And yet, this extra room and those challenges can be turned into exciting design opportunities.

    More devices of all shapes and forms get invented all the time, and the web keeps on evolving to adapt to them. New features get added to the web platform to allow us, web authors, to integrate more and more deeply with those devices. From watches or foldable devices to desktop computers, we need to evolve our design approach for the web. Building for the web now lets us think outside the rectangular box.

    So let’s embrace this. Let’s use the standard technologies already at our disposal, and experiment with new ideas to provide tailored experiences for all devices, all from a single codebase!


If you get a chance to try the Window Controls Overlay feature and have feedback about it, you can open issues on the spec’s repository. It’s still early in the development of this feature, and you can help make it even better. Or take a look at the feature’s existing documentation, or this demo app and its source code.

  • Designers, (Re)define Success First

    Designers, (Re)define Success First

    About two and a half years ago, I introduced the idea of daily ethical design. It was born out of my frustration with the many obstacles to achieving design that’s usable and equitable; protects people’s privacy, agency, and focus; benefits society; and restores nature. I argued that we need to overcome the inconveniences that prevent us from acting ethically and that we need to elevate design ethics to a more practical level by structurally integrating it into our daily work, processes, and tools.

    Unfortunately, we’re still very far from this ideal. 

    At the time, I didn’t know yet how to structurally integrate ethics. Yes, I had found some tools that had worked for me in previous projects, such as using checklists, assumption tracking, and “dark reality” sessions, but I didn’t manage to apply those in every project. I was still struggling for time and support, and at best I had only partially achieved a higher (moral) quality of design—which is far from my definition of structurally integrated.

    I decided to dig deeper for the root causes in business that prevent us from practicing daily ethical design. Now, after much research and experimentation, I believe that I’ve found the key that will let us structurally integrate ethics. And it’s surprisingly simple! But first we need to zoom out to get a better understanding of what we’re up against.

    Influence the system

    Sadly, we’re trapped in a capitalistic system that reinforces consumerism and inequality, and it’s obsessed with the fantasy of endless growth. Sea levels, temperatures, and our demand for energy continue to rise unchallenged, while the gap between rich and poor continues to widen. Shareholders expect ever-higher returns on their investments, and companies feel forced to set short-term objectives that reflect this. Over the last decades, those objectives have twisted our well-intended human-centered mindset into a powerful machine that promotes ever-higher levels of consumption. When we’re working for an organization that pursues “double-digit growth” or “aggressive sales targets” (which is 99 percent of us), that’s very hard to resist while remaining human friendly. Even with our best intentions, and even though we like to say that we create solutions for people, we’re a part of the problem.

    What can we do to change this?

    We can start by acting on the right level of the system. Donella H. Meadows, a system thinker, once listed ways to influence a system in order of effectiveness. When you apply these to design, you get:

    • At the lowest level of effectiveness, you can affect numbers such as usability scores or the number of design critiques. But none of that will change the direction of a company.
    • Similarly, affecting buffers (such as team budgets), stocks (such as the number of designers), flows (such as the number of new hires), and delays (such as the time that it takes to hear about the effect of design) won’t significantly affect a company.
    • Focusing instead on feedback loops such as management control, employee recognition, or design-system investments can help a company become better at achieving its objectives. But that doesn’t change the objectives themselves, which means that the organization will still work against your ethical-design ideals.
    • The next level, information flows, is what most ethical-design initiatives focus on now: the exchange of ethical methods, toolkits, articles, conferences, workshops, and so on. This is also where ethical design has remained mostly theoretical. We’ve been focusing on the wrong level of the system all this time.
    • Take rules, for example—they beat knowledge every time. There can be widely accepted rules, such as how finance works, or a scrum team’s definition of done. But ethical design can also be smothered by unofficial rules meant to maintain profits, often revealed through comments such as “the client didn’t ask for it” or “don’t make it too big.”
    • Changing the rules without holding official power is very hard. That’s why the next level is so influential: self-organization. Experimentation, bottom-up initiatives, passion projects, self-steering teams—all of these are examples of self-organization that improve the resilience and creativity of a company. It’s exactly this diversity of viewpoints that’s needed to structurally tackle big systemic issues like consumerism, wealth inequality, and climate change.
    • Yet even stronger than self-organization are objectives and metrics. Our companies want to make more money, which means that everything and everyone in the company does their best to… make the company more money. And once I realized that profit is nothing more than a measurement, I understood how crucial a very specific, defined metric can be toward pushing a company in a certain direction.

    The takeaway? If we truly want to incorporate ethics into our daily design practice, we must first change the measurable objectives of the company we work for, from the bottom up.

    Redefine success

    Traditionally, we consider a product or service successful if it’s desirable to humans, technologically feasible, and financially viable. You tend to see these represented as equals; if you type the three words in a search engine, you’ll find diagrams of three equally sized, evenly arranged circles.

    But in our hearts, we all know that the three dimensions aren’t equally weighted: it’s viability that ultimately controls whether a product will go live. So a more realistic representation might look like this:

    Desirability and feasibility are the means; viability is the goal. Companies—outside of nonprofits and charities—exist to make money.

A genuinely purpose-driven company would try to reverse this dynamic by recognizing finance for what it was intended to be: a means. Both feasibility and viability then become means to achieve what the company set out to achieve. It makes intuitive sense: to achieve most anything, you need resources, people, and money. (Fun fact: the Italian language makes no distinction between feasibility and viability; both are simply fattibilità.)

    But simply swapping viable for desirable isn’t enough to achieve an ethical outcome. Desirability is still linked to consumerism because the associated activities aim to identify what people want—whether it’s good for them or not. Desirability objectives, such as user satisfaction or conversion, don’t consider whether a product is healthy for people. They don’t prevent us from creating products that distract or manipulate people or stop us from contributing to society’s wealth inequality. They’re unsuitable for establishing a healthy balance with nature.

    There’s a fourth dimension of success that’s missing: our designs also need to be ethical in the effect that they have on the world.

    This is hardly a new idea. Many similar models exist, some calling the fourth dimension accountability, integrity, or responsibility. What I’ve never seen before, however, is the necessary step that comes after: to influence the system as designers and to make ethical design more practical, we must create objectives for ethical design that are achievable and inspirational. There’s no one way to do this because it highly depends on your culture, values, and industry. But I’ll give you the version that I developed with a group of colleagues at a design agency. Consider it a template to get started.

    Pursue well-being, equity, and sustainability

    We created objectives that address design’s effect on three levels: individual, societal, and global.

    An objective on the individual level tells us what success is beyond the typical focus of usability and satisfaction—instead considering matters such as how much time and attention is required from users. We pursued well-being:

    We create products and services that allow for people’s health and happiness. Our solutions are calm, transparent, nonaddictive, and nonmisleading. We respect our users’ time, attention, and privacy, and help them make healthy and respectful choices.

    An objective on the societal level forces us to consider our impact beyond just the user, widening our attention to the economy, communities, and other indirect stakeholders. We called this objective equity:

    We create products and services that have a positive social impact. We consider economic equality, racial justice, and the inclusivity and diversity of people as teams, users, and customer segments. We listen to local culture, communities, and those we affect.

    Finally, the objective on the global level aims to ensure that we remain in balance with the only home we have as humanity. Referring to it simply as sustainability, our definition was:

    We create products and services that reward sufficiency and reusability. Our solutions support the circular economy: we create value from waste, repurpose products, and prioritize sustainable choices. We deliver functionality instead of ownership, and we limit energy use.

    In short, ethical design (to us) meant achieving well-being for each user and an equitable value distribution within society through a design that can be sustained by our living planet. When we introduced these objectives at the company, design ethics and responsible design suddenly became tangible and achievable for many colleagues through practical, and even familiar, actions.

    Measure impact 

    But defining these objectives still isn’t enough. What truly caught the attention of senior management was the fact that we created a way to measure every design project’s well-being, equity, and sustainability.

    This overview lists example metrics that you can use as you pursue well-being, equity, and sustainability:

    There’s a lot of power in measurement. As the saying goes, what gets measured gets done. Donella Meadows once shared this example:

    “If the desired system state is national security, and that is defined as the amount of money spent on the military, the system will produce military spending. It may or may not produce national security.”

    This phenomenon explains why desirability is a poor indicator of success: it’s typically defined as the increase in customer satisfaction, session length, frequency of use, conversion rate, churn rate, download rate, and so on. But none of these metrics increase the health of people, communities, or ecosystems. What if instead we measured success through metrics for (digital) well-being, such as (reduced) screen time or software energy consumption?

    There’s another important message here. Even if we set an objective to build a calm interface, if we were to choose the wrong metric for calmness—say, the number of interface elements—we could still end up with a screen that induces anxiety. Choosing the wrong metric can completely undo good intentions. 

    Additionally, choosing the right metric is enormously helpful in focusing the design team. Once you go through the exercise of choosing metrics for your objectives, you’re forced to consider what success looks like concretely and how you can prove that you’ve reached your ethical objectives. It also forces you to consider what you as a designer have control over: what can you include in your design or change in your process that will lead to the right type of success? The answer to this question brings a lot of clarity and focus.

    And finally, it’s good to remember that traditional businesses run on measurements, and managers love to spend time discussing charts (ideally hockey-stick shaped)—especially if they concern profit, the one-above-all of metrics. For good or ill, if we want to improve the system and have a serious discussion about ethical design with managers, we’ll need to speak that business language.

    Practice daily ethical design

    Once you’ve defined your objectives and you have a reasonable idea of the potential metrics for your design project, only then do you have a chance to structurally practice ethical design. It “simply” becomes a matter of using your creativity and choosing from all the knowledge and toolkits already available to you.

    I think this is quite exciting! It opens a whole new set of challenges and considerations for the design process. Should you go with that energy-consuming video or would a simple illustration be enough? Which typeface is the most calm and inclusive? Which new tools and methods do you use? When is the website’s end of life? How can you provide the same service while requiring less attention from users? How do you make sure that those who are affected by decisions are there when those decisions are made? How can you measure your effects?

    The redefinition of success will completely change what it means to do good design.

    There is, however, a final piece of the puzzle that’s missing: convincing your client, product owner, or manager to be mindful of well-being, equity, and sustainability. For this, it’s essential to engage stakeholders in a dedicated kickoff session.

    Kick it off or fall back to status quo

    The kickoff is the most important meeting of a project, and also the easiest to forget to include. It consists of two major phases: 1) the alignment of expectations, and 2) the definition of success.

    In the first phase, the entire (design) team goes over the project brief and meets with all the relevant stakeholders. Everyone gets to know one another and expresses their expectations of the outcome and their contributions to achieving it. Assumptions are raised and discussed. The aim is to reach a shared level of understanding and, in turn, avoid preventable miscommunications and surprises later in the project.

    For example, for a recent freelance project that aimed to design a digital platform that facilitates US student advisors’ documentation and communication, we conducted an online kickoff with the client, a subject-matter expert, and two other designers. We used a combination of canvases on Miro: one with questions from “Manual of Me” (to get to know each other), a Team Canvas (to express expectations), and a version of the Project Canvas to align on scope, timeline, and other practical matters.

    The above is the traditional purpose of a kickoff. But just as important as expressing expectations is agreeing on what success means for the project—in terms of desirability, viability, feasibility, and ethics. What are the objectives in each dimension?

    Agreement on what success means at such an early stage is crucial because you can rely on it for the remainder of the project. If, for example, the design team wants to build an inclusive app for a diverse user group, they can raise diversity as a specific success criterion during the kickoff. If the client agrees, the team can refer back to that promise throughout the project. “As we agreed in our first meeting, having a diverse user group that includes A and B is necessary to build a successful product. So we do activity X and follow research process Y.” Compare those odds to a situation in which the team didn’t agree to that beforehand and had to ask for permission halfway through the project. The client might argue that that came on top of the agreed scope—and she’d be right.

    In the case of this freelance project, to define success I prepared a round canvas that I call the Wheel of Success. It consists of an inner ring, meant to capture ideas for objectives, and a set of outer rings, meant to capture ideas on how to measure those objectives. The rings are divided into six dimensions of successful design: healthy, equitable, sustainable, desirable, feasible, and viable.

    We went through each dimension, writing down ideas on digital sticky notes. Then we discussed our ideas and verbally agreed on the most important ones. For example, our client agreed that sustainability and progressive enhancement are important success criteria for the platform. And the subject-matter expert emphasized the importance of including students from low-income and disadvantaged groups in the design process.

    After the kickoff, we summarized our ideas and shared understanding in a project brief that captured these aspects:

    • the project’s origin and purpose: why are we doing this project?
    • the problem definition: what do we want to solve?
    • the concrete goals and metrics for each success dimension: what do we want to achieve?
    • the scope, process, and role descriptions: how will we achieve it?

    With such a brief in place, you can use the agreed-upon objectives and concrete metrics as a checklist of success, and your design team will be ready to pursue the right objective—using the tools, methods, and metrics at their disposal to achieve ethical outcomes.

    Conclusion

    Over the past year, quite a few colleagues have asked me, “Where do I start with ethical design?” My answer has always been the same: organize a session with your stakeholders to (re)define success. Even though you might not always be 100 percent successful in agreeing on goals that cover all responsibility objectives, that beats the alternative (the status quo) every time. If you want to be an ethical, responsible designer, there’s no skipping this step.

    To be even more specific: if you consider yourself a strategic designer, your challenge is to define ethical objectives, set the right metrics, and conduct those kickoff sessions. If you consider yourself a system designer, your starting point is to understand how your industry contributes to consumerism and inequality, understand how finance drives business, and brainstorm which levers are available to influence the system on the highest level. Then redefine success to create the space to exercise those levers.

    And for those who consider themselves service designers or UX designers or UI designers: if you truly want to have a positive, meaningful impact, stay away from the toolkits and meetups and conferences for a while. Instead, gather your colleagues and define goals for well-being, equity, and sustainability through design. Engage your stakeholders in a workshop and challenge them to think of ways to achieve and measure those ethical goals. Take their input, make it concrete and visible, ask for their agreement, and hold them to it.

    Otherwise, I’m genuinely sorry to say, you’re wasting your precious time and creative energy.

    Of course, engaging your stakeholders in this way can be uncomfortable. Many of my colleagues expressed doubts such as “What will the client think of this?,” “Will they take me seriously?,” and “Can’t we just do it within the design team instead?” In fact, a product manager once asked me why ethics couldn’t just be a structured part of the design process—to just do it without spending the effort to define ethical objectives. It’s a tempting idea, right? We wouldn’t have to have difficult discussions with stakeholders about what values or which key-performance indicators to pursue. It would let us focus on what we like and do best: designing.

    But as systems theory tells us, that’s not enough. For those of us who aren’t from marginalized groups and have the privilege to speak up and be heard, that uncomfortable space is exactly where we need to be if we truly want to make a difference. We can’t remain within the design-for-designers bubble, enjoying our privileged working-from-home situation, disconnected from the real world out there. If we keep talking about ethical design solely at the level of articles and toolkits, we’re not designing ethically. It’s just theory. We need to actively engage our colleagues and clients by challenging them to redefine success in business.

    With a bit of courage, determination, and focus, we can break out of this cage that finance and business-as-usual have built around us and become facilitators of a new type of business that can see beyond financial value. We just need to agree on the right objectives at the start of each design project, find the right metrics, and realize that we already have everything that we need to get started. That’s what it means to do daily ethical design.

    For their inspiration and support over the years, I would like to thank Emanuela Cozzi Schettini, José Gallegos, Annegret Bönemann, Ian Dorr, Vera Rademaker, Virginia Rispoli, Cecilia Scolaro, Rouzbeh Amini, and many others.

  • Mobile-First CSS: Is It Time for a Rethink?

    The mobile-first design methodology is great—it focuses on what really matters to the user, it’s well-practiced, and it’s been a common design pattern for years. So developing your CSS mobile-first should be great, too…right? 

    Well, not necessarily. Classic mobile-first CSS development is based on the principle of overwriting style declarations: you begin your CSS with default style declarations, and overwrite and/or add new styles as you add breakpoints with min-width media queries for larger viewports (for a good overview see “What is Mobile First CSS and Why Does It Rock?”). But all those exceptions create complexity and inefficiency, which in turn can lead to an increased testing effort and a code base that’s harder to maintain. Admit it—how many of us willingly want that?

    On your own projects, mobile-first CSS may yet be the best tool for the job, but first you need to evaluate just how appropriate it is in light of the visual design and user interactions you’re working on. To help you get started, here’s how I go about tackling the factors you need to watch for, and I’ll discuss some alternate solutions if mobile-first doesn’t seem to suit your project.

    Advantages of mobile-first

    Some of the things to like about mobile-first CSS development—and the reasons it’s been the de facto development methodology for so long—make a lot of sense:

    Development hierarchy. One thing you undoubtedly get from mobile-first is a nice development hierarchy—you just focus on the mobile view and get developing. 

    Tried and tested. It’s a tried and tested methodology that’s worked for years for a reason: it solves a problem really well.

    Prioritizes the mobile view. The mobile view is the simplest and arguably the most important, as it encompasses all the key user journeys, and often accounts for a higher proportion of user visits (depending on the project). 

    Prevents desktop-centric development. As development is done using desktop computers, it can be tempting to initially focus on the desktop view. But thinking about mobile from the start prevents us from getting stuck later on; no one wants to spend their time retrofitting a desktop-centric site to work on mobile devices!

    Disadvantages of mobile-first

    Setting style declarations and then overwriting them at higher breakpoints can lead to undesirable ramifications:

    More complexity. The farther up the breakpoint hierarchy you go, the more unnecessary code you inherit from lower breakpoints. 

    Higher CSS specificity. Styles that have been reverted to their browser default value in a class name declaration now have a higher specificity. This can be a headache on large projects when you want to keep the CSS selectors as simple as possible.

    Requires more regression testing. Changes to the CSS at a lower view (like adding a new style) require all higher breakpoints to be regression tested.

    The browser can’t prioritize CSS downloads. At wider breakpoints, classic mobile-first min-width media queries don’t leverage the browser’s capability to download CSS files in priority order.

    The problem of property value overrides

    There is nothing inherently wrong with overwriting values; CSS was designed to do just that. Still, inheriting incorrect values is unhelpful and can be burdensome and inefficient. It can also lead to increased style specificity when you have to overwrite styles to reset them back to their defaults, something that may cause issues later on, especially if you are using a combination of bespoke CSS and utility classes. We won’t be able to use a utility class for a style that has been reset with a higher specificity.

    With this in mind, I’m developing CSS with a focus on the default values much more these days. Since there’s no specific order, and no chains of specific values to keep track of, this frees me to develop breakpoints simultaneously. I concentrate on finding common styles and isolating the specific exceptions in closed media query ranges (that is, any range with a max-width set). 

    This approach opens up some opportunities, as you can look at each breakpoint as a clean slate. If a component’s layout looks like it should be based on Flexbox at all breakpoints, it’s fine and can be coded in the default style sheet. But if it looks like Grid would be much better for large screens and Flexbox for mobile, these can both be done entirely independently when the CSS is put into closed media query ranges. Also, developing simultaneously requires you to have a good understanding of any given component in all breakpoints up front. This can help surface issues in the design earlier in the development process. We don’t want to get stuck down a rabbit hole building a complex component for mobile, and then get the designs for desktop and find they are equally complex and incompatible with the HTML we created for the mobile view! 
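    To sketch that Flexbox-versus-Grid scenario (the component name and breakpoints here are invented for illustration), each layout can be written independently once it lives in its own media query range:

    ```scss
    .product-list {
      // Mobile and tablet: a wrapping Flexbox row, never inherited by desktop
      @media (max-width: 1023.98px) {
        display: flex;
        flex-wrap: wrap;
        gap: 16px;
      }

      // Desktop: an independent Grid layout, with no Flexbox properties to undo
      @media (min-width: 1024px) {
        display: grid;
        grid-template-columns: repeat(3, 1fr);
        gap: 24px;
      }
    }
    ```

    Because neither range leaks into the other, changing the desktop layout requires no overrides of the mobile styles, and vice versa.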

    Though this approach isn’t going to suit everyone, I encourage you to give it a try. There are plenty of tools out there to help with concurrent development, such as Responsively App, Blisk, and many others. 

    Having said that, I don’t feel the order itself is particularly relevant. If you are comfortable with focusing on the mobile view, have a good understanding of the requirements for other breakpoints, and prefer to work on one device at a time, then by all means stick with the classic development order. The important thing is to identify common styles and exceptions so you can put them in the relevant stylesheet—a sort of manual tree-shaking process! Personally, I find this a little easier when working on a component across breakpoints, but that’s by no means a requirement.

    Closed media query ranges in practice 

    In classic mobile-first CSS we overwrite the styles, but we can avoid this by using media query ranges. To illustrate the difference (I’m using SCSS for brevity), let’s assume there are three visual designs: 

    • smaller than 768
    • from 768 to below 1024
    • 1024 and anything larger 
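    Captured as SCSS variables (the names here are invented for illustration), those three ranges might be defined like this:

    ```scss
    // Breakpoint boundaries
    $tablet-min: 768px;
    $desktop-min: 1024px;

    // Closed ranges end just below the next boundary; the .98px convention
    // avoids overlapping the next range while still covering fractional
    // viewport widths that a whole 1023px limit would miss
    $mobile-max: 767.98px;  // used as (max-width: $mobile-max)
    $tablet-max: 1023.98px; // used as (max-width: $tablet-max)
    ```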

    Take a simple example where a block-level element has a default padding of “20px,” which is overwritten at tablet to be “40px” and set back to “20px” on desktop.

    Classic min-width mobile-first

    .my-block {
      padding: 20px;
      @media (min-width: 768px) {
        padding: 40px;
      }
      @media (min-width: 1024px) {
        padding: 20px;
      }
    }

    Closed media query range

    .my-block {
      padding: 20px;
      @media (min-width: 768px) and (max-width: 1023.98px) {
        padding: 40px;
      }
    }

    The subtle difference is that the mobile-first example sets the default padding to “20px” and then overwrites it at each breakpoint, setting it three times in total. In contrast, the second example sets the default padding to “20px” and only overrides it at the relevant breakpoint where it isn’t the default value (in this instance, tablet is the exception).

    The goal is to: 

    • Only set styles when needed. 
    • Not set them with the expectation of overwriting them later on, again and again. 

    To this end, closed media query ranges are our best friend. If we need to make a change to any given view, we make it in the CSS media query range that applies to the specific breakpoint. We’ll be much less likely to introduce unwanted alterations, and our regression testing only needs to focus on the breakpoint we have actually edited. 

    Taking the above example, suppose we find that the .my-block spacing on desktop is already accounted for by the margin at that breakpoint, so we want to remove the padding there altogether. We can do this by setting the mobile padding in a closed media query range.

    .my-block {
      @media (max-width: 767.98px) {
        padding: 20px;
      }
      @media (min-width: 768px) and (max-width: 1023.98px) {
        padding: 40px;
      }
    }

    The browser default padding for our block is “0,” so instead of adding a desktop media query and using unset or “0” for the padding value (which we would need with mobile-first), we can wrap the mobile padding in a closed media query (since it is now also an exception) so it won’t get picked up at wider breakpoints. At the desktop breakpoint, we won’t need to set any padding style, as we want the browser default value.

    Bundling versus separating the CSS

    Back in the day, keeping the number of requests to a minimum was very important due to the browser’s limit on concurrent requests per domain (typically around six). As a consequence, the use of image sprites and CSS bundling was the norm, with all the CSS being downloaded in one go, as one stylesheet with highest priority. 

    With HTTP/2 and HTTP/3 now on the scene, the number of requests is no longer the big deal it used to be. This allows us to separate the CSS into multiple files by media query. The clear benefit of this is the browser can now request the CSS it currently needs with a higher priority than the CSS it doesn’t. This is more performant and can reduce the overall time page rendering is blocked.

    Which HTTP version are you using?

    To determine which version of HTTP you’re using, go to your website and open your browser’s dev tools. Select the Network tab and reload the page. If the Protocol column isn’t visible, right-click any column header (e.g., Name) and check Protocol. If “h2” is listed under Protocol, HTTP/2 is being used; “h3” indicates HTTP/3.

    Also, if your site is still using HTTP/1...WHY?!! What are you waiting for? There is excellent browser support for HTTP/2.

    Splitting the CSS

    Separating the CSS into individual files is a worthwhile task. Linking the separate CSS files using the relevant media attribute allows the browser to identify which files are needed immediately (because they’re render-blocking) and which can be deferred. Based on this, it allocates each file an appropriate priority.

    In the following example of a website visited on a mobile breakpoint, we can see the mobile and default CSS are loaded with “Highest” priority, as they are currently needed to render the page. The remaining CSS files (print, tablet, and desktop) are still downloaded in case they’ll be needed later, but with “Lowest” priority. 

    With bundled CSS, the browser will have to download the CSS file and parse it before rendering can start.

    With the CSS separated into different files, each linked with the relevant media attribute, the browser can instead prioritize the files it currently needs. Using closed media query ranges allows the browser to do this at all widths; with classic mobile-first min-width queries, a desktop browser would have to download all the CSS with Highest priority. And we can’t assume that desktop users always have a fast connection: in many rural areas, for instance, internet connection speeds are still slow. 

    The media queries and number of separate CSS files will vary from project to project based on project requirements, but might look similar to the example below.

    Bundled CSS
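    A bundled setup comes down to a single link tag along these lines (the filename is illustrative):

    ```html
    <link rel="stylesheet" href="site.css" />
    ```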



    This single file contains all the CSS, including all media queries, and it will be downloaded with Highest priority.

    Separated CSS
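    A separated setup might instead look like the following, with one file per media query range (the filenames and exact queries are illustrative):

    ```html
    <link rel="stylesheet" href="default.css" />
    <link rel="stylesheet" href="mobile.css" media="screen and (max-width: 767.98px)" />
    <link rel="stylesheet" href="tablet.css" media="screen and (min-width: 768px) and (max-width: 1023.98px)" />
    <link rel="stylesheet" href="desktop.css" media="screen and (min-width: 1024px)" />
    <link rel="stylesheet" href="print.css" media="print" />
    ```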



    Separating the CSS and specifying a media attribute value on each link tag allows the browser to prioritize what it currently needs. Out of the five files listed above, two will be downloaded with Highest priority: the default file, and the file that matches the current media query. The others will be downloaded with Lowest priority.

    Depending on the project’s deployment strategy, a change to one file (mobile.css, for example) would only require the QA team to regression test on devices in that specific media query range. Compare that to the prospect of deploying the single bundled site.css file, an approach that would normally trigger a full regression test.

    Moving on

    The uptake of mobile-first CSS was a really important milestone in web development; it has helped front-end developers focus on mobile web applications, rather than developing sites on desktop and then attempting to retrofit them to work on other devices.

    I don’t think anyone wants to return to that development model again, but it’s important we don’t lose sight of the issue it highlighted: that things can easily get convoluted and less efficient if we prioritize one particular device—any device—over others. For this reason, focusing on the CSS in its own right, always mindful of what is the default setting and what’s an exception, seems like the natural next step. I’ve started noticing small simplifications in my own CSS, as well as other developers’, and testing and maintenance work has also become a bit simpler and more productive. 

    In general, simplifying CSS rule creation whenever we can is ultimately a cleaner approach than going around in circles of overrides. But whichever methodology you choose, it needs to suit the project. Mobile-first may—or may not—turn out to be the best choice for what’s involved, but first you need to solidly understand the trade-offs you’re stepping into.