Picture this: You’re in a meeting room at your tech company, and two people are having what looks like the same conversation about the same design problem. One is talking about whether the team has the right skills to tackle it. The other is diving deep into whether the solution actually solves the user’s problem. Same room, same problem, completely different lenses.
This is the beautiful, sometimes messy reality of having both a Design Manager and a Lead Designer on the same team. And if you’re wondering how to make this work without creating confusion, overlap, or the dreaded “too many cooks” scenario, you’re asking the right question.
The traditional answer has been to draw clean lines on an org chart. The Design Manager handles people, the Lead Designer handles craft. Problem solved, right? Except clean org charts are fantasy. In reality, both roles care deeply about team health, design quality, and shipping great work.
The magic happens when you embrace the overlap instead of fighting it—when you start thinking of your design org as a design organism.
The Anatomy of a Healthy Design Team
Here’s what I’ve learned from years of being on both sides of this equation: think of your design team as a living organism. The Design Manager tends to the mind (the psychological safety, the career growth, the team dynamics). The Lead Designer tends to the body (the craft skills, the design standards, the hands-on work that ships to users).
But just as mind and body aren't completely separate systems, these roles overlap in important ways. You can't have a healthy person without both working in harmony. The trick is knowing where those overlaps are and how to navigate them gracefully.
When we look at how healthy teams actually function, three critical systems emerge. Each requires both roles to work together, but with one taking primary responsibility for keeping that system strong.
The Nervous System: People & Psychology
Primary caretaker: Design Manager
Supporting role: Lead Designer
The nervous system is all about signals, feedback, and psychological safety. When this system is healthy, information flows freely, people feel safe to take risks, and the team can adapt quickly to new challenges.
The Design Manager is the primary caretaker here. They’re monitoring the team’s psychological pulse, ensuring feedback loops are healthy, and creating the conditions for people to grow. They’re hosting career conversations, managing workload, and making sure no one burns out.
But the Lead Designer plays a crucial supporting role. They’re providing sensory input about craft development needs, spotting when someone’s design skills are stagnating, and helping identify growth opportunities that the Design Manager might miss.
Design Manager tends to:
Career conversations and growth planning
Team psychological safety and dynamics
Workload management and resource allocation
Performance reviews and feedback systems
Creating learning opportunities
Lead Designer supports by:
Providing craft-specific feedback on team member development
Identifying design skill gaps and growth opportunities
Offering design mentorship and guidance
Signaling when team members are ready for more complex challenges
The Muscular System: Craft & Execution
Primary caretaker: Lead Designer
Supporting role: Design Manager
The muscular system is about strength, coordination, and skill development. When this system is healthy, the team can execute complex design work with precision, maintain consistent quality, and adapt their craft to new challenges.
The Lead Designer is the primary caretaker here. They’re setting design standards, providing craft coaching, and ensuring that shipping work meets the quality bar. They’re the ones who can tell you whether a design decision is sound or whether the team is solving the right problem.
But the Design Manager plays a crucial supporting role. They’re ensuring the team has the resources and support to do their best craft work, like proper nutrition and recovery time for an athlete.
Lead Designer tends to:
Definition of design standards and system usage
Feedback on what design work meets the standard
Experience direction for the product
Design decisions and product-wide alignment
Innovation and craft advancement
Design Manager supports by:
Ensuring design standards are understood and adopted across the team
Confirming experience direction is being followed
Supporting practices and systems that scale without bottlenecking
Facilitating design alignment across teams
Providing resources and removing obstacles to great craft work
The Circulatory System: Strategy & Flow
Shared caretakers: Both Design Manager and Lead Designer
The circulatory system is about how information, decisions, and energy flow through the team. When this system is healthy, strategic direction is clear, priorities are aligned, and the team can respond quickly to new opportunities or challenges.
This is where true partnership happens. Both roles are responsible for keeping the circulation strong, but they’re bringing different perspectives to the table.
Lead Designer contributes:
Ensuring user needs are met by the product
Overall product quality and experience
Strategic design initiatives
Research-based user needs for each initiative
Design Manager contributes:
Communication to team and stakeholders
Stakeholder management and alignment
Cross-functional team accountability
Strategic business initiatives
Both collaborate on:
Co-creation of strategy with leadership
Team goals and prioritization approach
Organizational structure decisions
Success measures and frameworks
Keeping the Organism Healthy
The key to making this partnership sing is understanding that all three systems need to work together. A team with great craft skills but poor psychological safety will burn out. A team with great culture but weak craft execution will ship mediocre work. A team with both but poor strategic circulation will work hard on the wrong things.
Be Explicit About Which System You’re Tending
When you’re in a meeting about a design problem, it helps to acknowledge which system you’re primarily focused on. “I’m thinking about this from a team capacity perspective” (nervous system) or “I’m looking at this through the lens of craft quality” (muscular system) gives everyone context for your input.
This isn’t about staying in your lane. It’s about being transparent as to which lens you’re using, so the other person knows how to best add their perspective.
Create Healthy Feedback Loops
The most successful partnerships I’ve seen establish clear feedback loops between the systems:
Nervous system signals to muscular system: “The team is struggling with confidence in their design skills” → Lead Designer provides more craft coaching and clearer standards.
Muscular system signals to nervous system: “The team’s craft skills are advancing faster than their project complexity” → Design Manager finds more challenging growth opportunities.
Both systems signal to circulatory system: “We’re seeing patterns in team health and craft development that suggest we need to adjust our strategic priorities.”
Handle Handoffs Gracefully
The most critical moments in this partnership are when something moves from one system to another. This might be when a design standard (muscular system) needs to be rolled out across the team (nervous system), or when a strategic initiative (circulatory system) needs specific craft execution (muscular system).
Make these transitions explicit. “I’ve defined the new component standards. Can you help me think through how to get the team up to speed?” or “We’ve agreed on this strategic direction. I’m going to focus on the specific user experience approach from here.”
Stay Curious, Not Territorial
The Design Manager who never thinks about craft, or the Lead Designer who never considers team dynamics, is like a doctor who only looks at one body system. Great design leadership requires both people to care about the whole organism, even when they’re not the primary caretaker.
This means asking questions rather than making assumptions. “What do you think about the team’s craft development in this area?” or “How do you see this impacting team morale and workload?” keeps both perspectives active in every decision.
When the Organism Gets Sick
Even with clear roles, this partnership can go sideways. Here are the most common failure modes I’ve seen:
System Isolation
The Design Manager focuses only on the nervous system and ignores craft development. The Lead Designer focuses only on the muscular system and ignores team dynamics. Both people retreat to their comfort zones and stop collaborating.
The symptoms: Team members get mixed messages, work quality suffers, morale drops.
The treatment: Reconnect around shared outcomes. What are you both trying to achieve? Usually it’s great design work that ships on time from a healthy team. Figure out how both systems serve that goal.
Poor Circulation
Strategic direction is unclear, priorities keep shifting, and neither role is taking responsibility for keeping information flowing.
The symptoms: Team members are confused about priorities, work gets duplicated or dropped, deadlines are missed.
The treatment: Explicitly assign responsibility for circulation. Who’s communicating what to whom? How often? What’s the feedback loop?
Autoimmune Response
One person feels threatened by the other’s expertise. The Design Manager thinks the Lead Designer is undermining their authority. The Lead Designer thinks the Design Manager doesn’t understand craft.
The symptoms: Defensive behavior, territorial disputes, team members caught in the middle.
The treatment: Remember that you’re both caretakers of the same organism. When one system fails, the whole team suffers. When both systems are healthy, the team thrives.
The Payoff
Yes, this model requires more communication. Yes, it requires both people to be secure enough to share responsibility for team health. But the payoff is worth it: better decisions, stronger teams, and design work that’s both excellent and sustainable.
When both roles are healthy and working well together, you get the best of both worlds: deep craft expertise and strong people leadership. When one person is out sick, on vacation, or overwhelmed, the other can help maintain the team’s health. When a decision requires both the people perspective and the craft perspective, you’ve got both right there in the room.
Most importantly, the framework scales. As your team grows, you can apply the same system thinking to new challenges. Need to launch a design system? Lead Designer tends to the muscular system (standards and implementation), Design Manager tends to the nervous system (team adoption and change management), and both tend to circulation (communication and stakeholder alignment).
The Bottom Line
The relationship between a Design Manager and Lead Designer isn’t about dividing territories. It’s about multiplying impact. When both roles understand they’re tending to different aspects of the same healthy organism, magic happens.
The mind and body work together. The team gets both the strategic thinking and the craft excellence they need. And most importantly, the work that ships to users benefits from both perspectives.
So the next time you’re in that meeting room, wondering why two people are talking about the same problem from different angles, remember: you’re watching shared leadership in action. And if it’s working well, both the mind and body of your design team are getting stronger.
Having been a product developer for a very long time, I’ve lost count of the times I’ve watched promising ideas go from zero to hero in a few days, only to fail to deliver within weeks.
Financial products, the industry in which I work, are no exception. Because people’s real, hard-earned money is on the line, user expectations are high, and the market is crowded, it’s tempting to throw as many features at the wall as possible and hope something sticks. However, this strategy is a recipe for disaster. Why? Here’s why:
The drawbacks of feature-first development
It’s easy to get swept up in the excitement of building innovative features when you start developing a financial product from scratch, or when you’re migrating existing customer journeys from paper or phone channels to online banking or mobile apps. You might be thinking, “If I can just add one more thing that solves this particular user’s problem, they’ll love me!” But what happens when you eventually hit a roadblock because your security team doesn’t like it? When a battle-tested feature isn’t as popular as you anticipated, or when it fails due to unforeseen complexity?
This is where the concept of the Minimum Viable Product (MVP) comes into play. Even if Jason Fried doesn’t usually use the term, his book Getting Real and his podcast Rework discuss the idea frequently. An MVP is a product that offers just enough value to keep your users engaged, but not so much that it becomes difficult to maintain. Although it sounds like a simple idea, it requires a razor-sharp eye, a ruthless edge, and the courage to stand your ground, because it’s easy to fall for “the Columbo Effect”: there’s always “just one more thing…” to add.
The issue with most finance apps is that they often end up as reflections of the company’s internal politics rather than an experience crafted specifically for the customer. Priority goes to delivering as many features and functions as possible to satisfy the demands of competing internal departments, instead of crafting a compelling value proposition focused on what people in the real world actually want. As a result, these products can very quickly become a mixed bag of confusing, disconnected, and ultimately disappointing customer experiences—a feature salad, you might say.
The significance of bedrock
What, then, is a better strategy? How can we create products that are useful, solid, and, most importantly, sticky?
This is where the concept of “bedrock” comes into play. Bedrock is the core functionality of your product that truly matters to customers. It is the foundation on which value and relevance are built over time.
In the world of retail banking, where I work, the bedrock has to be in and around the standard servicing journeys. People open a current account once in a blue moon, but they check it every day. They apply for a credit card once a year or every other year, but they check their balance and pay their bills at least once a month.
The key is identifying the core jobs that people want to get done and working relentlessly to make them simple, reliable, and trustworthy.
How do you reach bedrock, though? By following the MVP approach, prioritizing ease of use, and working iteratively toward a clear value proposition. This means avoiding pointless extras and putting your customers first.
It also takes some nerve, as your colleagues might not always buy into your vision right away. And counterintuitively, it can occasionally even mean making it clear to customers that you won’t be coming to their house to make their breakfast. Sometimes you need the occasional “opinionated user interface design” (i.e., a clunky workaround for edge cases) to test a concept or to buy yourself more time to work on something more crucial.
Practical methods for building sticky financial products
What are the main lessons I’ve learned from my own research and practice?
Start with a clear “why”: What problem are you trying to solve, first and foremost? Who is it for? Make sure your goal is unmistakable before beginning any work, and make certain it aligns with the goals of your business.
Avoid piling on too many features at once; instead, focus on getting one right first. Choose a feature that actually adds value, and build from there.
Prioritize simplicity over complexity: when it comes to financial products, eliminate unnecessary details and concentrate solely on what matters most.
Embrace ongoing iteration: bedrock is not a fixed destination; it is a continuous process. Keep collecting customer feedback, keep improving the product, and keep advancing in that direction.
Stop, look, and listen: you shouldn’t only test your product during the delivery process; you must also test it consistently in the field. Use it yourself. Run A/B tests. Gather user feedback. Talk to the people who use it, and adjust accordingly.
The bedrock paradox
There is an interesting paradox at play here: building toward bedrock means sacrificing some short-term growth potential in favor of long-term stability. But the reward is worth it, because products built with a focus on the core will outlive and outperform their competitors and provide people with lasting value over time.
So how do you begin your journey toward bedrock? Take it slowly. Start by identifying the essential things your customers actually care about. Concentrate on developing and refining a single, powerful feature that delivers real value. And most importantly, be obsessive about it, because in the words of Abraham Lincoln, Alan Kay, or Peter Drucker (whew!): “The best way to predict the future is to invent it.”
Another San Diego Comic-Con is in the books. Going into what amounts to Nerd Culture Mecca last week, some margins of social media and the ceaseless online commentariat pondered whether this would be a quieter year without a Marvel or a DC Studios film slate. However, one glance at the euphoric reception Peacemaker alone received Saturday evening (as well as, ahem, on the cover of our own July issue of Den of Geek magazine), suggests there was nothing quiet at all about 135,000 fans, cosplayers, and general pop culture enthusiasts descending onto Southern California.
During the course of the convention, Den of Geek hosted a murderer’s row of talent from the worlds of film, television, comics, and more at our SDCC studio, as well as got out on the field to check out panels, activations, and events. Below is a round up of all the sights seen and memories made.
Alien is not only one of the most important science fiction franchises in history, it’s also one of the most cinematic. Beginning with Ridley Scott’s iconic 1979 film, each and every story about H.R. Giger’s terrifying xenomorph has proven to be a perfect fit for the feature-length film format. Alien movies are tense games of cat-and-mouse that end with the cat killing every mouse save for Sigourney Weaver. How, possibly, could that approach translate to episodic television? According to the folks behind FX’s Alien: Earth, it’s all pretty easy as long as you find the right personnel to pilot your Weyland-Yutani vessel.
“The experience of this speaks to the experience of Noah [Hawley],” Scott Free Productions producer David W. Zucker says of the Alien: Earth showrunner. “He’s really in rarefied air when it comes to creators. The topic of this title has come up a lot of times over the years, but through Noah, we’re able to deliver something that’s really beyond our wildest imagination.”
As the creator of the Fargo TV series and the equally heady take on X-Men mythos with Legion for FX, Hawley indeed has a penchant for unique adaptation. He explains his approach to these projects as a sentimental exercise: “I start with feelings. ‘What did I feel about the original movie?’ I don’t go back and rewatch it. I just try to remember what stuck with me about the first two films. And then my goal is to recreate those feelings in you by telling you a totally different story.”
Joining Hawley and Zucker in the Den of Geek studio were the cast of Alien: Earth—Timothy Olyphant (Kirsh), Babou Ceesay (Morrow), Alex Lawther (Hermit), Samuel Blenkin (Boy Kavalier), and Sydney Chandler (Wendy)—the last of whom plays a first for the franchise: a child’s mind in the body of an android, aka a “hybrid.”
“Kids are great acting teachers,” Chandler says. “Noah really allowed me to find comfort and take the freedom to explore and fail and try again and succeed. It was play. For pre-production we did musical chairs and then learned how to kill people on set.” – Alec Bojalad
Ben Trivett/Shutterstock for Den of Geek
Anne Rice’s Talamasca: The Secret Order
Building on Anne Rice’s Immortal Universe is Talamasca: The Secret Order, a show that’s billed as a supernatural spy thriller. The Talamasca is a centuries‑old secret society tasked with tracking and containing witches, vampires, werewolves, and other paranormal beings in present‑day society. Fortunately for us, the cast and showrunners who came to our studio were very forthcoming about what we could expect when the show premieres in October.
“It’s nice to have Anne’s work as a backstop and to know that she created this organization and talks about it a decent amount,” says co-showrunner John Lee Hancock. “But we don’t have to follow the strict plot construction of anything regarding a Talamasca book. All the actors here play characters that are original characters.”
Nicholas Denton’s character enters the secret world of the Talamasca and brings the viewers with him. “I play Guy Anatole who’s a figure who gives the audience a perspective on what’s going on,” he says. “He’s a skeptic. He’s gone through a lot in his life, and at this point when we meet him, he’s kind of gotten it all together only to have it taken away from him by the Talamasca.”– Michael Ahr
Batman Azteca
The Batman character has seen no shortage of reprises, homages, and renditions since his creation in the 1930s. Come September, the newest version will be Batman Azteca: Choque de Imperios (Aztec Batman: Clash of Empires). The animated Mexican-American film takes the iconic characters of the Batman universe and, with the film being set during the Aztec Empire, gives them a historical and cultural spin.
“For a lot of us growing up learning about the Aztec culture and also being Batman fans, having the opportunity to combine the two was a unique and excellent opportunity,” the film’s director Juan Meza-León says. Combining the two worlds required lots of research within the realms of Aztec history and the Batman universe, and also came with a fair amount of responsibility for the voice of Batman, Horacio García Rojas.
“I love representation, but I don’t want to be a token in a big production,” he says. “I want to represent my own culture in my own context, and that’s Aztec Batman.” – Sophia Rooksberry
Butterfly
In a spy thriller, a protagonist’s greatest fear is the discovery of his weaknesses. In the case of David Jung, the hardened former U.S. intelligence operative portrayed by Daniel Dae Kim in the upcoming series Butterfly, that weakness is his family. Kim, along with costars Reina Hardesty and Piper Perabo, stopped by the Den of Geek studio to dive into the complicated family dynamics of their show, and the ways in which it sets Butterfly apart from the extensive catalog of onscreen spy stories.
“In the beginning of our show, you meet Rebecca in the middle of a mission, something she is very used to doing … I am pursuing a target and then I find out that it is my dad who I thought died 9 years before, and so then my whole world turns upside down,” Hardesty said.
The resulting storyline follows a father and daughter as they rediscover their relationship while running from the dangerous organization that created them both, orchestrated by Juno (Perabo). The show balances themes of family and drama with the classic staples of a spy thriller, from car chases to shootouts to hand-to-hand combat scenes that Jason Bourne would marvel at.
“I grew up watching people like James Bond and Jason Bourne, but on the other hand, I never saw anyone that looked like me do this in America,” Kim said. “It was a little bit of taking what I could from the characters I know and loved and trying to make it my own and trying to create a new archetype.”
Adapted from the graphic novel of the same name, Butterfly maintains originality as an adaptation, an installment of a beloved genre and a platform for AAPI representation. Visit Prime Video on Aug. 13 to dive into the subversive and emotional world. – SR
David Dastmalchian
David Dastmalchian had been coming to comic-cons, San Diego or otherwise, as a fan for far longer than he’s been a guest up on the stage. In fact, when he enters our studio space he can be faintly nostalgic about the times he would see other folks dressed up as Thomas Schiff, a minor but memorable character he played in his first film, Christopher Nolan’s The Dark Knight. He also can be proudly affectionate of those that are slightly more recent cuts, like cosplayers spotted as Polka-Dot Man from The Suicide Squad or Jack Delroy in the new cult classic Late Night with the Devil.
Yet when we talked with the actor and comic book author last week, Dastmalchian might’ve been proudest of how genre and the things he loves can be used as a way to talk about the personal elements of life. Take for instance Through, Dastmalchian’s new graphic novel as a writer, and which is illustrated by Cat Staggs. “Sometimes you just sit at the campfire with a cool idea, and you’re like, ‘Ooh, I’m going to creep people out. Ooh, I got a journey I want to take people on.’ … I just thought it was a cool story, it didn’t hit me until halfway through scripting how personal it was.”
That journey involves a woman falling through the ice above Lake Michigan on a cold Chicago day, seemingly on purpose, only to discover she was saved by an elderly and dying stranger, who might have been following her all her life. It will reveal one side of Dastmalchian’s personality when it releases in 2026, but there are many others, informing projects that run the gamut from Murderbot to the highly anticipated Street Fighter where Dastmalchian will next be seen playing the villainous M. Bison.
“I am deep in the process right now,” Dastmalchian says of the physical training it takes to become the greatest villain in fighting game history. “I can’t say much other than how it’s so amazing, and I love getting to start building and preparing.”
Keep an eye on Den of Geek for more of our conversation with Dastmalchian, from Through to Street Fighter, and everything in between. – David Crow
Defiant
We had the privilege of speaking with the passionate and creative team behind the graphic novel Defiant: The Story of Robert Smalls. The novel, based on a true story, explores the journey of a man who broke free from slavery, stole a Confederate ship, and sailed to freedom during the Civil War. He later became the first Black naval captain in American history along with many historically significant accolades down the line. It is, in other words, fertile ground for a visual work of art dreamed up by storyteller Rob Edwards, as well as the subject of an upcoming new film from the production company Legion M.
“A story like this needs to be told in a dynamic way,” Edwards says after stopping by the studio. Throughout this conversation, it was clear that something was very different from most big-time epic history stories being told through an IP like this. This has a dedication to community and truth.
As Legion M founder Chris Cooper explains, “We’re the largest fan-owned company for developing and producing films right now with crowdfunding being the very thing to help bring this story to light.” Fan input also extends to creative decisions such as casting and directing. Adds Cooper, “Legion M is a studio for fans, by fans. So I’d love to hear what the people think… All of us have had conversations with your favorite actors, your favorite actresses, [and] your favorite directors [about involvement].”
This project holds value not only for its importance in American history but also as a piece of culture that could’ve been left to time but instead has been revitalized and given a platform to be told and celebrated as it deserves.
Marvin Jones III, producer of the live-action adaptation, says, “It’s always been important to tell stories or be a part of stories that have an impact, especially for Black people in our community from a fictitious standpoint, from a superhero standpoint. Robert Smalls is a real-life superhero, especially for us as a people and our culture.” – Caleb Miller
Digman!: Andy Samberg Reveals Favorite Lonely Island Sketches
The timing of Digman! creators Andy Samberg and Neil Campbell’s visit to the Den of Geek studio at San Diego Comic-Con 2025 could not have been more auspicious. Not only had season 2 of the pair’s animated archaeology comedy premiered the night before on Comedy Central, it was preceded by the debut of a very particular episode of South Park. You know which one… So was the duo looking forward to the panel they were set to share with South Park creators Trey Parker and Matt Stone that night?
“I’m looking forward to it now! Ask me again after,” Samberg said with a nervous laugh. “When we got told that we would have them as our lead-in, there’s nothing better.”
“That’s truly my favorite comedy,” Campbell added. “I’ve watched every episode, every special. Those guys, in a way they’re underappreciated for their influence on the world of comedy. It was awesome to get to come on afterwards.”
Samberg and Campbell were able to set aside their South Park nerves to discuss the surprisingly deep lore of Digman!, in which Samberg puts his Nicolas Cage impression to good use, playing a dubiously heroic archaeologist trying to save the world from the Unclechrist and Auntiechrist. Samberg also provided a rundown of some of his favorite “SNL Digital Shorts” that he and Lonely Island collaborators Akiva Schaffer and Jorma Taccone created during their time on NBC’s comedy flagship.
“I have a bunch of them that I’m very fond of,” Samberg said. “I really like ‘Jack Sparrow.’ I feel like that one kind of encapsulates everything that we do. I really like the one we did called ‘Great Day’ where I’m on Commerce Street and just gacked out of my mind. There’s one I did from last season with Jake Semanski and Jonah Hill called ‘Tennis Balls.’ That’s one that makes me laugh so hard. It was Jonah’s idea ‘cause there was actually a science video online of a guy who’s like, ‘This is what happens when you get hit in the nuts with a tennis ball.’ We took it and ran very far with it.” – AB
Gen V
The world of superheroes in training at Godolkin University is at the core of Gen V, but the cast members that visited our studio concede that the scope continues to widen the more synergy it has with The Boys, the series it spins off from. Jaz Sinclair, who plays Marie Moreau in the series, says they’re picking up right where things ended in the main show.
“The whole tone of our season is based on where The Boys leaves off,” she says. “Homelander has taken over, and we get to see how that directly affects the whole world and also the university itself… we have our new dean and the posters you see and things.”
Hamish Linklater, who plays Dean Cypher, is very cryptic about how his new character will be tied into the decidedly more contentious atmosphere. “I think the name says it all… he’s a cypher,” the actor says mysteriously. When asked about his superpower, Linklater preferred to leave it a mystery for Gen V fans to discover for themselves. Luckily they’ll only have to wait until the Sept. 17 premiere date to find out. – MA
Jason Universe
What exactly is a Jason Universe, you might ask? Well, at least in one filmmaker’s opinion, it is a whole interconnected, multimedia brand encompassing everything we’ve ever loved about Friday the 13th and its bedeviled campgrounds. “It’s games, it’s movies, it’s figures, it’s collectibles, it’s all that stuff we’ve been craving for years now,” says Mike P. Nelson, who in addition to being a lifelong fan of Jason Voorhees is also the man tasked with bringing the hockey-masked killer back to live-action for the first time in decades via Jason Universe’s new short film, “Sweet Revenge.”
The short’s trailer gained an extraordinarily positive reaction at SDCC and fulfilled a dream for Nelson, who grew up perusing video store horror sections as a child in the same way an art critic might appreciate the walls of the MoMA. He also got to be the first filmmaker to work with a newly redesigned Jason courtesy of genre legend Greg Nicotero. “It’s just about capturing that vibe of what Jason sort of was. To me, Final Chapter was scripture. That was the movie that informed the ‘80s horror film. It was the look, the feel. It was sweaty, it was dirty, and for me creating a new Jason, I wanted to revisit that.”
While Nelson is coy as to whether “Sweet Revenge” could lead to a feature, or for that matter whether it will literally cross over with other elements of the so-called Jason Universe like Peacock and A24’s Crystal Lake series, he adds, “If down the road, those things collide, all the cooler.” – DC
Heroes & Villains
Releasing two highly anticipated merchandise drops at SDCC for Star Wars and Fantastic Four, Heroes & Villains stands out among fan-merchandise brands that work with IPs for its wearable, streetwear-inspired look, tailored to each property’s specific style and feel. We got the chance to sit down with Doug Johnson, creative director of Bioworld Merchandising, which houses Heroes & Villains, to talk about the two collections and the specific characters and callbacks that influenced their creation.
Specificity and planting Easter eggs for devoted fans are key. Take its recent collection, inspired entirely by Star Wars’ Rebel Pathfinders, who were formed within the Rebel Alliance during the Clone Wars, hand-picked and tasked with crucial operations by the Alliance High Command. Johnson was inspired by the Rebels’ color palette, pulling from earth tones like teal, mustard, taupe, and olive. The collection’s contrast is of course the Galactic Empire, for which Heroes & Villains opts for a cleaner, more techie look with black and red, drawing from the Inferno Squadron of the 2017 video game Star Wars Battlefront II.
“What we like to do is look back at what’s the story that brought us to that point and tell the comic side of that, or the true lore behind why this particular release is happening and have those touch points with our fans,” Johnson says.
To celebrate Marvel’s first family, and the recent release of The Fantastic Four: First Steps, Heroes & Villains also debuted jackets, shirts, bags, and hats inspired by the origins of the story and characters behind the new film. The collection blends futuristic and clean designs that feel true to the IP while also not abandoning Heroes & Villains’ quintessential streetwear and vintage look, appealing to both fans and franchise owners. While Star Wars has a little bit more to work with in terms of lore and characters, both collections are focused on creating wearable fashion for devoted fans.
“We try to stay current with content and find unique ways to develop products that speak to that truth, versus just marketing a slap of assets that you get from a style guide,” Johnson says. – Darcie Zudell
Interview with the Vampire
Back in 2024, Interview with the Vampire showrunner Rolin Jones stepped into Den of Geek Studio at San Diego Comic-Con with all the pep and vigor of a creative who had just completed a pitch-perfect two-season TV adaptation of Anne Rice’s classic first Vampire Chronicles novel. In 2025, having just begun production on the third season of the show, now styled The Vampire Lestat, Jones was equally as excited but a little more measured about crafting Lestat de Lioncourt’s big moment.
“I was cocky and confident that this was gonna be easy and awesome. And it’s been uh…the hardest season of TV I’ve ever done. Without a doubt,” Jones says. “It’s gotten very personal. A lot of personal writing has begun to dump into the show. There’s a lot of people that are on this very risky, weird, little journey with me. They are entering it with a lot of confidence and a lot of enthusiasm. We’re doing something kind of wild.”
Any adaptation of Rice’s second novel The Vampire Lestat is bound to be pretty wild. The narrative switches over from the taciturn Louis de Pointe du Lac (played by Jacob Anderson in the series) to the decadent Lestat (Sam Reid) as he enters his rock star era. Thankfully, the show’s composer is up to the task of producing some bangers.
“I started my [rock & roll] education long, long ago,” Daniel Hart says. “This is where I have been heading since I was a little kid – since my brother brought home Led Zeppelin IV. I feel right at home. It is a thrill and a great challenge to do something this ambitious. Inspirations run the gamut from Howlin’ Wolf to Chappell Roan and everything in-between.” – AB
Lilo & Stitch
It’s been over 23 years of riding the Hawaiian rollercoaster with some of Disney’s most beloved characters from the original 2002 animated film Lilo & Stitch. After a huge opening weekend for the new live-action movie, stars Tia Carrere (Mrs. Kekoa, and the original voice of Nani), Maia Kealoha (Lilo), and Sydney Agudong (Nani) arrived at SDCC celebrating the new film breaking several box-office records.
In the studio, we learned that this was the first SDCC Kealoha had ever attended; she was only six years old when the 2025 movie was filmed. Her breakout role as the misunderstood young girl being raised by her sister was her first onscreen acting job. And she’s been very pleased with the reaction to the live-action film.
“It’s been amazing,” Kealoha says. “And I’m so excited that our movie is [worth] a billion dollars.”
The film was also significant for the young Agudong, who revealed that she was a huge Lilo & Stitch fan growing up. Her past experience in theater helped prepare her for long days on set and returning to do it all again the next morning. Agudong also reveals that when she was cast as Nani, her first instinct was to talk with the OG, Carrere, about stepping into a role Carrere had previously played.
“I was so happy that Sydney invited me in,” Carrere says. “I just had to say, ‘Girl, you are a warrior. You are fierce, and you have everything within you.’ We grow into our power as women, and sometimes we need to be reminded of that by other women.”
The film has been out since late May and has been the subject of lively discourse among devoted fans who are emotionally attached to the original source material. One significant change from the original script was the ending of the 2025 film, which (spoiler alert) involves Nani leaving for college to study marine biology, placing Lilo in the custody of their neighbors Tūtū and David. Sure, people had mixed opinions about the switch-up, but what about the women who have portrayed the character?
“I loved it,” Carrere says. “It’s a reality that—coming from Hawaii—you have to leave the island to achieve and bring back that knowledge.”
Agudong adds, “I think we’re both on the same page.” She further echoes how the change made the story feel more real and spoke to the experiences of other mixed families in Hawaii. Plus, in the new movie Nani has a portal she can use to see Lilo at any time, so as Carrere points out, it’s a non-issue.
“We’re also exposing the fact that hanai family is just as important as blood-related family,” Agudong says. “Everybody can belong. You can choose your family in that way.” – DZ
The Long Walk
Very rarely does San Diego Comic-Con feel the need to censor anything that comes through the hallowed grounds of Hall H. In fact, we can’t recall another instance where the cavernous room’s big screens went black right before something as heinous as the summary execution of a teenager was carried out off-screen (although folks reportedly could still hear it). Yet that is what happened during a tense presentation of Francis Lawrence’s adaptation of The Long Walk, the first novel Stephen King ever completed as a writer.
“Comic-Con deemed the event too intense to show in its full entirety,” actor Garrett Wareing says when stopping by our studio after the panel. “So they censored some of the footage, and it was quite exciting to watch the fan reactions and to hear their reactions to what we’re seeing now.” There were gasps, groans, and perhaps an uneasy sense of creeping dread.
Screenwriter J.T. Mollner, however, notes that this is both the power and the appeal of something as potent as King’s dystopian paterfamilias to stories like The Hunger Games and Battle Royale.
“You know when I was a kid, and I was 12 or 13 and I couldn’t get in a movie because [it] was too rough for people my age, it made me want to see it immediately,” Mollner smiles. “So the beauty of this movie, in my opinion, is the way Francis handled the brutality, the intensity, the terror, it’s all shown honestly. It would be obscene to not show it honestly, but it never feels gratuitous. And I think that’s a fine line and a great balance, and Francis went all the way.” – DC
Nacelleverse/Toys that Made Us
It wouldn’t be San Diego Comic-Con without a visit with collector extraordinaire Brian Volk-Weiss, the man behind the Nacelle Company and the hit Netflix documentary series The Toys That Made Us, which Volk-Weiss assured us would be delivering its fourth season in 2026 with a fifth and final run in 2027.
Among the many items we talked about were the Nacelleverse lineup of Star Trek toys, which Volk-Weiss acknowledges feature some very niche Starfleet characters like Captain Jellico from The Next Generation or Tuvix from Voyager.
“I just knew if we started with Kirk and Spock and Picard and Data, the community would be like, ‘Eh, okay’,” Volk-Weiss says. Instead he wanted to “send a message to the community that we are Trekkies too and we’re doing the ones you all wanted.”
With Nacelleverse toys making the transition to comics, Volk-Weiss was excited about Wild West C.O.W. Boys of Moo Mesa getting its first issue from Oni Press. “For the first time ever, we’re gonna have two comic books running simultaneously. So C.O.W. Boys of Moo Mesa and Biker Mice from Mars are running concurrently.” – MA
Peacemaker Season 2
The opening credits to every episode of Peacemaker season 1 asked a simple question: “Do you really wanna, do you really wanna taste it?” Starring John Cena as the titular maker of peace, this DC Universe era-straddling HBO Max series liked to have a good time, as evidenced by its jaunty hair metal dance number to “Do You Wanna Taste It?” by Wigwam.
Now season 2 has some more questions to answer. As showrunner and DC Studios co-CEO James Gunn told Den of Geek magazine, Peacemaker’s second season will provide a certain amount of clarity for the new DC Universe’s timeline. No previous Cena-starring effort like The Suicide Squad or Peacemaker season 1 can be considered canon until season 2 blesses it. Thankfully, this batch of Peacemaker episodes gets a lot of those clarifications out of the way early so it can get right back to dancing. When stars Cena (Peacemaker), Jennifer Holland (Emilia Harcourt), Frank Grillo (Rick Flag Sr.), Sol Rodríguez (Sasha Bordeaux), Steve Agee (John Economos), and Freddie Stroma (Adrian “Vigilante” Chase) visited the studio, they revealed just how seriously Gunn takes that musical sequence.
“You don’t see James Gunn getting angry often but… he wasn’t happy halfway through the day,” Grillo says.
“It was actually two days [of shooting],” Holland clarifies.
Once the episodes actually begin, Chris Smith a.k.a. Peacemaker will have a lot more than dancing to be concerned with. After all, does this kinder, gentler DC universe still have room for someone as violent as Peacemaker?
“In Peacemaker’s mind he’s doing what he’s doing for the greater good. So it’s a shock to his system when people don’t accept him,” Cena says. Surely, the Justice Gang can find use for a marksman. – AB
Resident Alien
After nearly five years, the producers of Resident Alien have confirmed that the recently released season 4 will be the sci-fi comedy’s last. As part of their victory lap, members of the cast visited the Den of Geek studio to reflect on the recent season and the series as a whole.
“Our show, despite not being renewed, is a complete story,” says actor Corey Reynolds. “So whether or not you are someone who is familiar with our show now, or if you want to jump on board after we aren’t renewed, you will get a beginning, a middle, and an end. We don’t just leave you hanging.”
In addition to discussing the show’s longevity, the cast reflected on specific moments in season 4 that wrap up each character’s storyline nicely, even if not in the way originally planned.
“In season 4, she [Asta] is just learning how to love herself now,” says actor Sara Tomko about her character’s final arc. “She’s such a nurturer and she has been people pleasing and peacekeeping, trying to save the world like no big deal. Now all the people she loves are looking at her and they’re saying, ‘What about you now?’ and that’s real love, that’s real friendship.” – SR
Revival
As a show about a town where people suddenly return from the dead and try to return to their normal lives, Revival could be seen as somewhat derivative, but its first season is proving that the ensemble cast and compelling mystery make it a one-of-a-kind series. Showrunners Aaron B. Koontz and Luke Boyce joined the cast in our studio, and they spoke about how they approached the show’s brilliant storytelling.
“I’m a big believer in setup and payoff,” says Koontz. “We just wanted to make a show that we liked, and I like things that aren’t spoonfed. I like things that make me lean forward and ask questions and try to figure it out.”
Melanie Scrofano stars in Revival’s lead role as small-town cop Dana Cypress, and her character’s struggles feel completely realistic, including an episode where a bullet leaves her incapacitated at a time when she needs to be mobile.
“That was a real challenge to be bedridden and have this dialogue that’s really high stakes and have to do it in a bed,” she admits. “But it was a really fun challenge.” We’re anxious to see where things go in the Revival season 1 finale in August! – MA
Roddenberry and Does It Fly?
Since launching in 2024, co-hosts Tamara Krinsky and Hakeem Oluseyi have been dissecting some of the most beloved fandoms with scientific eyes on their podcast Does it Fly? At SDCC, Krinsky and Oluseyi visited our studio alongside Kelsey Goldberg, an executive producer of the show, and Trevor Roth, chief operating officer of Roddenberry Entertainment, to discuss how the podcast has evolved in just over a year, and how Krinsky’s love of pop culture and Oluseyi’s science education background join forces in unexpected ways.
Goldberg shared that she was excited when the show expanded its range of genres and topics beyond purely sci-fi—recently, the podcast examined the logic of Superman’s sun-powered abilities and evolution in How to Train Your Dragon.
“It was an absolute joy the moment we discovered we can go outside of strict sci-fi, and I can start to look at fantasy or horror,” Goldberg says. “I got a little evil, but I think the audience benefits from it.”
Krinsky and Oluseyi’s friendship and appreciation for what they’re discussing shine through every episode of Does it Fly? For Oluseyi, finding a co-host like Krinsky was a dream come true; he never really thought he’d find someone to share his passions with in this capacity.
“I’m from Mississippi, and nobody was into what I was into,” Oluseyi says. “They would always say things like, ‘Man, ain’t nobody into that shit you’re into.’ Well, guess what? I found somebody who’s into that shit I’m into!”
Roth expressed his appreciation for the fandom, voicing that the podcast’s mission is not only to analyze characters and stories but to examine them without tearing them down or dismissing devoted fans.
“Sometimes we say, ‘We’re putting something on trial,’ but we’re putting something on trial in the nicest way possible,” Roth says. “We’re not here to cut anything down. We know that the things we’re talking about are beloved by someone, including us, much of the time, and because of that, we want to revel in whatever joy that it brings.” – DZ
Ron Moore
Ronald D. Moore is on a real tear these days, with two of his most successful shows, Outlander and For All Mankind, receiving highly anticipated spinoffs (Blood of My Blood and Star City, respectively). So we were anxious to speak with him in-studio about his secret to making both a hit series and a companion show worth exploring.
“It’s always in the back of my mind: what could this be?” Moore says. “Because you’re always playing around with what’s the potential for the story. How big is the story? How many seasons is it? Can you expand the universe into something else? But it’s really a back of the head kind of thing.”
We also asked Moore about his progress on the God of War adaptation, and whether it might follow Kratos and Atreus. “As someone who’s new to this world, I was really impressed with the depth of what you’re talking about,” he tells us. “It’s such a rich environment… it’s been really fun to dive into this world.” -MA
Shin Godzilla
Shin Godzilla was something of a game-changer for the Big G when it hit theaters in Japan nearly 10 years ago. After decades of sequels, team-ups, and crossovers, the monster that once looked like a walking metaphor for nuclear armageddon had become cute and cuddly. In fact, looking back at the film’s impact, co-director Shinji Higuchi tells us in our SDCC studio that they went out of their way to avoid making a “Godzilla is going to fight against something” movie.
Instead, they crafted a bitter parable of bureaucratic inaction and paralysis in the face of existential crisis, something Japanese audiences were all too familiar with following the Fukushima nuclear meltdown disaster of 2011. And yet, even so, Higuchi admits that he and co-director Hideaki Anno were surprised to learn they’d inadvertently created the cuddliest-looking Godzilla ever: you know, the one with the big googly eyes.
“It’s evolution, it’s not growth,” Higuchi says of Godzilla’s ever-mutating appearance. “There’s a difference. So I wanted to really follow an almost Darwinism [form] of evolution.” Thus to represent the midway point between the sea creature at the beginning of the movie and the more iconic reptilian visage that ends it, he and Anno settled on an image they thought would be chilling, not charming.
“Director Anno doesn’t like fish and doesn’t like meat,” Higuchi reveals. “So director Anno hates when you go to a fish market and you see the eyes, the way they look at you. So that was what we decided. ‘Let’s give him those eyes!’ But Anno is kind of confused, because he thought he made the scariest creature imaginable, but all the kids love it and everyone says it’s super cute. So there is this gap.” Merchandising windfalls have started from less. – DC
Star Trek: Starfleet Academy
After countless spinoffs, movies, comics, and other projects, Star Trek is finally boldly going where many rootless Gen Xers go: back to school. No, Gene Roddenberry’s sunny vision of a collaborative sci-fi future isn’t going to grad school to get its master’s; it’s going all the way back to Starfleet Academy in the fittingly titled Star Trek: Starfleet Academy.
Den of Geek was joined by a supersized roster of Starfleet cadets and producers to discuss the project, including Holly Hunter (Nahla Ake), Robert Picardo (the Doctor), Noga Landau (executive producer), Alex Kurtzman (executive producer, co-showrunner), Sandro Rosta (Caleb Mir), Bella Shepard (Genesis Lythe), Kerrice Brooks (Sam), George Hawkins (Darem Reymi), and Karim Diane (Jay-Den Kraag).
“It was very intentional to set it in the 32nd century,” Landau says. “Because it’s a time of rebuilding and it’s a time when the pressures of the rebuilding really falls on the shoulders of the younger generation. There’s a lot with these kids going on that other generations haven’t had to face.”
Kurtzman elaborates on why the time was finally right for a series featuring young Starfleet cadets after so many previous rumors and false starts.
“It feels like this generation in particular is facing so many deep challenges. Everybody is trying to figure out ‘how do we get back to hope?’ I think that’s where Roddenberry comes in. I always feel like Star Trek is a compass that points us toward our better angels and the people we want to be.”
Star Trek: Starfleet Academy is set to premiere on Paramount+ in 2026. – AB
Star Trek: Strange New Worlds
When Paramount’s freshly installed Star Trek czar Alex Kurtzman invited Akiva Goldsman to work on the first modern Trek spinoff, Star Trek: Discovery, Goldsman ran a simple Google search to learn what the series was all about. It immediately led him astray.
“I discovered that it was a show about Pike and Number One… at least according to the internet. Then I got there and discovered it had zero to do with any of that,” Goldsman says.
That initial internet research, however, planted the seed for the spinoff that would become Star Trek: Strange New Worlds, arguably the most creatively successful Trek endeavor of its era. Now, with the show in its third season (and two more seasons on the way before it concludes), Goldsman, producer Henry Alonso Myers, and cast members Rebecca Romijn (Number One), Christina Chong (La’An Noonien-Singh), Ethan Peck (Spock), Paul Wesley (James T. Kirk), and Jess Bush (Christine Chapel) stopped by the studio to talk about season 3 and the show’s ultimate legacy.
“What we’ve done so far exceeds anything I’d ever imagined,” Goldsman says. “I hoped we’d get the original Star Trek values back because God knows we need them in times like this. I had no idea that we would be gifted with this extraordinary cast. They are more than collaborators, they are authors. If we’re lucky and if we stick the landing we’ll have added a significant piece to the canon of Star Trek.”
Before the end comes, however, season 3 finds Strange New Worlds being its goofy, ambitious self, including a fourth episode that pays homage to The Original Series and Star Trek parodies in more ways than one.
“That was the most fun episode I think I have filmed,” Chong says. “When I got to see all these guys as their different characters, it was just incredible. La’An has been really uptight. Season 3 I had an opportunity to lighten her up. She’s exploring her passions, full stop.” – AB
Todd McFarlane
Todd McFarlane’s King Spawn hits issue 50 later this year, a milestone for Hell’s monarch, a character McFarlane created in 1992. That’s also the year he co-founded Image Comics, where he remains president. Still, it’s a modest run compared to the mainline Spawn’s 350-and-counting issues. Yet the way McFarlane tells it in our studio, the character’s longevity is the secret to his success.
“At some point over time, and I don’t care which business it is, you’ll have high and low points,” McFarlane says. “But what’s going to matter is that the brand, that word that you’re putting out there, just never goes away. It’s always there. Attrition of the same thing over and over. Has Spawn had highs and lows? Of course it has. What it has [though] is it’s been there nonstop for over 30 years. That’s the secret sauce.”
McFarlane also confides that his biggest issue with many creators today, even at Image Comics, is that they wrap up a story they personally created after five or six issues.
“You get big sales in the first five and then they start to flatten and taper down,” the comic maestro notes. “And the thought is ‘I can stop and go start another book and get good sales for these next five of those.’ The answer economically is yes, in the short term, but I’m telling you, long-term I keep saying get to issue 50. Every book that Image Comics has done that has gotten 50 or more issues has gotten outside the bubble. And the bubble by definition is comic books and us in the geek [community]. The choir. The choir’s always coming, but how do you get it now out to T-shirts, hats, toys, video games, movies, TV shows? Outside so your neighbor may have heard of the work?”
Here’s to 50 more issues, King. – DC
Tony Hale
Tony Hale is quite simply one of the most successful comedic actors of this TV generation. After embodying motherboy Buster Bluth on Fox and Netflix’s beloved Arrested Development for five seasons, Hale would go on to win Emmy gold as President Meyer’s bagman Gary Walsh on HBO’s acidic satire, Veep. Now Hale is producing and starring in Sketch, a film he excitedly calls a combination of Inside Out and Jurassic Park.
“It took eight years to get made,” Hale says. “My buddy Seth Worley had the idea and wrote the script. We just went back and forth for a few years. I play a single dad who is worried because his daughter keeps drawing these crazy pictures that end up coming to life. It’s a really fun family adventure with a beautiful theme of processing feelings.”
In addition to teasing the madcap adventure to come in Sketch, Hale was kind enough to go deep on his career, discussing his time on Arrested Development, Veep, Community, Toy Story 4, Inside Out 2, and even his brief Marvel and Star Wars voice acting forays. One theme that emerged is that you might just remember Hale’s best roles better than he does.
“One of my favorite things is when people come up and are like, ‘I love this joke,’ and I’m like, ‘Please tell me it because I’ve completely forgotten.’ The only one I remember, because it’s my favorite, is Tobias joining Blue Man Group because he thinks it’s a support group for depressed men. That was the level of comedy you were working with.”
Next time you see Tony Hale out and about, please remind him of some more great Arrested Development gags. – AB
The Toxic Avenger
Writer-director Macon Blair and the stars of The Toxic Avenger remake, including Peter Dinklage, stopped by to chat about their new superhero movie—err, make that “super-human” movie. Yep, as producer Lloyd Kaufman, who co-wrote and co-directed the original Toxic Avenger, tells us, he has advised both Blair and Ahoy Comics to knock off using the term “superhero” while running with ol’ Toxie.
“[It’s] a super-human movie,” Kaufman insists. “You get a lawyer’s letter because Warners and Marvel co-own the word ‘superhero.’ When Toxic Avenger was a Marvel comic book, he was a superhero, but as soon as Warners dumped the Toxic Avenger remake, then suddenly we got a lawyer’s letter to no longer use ‘superhero.’ So it’s a super-human hero… That’s how you do things in the movie business.”
We should note that Marvel and DC have since lost the “superhero” trademark in court, but the fact that Toxie has to stay DIY about even his job description, even as the star of a glitzy (if still gory) new Legendary Pictures remake, is pretty on brand for a superhero who got dunked in toxic sludge. Watch the above video to see the rest of our discussion, including why Dinklage is not under the extensive Toxie makeup post-transformation. – DC
Twisted Metal Season 2
The first season of Twisted Metal on Peacock offered just about everything fans of the long-running vehicular combat video game series could have hoped for. In addition to Anthony Mackie’s anonymous hero John Doe, the season introduced many characters, vehicles, and locations from the mythos, including the iconic fiery harlequin Sweet Tooth. The only thing missing, however, was the all-important demolition derby itself. That is now set to arrive in Twisted Metal season 2 thanks to the introduction of another important game character: the mysterious Calypso, played by Anthony Carrigan (Barry, Superman).
“He’s just kind of your basic, run-of-the-mill MC of a vehicular death match. Go with the old standards,” Carrigan says.
Joining Carrigan to tease the season to come were Mackie (John Doe), Joe Seanoa (Sweet Tooth), Stephanie Beatriz (Quiet), and showrunner Michael Jonathan Smith.
“First of all, New San Francisco is a wonderful place,” Mackie says of John Doe’s initial season 2 digs. “If you get a chance, go check it out. I discovered carpaccio in New San Francisco. It’s quite nice. We discover John there having a wonderful time and trying to move forward without his right hand…”
“His right hand,” Beatriz interrupts with a masturbatory hand motion. A brief moment of sincerity immediately followed up by a dick joke? Hard to imagine a more fitting representation of Twisted Metal than that. – AB
Upper Deck and Rush of Ikorr
We got the chance to speak with Travis Rhea, head of Upper Deck, about Rush of Ikorr and what the company has coming now and in the near future.
The new trading card game dropped earlier this year, relying on strategy as players compete in epic mythological battles with up to 3v3 PvP. As Rhea explains, “Last year we released NeoPets TCG; this year, Rush of Ikorr. Rush of Ikorr is really a game changer. It’s a homegrown IP for us and has some pretty cool differences that you don’t see in the TCG world.” We touched on its unique qualities, including how the game encourages 2v2 and 3v3 play. But the style of play isn’t the only thing that separates it.
Says Rhea, “We tapped into stuff people were already excited about and somewhat familiar with.” He’s referring to the various cultures and mythologies the creative team researched for the game’s lore, which give it a distinctive style in the TCG space. It’s also a return to form for Upper Deck, with Rhea stating, “We’ve been here before; we have a legacy on this side of the business. We did Yu-Gi-Oh!, World of Warcraft, Call of Duty…we’ve been in that world. It’s just we took a break from it for a while… back to TCGs is really in our DNA.” – CM
The Walking Dead: Daryl Dixon
The folks behind The Walking Dead: Daryl Dixon stepped into the studio triumphant, having just announced at the preceding panel that their Walking Dead spinoff would receive a supersized fourth and final season. Producers Scott Gimple, Greg Nicotero, and David Zabel, along with producer-stars Norman Reedus and Melissa McBride, were happy to tease the ending to come.
“The French part of this show was always envisioned as a two-season story,” Zabel says. “And then starting this season we have three and four. By the end of that, Carol and Daryl’s European adventure would have a really good conclusion and open up whatever comes next.”
Following two full seasons of fighting the dead in France, Daryl (Reedus) and Carol (McBride) head west (in a very roundabout way that includes a trip through the “Chunnel”) to take in some post-apocalyptic sightseeing on their long journey home.
“There’s a real passion in Spain,” Reedus says. “It’s like a Western. There’s a real Spanish fire to the cast and crew. You feel the passion in the show. We tried really hard not to make an American show and plop it down in Europe. We tried not to fake the funk. We didn’t want everyone in France to have a beret on and a poodle and eat brie. In Spain we were authentic as well.” – AB
Ben Trivett/Shutterstock for Den of Geek
Yaeji
One of the many draws of SDCC 2025 was the highly anticipated Crunchyroll Anime FanFest, two free days of live music, ranging from Japanese alternative bands to anime-inspired hip-hop groups. On the second day, Brooklyn-based DJ Yaeji performed a set on the Crunchyroll stage before stopping by the studio.
“It was definitely different,” Yaeji said about her FanFest set. “I prepared for it specially on the side. I wanted to play only anime edits and get deep if I can, so it wasn’t like a usual set I would play at all.”
Yaeji’s usual sets are platforms for her bilingual lyricism and dual lofi/electronic sounds. According to the DJ, the purpose behind writing lyrics in both Korean and English has changed for her over the years.
“In the beginning, I just sang in Korean because I wanted my friends to not know what I was singing about, and then I discovered that Korean sounds really texturally interesting, so it was more of an instrument,” the musician says. “Now, I find it to be helpful expressing in both languages … and also communicating via sounds and the sonics.”
Although Yaeji doesn’t point to one specific artist as her primary influence, she has resonated with icons of hip-hop and pop throughout her life. However, she has most notably found inspiration within mediums she shares in common with many SDCC attendees.
“Sometimes I would find random indie music through a blog probably, but I was always on the internet,” Yaeji tells us. “I think the more influential ones are actually probably from video games or anime openings and endings that I listened to throughout my teens.” – SR
Panels
Amy Sussman/Getty Images
Coyote vs. Acme Panel
Bright and early in Hall H, Looney Tunes fans at very long last got to watch never-before-seen clips from the long-awaited Coyote vs. Acme, the movie that Warner Bros. Discovery—I mean, Acme!—didn’t want you to see. The panel was hosted by comedian Paul Scheer and featured director Dave Green, cast members Will Forte, Eric Bauza, Martha Kelly, and surprise appearances by Wile E. Coyote himself, plus P. J. Byrne in-character as Acme’s legal rep. We saw three clips, including six minutes of footage and the film’s first official trailer. Byrne handed out fake cease and desist papers and had the panel removed by unpaid Acme interns.
These bits referenced WBD’s decision to shelve the completed film in 2023 as part of a $30 million tax write-off, which sparked immense fan outrage. Ketchup Entertainment later picked up the film for $50 million. The panel also revealed that Coyote vs. Acme is set for global release on Aug. 28, 2026. The film’s plot is based on a 1990 The New Yorker article by Ian Frazier, and imagines Wile E. Coyote finally suing Acme after years of injuries from their defective products.
“While [Wile E. Coyote] is the star of the movie, he is not the hero of this movie. Because I think what Paul [Scheer] and I, and everyone you’re going to meet on this panel today would say, is that the real hero of this movie is all you guys sitting in those seats,” Will Forte says at the panel. “Like Wile E. Coyote, you guys were underdogs who fought against a major corporation, and because you never gave up, this movie is now going to come out in a global wide release.” – DZ
Daniel Knighton/Getty Images
George Lucas and the Lucas Museum of Narrative Art Panel
In the discussion of geek culture’s biggest influences, few names loom as large as George Lucas. This year, the creator of the Star Wars and Indiana Jones universes, founder of Lucasfilm, and veritable godfather of science fiction made his first appearance at SDCC for a panel titled “Sneak Peek of the Lucas Museum of Narrative Art Panel.”
Over the course of 50 years, Lucas has collected over 40,000 pieces of narrative art. Many of those artworks will soon be on display at the museum in Los Angeles, founded by Lucas and his wife, Mellody Hobson.
“It’s more about a connection, an emotional connection with the work, not how much it cost or what celebrity did it,” Lucas said about the artwork he collects. “It’s more a personal thing, and I don’t think it’s anything that anybody else can tell you… if you have an emotional connection, then it’s art.”
Sitting next to Lucas were director and museum board member Guillermo del Toro, artist and designer Doug Chiang, and musician and panel moderator Queen Latifah. Each member of the panel has their own connection to Lucas’ work, as well as their own passion behind narrative art.
“Many of the pieces we have celebrate freedom or anarchy,” del Toro said. “… Comics have a lot of social conscience, before or around the same time as movies and so forth. You have graffiti, you have many of the popular mimeographed forms of art that do that, they are not dominated… What is important for me or what is magical, [the museum] is not a man and his collection, it is a lineage of images… We are in a critical moment in which one of the things they like to disappear is the past, and this is memorializing a popular, vociferous, expressive, eloquent moment in our visual past that belongs to all of us.”
The Lucas Museum of Narrative Art is scheduled to open in 2026 and will feature original works from the Star Wars universe, along with original Peanuts sketches, an original Flash Gordon comic strip, the first Iron Man cover and much more. – SR
Gundam Wing
The flowing locks of unkempt ‘90s nostalgia were on full display when the anime heartthrobs of Heero Yuy, Duo Maxwell, Quatre Winner, and the rest of the Gundam Wing pilots took a long overdue encore at the convention center on Thursday night. Before thousands of cheering fans, actors Mark Hildreth, Scott McNeil, and Brad Swaile–who voiced Heero, Duo, and Quatre in the beloved 2000 English dub of Gundam Wing—took a bow and recited some of the fans’ favorite lines while reminiscing about how best to vocalize imminent annihilation at the hands of a Gundam.
The biggest piece of news out of the panel was definitely a modern tribute video to Gundam Wing’s 30th anniversary with dazzling hand-drawn imagery that set the mind aflutter with possibilities. Gundam executive producer Naohiro Ogata was on hand to tease folks to “keep watching” if they liked the above video (which might just be fan-baiting the dream of a Wing sequel). However, for attendees in the room, the highlight might have been one fan asking Hildreth to tear up a hand-delivered invitation to his birthday party with the same coldness that Heero infamously displayed to Relena Peacecraft in the first episode of Wing… Hildreth even made sure to throw the torn scraps so that the paper could catch the air-conditioning, like a feather in the wind. – DC
Ben Trivett/Shutterstock for Den of Geek
Peacemaker Panel
Touching down at the Peacemaker panel in Hall H at San Diego Comic-Con, we were graced with the presence of the crew behind Peacemaker, including actors John Cena, Jennifer Holland, Freddie Stroma, Steve Agee, Sol Rodriguez, Frank Grillo, Tim Meadows, and writer-director James Gunn.
Not only did they answer questions and speak on their characters’ motivations coming into the next season, but we also got a few clips of the new season 2 coming to HBO Max on Aug. 21, including a comedic “bird blindness” bit with Steve Agee’s John Economos and Tim Meadows’ new ARGUS agent character, “Langston Fleury,” as well as a hard-hitting action scene featuring Jennifer Holland’s Emilia Harcourt giving out haymakers and head kicks in a biker bar.
James explained to the audience that coming into season 2, Peacemaker feels shunned by the superhero community, his love interest, and the things he wants from life in general. Gunn spoke to where we find Peacemaker: “He’s dealing with the demons he sort of uncovered from the first season and trying to deal with them and the world is not accepting him the way he is.” But once his father’s inter-dimensional storage door, aka the Quantum Unfolding Chamber, comes into play, Chris has a chance to see if the grass is greener on the other side in this parallel world. – CM
Photo by Chelsea Guglielmino/FilmMagic/Getty Images
Percy Jackson and the Olympians Panel
The mythological world of Percy Jackson & the Olympians has seen many iterations, from the original book series by Rick Riordan to the film adaptations in the 2010s to a spinoff series to, most recently, a TV rendition. The first season aired in 2023, and after a year and a half of patient waiting, members of the cast and production team offered fans a sneak peek at Season Two in the first Hall H panel of SDCC 2025.
“We like to say it (Season Two) is supersized,” executive producer and writer Dan Shotz said. “Season Two is epic, it is so huge what we were able to build … We are out at sea, we are on a 175-foot ironclad ship, we are on cruise ships, we are in chariot races, we are fighting incredible monsters … It is so massive and we cannot wait for you guys to see what we do.”
Season Two will dig deeper into Riordan’s written world, this time following the story of the series’ second installment: The Sea of Monsters. Cast members like Walker Scobell (Percy), Leah Jeffries (Annabeth) and Dior Goodjohn (Clarisse) all spoke to the experience of returning to a film set that has become a home to them and how Season Two allowed them to open up to their characters in a way viewers will not want to miss. The panel ended with a video message from Riordan, in which he announced the official release date of Season Two – Dec. 10 on Disney+ – and the casting of two vital characters set to appear in Season Three: Levi Chrisopulos as Nico di Angelo and Olive Abercrombie as his sister, Bianca. – SR
Michael Buckner/Variety via Getty Images
Predator: Badlands Panel
Disney and 20th Century Studios have a lot of confidence in Predator: Badlands, and for good reason. It was, after all, only three years ago when they brought Dan Trachtenberg’s previous live-action movie in the Predator universe, the Hulu exclusive Prey, to SDCC. There the historical period piece set during the 18th century and in Comanche Nation tore the roof off of a nearby theater. So seeing Trachtenberg back, now in Hall H alongside stars Elle Fanning and Dimitrius Schuster-Koloamatangi and moderator Kevin Smith, amounted to something of a victory lap. Except this win had a whole other movie to wow attendees.
While Trachtenberg didn’t bring his full Predator follow-up to San Diego in 2025 (it’s still being made), he and Disney confidently unveiled the first 15 minutes of Badlands, including portions of action sequences and special effects that are unfinished. They were right to be bullish: the sequence, which amounts to an extended prologue set on the Predators’ homeworld, reveals just how fully Schuster-Koloamatangi’s protagonist, a Predator named Dek, is the main character of the future-set movie (a first in the Predator franchise). His story is given vaguely Shakespearean heft as he battles the expectations of his clan and a murderous father.
Our full impression of the first 15 minutes can be found here, but rest assured that there is much yet to be revealed, including the full extent of Elle Fanning’s role as a pair of Weyland-Yutani synthetics that Dek discovers on a hostile alien world, where there is an apex predator so ferocious that even the alien race which hunts Arnold Schwarzenegger for sport fears it. What that creature looks like has yet to be glimpsed—or just how much fun Fanning will have playing dual roles (Smith teased she portrays two very different kinds of robots)—but suffice to say the future looks bright for pop culture’s most beloved ugly MFer. – DC
Tron: Ares Panel
The new music video for “As Alive As You Need Me To Be” by Nine Inch Nails closes out the TRON: ARES panel at #SDCC
Disney also brought Tron: Ares to SDCC this year with perhaps the most spectacular light show Hall H has ever witnessed. Heralded by several red-hued programs and neon-crimson beams piercing the darkness, an all-star cast, including Jeff Bridges (and for better or worse Jared Leto) took the stage to discuss the legacy of Tron, the future of technology, and just how awesome it is to walk on a Disney soundstage made to rebirth “the Grid.”
Still, the most tantalizing tease offered to fans was an extended glimpse of, and listen to, the original score written by Nine Inch Nails for the movie. Following in the footsteps of Daft Punk’s iconic score for Tron: Legacy (2010), all of NIN has reassembled to write and perform a soundtrack that includes actual vocal tracks and new NIN songs. In fact, a killer music video was even fired up for Hall H attendees. Trust us, it’s amazing, as some naughty social media posts have proven with leaks like the one above. – DC
Events
Avatar Party
Earth, fire, water, air… the four nations were more than represented at the Nickelodeon x Den of Geek Avatar: The Last Airbender 20th anniversary party in San Diego, celebrating the massively popular animated series and its ever-expanding world with some of the cast and crew who made it special, including voice actors Jack De Sena, Jennie Kwan, Olivia Hack, Dante Basco, Zach Tyler Eisen, Michaela Jill Murphy, and Dee Bradley Baker, as well as co-creators Michael Dante DiMartino and Bryan Konietzko.
The party was a lively time, with signature mixed drinks from each of the four nations and plenty of food to boot: a beautifully crafted sushi bar made out of ice in the Water Tribe section, a large charcuterie platter within the mini Earth Kingdom, and servers handing out treats like vegetarian spring rolls, cabbage dumplings, hand-cut potato chips, bao buns, and countless other Avatar-themed snacks. We were also happily entertained by the exciting sounds of DJ Dante, aka Dante Basco, pleasing the crowd with a set list signature to his swagger. While at the DJ booth, Basco brought his fellow Avatar castmates on stage one by one, by name and character name, to exclaim, “We just want to say thank you guys so much for 20 years, just being in support of the show and changing all of our lives… ours and yours.”
And he wasn’t the only one with words for the audience; co-creators Michael Dante DiMartino and Bryan Konietzko came up on stage to lend some words, with DiMartino stating, “We have Den of Geek to thank for that, for throwing, to my knowledge, the first actual really awesome, really fun party… Thank you, Den of Geek.” Konietzko added, “Here we are, we’ll keep going as long as we can go.” –CM
DC Comics x eBay Live
Den of Geek was honored to host the culmination event of our Summer of Superman charity auctions live at San Diego Comic-Con. Partnering with eBay and DC Comics, we brought you all-new and original Superman artwork from seasoned comics veterans like Rafael Albuquerque, Cully Hamner, Scott Koblish, Joe Prado, Ian Churchill, John Timms, Eddy Barrows, Daniel Sampere, Clayton Henry, Tony S. Daniel, Dan Jurgens, Kenneth Rocafort, and Ivan Reis.
The event was hosted by Sam Stone and Rosie Knight, with guest appearances from comic creatives Jurgens, Hamner, Phillip Kennedy Johnson, and Tom King, as well as Peacemaker actor Steve Agee joining in on the fun. Fans and fellow geeks came out in droves to support the cause, helping raise over $7,000 for the BINC Foundation. BINC’s mission is to support the emergency financial, medical, and mental health service needs of comic and book shop owners and workers across the country, who have guided fellow comic enthusiasts along the way. – CM
Op Games
The Op Games Party came alive with shared laughter, the cheerful clink of glasses and a nostalgic soundtrack featuring ‘90s icons like Madonna and New Radicals, which captured the electric, feel-good vibes of the night. Tables buzzed with excitement as fans and first-timers alike dove into classic titles like Telestrations and Blank Slate. Founded in 1994, The Op Games, formerly known as USAopoly, is celebrating over 30 years of bringing families and friends together over fun, easy-to-learn party games. The Op has built a legacy of laughter and connection with original fan favorites and licensed games from renowned brands like Disney, Marvel, Sanrio, and Nickelodeon. Partygoers who wanted to game rotated from table to table, speed-dating-style, sampling four of the company’s best-selling and most fast-paced games. Tasty hors d’oeuvres, beer, wine and specialty drinks at the open bar were restocked throughout the night as the more than 400 guests passed through the Andaz Hotel on the first night of San Diego Comic-Con.
At one table, players took turns sketching and guessing in Telestrations, watching how wildly their original phrase transformed by the end of the round. Others raced against the timer to shout out answers that fit a category without repeating letters during multiple intense rounds of Tapple. Trying to read the room without being too obvious, others attempted to match one word with another player to complete a phrase in many hilarious rounds of Blank Slate. One of the most popular and competitive games of the night had to be Flip 7, which earned the 2025 Golden Geek Award for Party Game of the Year. People roared during this fast-paced, push-your-luck card game, in which players keep flipping cards and hoping to land seven unique numbers without busting on a duplicate. Guests arrived in full cosplay, in their best ‘90s fashion, in groups and solo, but no matter how they showed up, everyone found something to enjoy.
“My mission, and The Op’s mission, is to bring people together, where we can all play these games and actually experience them in a real-world situation that you can replicate at home and show to your friends,” Adam Minton, associate director of marketing at The Op Games, says. “We want nothing more than joy, laughter, lifetime memories, and doing events like these with cool people in a cool atmosphere.” – DZ
Mission Brewing Party
No SDCC is complete without stopping downtown at Mission Brewing, and with the Comic-Con crowd still buzzing from the recent release of Superman (2025), what better way to celebrate than with a super happy hour?
In collaboration with Mission Brewing, Upper Deck and Den of Geek, the brewery hosted two live podcast tapings. The podcast Power-Up discussed Upper Deck’s Rush of Ikorr card games, and Roddenberry Entertainment’s Does it Fly and iHeartRadio’s X-Ray Vision teamed up to chat about the Man of Steel himself.
The first 50 guests received a pack of Upper Deck Fleer Brilliants Superman trading cards, and the first 150 snagged a free Mission x Den of Geek custom pint. The vibes were good as people unwound with tailor-made canned Den of Geek lagers, deliciously refreshing pale ales made just for the event. – DZ
“Any feedback?” is perhaps one of the worst ways to ask for input. It’s vague and unfocused, and it doesn’t give a clear picture of what we’re looking for. Getting good feedback starts earlier than we might expect: it starts with the request.
Starting the process of receiving feedback with a question may seem counterintuitive, but it makes sense once we consider that receiving feedback is a form of design research. Just as we wouldn’t run a study without the right questions to get the insights we need, the best way to ask for feedback is to craft sharp questions.
Design critique is not a one-shot process. Sure, any good feedback loop continues until the project is finished, but this is especially true for design because design work proceeds iteration after iteration, from a high level down to the finest details. Each stage of the work calls for its own set of questions.
And finally, as with any good research, we need to review what we got back, get to the root of its insights, and take action. Question, iteration, and review. Let’s look at each of those.
The question
Being open to feedback is essential, but we need to be precise about what we’re looking for. Asking for “any comments,” “What do you think?,” or “I’d love to hear your thoughts” at the end of a presentation is likely to garner a lot of scattered opinions, or worse, to make everyone follow the lead of the first person who speaks up. And then we get frustrated, because vague questions like those can turn a high-level flows review into people commenting on the borders of buttons. And that might be a juicy topic, so it can be hard to steer the team back to the subject you actually wanted to focus on.
But how do we end up in this situation? A few factors are involved. One is that we don’t often consider asking to be part of the feedback process. Another is how easy it is to assume that the question is obvious to everyone else and to leave it implicit. Another is that in everyday conversation there’s usually no need to be that precise. In short, we tend to underestimate the importance of the questions, and we don’t work on improving them.
The work of asking good questions guides and focuses the critique. It’s also a form of consent, because it specifies what kind of feedback you’d like to receive and what you’re open to. It puts people in the right mental state, especially in situations where they weren’t expecting to give feedback.
There isn’t a single best way to ask for feedback. It just needs to be specific, and specificity can take several shapes. A model for design critique that I’ve found particularly useful in my coaching is the one of stage versus depth.
“Stage” refers to each of the steps of the process—in our case, the design process. The kind of feedback that’s useful changes as the work moves from user research to the final design. But within a single stage, we might also want to check whether certain assumptions hold and whether the input gathered so far has been properly translated into the updated designs as the work has evolved. The layers of the user experience can serve as a starting point for possible questions. What do you want to know: project objectives? user needs? functionality? content? interaction design? information architecture? UI design? navigation design? visual design? branding?
Here are a few example questions that are precise and to the point, each referring to a different layer:
Functionality: Is it desirable to automate account creation?
Interaction design: Take a look through the updated flow and let me know whether you see any steps or error states that I might’ve missed.
Information architecture: On this page, we have two competing pieces of information. Is the structure effective in communicating them both?
User interface design: What do you think about the top-of-the-page error counter, which makes sure you can see the next error even when the error is outside the viewport?
Navigation design: From research, we identified these second-level navigation items, but once you’re on the page, the list feels too long and hard to navigate. Do you have any suggestions for resolving this?
Visual design: Are the sticky notifications in the bottom-right corner visible enough?
The other axis of specificity is about how deep you’d like to go into what’s being presented. For example, we might have shared a new end-to-end flow, but there’s one specific view that we found particularly challenging and that we’d like a detailed review of. This can be especially helpful from one iteration to the next, when it’s important to highlight the areas that have changed.
There are other things we can do to make our questions more specific—and more effective.
A quick win is to remove generic qualifiers like “good,” “well,” “nice,” “bad,” “okay,” and “cool” from our questions. For example, asking, “When the block opens and the buttons appear, is this interaction good?” might seem specific, but you can spot the “good” qualifier and turn it into an even better question: “When the block opens and the buttons appear, is it clear what the next action is?”
Sometimes we actually do want broad feedback. That’s rare, but it can happen. In that case, you can still make it explicit that you’re looking for a wide range of opinions, whether at a high level or in the details. Or you might just ask, “At first glance, what do you think?” so that it’s clear that the question is open ended but focused on someone’s impression after their first five seconds of looking at the work.
Sometimes the project is particularly broad, and some areas may have already been explored in detail. In these situations, it might be useful to say explicitly that some parts are already locked in and aren’t open to feedback. It’s not something I’d recommend in general, but I’ve found it useful for avoiding rabbit holes of the kind that lead to further refinement of details that aren’t what’s important right now.
Asking specific questions can completely change the quality of the feedback that you receive. People who have less refined critique abilities will now be able to provide more useful feedback, and even experienced designers will appreciate the clarity and effectiveness gained from concentrating solely on what is required. It can save a lot of time and frustration.
The iteration
Design iterations are probably the most visible part of design work, and they provide a natural checkpoint for feedback. Yet many design tools with inline commenting tend to show changes as a single fluid stream in the same file: conversations disappear once they’re resolved, shared UI components update automatically, and designs always display the latest version unless those features are manually turned off. The implied goal of these tools seems to be to arrive at a single final copy with all discussions closed, probably because they inherited their patterns from how written documents are collaboratively edited. That’s probably not the best approach to design critique, though I don’t want to be too prescriptive here: it might work for some teams.
The asynchronous design-critique approach that I find most effective is to create explicit checkpoints for discussion. For this, I’m going to use the term iteration post: a write-up or presentation of the design iteration, followed by a discussion thread of some kind. This structure can be used on any platform that accommodates it. And when I say “a write-up or presentation,” I’m including video recordings and other media too: as long as it’s asynchronous, it works.
Using iteration posts has a number of benefits:
It creates a rhythm in the design work so that the designer can review feedback from each iteration and prepare for the next.
It makes decisions easy to revisit later, and the conversations around them remain available too.
It creates a record of how the design changed over time.
It might also make it simpler to collect and act on feedback depending on the tool.
Of course, these posts don’t mean that no other feedback approach should be used, just that iteration posts can serve as the primary rhythm for a remote design team. From there, other feedback techniques (such as live critique, pair designing, or inline comments) can build on top of that rhythm.
I don’t think there’s a standard format for iteration posts. However, there are a few high-level elements that make sense to include as a baseline:
The goal
The design
The list of changes
The questions
Each project is likely to have a goal, and hopefully it’s already been summarized in a single sentence somewhere else, such as the client brief, the product manager’s outline, or the project owner’s request. So I repeat it in every iteration post, literally copying and pasting it. The idea is to provide context and to restate what’s essential, so that each iteration post is complete in itself and there’s no need to hunt for information spread across multiple posts. If I want to know about the latest design, the most recent iteration post will have everything I need.
This copy-and-paste part introduces another relevant concept: alignment comes from repetition. Therefore, repeating information in posts is actually very effective at ensuring that everyone is on the same page.
The design is then the actual series of information-architecture outlines, diagrams, flows, maps, wireframes, screens, visuals, and any other kind of design work that’s been done. It’s any design object, to put it briefly. For the final stages of work, I prefer the term blueprint to emphasize that I’ll be showing full flows instead of individual screens to make it easier to understand the bigger picture.
Because it makes it easier to refer to the objects, it might also be helpful to have clear names on them. Write the post in a way that helps people understand the work. It’s not much different from creating a strong live presentation.
For an efficient discussion, you should also include a bullet list of the changes from the previous iteration to let people focus on what’s new, which can be especially useful for larger pieces of work where keeping track, iteration after iteration, could become a challenge.
Finally, as mentioned earlier, include the list of questions to guide the design critique. Writing them as a numbered list also makes it easy to refer to each question by its number.
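As an illustration, the baseline structure above can be sketched as a small template generator. This is only a sketch: the `IterationPost` class and its fields are hypothetical names invented for this example, not part of any design tool, and any format that keeps the four elements together works just as well.

```python
from dataclasses import dataclass, field

@dataclass
class IterationPost:
    """Hypothetical container for the four baseline elements of an iteration post."""
    goal: str                                       # one-sentence project goal, copied verbatim each time
    design: str                                     # link or reference to the design work itself
    changes: list = field(default_factory=list)     # what changed since the previous iteration
    questions: list = field(default_factory=list)   # questions to guide the critique

    def render(self) -> str:
        lines = [f"Goal: {self.goal}", f"Design: {self.design}", "", "Changes:"]
        lines += [f"- {c}" for c in self.changes]
        lines += ["", "Questions:"]
        # numbering the questions makes them easy to cite in replies ("re: question 2")
        lines += [f"{i}. {q}" for i, q in enumerate(self.questions, start=1)]
        return "\n".join(lines)
```

For example, `IterationPost(goal="Reduce sign-up drop-off", design="link to the i3 file", changes=["Simplified step 2"], questions=["Is it clear what the next action is?"]).render()` produces a post skeleton with the goal restated up top and the questions numbered.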
Not every iteration is the same. Earlier iterations don’t need to be as tightly focused—they can be more exploratory and experimental, maybe even breaking some of the design-language guidelines to see what’s possible. Later iterations then start converging on a solution and refining it until the design process is complete and the feature is ready.
I want to highlight that even if these iteration posts are written and conceived as checkpoints, by no means do they need to be exhaustive. A post might be a draft, just a concept to start a discussion, or it might be a cumulative list of every feature that was added over the course of each iteration until the full picture is achieved.
Over time, I also started using specific labels for incremental iterations: i1, i2, i3, and so on. Although this may seem like a minor labeling tip, it can be useful in many ways:
Unique—It’s a clear, unique marker. Within each project, one can quickly say, “This was discussed in i4,” and everyone knows where to go to review things.
Unassuming—It works like versions (such as v1, v2, and v3), but in contrast, versions create the impression of something that’s big, exhaustive, and complete. Iterations need to be able to be exploratory, incomplete, or partial.
Future proof—It solves the “final” naming problem that you can run into with versions. No more files titled “final final complete no-really-its-done.” Within each project, the highest number always represents the latest iteration.
The wording release candidate (RC) can be used to indicate when an iteration is considered done enough to be built upon, even if some areas still need refinement and, in turn, further iterations. For example: “with i8 we reached RC” or “i12 is an RC.”
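One small practical consequence of these numeric labels: the latest iteration should be found by comparing the numeric part, not by sorting the labels as plain text, since lexicographically “i10” sorts before “i2”. A minimal sketch (the helper name is hypothetical):

```python
def latest_iteration(labels):
    """Return the label with the highest iteration number, e.g. 'i12'.

    Plain string sorting would get this wrong ('i10' < 'i2' lexicographically),
    so we compare the numeric part of each label instead.
    """
    return max(labels, key=lambda label: int(label.lstrip("i")))
```

So `latest_iteration(["i1", "i2", "i10"])` correctly picks `"i10"`, where a naive alphabetical sort would have put `"i2"` last.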
The review
A design critique is typically a back-and-forth between two people, and that can be very productive, particularly in live, synchronous feedback. When we work asynchronously, though, it’s more effective to adopt a different strategy: a user-research mindset. Written feedback from teammates, stakeholders, or others can be treated as if it were the result of user interviews and surveys, and analyzed accordingly.
This shift has some significant advantages, making asynchronous feedback particularly effective, especially around these friction points:
It removes the pressure to reply to everyone.
It lessens the annoyance of swoop-by comments.
It lessens our personal stake.
The first friction point is feeling forced to respond to every comment. Sometimes we write the iteration post and get just a few replies from our team—it’s simple, and there isn’t much to worry about. Other times, though, some solutions require more in-depth discussion, and the number of replies can quickly increase, creating tension between being a good team player by replying to everyone and getting on with the next design iteration. This is especially true if the respondent is a stakeholder or someone directly involved in the project who we feel we need to answer. We need to accept that this pressure is absolutely normal, and it’s human nature to try to accommodate people we care about. Responding to every comment can work, but if we treat a design critique more like user research, we realize that we don’t need to respond to every comment—there are alternatives in asynchronous spaces:
One is to let the next iteration speak for itself. The design changes, we publish a follow-up iteration, and that is the response. You might tag all the people who were involved in the previous discussion, but even that’s a choice, not a requirement.
Another option is to respond politely to acknowledge each comment, such as “Understood, thank you,” “Good points—I’ll review,” or “Thanks, I’ll include these in the upcoming iteration.” In some cases, this could also be just a single top-level comment along the lines of “Thanks for all the feedback, everyone—the next iteration is coming soon!”
Another option is to provide a quick summary of the comments before moving on. Depending on your workflow, this can be particularly useful as it can provide a simplified checklist that you can then use for the next iteration.
The second friction point is the swoop-by comment: the kind of feedback that comes from a team member or someone outside the project who might not be aware of the context, constraints, decisions, or requirements, or of the discussions from earlier iterations. On their side, there’s something we can hope they might learn: to acknowledge that they’re doing this and to be more conscious about outlining where they’re coming from. Swoop-by comments frequently prompt the simple thought, “We’ve already discussed this,” and it can be frustrating to keep saying the same thing over and over.
Let’s begin by acknowledging again that there’s no need to reply to every comment. However, if responding to a previously litigated point might be helpful, a brief response with a link to the previous discussion for additional information is typically sufficient. Remember, alignment comes from repetition, so it’s okay to repeat things sometimes!
Swoop-by comments have two benefits: first, they might point out something that isn’t clear, and second, they might serve as a reference point for someone viewing the design for the first time. Sure, you’ll still be frustrated, but keeping this in mind might at least help in dealing with it.
The third friction point is the personal stake we might have in the design, which can make us feel defensive if the review turns into a discussion. Treating feedback as user research helps us create a healthy distance between the people giving us feedback and our ego (because yes, even if we don’t want to admit it, it’s there). And in the end, analyzing everything in aggregated form helps us better prioritize our work.
Always remember that while you need to listen to stakeholders, project owners, and specific expert advice, you don’t have to accept every piece of feedback. You should examine it and come up with a rationale for your choice, but sometimes “no” is the best answer.
As the designer leading the project, you’re in charge of that decision. In the end, everyone has their area of specialization, and the designer has the most background and knowledge to make the best choice. And by listening to the feedback that you’ve received, you’re making sure that it’s also the best and most balanced decision.
Thanks to Mike Shelton and Brie Anne Demkiw for their contributions to the initial draft of this article.
Feedback, in whatever form it takes and whatever it may be called, is one of the most valuable soft skills we have at our disposal. It helps us collaborate to improve our designs while developing our own abilities and perspectives.
Feedback is also one of the most underrated tools: by assuming that we’re already great at it, we settle, forgetting that it’s a skill that can be trained, grown, and improved. Bad feedback can lead to conflict on projects and lower morale, and, in the long term, undermine trust and teamwork. Quality feedback can be a transformative force.
Practice is absolutely a good way to improve, but the learning gets even faster when it’s paired with a good foundation that frames and focuses the exercise. What are some fundamental components of providing effective feedback? And how can feedback be adapted for remote and distributed work settings?
Written, sequential feedback has a long history online: code was written and discussed on mailing lists before open source became a standard. Today, engineers comment on pull requests, designers post in their favorite design tools, project managers and scrum masters exchange ideas on tickets, and so on.
Design critique is often the label used for the type of feedback that’s given to make our work better, jointly. It generally shares many principles with feedback in general, but it also has some differences.
The content
The content of the feedback serves as the foundation of every effective critique, so we need to start there. There are many models that you can use to structure your content. The one that I personally like best—because it’s clear and actionable—is this one from Lara Hogan.
This equation, which is typically used to give feedback to people, fits really well in a design critique too, because it ultimately addresses the main questions we care about: What? Where? Why? How? Imagine that you’re giving feedback on a feature that spans several screens, like an onboarding flow: there are some screens shown, a flow diagram, and an outline of the decisions made. You notice something that needs to be improved. If you keep the three components of the equation in mind, you’ll have a mental model that can help you become more precise and effective.
Here is a comment that might appear in such feedback, and it could seem fine at first glance because it appears to partially fulfill the requirements. But does it?
Not confident about the buttons’ styles and hierarchy—it feels off. Can you change them?
Observation in design feedback doesn’t just mean pointing out which part of the interface your feedback refers to; it also means offering a viewpoint that’s as specific as possible. Are you offering the user’s viewpoint? Your expert perspective? A business perspective? The project manager’s perspective? A first-time user’s perspective?
I anticipate that one of these two buttons will go forward and the other will go back when I see them.
Impact is about the why. Just pointing out a UI element might sometimes be enough if the issue is obvious, but more often than not, you should add an explanation of what you’re pointing out.
I anticipate that one of these two buttons will go forward and the other will go back when I see them. But this is the only screen where this happens, as before we just used a single button and an “×” to close. This seems to be breaking the consistency in the flow.
The question approach is meant to provide open guidance by eliciting critical thinking in the designer receiving the feedback. Notably, Lara’s equation provides a second approach: request, which instead provides guidance toward a specific solution. While that’s generally a viable option for feedback, I’ve found that the question approach typically leads to the best solutions in design critiques, because designers are generally more open to experimenting in an open-ended space.
The difference between the two can be exemplified with, for the question approach:
I anticipate that one of these two buttons will go forward and the other will go back when I see them. But this is the only screen where this happens, as before we just used a single button and an “×” to close. This seems to be breaking the consistency in the flow. Would it make sense to unify them?
Or, for the request approach:
I anticipate that one of these two buttons will go forward and the other will go back when I see them. But this is the only screen where this happens, as before we just used a single button and an “×” to close. This seems to be breaking the consistency in the flow. Let’s make sure that all screens have the same pair of forward and back buttons.
At this point, in some situations, it might be useful to add an extra why: why you consider the given suggestion to be better.
I anticipate that one of these two buttons will go forward and the other will go back when I see them. But this is the only screen where this happens, as before we just used a single button and an “×” to close. This seems to be breaking the consistency in the flow. Let’s make sure that all screens have the same two forward and back buttons so that users don’t get confused.
Choosing the question approach or the request approach can also at times be a matter of personal preference. A while back, when I was putting a lot of effort into improving my feedback, I did rounds of anonymous feedback and reviewed my feedback with other people. After a few rounds of this work and a year later, I got a positive response: my feedback came across as effective and grounded. Until I changed teams. Surprise surprise, my next round of reviews wasn’t very positive. The reason was that I had trained myself not to be prescriptive in my advice—the people I had previously worked with preferred the open-ended question format over the request style of suggestions. But one person on the new team preferred specific guidance. So I adapted my feedback for them to include requests.
One comment that I heard come up a few times is that this kind of feedback is quite long, and it doesn’t seem very efficient. Yes, but also no. Let’s explore both sides.
No, this kind of feedback is effective because the length is a byproduct of clarity, and giving this kind of feedback can provide precisely enough information for a sound fix. And if we zoom out, it can reduce future back-and-forth conversations and misunderstandings, improving the overall efficiency and effectiveness of collaboration beyond the single comment. Imagine that in the example above the feedback were instead just, “Let’s make sure that all screens have the same two forward and back buttons.” Since the designer receiving this feedback wouldn’t have much to go by, they might just implement the change. In later iterations, the interface might change or new features might be introduced—and maybe that change might not make sense anymore. Without the why, the designer might assume that the change was about consistency, but what if it wasn’t? There could now be an underlying concern that changing the buttons would be perceived as a regression.
Yes, this style of feedback is not always efficient, because comments don’t always need to be exhaustive—sometimes because certain changes are obvious (“The font used doesn’t follow our guidelines”), and sometimes because the team has so much internal knowledge that some of the whys are implied.
Therefore, the equation above is intended to serve as a mnemonic to reflect and enhance the practice rather than a strict template for feedback. Even after years of active work on my critiques, I still from time to time go back to this formula and reflect on whether what I just wrote is effective.
The tone
Well-grounded content is the foundation of feedback, but it’s not enough on its own. The soft skills of the person providing the critique can multiply the likelihood that the feedback will be well received and understood. It has been demonstrated that only positive feedback can lead to sustained change in people, and tone alone can determine whether content is rejected or welcomed.
Since our goal is to be understood and to maintain a positive working environment, tone is essential to work on. Over the years, I’ve tried to summarize the necessary soft skills with an equation that mirrors the one for content: timing + attitude + form = respectful feedback.
Respectful feedback comes across as grounded, solid, and constructive. It’s the kind of feedback that, whether it’s positive or negative, is perceived as useful and fair.
Timing describes the moment when the feedback occurs. To-the-point feedback doesn’t have much hope of being well received if it’s given at the wrong time. When a new feature’s entire high-level information architecture is about to go live, questioning it might still be relevant if it raises a significant blocker that no one saw, but such concerns are much more likely to have to wait for a later revision. So in general, attune your feedback to the stage of the project. Early iteration? Later iteration? Polishing work in progress? Each of these needs a different kind of feedback. The right timing will make it more likely that your feedback will be well received.
Attitude is the equivalent of intent, and in the context of person-to-person feedback, it can be related to radical candor. It entails checking, before writing, whether what we have in mind will actually help the person and improve the overall project. This might be a hard reflection at times, because maybe we don’t want to admit that we don’t really appreciate that person. It’s possible, and that’s okay—though hopefully it’s not the case. Acknowledging and owning that can help you compensate for it: how would I write this if I really cared about them? How can I avoid being passive-aggressive? How can I encourage constructive behavior?
Form is relevant especially in diverse and cross-cultural work environments, because great content, perfect timing, and the right attitude might not come across if the way we write creates misunderstandings. There could be many reasons for this: certain words may trigger specific reactions, nonnative speakers may not grasp all the nuances of some sentences, and our brains may simply work differently and perceive the world differently—neurodiversity must be taken into account. Whatever the reason, it’s important to review not just what we write but how.
A few years back, I was asking for feedback on how I give feedback. I got some sound advice, but I also got a surprising comment. They pointed out that when I wrote “Oh, […],” I made them feel stupid. That wasn’t my intention at all! I felt really bad, realizing that I had been giving them feedback for months, and every time I might have made them feel stupid. I was horrified…but also thankful. I quickly added “oh” to my list of replaced words (your choice of aText, TextExpander, or others), so that it was instantly deleted whenever I typed it.
Something to highlight, because it’s quite frequent—especially in teams with a strong group spirit—is that people tend to beat around the bush. A positive attitude doesn’t mean watering down your criticism; it just means delivering it in a respectful and constructive manner, whether it’s positive or negative. The nicest thing that you can do for someone is to help them grow.
Giving feedback in written form has a great advantage: it can be reviewed by another person who isn’t directly involved, which can help reduce or remove any bias that might be there. When I shared a comment and asked someone I trusted, “How does this sound?” “How can I do it better?” or even “How would you have written it?” I discovered the best, most insightful moments for me: seeing the two versions side by side.
The format
Asynchronous feedback also has a significant inherent benefit: we can devote more time to making sure that our suggestions meet two main objectives—clarity of communication and actionability.
Let’s imagine that someone shared a design iteration for a project. You are reviewing it and leaving a comment. There are many ways to accomplish this, and context is of course important, but let’s try to think about some things that might be worthwhile to take into account.
In terms of clarity, start by grounding the critique that you’re about to give by providing context. This includes specifically describing where you’re coming from: do you have a thorough understanding of the project, or is this your first encounter with it? Are you coming from a high-level perspective, or are you figuring out the details? Are there regressions? Which user’s point of view do you consider when providing feedback? Is the design iteration at a point where it would be okay to ship this, or are there major things that need to be addressed first?
Even if you’re giving feedback to a team that already has some background information on the project, providing context is helpful. And context is absolutely essential when giving cross-team feedback. If I were to review a design that might be indirectly related to my work, and if I had no knowledge about how the project arrived at that point, I would say so, highlighting my take as external.
We frequently concentrate on the negatives and attempt to list every improvement that could be made. That’s of course important, but it’s just as important—if not more—to focus on the positives, especially if you saw progress from the previous iteration. Although this may seem superfluous, it’s important to remember that design has a number of possible solutions to each problem. So pointing out that the design solution that was chosen is good and explaining why it’s good has two major benefits: it confirms that the approach taken was solid, and it helps to ground your negative feedback. In the longer term, sharing positive feedback can help prevent regressions on things that are going well because those things will have been highlighted as important. Positive feedback can also help, as an added bonus, prevent impostor syndrome.
There’s one powerful approach that combines both context and a focus on the positives: frame how the design is better than the status quo (compared to a previous iteration, competitors, or benchmarks) and why, and then on that foundation, you can add what could be improved. There is a significant difference between a critique of a design that is already in good shape and one that isn’t quite there yet.
Another way to improve your feedback is to depersonalize it: the comments should always be about the work, never about the person who made it. It’s “This button isn’t well aligned” versus “You haven’t aligned this button well.” This can be fixed very quickly by reviewing your writing just before sending.
In terms of actionability, one of the best approaches to help the designer who’s reading through your feedback is to split it into bullet points or paragraphs, which are easier to review and analyze one by one. You might also think about breaking up the feedback into sections or even across multiple comments if it is longer. Of course, adding screenshots or signifying markers of the specific part of the interface you’re referring to can also be especially useful.
One approach that I’ve personally used effectively in some contexts is to enhance the bullet points with four markers using emojis. A red square 🟥 indicates something I consider blocking, a yellow diamond 🔶 indicates something that should be changed, and a green circle 🟢 indicates something that’s confirmed and good. I also use a blue spiral 🌀 for anything I’m not sure about, an exploration, an open alternative, or just a note. However, I’d only use this strategy on teams where I’ve already established a high level of trust, because delivering a lot of red squares might turn out to be quite demoralizing, and I’d have to reframe how I communicate that.
Let’s see how this would work by reusing the example that we used earlier as the first bullet point in this list:
🔶 Navigation—I anticipate that one of these two buttons will go forward and the other will go back when I see them. But this is the only screen where this happens, as before we just used a single button and an “×” to close. This seems to be breaking the consistency in the flow. Let’s make sure that all screens have the same two forward and back buttons so that users don’t get confused.
🟢 Overall—I think the page is solid, and this is good enough to be our release candidate for a version 1.0.
🟢 Metrics—Good improvement in the buttons on the metrics area; the improved contrast and new focus style make them more accessible.
🌀 Button style—I’m not sure about using the green accent in this context: green is typically seen as a confirmation color, so it conveys a positive action. Do we need to explore a different color?
🔶 Tiles—It seems to me that the tiles should use the Subtitle 2 style rather than the Subtitle 1 style, given the number of items on the page and the overall page hierarchy. This would keep the visual hierarchy more consistent.
🌀 Background—Using a light texture works well, but I wonder whether it adds too much noise in this kind of page. What is the purpose of using it?
What about giving feedback directly in Figma or another design tool that allows in-place comments? These can be harder to use because they conceal discussions and are more difficult to follow, but in the right setting, they can be very effective. Just make sure that each comment is separate, so that it’s easier to match each discussion to a single task—similar to the idea of splitting mentioned above.
One final note: say the obvious. Sometimes we might feel that something is clearly right or wrong, and we don’t say it. Or sometimes we might have a doubt that we don’t express because the question might sound stupid. Say it, that’s fine. You might have to reword it a little bit to make the reader feel more comfortable, but don’t hold it back. Good feedback is transparent, even when it may be obvious.
Asynchronous feedback also has the benefit of automatically documenting decisions in writing. Especially in large projects, “Why did we do this?” is a question that arises from time to time, and there’s nothing better than open, transparent discussions that can be reviewed at any time. For this reason, I recommend using software that saves these discussions without hiding them once they’re resolved.
Content, tone, and format. Although each of these offers a useful model, improving all eight focus points at once—observation, impact, question, timing, attitude, form, clarity, and actionability—is a lot of work. One effective approach is to take them one by one: first identify the area where you’re weakest (either from your own perspective or from others’ feedback) and start there. Then move on to the next, and the next, and so on. At first you’ll have to put in extra time for every piece of feedback that you give, but after a while, it’ll become second nature, and your impact on the work will multiply.
Thanks to Brie Anne Demkiw and Mike Shelton for reviewing the first draft of this article.
Although I’m not certain when I first heard this statement, it has stuck with me over the years. How do you generate solutions for scenarios you can’t conceive of? Or create layouts that work on devices that have not yet been created?
Flash, Photoshop, and fixed widths
When I first started designing websites, my go-to tool was Photoshop. I would set about creating a layout, eventually slicing content into a 960px canvas. The development phase was about achieving pixel-perfect precision using fixed widths, fixed heights, and absolute positioning.
All of this was changed by Ethan Marcotte’s talk at An Event Apart and his subsequent article in A List Apart in 2010. I was sold on responsive design as soon as I heard about it, but I was also terrified. The pixel-perfect mockups full of exact values that I had formerly prided myself on producing were no longer good enough.
My first encounter with responsive design didn’t help my fear. My second project was to take an existing fixed-width website and make it responsive. I quickly realized that you can’t just tack responsiveness on at the end of a project. To create flexible designs, you need to plan throughout the design phase.
A new way to design
Making content accessible on all devices has always been the goal when designing responsive or fluid websites. This relies on percentage-based layouts, which I initially achieved with plain CSS and float-based grids:
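A minimal sketch of such a percentage-based float grid might look like this (class names and values are illustrative, not from the original):

```css
/* A fluid two-column layout using floats and percentages.
   Widths derived from a 960px reference design. */
.container {
  width: 90%;          /* fluid outer wrapper */
  max-width: 960px;
  margin: 0 auto;
}
.column-main {
  float: left;
  width: 62.5%;        /* 600px / 960px expressed as a percentage */
}
.column-sidebar {
  float: left;
  width: 31.25%;       /* 300px / 960px */
  margin-left: 6.25%;  /* 60px gutter / 960px */
}
```

The columns scale with the browser window instead of being locked to a fixed pixel canvas.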
The next ingredient for flexible design is media queries. Without them, content would shrink to fit the available space regardless of whether it remained readable. (The exact opposite issue appeared with the introduction of a mobile-first approach.)
Media queries prevented this by allowing us to add breakpoints where the design could adapt. Like most people, I started out with three breakpoints: one for desktop, one for tablets, and one for mobile. Over the years, I added more and more for phablets, wide screens, and so on.
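Those three classic breakpoints can be sketched like this (the pixel values are illustrative; every team picked slightly different ones):

```css
/* Desktop-first layout with two breakpoints below it. */
.column-main,
.column-sidebar {
  float: left;
  width: 50%;
}

@media (max-width: 768px) {   /* tablets: rebalance the columns */
  .column-main { width: 62.5%; }
  .column-sidebar { width: 37.5%; }
}

@media (max-width: 480px) {   /* phones: stack the columns */
  .column-main,
  .column-sidebar {
    float: none;
    width: 100%;
  }
}
```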
For years, I happily worked this way and improved both my design and front-end skills in the process. The only problem I encountered was making changes to content, since with our Sass grid system in place, there was no way for the site owners to add content without amending the markup—something a small business owner might struggle with. This is because each row in the grid was defined using a div as a container. Adding content meant creating new row markup, which requires a level of HTML knowledge.
Row markup like this was a mainstay of early responsive design, present in all the frequently used frameworks like Bootstrap and Skeleton.
(Demo: a grid row of seven equal columns, labeled “1 of 7” through “7 of 7.”)
Another difficulty arose as I moved from a design agency building websites for small- to medium-sized companies to larger in-house teams where I worked across a collection of related sites. In those roles, I began to work much more with reusable components.
Our reliance on media queries resulted in components that were tied to common screen sizes. If the goal of component libraries is reuse, then this is a real problem, because you can only use these components if the devices you’re designing for correspond to the viewport sizes used in the pattern library—in the process never really hitting that “devices that don’t yet exist” goal.
Then there’s the problem of space. Media queries allow components to adapt based on the viewport size, but what if I put a component into a sidebar, like in the figure below?
Container queries: our savior or a false dawn?
Container queries have long been touted as an improvement upon media queries, but at the time of writing they are unsupported in most browsers. JavaScript workarounds exist, but they can introduce dependencies and compatibility issues. The basic theory underlying container queries is that elements should change based on the size of their parent container and not the viewport width, as seen in the following illustrations.
One of the biggest arguments in favor of container queries is that they help us create components or design patterns that are truly reusable because they can be picked up and placed anywhere in a layout. This is an important step in moving toward a form of component-based design that works at any size on any device.
In other words, responsive elements are meant to replace responsive layouts.
Container queries will help us move from designing pages that respond to the browser or device size to designing components that can be placed in a sidebar or in the main content, and respond accordingly.
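A sketch of what this looks like in code, following the draft CSS Containment specification (the syntax may change before it ships, and the class names are illustrative):

```css
/* Any element the card is dropped into becomes its size reference. */
.sidebar,
.main-content {
  container-type: inline-size;
}

/* Stacked by default, for narrow containers. */
.card {
  display: flex;
  flex-direction: column;
}

/* Side-by-side whenever the *container*, not the viewport,
   is at least 400px wide. */
@container (min-width: 400px) {
  .card {
    flex-direction: row;
  }
}
```

The same `.card` markup now adapts differently in the sidebar and in the main content area, with no knowledge of the viewport.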
My issue is that layout is still used to determine when a design needs to adapt. This approach will always be restrictive, as we will still need pre-defined breakpoints. For this reason, my main question with container queries is, How would we decide when to change the CSS used by a component?
A component library that is disconnected from context and real content is probably not the best place to make that choice.
As the diagrams below illustrate, we can use container queries to create designs for specific container widths, but what if I want to change the design based on the image size or ratio?
In this example, the dimensions of the container are not what should dictate the design, rather, the image is.
Without reliable cross-browser support for them, it’s difficult to say for certain whether container queries will be successful. Responsive component libraries would definitely evolve how we design and would improve the possibilities for reuse and design at scale. However, we might always need to modify these elements to fit our content.
CSS is changing
Whilst the container query debate rumbles on, there have been numerous advances in CSS that change the way we think about design. The days of fixed-width elements measured in pixels and floated div elements used to cobble layouts together are long gone, consigned to history along with table layouts. Flexbox and CSS Grid have revolutionized layouts for the web. We can now create elements that wrap onto new rows when they run out of space, not when the device changes.
The repeat() function paired with auto-fit or auto-fill allows us to specify how much space each column should use while leaving it up to the browser to decide when to spill the columns onto a new line. Similar things can be achieved with Flexbox, as elements can wrap over multiple rows and “flex” to fill available space.
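The pattern described above can be expressed in a single declaration (class name illustrative):

```css
/* Each column is at least 200px wide; the browser decides how many
   columns fit on a row and wraps the rest onto new rows, with no
   breakpoints involved. */
.grid {
  display: grid;
  grid-template-columns: repeat(auto-fit, minmax(200px, 1fr));
  gap: 1rem;
}
```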
The biggest benefit of all of this is that you don’t have to wrap elements in container rows. Without rows, content isn’t tied to page markup in quite the same way, allowing for removals or additions of content without additional development.
This is a big step forward when it comes to creating designs that allow for evolving content, but the real game changer for flexible designs is CSS Subgrid.
Remember the days of crafting perfectly aligned interfaces, only for the customer to add an unbelievably long header almost as soon as they’re given CMS access, like the illustration below?
Subgrid allows elements to respond to adjustments in their own content and in the content of sibling elements, helping us create designs more resilient to change.
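A sketch of that idea for a row of cards whose headers, bodies, and footers should stay aligned even when one header runs long (class names are illustrative):

```css
/* Parent grid: wrapping columns of cards. */
.cards {
  display: grid;
  grid-template-columns: repeat(auto-fit, minmax(200px, 1fr));
  gap: 1rem;
}

/* Each card spans three of the parent's rows and shares those row
   tracks via subgrid, so a long header in one card stretches the
   header row of every sibling. */
@supports (grid-template-rows: subgrid) {
  .card {
    display: grid;
    grid-row: span 3;            /* header, body, footer */
    grid-template-rows: subgrid;
  }
}
```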
CSS Grid allows us to separate layout and content, thereby enabling flexible designs, while Subgrid allows us to create designs that can adapt to changing content. Subgrid styles can be implemented behind an @supports feature query, even though Firefox is the only browser that supports subgrid at the time of writing.
Intrinsic layouts
I’d be remiss not to mention intrinsic layouts, a term used by Jen Simmons to describe a mix of contemporary and traditional CSS features to create layouts that respond to available space.
Responsive layouts have flexible columns using percentages. Intrinsic layouts, on the other hand, use the fr unit to create flexible columns that won’t ever shrink so much that they render the content illegible.
The fr unit is a statement that says, “I want you to distribute the extra space in this way, but…don’t ever make it smaller than the content that is inside of it.”
—Jen Simmons, “Designing Intrinsic Layouts”
Intrinsic layouts can also make use of a mix of fixed and flexible units, letting the content choose how much space it occupies.
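One way to sketch such a mix (the track sizes are arbitrary examples): a fixed sidebar, a flexible main column that never shrinks below its content, and a final column sized purely by its content.

```css
.layout {
  display: grid;
  grid-template-columns:
    200px                     /* fixed sidebar */
    minmax(min-content, 1fr)  /* flexible, but never smaller than its content */
    max-content;              /* sized entirely by its content */
  gap: 2rem;
}
```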
What makes intrinsic design stand out is that it not only creates designs that can withstand future devices but also helps scale design without losing flexibility. Components and patterns can be lifted and reused without needing the same breakpoints or the same amount of content as in their original implementation.
We can now create designs that adapt to the space they have, the content within them, and the content around them. We can create responsive components without relying on container queries using an intrinsic approach.
Another 2010 moment?
In my view, this intrinsic approach should be every bit as groundbreaking as responsive web design was ten years ago, another “everything changed” moment for the web.
But adoption doesn’t seem to be moving quite as fast. I haven’t yet had the same career-changing moment I had with responsive design, despite the widely shared and brilliant talk that brought intrinsic design to my attention.
One possible explanation for that might be that I now work for a sizable company, which is significantly different from the role I held at a design agency in 2010. In my agency days, every new project was a clean slate, a chance to try something new. Nowadays, projects use existing tools and frameworks and are often improvements to existing websites with an existing codebase.
Another possibility is that I now feel more prepared for change. In 2010, I was new to design in general; the shift was frightening and required a lot of learning. And an intrinsic approach isn’t exactly new anyway; it’s about applying existing skills and CSS knowledge in a different way.
You can’t framework your way out of a content problem
Another reason for the slightly slower adoption of intrinsic design could be the lack of quick-fix framework solutions available to kick-start the change.
Ten years ago, responsive grid systems were everywhere. With a framework like Bootstrap or Skeleton, you had a responsive design template at your fingertips.
Intrinsic design and frameworks don’t work together quite as well, because a prescribed set of units is a hindrance when creating layout templates. The beauty of intrinsic design lies in combining different units and experimenting with techniques to find what’s best for your content.
And then there are design tools. At some point in our careers, we probably all used Photoshop templates for desktop, tablet, and mobile devices, dropping designs in to demonstrate how the site would look at all three sizes.
How do you do that now, with each component responding to content and layouts flexing as and when they need to? Personally, I’m a big fan of this kind of design in the browser.
The debate about “whether designers should code” is another that has rumbled on for years. When designing a digital product, we should, at the very least, design for a best- and worst-case scenario when it comes to content. It’s not ideal to implement this in a graphics-based software package. In code, we can add longer sentences, more radio buttons, and extra tabs, and watch in real time as the design adapts. Does it continue to function? Is the design too reliant on the current content?
Personally, I look forward to the day intrinsic design is the standard for design, when a design component can be truly flexible and adapt to both its space and content with no reliance on device or container dimensions.
Content should come first
Content is not constant. To design for the unanticipated or unexpected, we must account for changes in content, as in our earlier Subgrid card illustration, where the cards adjusted to changes both in their own content and in that of their sibling elements.
Thankfully, there’s more to CSS than layout, and plenty of properties and values can help us put content first. Subgrid and pseudo-elements like ::first-line and ::first-letter help to separate design from markup so we can create designs that allow for changes.
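For example, a small sketch (the selectors are illustrative): a drop cap and an emphasized opening line applied purely in CSS, so the effect survives any edit to the underlying copy.

```css
article p:first-of-type::first-letter {
  float: left;
  font-size: 3em;
  line-height: 1;
  padding-inline-end: 0.1em;
}

article p:first-of-type::first-line {
  font-variant: small-caps;
}
```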
Much bigger additions to CSS include logical properties, which change the way we construct designs by using logical dimensions (start and end) instead of physical ones (left and right).
This flexibility allows for directional changes according to content, a common requirement when we need to present content in multiple languages. In the past, this was often achieved with Sass mixins, but it was typically limited to switching from left-to-right to right-to-left orientation.
In the Sass version, directional variables had to be set up front, throughout the codebase.
However, we now have native logical properties, removing the reliance on both Sass (or a similar tool) and the pre-planning that necessitated using variables throughout a codebase. These properties also begin to break apart the tight coupling between a design and strict physical dimensions, creating more flexibility for changes in language and direction.
margin-block-end: 10px;
padding-block-start: 10px;
There are also native start and end values for properties like text-align, which means we can replace text-align: right with text-align: start.
Like the earlier examples, these properties help us build designs that aren’t constrained to one language; the design will reflect the content’s needs.
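As an illustration, here is a physical rule set and its logical equivalent side by side (the class name and values are arbitrary). The logical version adapts automatically when the writing direction changes.

```css
/* Physical properties tie the design to left-to-right text: */
.pull-quote {
  margin-left: 2rem;
  border-left: 2px solid currentColor;
  text-align: left;
}

/* Logical equivalents flip automatically for right-to-left languages: */
.pull-quote {
  margin-inline-start: 2rem;
  border-inline-start: 2px solid currentColor;
  text-align: start;
}
```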
Fluid and fixed
We briefly covered the power of combining fixed widths with fluid widths with intrinsic layouts. The min() and max() functions are a similar concept, allowing you to specify a fixed value with a flexible alternative.
For min(), this means setting a fluid value alongside a fixed maximum.
.element {
  width: min(50%, 300px);
}
The element above will be 50% of its container, as long as the element’s width doesn’t exceed 300px.
For max(), we can set a flexible value alongside a fixed minimum.
.element {
  width: max(50%, 300px);
}
Now the element will be 50% of its container, as long as the element’s width is at least 300px. This means we can set limits while letting content react to the available space.
The clamp() function builds on this by allowing us to set a preferred value with a third parameter. Now we can allow the element to shrink or grow if it needs to without getting to a point where it becomes unusable.
.element {
  width: clamp(300px, 50%, 600px);
}
This time, the element’s width will be a preferred value of 50% of its container, but it will never shrink below 300px or grow beyond 600px.
With these techniques, we have a content-first approach to responsive design. We can separate content from markup, meaning changes made by editors won’t break the design. By planning for unanticipated changes in language or direction, we can begin to future-proof designs. And we can increase flexibility by setting desired dimensions alongside flexible alternatives, allowing for more or less content to be displayed correctly.
First, the situation
Thanks to what we’ve discussed so far, we can handle device flexibility by changing our approach: designing around content and space instead of catering to devices. But what about that last bit of Jeffrey Zeldman’s quote, “…situations you haven’t imagined”?
It’s a lot different to design for someone using a mobile phone and walking through a crowded street in glaring sunshine than it is for someone using a desktop computer. Situations and environments are hard to plan for or predict because they change as people react to their own unique challenges and tasks.
This is why providing choice is so crucial. One size never fits all, so we need to design for multiple scenarios to create equal experiences for all our users.
Thankfully, there is a lot we can do to provide choice.
Designing responsibly
“There are parts of the world where mobile data is prohibitively expensive, and where there is little or no broadband infrastructure.”
One of the biggest assumptions we make is that the people interacting with our designs have a good Wi-Fi connection and a wide-screen monitor. But our users may be commuters on smaller mobile devices, experiencing drops in connectivity while traveling on trains or other modes of transportation. There is nothing more frustrating than a web page that won’t load, but there are ways we can help users use less data or deal with sporadic connectivity.
The srcset attribute allows the browser to decide which image file to serve. This means we can create smaller, cropped images to display on mobile devices, in turn using less bandwidth and less data.
Preloading can also help us think about how and when media is downloaded. It can be used to tell the browser about critical assets that should be downloaded with high priority, improving perceived performance and the user experience.
There’s also native lazy loading, which indicates assets that should only be downloaded when they are needed.
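Sketched together in markup (the filenames here are placeholders, not real assets), the three techniques might look like this:

```html
<!-- srcset: let the browser choose the smallest suitable image. -->
<img
  src="hero-small.jpg"
  srcset="hero-small.jpg 480w, hero-large.jpg 1200w"
  sizes="(max-width: 600px) 480px, 1200px"
  alt="A hero image" />

<!-- preload: flag a critical asset for high-priority download. -->
<link rel="preload" href="masthead.woff2" as="font" type="font/woff2" crossorigin />

<!-- Native lazy loading: defer offscreen images until needed. -->
<img src="footer-map.jpg" loading="lazy" alt="A map of our location" />
```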
With srcset, preload, and lazy loading, we can start to tailor a user’s experience based on the situation they find themselves in. What none of this does, however, is allow the user themselves to decide what they want downloaded, as the decision is usually the browser’s to make.
So how can we put users in control?
The return of media queries
Media queries have always been about much more than device sizes. They allow content to adapt to different situations, with screen size being just one of them.
We’ve long been able to check for media types like print and speech and features such as hover, resolution, and color. These checks let us provide options suited to more than one scenario; it’s less about one-size-fits-all and more about serving adaptable content.
The Media Queries Level 5 spec is still being developed as of this writing. It introduces some really exciting queries that in the future will help us design for multiple other unexpected situations.
For instance, the light-level media feature would let us alter styles depending on whether a user is in the dark or in bright sunlight. Paired with custom properties, features like these would allow us to quickly create designs or themes for specific environments.
Another key feature of the Level 5 spec is personalization. Instead of creating designs that are the same for everyone, users can choose what works for them. This is achieved by using features like prefers-reduced-data, prefers-color-scheme, and prefers-reduced-motion, the latter two of which already enjoy broad browser support. These features tap into preferences set via the operating system or browser so people don’t have to spend time making each site they visit more usable.
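A brief sketch of honoring two of these preferences (the custom property names are arbitrary):

```css
/* Respect an OS-level request for less motion. */
@media (prefers-reduced-motion: reduce) {
  * {
    animation-duration: 0.01ms !important;
    transition-duration: 0.01ms !important;
  }
}

/* Respect an OS-level preference for a dark color scheme. */
@media (prefers-color-scheme: dark) {
  :root {
    --background: #121212;
    --text: #eeeeee;
  }
}
```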
Media queries like this go beyond choices made by a browser to grant more control to the user.
Expect the unexpected
In the end, we should always anticipate that things will change. Devices in particular change faster than we can keep up, with foldable screens already on the market.
We can’t keep designing for devices in this constantly changing landscape, but we can design for content. By putting content first and allowing it to adapt to whatever space surrounds it, we can create more robust, flexible designs that increase the longevity of our products.
A lot of the CSS discussed here is about moving away from device-based layouts and putting content at the heart of design. There is so much more we can do to adopt a more intrinsic approach, from responsive components to fixed and fluid units. Even better, we can test these techniques during the design phase by designing in-browser and watching how our designs adapt in real time.
When it comes to unexpected circumstances, we need to make sure our products are accessible whenever and wherever they’re needed. We can move closer to achieving this by involving users in our design decisions, by creating choice via browsers, and by giving control to our users with preference-based media queries.
Good design for the unexpected should allow for change, provide choice, and give control to those we serve: our users themselves.
We’ve been conversing for many thousands of years. Whether to convey information, conduct transactions, or just to check in on one another, people have yammered away, chattering and gesticulating, through spoken conversation for countless generations. Only recently have we begun to write down our conversations, and only more recently still have we outsourced them to the computer, a machine that shows a much greater affinity for the written word than for the vernacular rigors of spoken language.
Computers have trouble because, of spoken and written language, speech is the more primal. Machines must wrestle with the complexity of human speech: the disfluencies and pauses, the gestures and body language, and the variations in word choice and spoken dialect that can stymie even the most skillfully crafted human-computer interaction. Between humans, spoken language also has the privilege of face-to-face contact, where we can readily interpret nonverbal social cues.
In contrast, written language develops its own fossil record of dated terms and phrases, as we commit to recording and keeping usages long after they fall out of spoken communication (for example, the salutation “To whom it may concern”). Because it tends to be more consistent, polished, and formal, written text is fundamentally much easier for machines to parse and understand.
Spoken language lacks this luxury. Besides the nonverbal cues that decorate conversations with emphasis and emotional context, there are also verbal cues and vocal behaviors that modulate conversation in nuanced ways: how something is said, not what is said. Our spoken language reaches far beyond what the written word can ever deliver, whether it’s rapid-fire, low-pitched, high-decibel, sarcastic, stilted, or sighing. So when it comes to voice interfaces—the machines we conduct spoken conversations with—we face exciting challenges as designers and content strategists.
Voice-to-voice interactions
We interact with voice interfaces for a variety of reasons, but according to Michael McTear, Zoraida Callejas, and David Griol in The Conversational Interface, those motivations largely mirror the reasons we initiate conversations with other people. We typically strike up a conversation because:
we need something done (such as a transaction),
we want to know something (information of some kind), or
we are social beings and want someone to talk to (conversation for conversation’s sake).
These three categories, which I refer to as transactional, informational, and prosocial, also apply to virtually every voice interaction: a single conversation that begins with the voice interface’s first greeting and ends when the user exits the interface. Note here that a conversation in our human sense, a chat between people that leads to some result and lasts an arbitrary length of time, could encompass multiple transactional, informational, and prosocial voice interactions in succession. In other words, a voice interaction is a conversation, but a conversation is not necessarily a single voice interaction.
Purely prosocial conversations are more gimmicky than captivating in most voice interfaces, because machines don’t yet have the capacity to really want to know how we’re doing and to do the sort of glad-handing humans crave. There’s also some debate as to whether users even want the sort of organic human conversations that begin with a prosocial phase before transitioning to other types. In fact, in Voice User Interface Design, Michael Cohen, James Giangola, and Jennifer Balogh recommend sticking to users’ expectations by mimicking how they interact with other voice interfaces rather than trying too hard to be human, potentially alienating them in the process.
That leaves two types of conversations we can have with one another that a voice interface can also easily have with us: a transactional voice interaction that gets something done, and an informational voice interaction that teaches us something new.
Transactional voice interactions
Unless you’re tapping buttons on a food delivery app, you’re typically having a conversation, and therefore a voice interaction, when you order a Hawaiian pizza with extra pineapple. Even when we walk up to the counter and place an order, the conversation quickly pivots from an initial smattering of neighborly small talk to the real mission at hand: ordering a pizza (generously topped with pineapple, as it should be).
Alison: Hey, how are things going?
Burhan: Hi, welcome to Crust Deluxe! It’s chilly outside. How can I help you?
Alison: Can I get a Hawaiian pizza with extra pineapple?
Burhan: Sure, what size?
Alison: Large.
Burhan: Anything else?
Alison: No, that’s it.
Burhan: Something to drink?
Alison: I’ll have a bottle of Coke.
Burhan: You got it. That’ll be about $15, and it’ll be ready in fifteen minutes.
Each progressive disclosure in this transactional conversation reveals more and more of the desired outcome of the transaction: a service rendered or a product delivered. Transactional conversations are direct, concise, and economical with words; they quickly dispense with pleasantries.
Informational voice interactions
Meanwhile, some conversations are primarily about obtaining information. Alison might visit Crust Deluxe with the intention of placing an order, but she might also leave without a pizza at all; she might be just as interested in whether they serve halal or kosher dishes, gluten-free options, or something else. Though there’s another prosocial mini-conversation at the beginning to establish politeness, we’re after much more here.
Alison: Hey, how are things going?
Burhan: Hi, welcome to Crust Deluxe! It’s chilly outside. How can I help you?
Alison: Can I ask a few questions?
Burhan: Of course! Go right ahead.
Alison: Do you have any menu items that are halal?
Burhan: Absolutely! On request, we can make any pie halal. We also have lots of vegetarian, ovo-lacto, and vegan options. Are you considering any additional dietary restrictions?
Alison: What about gluten-free pizzas?
Burhan: For both our deep-dish and thin-crust pizzas, we can definitely make a gluten-free crust for you. Anything else I can answer for you?
Alison: That’s it for now. Good to know. Thank you!
Burhan: Anytime, come back soon!
This dialogue is quite different. Here, the goal is to get a certain set of facts. Informational conversations are research expeditions to gather data, news, or facts, or investigative quests for the truth. Informational voice interactions may be more long-winded than transactional conversations by necessity: responses are typically longer, more in-depth, and carefully communicated so that the customer understands the key takeaways.
Voice interfaces
Voice interfaces, in essence, use speech to help users accomplish their goals. But just because an interface has a voice component doesn’t mean that every user interaction with it is mediated through voice. We’re most concerned here with pure voice interfaces, which rely entirely on spoken conversation and lack any visual component. Pure voice interfaces are much more nuanced and challenging than multimodal voice interfaces, which can lean on visual components like screens as crutches.
Though voice interfaces have long been integral to the imagined future of humanity in science fiction, only recently have those lofty visions become fully realized in genuine voice interfaces.
IVR (interactive voice response) systems
Though written conversational interfaces have been fixtures of computing for many decades, voice interfaces first emerged in the early 1990s with text-to-speech (TTS) dictation programs that recited written text aloud, as well as speech-enabled in-car systems that gave directions to a user-provided address. But the first real voice interfaces we could actually speak with arrived in the form of interactive voice response (IVR) systems, developed as an alternative to overburdened customer service representatives.
IVR systems allowed organizations to reduce their reliance on call centers but soon became notorious for their clunkiness. Most familiar to us from calls to airlines and hotel chains, these systems were primarily intended as metaphorical switchboards directing customers to a real phone agent (“Say ‘Reservations’ to book a flight or check an itinerary”). Despite their functional issues and users’ frustration at being unable to speak to an actual human right away, IVR systems proliferated in the early 1990s across a variety of industries.
IVR systems are also notorious for conversations far less scintillating than those we’re used to in real life (or even in science fiction): they’re extremely repetitive and monotonous.
Screen readers
Accompanying the development of IVR systems was the screen reader, a program that converts visual information into synthesized speech. For Blind or visually impaired website users, it’s the predominant means of interacting with text, multimedia, and form elements. Screen readers are perhaps the closest thing we have today to an out-of-the-box implementation of content delivered through voice.
Among the first screen readers known by that moniker was the Screen Reader for the BBC Micro and NEC Portable, developed by the Research Centre for the Education of the Visually Handicapped (RCEVH) at the University of Birmingham in 1986. In the same year, Jim Thatcher created the first IBM Screen Reader for text-based computers, later recreated for computers with graphical user interfaces (GUIs).
With the rapid growth of the web in the 1990s, the demand for accessible tools for websites exploded. With the introduction of semantic HTML and especially ARIA roles in 2008, screen readers began facilitating speedy interactions with web pages, ostensibly allowing disabled users to traverse the page as an aural and temporal space rather than a visual and physical one. In other words, as Aaron Gustafson writes in A List Apart, screen readers for the web “provide mechanisms that translate visual design constructs” such as proximity and proportion “into useful information. At least they do when documents are authored thoughtfully.”
But screen readers have a big drawback: though incredibly instructive for voice interface designers, they’re challenging to use and relentlessly verbose. The visual structures of websites and web navigation don’t translate well to screen readers, sometimes resulting in unwieldy pronouncements that name every manipulable HTML element and announce every formatting change. For many screen reader users, working with web-based interfaces is a cognitive burden.
In Wired, accessibility advocate and voice engineer Chris Maury considers why the screen reader experience is ill-suited to users relying on voice:
From the beginning, I hated the way screen readers work. Why are they designed the way they are? It makes no sense to present information visually and only then translate that into audio. All of the time and energy that goes into creating the perfect user experience for an app is wasted, or even worse, ends up adversely impacting the experience for blind users.
In many cases, well-designed voice interfaces can speed users to their destination better than long-winded screen reader monologues. After all, users of the visual interface have the advantage of freely scurrying around the viewport to find information, ignoring areas that are unimportant to them. Blind users, meanwhile, are obligated to listen to every utterance synthesized into speech and therefore prize brevity and efficiency. Users with disabilities who have long had no choice but to use clumsy screen readers might find that voice interfaces, especially more contemporary voice assistants, provide a more streamlined experience.
Voice assistants
Many of us immediately associate voice assistants, the popular subset of voice interfaces now found in living rooms, smart homes, and offices, with Star Trek and Majel Barrett’s voice as the omniscient computer. Voice assistants are akin to personal concierges that can answer questions, schedule appointments, conduct searches, and perform other common day-to-day tasks. And because of their assistive potential, they are quickly gaining more and more attention from accessibility advocates.
Before the earliest IVR systems found success in the enterprise, Apple published a demonstration video in 1987 depicting the Knowledge Navigator, a voice assistant that could transcribe spoken words and recognize human speech to a great degree of accuracy. Then, in 2001, Tim Berners-Lee and others shared their vision of a “semantic web agent” that would carry out routine tasks like “checking calendars, making appointments, and finding locations” (behind a paywall). It wasn’t until 2011, when Apple’s Siri finally entered the picture, that voice assistants became a tangible reality for consumers.
Among the sheer number of voice assistants available today (Fig 1), there is significant variation in how programmable and customizable they are. At one extreme, everything except vendor-provided features is locked down: at the time of their release, for example, the core functionality of Apple’s Siri and Microsoft’s Cortana couldn’t be extended beyond their existing capabilities. Even now, it isn’t possible to program Siri to perform arbitrary functions; aside from predefined categories of tasks like sending messages, hailing rideshares, and making restaurant reservations, there are no means by which developers can interact with Siri at a low level.
At the opposite end of the spectrum, voice assistants like Amazon Alexa and Google Home offer a core foundation on which developers can build custom voice interfaces. For this reason, developers who feel constrained by the limitations of Siri and Cortana are increasingly turning to programmable voice assistants that are extensible and customizable. Amazon offers the Alexa Skills Kit, a developer framework for building custom voice interfaces for Amazon Alexa, while Google Home offers the ability to program arbitrary Google Assistant skills. Users can choose from among the thousands of custom-built skills available today in the Google Assistant and Amazon Alexa ecosystems.
As corporations like Amazon, Apple, Microsoft, and Google continue to stake out their territory, they’re also selling and open-sourcing an unprecedented array of tools and frameworks for designers and developers that aim to make building voice interfaces as easy as possible, even without code.
Often by necessity, voice assistants like Amazon Alexa tend to be monochannel: they’re tightly coupled to a device and can’t be accessed on a computer or smartphone. In contrast, many development platforms, such as Google’s Dialogflow, have omnichannel capabilities that allow users to create a single conversational interface that then manifests as a voice interface, textual chatbot, and IVR system upon deployment. I don’t prescribe any specific implementation approaches in this design-focused book, but in Chapter 4 we’ll get into some of the implications these variables might have on the way you build out your design artifacts.
Voice content
Simply put, voice content is content delivered through voice. To preserve what makes human conversation so compelling in the first place, voice content must be free-flowing and organic, contextless and concise: everything written content isn’t.
Our world is replete with voice content in various forms: screen readers reciting website content, voice assistants rattling off a weather forecast, and automated phone hotline responses governed by IVR systems. We’re most concerned here with content delivered through voice as a necessity rather than an option.
For many of us, our first foray into informational voice interfaces will be delivering content to users. The problem is that the content we already have isn’t in any way ready for this new environment. So how do we make the content trapped in our websites more conversational? And how do we write fresh copy that lends itself to voice interactions?
Lately, we’ve begun slicing and dicing our content in unprecedented ways. Websites are, in many ways, vast vaults of what I call macrocontent: lengthy prose that can stretch for miles in a browser window, like the microfilm viewers of newspaper archives. Back in 2002, long before the present-day ubiquity of voice assistants, technologist Anil Dash defined microcontent as permalinked pieces of content that stay legible regardless of environment, such as email or text messages:
A day’s weather forecast, the arrival and departure times for an airplane flight, an abstract from a long publication, or a single instant message can all be examples of microcontent.
I would update Dash’s definition of microcontent to include all instances of bite-sized content that transcend written communiqués. After all, today we encounter microcontent in interfaces where a small snippet of copy is displayed alone, unmoored from the browser, like a chatbot confirmation of a restaurant reservation. Microcontent informs delivery channels both established and novel, and offers the best opportunity to discover how far your content can be stretched to the limits of its potential.
As microcontent, voice content is unique because it’s an example of content experienced in time rather than in space. We can glance at a digital sign for an instant and know when the next train is coming, but voice interfaces hold our attention captive for the duration of each utterance, with no way to quickly skip or scan ahead, something screen reader users know all too well.
Because microcontent fundamentally consists of isolated blobs with no relation to the channels where they’ll eventually end up, we need to ensure that our microcontent truly performs well as voice content, and that means focusing on the two most important traits of robust voice content: voice content legibility and voice content discoverability.
Both the legibility and the discoverability of our voice content depend on how it manifests in perceived time and space.
In the 1950s, some members of the elite running community were beginning to think it was impossible to run a mile in under four minutes. Runners had been attempting it since the late 19th century and were starting to conclude that the human body simply wasn’t built for the job.
But on May 6, 1954, Roger Bannister surprised everyone. It was a cold, damp morning in Oxford, England, conditions no one expected to lend themselves to record-setting, but Bannister did just that, running a mile in 3:59.4 and becoming the first person in the history books to run a mile in under four minutes.
With this shift in what was considered possible, the world now knew the four-minute mile could be run. Bannister’s record lasted just forty-six days before it was snatched away by Australian runner John Landy. Eventually, in a single race, three athletes all managed to break the four-minute barrier. Since then, over 1,400 runners have run a mile in under four minutes, and the current record of 3:43.13 is held by Moroccan runner Hicham El Guerrouj.
We are capable of far more than we think is possible, but often only once we see that someone else has already done it. And as with human running speed, the limits on what a website can accomplish are not as hard as they seem.
Setting standards for a sustainable web
The key indicators of environmental performance in most big sectors are pretty well established, such as energy use per square meter for buildings and miles per gallon for cars. The tools and methods for calculating those measures are standardized too, which keeps everyone on the same page when doing environmental assessments. In the world of websites and apps, however, we aren’t held to any specific environmental standards, and only recently have we gained access to the tools and techniques needed to assess our impact.
The primary objective of sustainable web design is to reduce carbon emissions. However, it’s nearly impossible to accurately measure the CO2 output of a digital product. We can’t measure the emissions coming out of the back of our devices. The pollution comes from power plants burning coal, gas, and oil, often far away, out of sight and out of mind. We have no way to trace the electricity consumed by a website or app back to the power stations where it was generated and find out exactly how much greenhouse gas was produced. So what do we do?
If we can’t measure actual carbon emissions, then we need to work with what we can measure. The main factors that can serve as proxies for carbon emissions are:
Data transfer
Carbon intensity of electricity
Let’s take a look at how we can use these indicators to calculate the energy use, and in turn the carbon footprint, of the sites and web applications we create.
Data transfer
When measuring the amount of data transferred over the internet as a website or application is used, most researchers use kilowatt-hours per gigabyte (kWh/GB) as a metric of energy efficiency. This serves as a useful proxy for how much energy is consumed and, in turn, how much carbon is emitted. As a rule of thumb, the more data transferred, the more electricity used in the data centers, telecoms networks, and end users’ devices.
The simplest way to estimate the data transfer of a web page is to measure its page weight: the total size, in kilobytes, of all the files transferred the first time a user visits the page. It’s easy to measure using the developer tools in any modern web browser, and the statistics for the total data transfer of a web application are often available in your web hosting account (Fig 2.1).
The great thing about page weight as a metric is that it lets us compare the efficiency of web pages on a level playing field, without muddying the water with constantly changing traffic volumes.
There is huge scope for reducing page weight. By early 2020, the median page weight was 1.97 MB for setups the HTTP Archive classifies as “desktop” and 1.77 MB for “mobile,” with desktop page weight increasing 36 percent since January 2016 and mobile page weight nearly doubling over the same period (Fig 2.2). Image files account for the majority of this data transfer, making them the single biggest contributor to carbon emissions on a typical website.
History clearly shows that our web pages could be smaller, if only we set our minds to it. While most technology, including the web’s underlying infrastructure of data centers and transmission networks, becomes more energy efficient over time, websites themselves are becoming less efficient.
You may be familiar with the concept of a performance budget, which project teams use to keep user experiences fast. For example, we might specify that a website must load within one second on a broadband connection and three seconds on a 3G connection. Like speed limits when driving, performance budgets are upper limits rather than vague suggestions, so the goal should always be to come in well under budget.
Designing for fast performance does often lead to reduced data transfer and emissions, but not always. Page weight and transfer size are more objective and reliable benchmarks for sustainable web design, whereas web performance is frequently more about the subjective perception of load times than about the underlying system’s actual efficiency.
We can set a page weight budget against a benchmark of industry averages, using data from sources like HTTP Archive. We can also benchmark against competitors’ page weights, or against the website’s own current design. For example, we might set a maximum page weight budget equal to our most efficient competitor’s, or set the bar lower still to guarantee that we’re best in class.
If we want to take things a step further, we can look at the transfer size of our web pages for repeat visitors. Page weight on a first visit is the easiest thing to measure and the easiest to compare like for like, but we can learn even more by looking at transfer size in other scenarios too. For instance, visitors who load the same page repeatedly will likely have a high percentage of its files cached in their browser, meaning they won’t need to download all the files on subsequent visits. Likewise, a visitor who navigates to other pages on the same website won’t need to load each page in full, since global assets from areas like the header and footer may already be cached. Setting page weight budgets for these scenarios, not just the first visit, can teach us even more about optimizing efficiency for users who regularly visit our pages.
Page weight budgets are easy to track throughout a design and development process. Although they don’t directly tell us about energy consumption and carbon emissions, they do provide a clear indicator of efficiency in comparison to other websites. And because transfer size is an effective analog for energy consumption, we can use it to estimate energy consumption too.
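As a quick sketch of how that estimate works, here's a minimal calculation in Python. The 0.81 kWh/GB figure is an assumption for illustration (one commonly cited system-wide estimate covering data centers, networks, and end-user devices); published figures vary by study and year.

```python
# Rough sketch: estimating energy use from data transfer.
# Assumption: 0.81 kWh per GB transferred across the whole system
# (data centers, telecoms networks, and end users' devices).
KWH_PER_GB = 0.81

def energy_kwh(page_weight_kb: float, page_views: int) -> float:
    """Estimate the electricity used to serve a page to all visitors."""
    gigabytes = page_weight_kb * page_views / 1_000_000  # KB -> GB
    return gigabytes * KWH_PER_GB

# A 2,000 KB page viewed 100,000 times transfers about 200 GB:
print(round(energy_kwh(2000, 100_000), 1))  # prints 162.0
```

This treats first-visit page weight as the transfer size for every view, so it overstates energy for repeat visitors with cached assets, as discussed above.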
In summary, less data transfer leads to more energy efficiency, which is a crucial component of reducing web product carbon emissions. The more efficient our products, the less electricity they use, and the less fossil fuels need to be burned to produce the electricity to power them. However, as we’ll see next, it’s important to take into account the source of that electricity because all web products require some.
Carbon intensity of electricity
Regardless of energy efficiency, the amount of pollution caused by digital products depends on the carbon intensity of the electricity used to power them. Carbon intensity (gCO2/kWh) describes how much carbon dioxide is emitted for each kilowatt-hour of electricity produced. It varies widely: renewable energy sources and nuclear have an extremely low carbon intensity of less than 10 gCO2/kWh (even when factoring in their construction), whereas fossil fuels have a very high carbon intensity of approximately 200–400 gCO2/kWh.
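The definition translates directly into arithmetic: energy multiplied by carbon intensity gives grams of CO2. A minimal sketch, using the rough intensity figures quoted above:

```python
# Converting energy use into CO2 via carbon intensity (gCO2/kWh).
# The intensity values below are the rough ranges quoted in the text;
# real grid values vary by country and by hour.
def co2_grams(energy_kwh: float, intensity_g_per_kwh: float) -> float:
    """Grams of CO2 emitted to generate the given energy."""
    return energy_kwh * intensity_g_per_kwh

wind = co2_grams(100, 10)    # a low-carbon source
coal = co2_grams(100, 400)   # a fossil-heavy grid
print(wind, coal)  # prints 1000 40000
```

The same 100 kWh of electricity produces forty times the emissions on the fossil-heavy grid, which is why hosting location matters so much.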
Most electricity is supplied by national or state grids, which combine energy from a variety of sources with different carbon intensities. The distributed nature of the internet means that a single user of a website or app might be drawing on energy from multiple grids simultaneously. A website user in Paris uses electricity from the French national grid to power their home internet and devices, but the website’s data center could be in Dallas, pulling electricity from the Texas grid, while the telecoms networks use energy from everywhere between Dallas and Paris.
Although we don’t have complete control over the energy supplying our web services, we do have some control over where our projects are hosted. Since the data center accounts for a significant proportion of any website’s energy use, locating it in a region with low-carbon energy will tangibly reduce its carbon emissions. Danish startup Tomorrow maps the carbon intensity of electricity grids around the world, and a look at their map demonstrates how, for instance, choosing a data center in France will result in significantly lower carbon emissions than choosing one in the Netherlands (Fig 2.3).
Having said that, we don’t want to locate our servers too far from our users either: it takes energy to transmit data through the telecoms networks, and the farther the data travels, the more energy is used. Just as we talk about food miles, we can think of the distance from the data center to the website’s core user base as “megabyte miles,” and we want that distance to be as small as possible.
We can use website analytics to determine the country, state, or even city where our core user group is located, and then measure the distance between that location and the data center used by our hosting company. This will be a somewhat fuzzy metric, since we don’t know the precise center of mass of our users or the exact location of the data center, but we can at least get a rough idea.
For instance, if a website is hosted in London but its main audience is on the West Coast of the United States, we could calculate the distance between San Francisco and London: about 5,300 miles. That’s a long way! Hosting the site somewhere in North America, ideally on the West Coast, would significantly reduce the distance and the energy required to transmit the data. Locating servers closer to visitors also reduces latency and delivers a better user experience, so it’s a win-win.
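For a rough “megabyte miles” figure, the great-circle distance between two points can be computed with the haversine formula. This is a sketch only: the city coordinates are approximate, and as noted above the metric is inherently fuzzy.

```python
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_MILES = 3959  # mean Earth radius

def haversine_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in miles."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * EARTH_RADIUS_MILES * asin(sqrt(a))

# Approximate coordinates for San Francisco and London:
distance = haversine_miles(37.77, -122.42, 51.51, -0.13)
print(round(distance))  # on the order of 5,300 miles, as above
```

Swapping in coordinates from your analytics data and your hosting provider's stated data center region gives a comparable figure for your own project.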
Converting to carbon emissions
If we combine carbon intensity with a calculation for energy consumption, we can calculate the carbon emissions of our websites and apps. A tool my team created accomplishes this by measuring the data transfer over the wire when a web page is loaded, calculating the associated electricity consumption, and converting that figure into CO2 (Fig 2.4). It also factors in whether or not the web hosting is powered by renewable energy.
The Energy and Emissions Worksheet that accompanies this book shows you how to refine these estimates and tailor the figures more accurately to your own project’s specific characteristics.
With the ability to calculate carbon emissions for our projects, we could even set carbon budgets. CO2 is not a metric commonly used in web projects; we’re more familiar with kilobytes and megabytes, and we can fairly easily look at design options and files to assess how big they are. Translating that into carbon adds a layer of abstraction that isn’t as intuitive, but carbon budgets keep our minds focused on the thing we’re actually trying to reduce, in line with the primary goal of sustainable web design: reducing carbon emissions.
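A carbon budget can be sketched by chaining the two conversions discussed in this chapter: page weight to energy, then energy to CO2. Both constants below are illustrative assumptions rather than standards, and the 0.3 g per-view budget is an arbitrary example.

```python
# Sketch: expressing a page weight budget as a carbon budget.
# Both constants are illustrative assumptions, not standards:
KWH_PER_GB = 0.81       # system-wide energy per GB transferred
GRID_INTENSITY = 300    # gCO2 per kWh on a fossil-heavy grid

def co2_per_view_grams(page_weight_kb: float) -> float:
    """Estimated grams of CO2 emitted per page view."""
    return page_weight_kb / 1_000_000 * KWH_PER_GB * GRID_INTENSITY

def within_budget(page_weight_kb: float, budget_g: float) -> bool:
    return co2_per_view_grams(page_weight_kb) <= budget_g

# Against a hypothetical 0.3 g CO2-per-view budget, a 1,000 KB page
# passes, while the ~1.97 MB median desktop page from Fig 2.2 fails:
print(within_budget(1000, 0.3), within_budget(1970, 0.3))  # prints True False
```

Because the underlying constants are estimates, a budget like this is best used for relative comparisons between design options rather than as an absolute emissions claim.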
Browser Energy
Data transfer might be the simplest and most complete analog for energy consumption in our digital projects, but because it gives us a single number to represent the energy used in the data center, the telecoms networks, and the end user’s devices, it can’t offer insights into the efficiency of any specific part of the system.
One part of the system we can look at in more detail is the energy used by end users’ devices. The computational burden is increasingly shifting from the data center to users’ devices, whether smart TVs, tablets, laptops, or phones. Modern web browsers allow us to implement more complex styling and animation on the fly using CSS and JavaScript, and JavaScript libraries like Angular and React make it possible to create applications where the “thinking” is performed partially or completely in the browser.
All of these advances are exciting and open up new possibilities for what the web can do to serve society and create positive experiences. However, the more computation we push into the user’s web browser, the more energy their device uses. This has implications not just for the environment, but also for user experience and inclusivity. Applications that put heavy processing demands on a user’s device unintentionally exclude those with older, slower devices and drain the batteries of phones and laptops more quickly. Furthermore, if we build web applications that require up-to-date, powerful devices, we push people to throw away older devices more frequently. That hurts the environment and places a disproportionate financial burden on society’s poorest.
Partly because the tools are limited, and partly because there are so many different models of devices, it’s difficult to measure a website’s energy consumption on end users’ devices. One of the few tools we currently have is the Energy Impact monitor inside the developer console of the Safari browser (Fig 2.5).
You know when you load a website and your computer’s cooling fans start spinning so frantically that you suspect it might take off? That’s essentially what this tool is measuring.
It shows how much CPU is used, and for how long, while the web page loads, and uses these figures to create an energy impact rating. It doesn’t give us precise data on the amount of electricity used, but the information it does provide can be used to benchmark how efficiently your websites use energy and to set targets for improvement.
Remember when having a great website was enough? Nowadays, people get answers from Siri, from Google search snippets, and from mobile apps, not just from our websites. Forward-thinking companies have adopted an omnichannel content strategy that aims to reach people across a range of digital channels and platforms.
So how do you set up a content management system (CMS) to reach your audience now and in the future? I learned the hard way that approaching a content model (a definition of content types, attributes, and relationships that lets people and systems understand content) with my more comfortable design-system thinking would undermine my client’s omnichannel content strategy. You can avoid that outcome by creating conceptual content models that also connect related content.
I recently had the opportunity to lead a CMS implementation for a Fortune 500 company. The client was excited by the benefits of an omnichannel content strategy, including content reuse, multichannel marketing, and robot delivery: designing content to be understandable by bots, Google knowledge panels, snippets, and voice user interfaces.
For our content to be understood by multiple systems, the team needed conceptual types: types named according to their meaning rather than their presentation. This is crucial for an omnichannel content strategy. Our goal was to let authors create content once and reuse it wherever it was most useful. But as the project proceeded, I realized that supporting content reuse at the scale my client needed would require the whole team to recognize a new pattern.
Despite our best efforts, we kept falling back on what we were more familiar with: design systems. Unlike web-focused content strategies, an omnichannel content strategy can’t rely on WYSIWYG tools for design and layout. Our tendency to approach the content model with familiar design-system thinking kept steering us away from one of the primary purposes of a content model: delivering content to audiences across multiple marketing channels.
Two fundamental principles for a successful content model
We needed to explain to our designers, developers, and stakeholders that we were undertaking a very different task from their earlier web projects, where it was common for everyone to view content as visual building blocks that fit into layouts. The previous approach was not only more familiar but also more intuitive, at least at first, because it made the designs feel more tangible. We arrived at two guiding principles that helped the team understand how a content model differs from the design processes we were familiar with:
Content models must be semantic rather than presentational.
And content models should connect content that belongs together.
Semantic content models
A semantic content model uses type and attribute names that reflect the content’s intended meaning, not its intended display. For example, in a nonsemantic model, teams might create types like teaser, media block, and card. Such types may simplify laying content out, but they don’t help anyone understand its meaning, and that understanding is what would have opened the door to presenting the content in each marketing channel. A semantic content model instead uses type names like product, service, and testimonial so that each delivery channel can comprehend the content and use it as it sees fit.
When you’re creating a semantic content model, a great place to start is to look over the types and properties defined by Schema.org, a community-driven resource for type definitions that are understood by platforms like Google search.
Benefits of a semantic content model include:
Even if your team doesn’t care about omnichannel content, a semantic content model decouples content from its presentation, so teams can evolve a website’s design without needing to refactor its content. In this way, content can withstand the inevitable website redesigns.
Beyond those practical benefits, you’ll also need a semantic content model if you want to deliver omnichannel content. For delivery channels to use the same content across multiple marketing channels, they must be able to understand it. For instance, if your content model captured a list of questions and answers, that content could power frequently asked questions (FAQ) pages, chatbots, or voice interfaces.
For example, using a semantic content model for articles, events, people, and locations lets A List Apart provide cleanly structured data for search engines so that users can read the content on the website, in Google knowledge panels, and even with hypothetical voice interfaces in the future.
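To make the FAQ example concrete, here's a sketch in Python of how a semantic question-and-answer model could be serialized as Schema.org FAQPage structured data for search engines. The content and the shape of the content model are hypothetical; the @type and property names (Question, acceptedAnswer, and so on) come from Schema.org's published vocabulary.

```python
import json

# A hypothetical semantic content model: questions and answers stored
# together, named for their meaning rather than their layout.
faq_items = [
    {"question": "Is my data encrypted?", "answer": "Yes, in transit and at rest."},
    {"question": "Can I export my content?", "answer": "Yes, as JSON or CSV."},
]

# The same content serialized as Schema.org FAQPage structured data;
# a chatbot or voice interface could consume faq_items directly.
structured_data = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [
        {
            "@type": "Question",
            "name": item["question"],
            "acceptedAnswer": {"@type": "Answer", "text": item["answer"]},
        }
        for item in faq_items
    ],
}

print(json.dumps(structured_data, indent=2))
```

Because the question-and-answer pairs stay together in one semantic structure, each channel reads the same source of truth rather than reassembling fragments sliced up for a particular layout.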
Connective content models
Instead of slicing up related content across disparate components, I’ve come to realize that the best content models are semantic and also connect related content (such as an FAQ item’s question-and-answer pair). A good content model keeps together content that belongs together, so that multiple delivery channels can use it without first having to reassemble the pieces.
Consider an essay or article. The unity of an article’s parts determines its meaning and usefulness. Would one of its headings or paragraphs be meaningful on its own, without the context of the full article? On our project, our familiar design-system thinking frequently led us toward content models that divided content into distinct chunks to fit the web-centric layout. The effect was similar to tearing the headline off an article. Because we were slicing content into standalone pieces based on layout, content that belonged together became difficult to manage and nearly impossible for multiple delivery channels to understand.
To illustrate, let’s look at how connecting related content plays out in practice. The client’s design team had created a complex layout for a software product page, featuring numerous tabs and sections. Our instinct was to follow suit in the content model. Shouldn’t we make it as simple and flexible as possible to add any number of tabs in the future?
Because our design-system instincts were so ingrained, it seemed obvious that we needed a “tab section” content type so that multiple tab sections could be added to a page. Each tab section would display a different type of content. One tab might hold the software’s overview or specifications; another might hold a list of resources.
Our inclination to break the content model into “tab section” pieces would have led to an unnecessarily complex model and a cumbersome editing experience, and it would have produced content that other delivery channels couldn’t understand. How would a different system know which “tab section” held a product’s specifications or its resource list? Would it have to infer this from the arrangement of tab sections and content blocks? That would have prevented the tabs from ever being rearranged, and it would have required every other delivery channel to add logic for interpreting the design system’s layout. Furthermore, if the client later decided to stop displaying this content in tabs, migrating to a new content model to reflect the redesign would have been tedious.
Our breakthrough came when we realized that the client had a specific purpose in mind for each tab: revealing specific information such as the software product’s overview, specifications, related resources, and pricing. Once implementation began, our inclination to focus on what was visual and familiar had obscured the intent of the designs. After a little digging, it became clear that the idea of tabs had no place in the content model. What mattered was the meaning of the information intended to be displayed in the tabs.
In fact, the client might later decide to display this content elsewhere in a different way, without tabs. In response to this realization, we created content types for the software product based on the meaningful attributes the client wanted to display on the web. There were rich attributes like screenshots, software requirements, and feature lists, as well as obvious semantic attributes like name and description. The software’s product information stayed together because it wasn’t sliced across presentation-derived components like “tab sections.” Any delivery channel, including future ones, could understand and display this content.
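As an illustration of the difference, here's a hypothetical sketch in Python contrasting a layout-derived "tab section" type with a semantic software product type. All the type and attribute names are invented for this example, not taken from the client's actual model.

```python
from dataclasses import dataclass, field

# Layout-derived model (hypothetical): content is sliced by how it
# appears on one web page, so its meaning is lost and every channel
# must guess what each section contains.
@dataclass
class TabSection:
    heading: str
    body: str

# Semantic model (also hypothetical): the product's information stays
# together, and attributes are named for what they mean.
@dataclass
class SoftwareProduct:
    name: str
    description: str
    specifications: list[str] = field(default_factory=list)
    feature_list: list[str] = field(default_factory=list)
    screenshots: list[str] = field(default_factory=list)

product = SoftwareProduct(
    name="ExampleApp",
    description="A hypothetical product used for illustration.",
    specifications=["Runs in the browser", "No install required"],
)
# Any channel can ask for product.specifications directly, whether it
# renders them in a tab, a card, or a voice answer.
print(product.name, len(product.specifications))  # prints: ExampleApp 2
```

The point is not the Python syntax but the naming: a channel consuming SoftwareProduct never needs to know that the website happens to render specifications inside a tab.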
Conclusion
In this omnichannel marketing project, we found that the best way to keep our content model on track was to ensure that it was semantic (with type and attribute names that reflected the meaning of the content) and that it kept together content that belonged together (instead of fragmenting it). These two principles made it easier to decide what the content model should do when design questions arose. If you’re developing a content model to support an omnichannel content strategy, or even if you just want to make sure Google and other interfaces understand your content, remember:
A design system isn’t a content model. Team members may be tempted to conflate the two and shape the content model after the design system, so guard the semantic and connective integrity of the content model throughout the entire implementation process. This will enable each delivery channel to consume the content without needing a magic decoder ring.
If your team is struggling to make this transition, you can still reap some of the benefits by using Schema.org structured data on your website. The search engine optimization benefit is a compelling reason on its own, even if additional delivery channels aren’t on the horizon in the near future.
Additionally, remind the team that decoupling the content model from the design will let them update the designs more easily, because they won’t be held back by the cost of content migrations. They’ll be ready for the next big thing, able to create new designs without compromising compatibility between the design and the content.
By firmly defending these principles, you’ll help your team treat content the way it deserves to be treated: as the most important component of your user experience and the best way to engage with your audience.
According to antiracist economist Kim Crayton, “intention without strategy is chaos.” We’ve discussed how our biases, assumptions, and inattention toward marginalized and vulnerable groups lead to dangerous and irresponsible tech. But what, exactly, do we do to fix it? We need a strategy, not just the desire to make our tech safer.
This book will give you that strategy. It covers how to incorporate safety principles into your design work to create technology that’s safe, how to convince your stakeholders that this work matters, and how to respond to the argument that what we really need is more diversity. (Spoiler: we do, but diversity alone is not the solution to fixing unethical, unsafe tech.)
The Process for Inclusive Safety
When designing for safety, your goals are to:
identify ways your product can be used for abuse,
design ways to prevent the abuse, and
provide support for people who have been harmed to regain power and control.
The Process for Inclusive Safety is a tool to help you reach those goals (Fig 5.1). It’s a method I developed in 2018 to better understand the various approaches I was using to create products designed with safety in mind. Whether you’re creating an entirely new product or adding to an existing feature, the Process can help you make your product safe and inclusive. The Process includes five general areas of action:
Conducting research
Creating archetypes
Brainstorming problems
Designing solutions
Testing for safety
The Process is meant to be flexible; teams may not need to apply every step in every circumstance. Use the parts that fit your particular role and context. This is meant to be something you can fold into your existing design process.
And once you use it, if you have ideas for making it better, or you simply want to share how it helped your team, please get in touch with me. It’s a living document that I hope practitioners will find practical and useful in their day-to-day work.
If you’re working on a product specifically for a vulnerable group or for survivors of some form of trauma, such as an app for survivors of domestic violence, sexual assault, or addiction, be sure to read Chapter 7, which covers that situation directly, as it should be handled a bit differently. The Process set forth here is for building safety into a more general product with a broad user base (which, as we already know from the statistics, will include some groups that need protection from harm). Chapter 7 is focused on products that are specifically for vulnerable groups and people who have experienced trauma.
Step 1: Conduct research
Design research should include a thorough analysis of how your technology might be used for abuse as well as specific insights into the experiences of those who have witnessed and perpetrated that kind of abuse. At this stage, you and your team will investigate issues of interpersonal harm and abuse, and explore any other safety, security, or inclusivity issues that might be a concern for your product or service, like data security, racist algorithms, and harassment.
Broad research
Your project should begin with broad, general research into similar products and issues around safety and ethical concerns that have already been reported. For example, a team building a smart home device would do well to understand the multitude of ways that existing smart home devices have been used as tools of abuse. If you’re creating an AI product, be aware of the potential for racism and other issues that have been reported in other AI products. Nearly all types of technology have some kind of potential or actual harm that’s been reported on in the news or written about by academics. Google Scholar is a useful resource for locating these studies.
Specific research: Survivors
When possible and appropriate, include direct research (surveys and interviews) with people who are experts in the forms of harm you’ve uncovered. To gain a better understanding of the issues, and to be better positioned to avoid retraumatizing survivors, interview advocates working in the area of your research first. If you’ve uncovered possible domestic violence issues, for example, the experts you’ll want to speak with are survivors themselves, as well as workers at domestic violence hotlines and shelters, staff at related nonprofits, and lawyers.
It’s crucial to pay people for their expertise and lived experience, especially when interviewing survivors of any kind of trauma. Don’t ask survivors to share their trauma for free; that’s exploitative. While some survivors may not want to be paid, you should always make the offer in the initial ask. If they decline, donating to a cause that fights the kind of violence the interviewee experienced is an alternative to paying them directly. We’ll talk more about how to appropriately interview survivors in Chapter 6.
Specific research: Abusers
It’s unlikely that teams aiming to design for safety will be able to interview self-proclaimed abusers or people who have broken laws around things like hacking. Don’t make this a goal; rather, try to get at this angle through your general research. Attempt to understand how abusers and bad actors use technology to harm others, and how they justify or explain away the abuse.
Step 2: Create archetypes
Once you’ve finished your research, use your findings to create abuser and survivor archetypes. Archetypes are not personas; they’re not based on real people you interviewed and surveyed. Instead, they’re based on your research into likely safety issues, much like when we design for accessibility: we don’t need a group of blind or low-vision users in our interview pool to create a design that’s inclusive of them. Instead, we base those designs on existing research into what this group needs. Personas typically represent real users and include many details, while archetypes are broader and more generalized.
The abuser archetype is someone who views a product as a means of harm (Fig 5.2). They may be trying to harm someone they don’t know through surveillance or anonymous harassment, or they may be trying to control, monitor, abuse, or torment someone they know personally.
The survivor archetype is someone who is being abused via the product. There are various scenarios to consider in terms of the archetype’s awareness of the abuse and how to end it: Are they unaware they’ve been targeted in the first place and need to be alerted, or do they already suspect abuse and need proof (Fig 5.3)?
You may want to create multiple survivor archetypes to capture a range of different experiences. They may know the abuse is happening but be unable to stop it, such as when a stalker keeps tracking their whereabouts or when an abuser locks them out of IoT devices (Fig 5.4). Include as many of these scenarios as you need in your survivor archetypes. You’ll use them later when you create solutions that help your survivor archetypes achieve their goals of preventing and ending abuse.
It may be useful to create persona-like artifacts for your archetypes, such as the three examples shown. Focus on their goals rather than the demographic details we often see in personas. The abuser’s goal is to carry out the specific abuse you’ve identified, while the survivor’s goals are to prevent abuse, understand that abuse is happening, make ongoing abuse stop, or regain control over the technology that’s being used for abuse. Later, you’ll think about how to support the survivor’s goals and thwart the abuser’s.
And while the “abuser/survivor” model fits most cases, it doesn’t fit all, so modify it as needed. For example, if you uncovered a security issue, such as the ability for someone to hack into a home camera system and talk to children, the malicious hacker would be the abuser archetype and the child’s parents would be the survivor archetype.
Step 3: Brainstorm problems
After creating archetypes, brainstorm novel abuse cases and safety issues. “Novel” means things you didn’t find in your research: entirely new safety issues unique to your product or service. The goal of this step is to exhaust every effort to identify harms your product could cause. You aren’t worrying about how to prevent the harm yet; that comes in the next step.
How else could your product be used for any kind of abuse besides what you’ve already found in your research? I recommend setting aside at least a few hours with your team for this process.
If you’re not sure where to start, try conducting a Black Mirror brainstorming session. This exercise is based on the show Black Mirror, which features stories about the dark possibilities of technology. Try to figure out how your product would be used in an episode of the show: the most wild, awful, out-of-control ways it could be used for harm. Participants typically have a lot of fun when I lead Black Mirror brainstorms (which is great, because it’s okay to have fun when designing for safety!). I recommend time-boxing a Black Mirror brainstorm to half an hour, then dialing it back and using the rest of the time to think of more realistic forms of harm.
Even after identifying as many opportunities for abuse as you can, you may still not feel confident that you’ve found every potential source of harm. A healthy amount of anxiety is normal when you’re doing this kind of work. It’s common for teams designing for safety to worry, “Have we really identified every possible harm? What if we’re missing something?” If you’ve spent at least four hours coming up with ways your product could be used for harm and have run out of ideas, move on to the next step.
It’s impossible to have 100% assurance that you’ve caught everything; instead of aiming for 100%, acknowledge that you’ve done your due diligence and will continue to prioritize safety in the future. Once your product is released, your users may identify new issues you missed; aim to receive that feedback graciously and course-correct quickly.
Step 4: Design solutions
By now, you should have identified your product’s potential harm-causing uses, as well as survivor and abuser archetypes describing opposing user objectives. The next step is to find ways to design against the abuser’s goals and in support of the survivor’s goals. This work fits naturally into the existing part of your design process where you recommend solutions to the issues your research has identified.
Some questions to ask yourself to help prevent harm and support your archetypes include:
Can you design your product in such a way that the identified harm cannot happen in the first place? If not, what barriers can you place to stop the harm from occurring?
How can you make the survivor aware that abuse is happening through your product?
How can you help the survivor understand what they need to do to make the problem stop?
Can you identify any types of user activity that would indicate some form of harm or abuse? Could your product help the user access support?
In some products, it’s possible to proactively detect harm as it occurs. For example, a pregnancy app might allow the user to report that they were the victim of an assault, which could trigger an offer of resources from local and national organizations. This kind of proactiveness isn’t always possible, but if some type of user activity could indicate harm or abuse, it’s worthwhile to spend a half hour discussing how your product could help the user get help in a safe manner.
That said, use caution: you don’t want to do anything that could put a user in harm’s way if their devices are being monitored. If you do offer some kind of proactive help, always make it voluntary, and think through other safety issues, such as the need to keep the user in-app in case an abuser is checking their search history. In the next chapter, we’ll walk through a good illustration of this.
Step 5: Test for safety
The final step is to evaluate your prototypes from the perspectives of your archetypes: the abuser who wants to use the product for harm, and the survivor who needs to regain control over the technology. Just like any other kind of product testing, the aim here is to rigorously test your safety solutions so that you can identify gaps and correct them, validate that your designs will help keep your users safe, and feel more confident releasing your product into the world.
Ideally, safety testing happens alongside usability testing. If you work for a company that doesn’t conduct usability testing, you might be able to use safety testing to deftly accomplish both: a participant attempting to use your design against someone else can also be encouraged to point out interactions or other design details that don’t make sense.
You’ll want to conduct safety testing on either your final prototype or, if it’s already been released, the actual product. There’s no harm in testing an existing product that wasn’t created with safety goals in mind; “retrofitting” it for safety is a wise thing to do.
Remember that testing for safety involves testing from the perspectives of both an abuser and a survivor, though it may not always make sense to do both. Likewise, if you made multiple survivor archetypes to capture multiple scenarios, you’ll want to test from the perspective of each one.
As with other usability testing techniques, at this point you’re probably too closely acquainted with the product and its design to test it yourself. Instead, set up testing as you would with any other usability test: find someone who isn’t familiar with the product and its design, set the scene, give them a task, encourage them to think out loud, and observe how they attempt to complete it.
Testing for abuse
The goal of this testing is to understand how easy it is for someone to weaponize your product for harm. Unlike with usability testing, here you want the participant to fail: it should be impossible, or at least difficult, for them to achieve their goal. Have them use your product in an effort to accomplish the objectives of the abuser archetype you created earlier.
For example, for a fitness app with GPS-enabled location features, we can imagine that the abuser archetype would have the goal of figuring out where his ex-girlfriend now lives. With this in mind, you’d make every effort to discover the location of a different user who has their privacy settings turned on. You might try to see her running routes, view any available information on her profile, view anything available about her location (which she has set to private), and investigate the profiles of any other users somehow connected with her account, such as her followers.
If by the end of this you’ve managed to uncover some of her location data, despite her having set her profile to private, you now know that your product enables stalking. Your next step is to return to step 4 and figure out how to prevent this; you may need to repeat the process of designing solutions and testing them more than once.
Testing for a survivor
Testing for a survivor involves identifying how to give information and power to the survivor, which might not always make sense given the product or context. Thwarting an abuser archetype’s attempt to stalk someone, for example, also satisfies the survivor archetype’s goal of not being stalked, so separate testing from the survivor’s perspective wouldn’t be needed.
However, there are cases where it makes sense. Consider a smart thermostat: a survivor archetype’s goal might be to discover who or what is causing the temperature to change when they aren’t doing it themselves. You could test this by looking for the thermostat’s history log and checking for usernames, actions, and times; if you couldn’t find that information, you’d have more work to do in step 4.
Another goal might be regaining control of the thermostat once the survivor realizes the abuser is changing its settings remotely. Are there instructions that explain how to remove a user and change the password, and are they easy to find? For your test, you’d try to figure out how to do this. Again, this might reveal that more work is needed to make it clear to users how they can regain control of the device or account.
Stress testing
To make your product more inclusive and compassionate, consider adding stress testing. This concept comes from Design for Real Life by Eric Meyer and Sara Wachter-Boettcher. The authors note that personas typically focus on happy users, but real people are frequently anxious, stressed, unhappy, or even in crisis. These are called “stress cases,” and testing your product for users in stress-case situations can help you identify places where your design lacks compassion. You can find more information about how to incorporate stress cases into your design in Design for Real Life, along with many other effective methods for compassionate design.