Category: Blog

  • Peacemaker Just Brought an Unlikely Team of Superman Villains to the DCU

    Peacemaker Just Brought an Unlikely Team of Superman Villains to the DCU

    This post contains spoilers for Season 2 Episode 3 of Peacemaker. By this stage, we shouldn’t be surprised when James Gunn pulls an obscure DC Comics figure into one of his projects, but he may have outdone himself in the latest episode of Peacemaker. While traveling to the alternate world where he is adored, Chris Smith […]

    The post Peacemaker Just Brought an Unlikely Team of Superman Villains to the DCU appeared first on Den of Geek.

    Since 2002, John Cena has embodied a mythology typically reserved for comic book characters. Now his wrestling retirement run has felt less like a farewell and more like the final act of a story that has spanned multiple timelines. As WWE continues to reinvent itself, it’s a major event that challenges everyone who has been on this journey with him to appreciate everything he’s done and what he meant to the professional wrestling industry.

    What makes the timing poignant is the irony of Cena closing the door on his run as WWE’s enduring hero while beginning a fresh start as a new one in an entirely new universe. When James Gunn cast the wrestler as Chris Smith, aka Peacemaker, in The Suicide Squad, he placed Cena in a role he had already prepared for. That’s because his career mirrored the very character he was tasked to portray. In WWE, he embodied the same paradox and complexity that Peacemaker carries through the DC Universe.


    Cena’s on-screen emergence in Hollywood reinforced what wrestling had already been writing about him out in the open. In WWE’s case, it was him holding up the company on his back while longtime fans were turning away from the product. For DC, it’s Christopher Smith’s pursuit of justice by any means that led to his atonement. And like WWE, James Gunn saw something in Cena’s potential to be a leading man as he sought to change the perception of DC, helping move Cena toward the kind of critical acclaim that had been just out of reach as he pursued Hollywood.

    Cena’s in-ring career was defined by the same contradictions as Peacemaker’s: an earnestness and sincerity that clashed with violence and absurdity. His retirement run and his Peacemaker arc aren’t independent stories. They are the same story told in different media. He was WWE’s misunderstood warrior and is now the DC Universe’s misunderstood hero too. Both have origin stories. Cena’s began a few short years after the world entered the year 2000, in a squared circle.

    The New Millennium’s Superhero

    Careers in the wrestling business are as unpredictable as those outside of it. Stars can rise or fall almost overnight. Cena’s June 2002 debut against Kurt Angle could have been pulled straight from a graphic novel’s splash page. Fans watched a confident rookie cry “ruthless aggression” before putting on a now-legendary match with one of the company’s most decorated performers. It was a powerful debut, but no one could have predicted that Cena would remain the company’s most recognizable standard-bearer for the next 20 years, even though the moment all but signaled WWE’s faith in him.

    What made his arrival more poignant was its timing. That debut came less than a year after the September 11 attacks, when wrestling, like much of the United States, was searching for hope and a renewed sense of national identity. WWE leaned heavily on patriotic storytelling throughout 2001 and 2002, and it is notably remembered for airing the first large public gathering broadcast after the attacks. With tragedy looming over the country as a whole, the company searched for a new, fresh face that could transform into its vessel of American resilience.

    Cena fit that mold. His clean-cut look could be paired with a military salute and the mantra “Hustle, Loyalty, Respect,” which embodied core American values, and he was presented as an underdog who would “Never Give Up.” He also regularly began wearing camouflage gear and was booked in roles that cast him as the face of American perseverance. In many ways, WWE positioned him as the company’s man. He stood in stark contrast to the dying Attitude Era, as the organization sought to tone down its edge in the midst of crisis and rebrand into a company centered on stability and hope through a lead character.

    Cena continues to serve as WWE’s magnetic center to this day. He survived changes in leadership, cultural shifts, and fan rebellions, carrying the business on his back during its most delicate transitions. He was their constant, their lightning rod, and much more than a trusted hand. He was the hand.

    Cena’s retirement tour transcends nostalgia. It is a resolution. It reads like the final issue of a long-running comic book, where every single moment instantly connects to the whole. It completes the hero narrative wrestling had been writing about him in plain view since the beginning.

    Hero of the PG Era

    In 2008, when WWE fully embraced PG programming, the business needed more than a hero. It needed someone who could withstand scrutiny and represent stability. Cena became that figure, wholesome and eternally promotable, the company’s most visible face.

    That change was significant. The Attitude Era took liberties with raunchiness, blood, swearing, and antiheroes like Stone Cold Steve Austin, and then that edge was gone. The company’s leadership replaced it with a more family-friendly narrative and a partnership-focused direction, which brought in deals with companies like Mattel and Post Consumer Brands. Sponsorships and a sudden rise in mainstream appeal triggered one of the loudest fan rebellions in wrestling history. Die-hards rejected the tamer product Cena typified, and audiences responded in person. They threw his shirts back at him and carried signs that read “If Cena Wins, We Riot.” They cheered the bad guys, hijacked his promos, and filled arenas with chants of “You can’t wrestle.” The man WWE regarded as the herald of this new direction became the target of the backlash.

    Anti-Cena sentiment wasn’t fringe. It was mainstream, and he took the brunt of every grievance that had been building between the audience and the company. Fans wanted rebellion, but they were given respectability instead. Beneath the surface, this was a turbulent time, but Cena was still there, standing as a constant through it all.

    Very few people in modern entertainment can say they worked seven days a week. Cena did. For more than a decade, he kept a punishing schedule that saw him wrestle more than 250 nights a year, travel abroad, appear at live events, headline television and pay-per-views, and still make early-morning talk shows and late-night interviews. He also became the most requested Make-A-Wish celebrity, granting more than 650 wishes and setting a record no other celebrity has come close to. His life was relentless, structured almost entirely around WWE’s demands, and yet he carried that responsibility without faltering. He never complained. He always showed up. He worked through injuries and delivered his best every day.

    His presence was not only cultural but financial. Even working part-time in 2018, Cena was second only to Roman Reigns in merchandise sales. He headlined more pay-per-views than anyone else in company history and was a ratings stabilizer when Monday Night Raw was averaging around 3.5 million weekly viewers. With live event gates and merchandise sales closely tied to Cena’s drawing power, WWE’s annual revenue grew from about $485 million in 2007 to $729 million in 2016.

    Streaming numbers proved his drawing power too. Cena’s 2021 return to face Roman Reigns in the main event of SummerSlam made it the most-watched SummerSlam in WWE history on Peacock at the time. His September 2023 SmackDown comeback led to one of the brand’s largest viewership surges of that year. Cena’s presence produced measurable business gains.

    Fans, however, didn’t appreciate how those numbers translated into their experience, and often didn’t care to see the superhero either. What was happening on camera wasn’t the edge or the shock and awe they demanded. Antiheroes were dominating pop culture at that very moment, and Cena’s squeaky-clean persona was unwelcome. Arenas rang with “Cena sucks” chants. He faced constant criticism of his wrestling ability, his infamous “Five Moves of Doom,” and promos that sounded almost too wholesome. He never bowed. He walked into the fire every night, in the midst of a brutal and grueling schedule that preceded his appearances and waited for him afterwards. When the world thought it wanted him to change and be something different, he became Superman made flesh, onscreen and off.

    His rivalry with Randy Orton became the backbone of the PG Era, a battle of morality against rebellion that proved WWE could endure when anchored by one man who refused to compromise. He became a necessary part of the brand. He took up a mantle, one that presented itself as desirable but came paired with an incredible cost. Because of this, his retirement is more than a retirement. It’s the end of one of the greatest stories ever told in a professional sport. Few have ever walked in shoes like these and had the opportunity to leave gracefully.

    John Cena’s Comic Book Saga

    Cena’s career has been filled with larger-than-life matchups. His rivalry with Edge was chaos against order. His conflict with CM Punk pitted the establishment against revolution. His battles with The Rock were staged like crossover summer blockbusters. Each pairing carried significant stakes, and each made a significant contribution to his larger mythology.

    The nickname “Super Cena” was not wrong. Even though they resisted it, fans still accurately identified the archetype Cena portrayed. They wanted him, but only on their terms. Cena’s presence, however, held the company together. He was a total performer shouldering complex burdens: headlining pay-per-views, filming charity spots, handling endless media, and wrestling night after night with the same effort. He was significant not just for his championships but for his endurance. And he never became bitter or resentful after fulfilling his duties time and time again. He simply kept going. That is the essence of a hero, and of someone who deserves their endless flowers.

    Cena’s reputation was defined by years of rejection. Now, arenas thunder in gratitude. The same fans who once criticized him for being too flawless now acknowledge a talent they may never see again.

    Cena’s final appearances have electrified crowds because of the weight of finality. His salutes at the top of the ramp are no longer routine. They are the last frames of a story that has been building for as long as some fans can remember. Every word he speaks lands differently, tinged with hints of both his legacy and his farewell.

    Wrestlepalooza, WWE’s first event under the TKO banner, highlights Cena’s place in the company and the need for someone ready to become its next franchise player, providing continuity while Cena provides closure. Speculation about his final opponent now feels less like booking chatter and more like myth being written in real time. Whether it’s Orton, the rival who defined his arc; Roman Reigns, the franchise player who succeeded him; or a younger talent chosen to stand across from him, the choice will symbolize more than a match. It will mark the passing of the torch to the next standard-bearer and generational talent for the next 20 years, and perhaps the final page.

    Cena’s last appearance on the Friday Night SmackDown brand takes place on the same show and in the same building where he made his debut against Kurt Angle in 2002: Chicago’s Allstate Arena. The setting completes the hero’s journey that has unfolded over the years. Returning there more than two decades later, at the close of his career, transforms September 5, 2025 into something larger than nostalgia. It becomes sacred space, where Cena and his audience can share in the ending together.

    As this chapter closes, Cena doesn’t just leave behind championships or catchphrases. His retirement ends the last great wrestling era, one he carried with sheer endurance, consistency, and sacrifice. And for once, in both WWE and Hollywood, the world seems ready to admit it had been watching a superhero all along.

    The post John Cena’s Retirement: The Final Chapter of a Superhero Story appeared first on Den of Geek.

  • The Conjuring: Last Rites Turns the Horror Franchise into a Schmaltzy Soap Opera

    The Conjuring: Last Rites Turns the Horror Franchise into a Schmaltzy Soap Opera

    This post contains full spoilers for The Conjuring: Last Rites. It all ends with a dance. Paranormal investigators Ed and Lorraine Warren (Patrick Wilson and Vera Farmiga) dance together in the final moments of The Conjuring: Last Rites, surrounded by […]

    The post The Conjuring: Last Rites Turns the Horror Franchise into a Schmaltzy Soap Opera appeared first on Den of Geek.

  • Why Peacemaker Season 2 Does the Multiverse Right

    Why Peacemaker Season 2 Does the Multiverse Right

    This post contains spoilers for Seasons 1 through 3 of Peacemaker. “Best. Universe. Ever.” That’s how the alternate reality Peacemaker visits during the second season of the show that bears his name is described. While it was a cheering crowd, complete with a child moved to tears and a woman moved to bare her […]

    The post Why Peacemaker Season 2 Does the Multiverse Right appeared first on Den of Geek.


    The morality conflict between WWE and Randy Orton served as the foundation of the PG Era, proving WWE could survive without the support of one man who refused to compromise. He was a part of the brand that was necessary. He took up a mantle, one that appeared attractive but had a price tag that was unbelievable. This is why his retirement isn’t normal. One of the best stories ever told in a professional sport is now the end. Very few have ever walked in shoes like these and got their chance to leave gracefully.

    The Comic Book Saga of John Cena

    Cena’s career has been full of larger-than-life matchups. His conflict with Edge was chaos against order. His feud with CM Punk was establishment against revolution. His conflicts with The Rock were staged like crossover summer blockbusters. Each pairing carried meaningful stakes with them and they all contributed to his larger mythology.

    The term” Super Cena” was appropriate. Fans were correctly identifying the archetype Cena portrayed even as they resisted it. He was only wanted on their terms, though. Still, Cena’s presence kept the company intact. He was a masterful performer who was carrying heavy duties like headlining pay-per-views, filming charity events, managing endless media, and wrestling night after night with the same effort. He was important not just because of championships, but because of his endurance too. And he never developed animosity or resentment after repeatedly completing his duties. He just kept going. That is the character of a hero and a person who deserves endless flowers.

    For years, Cena was defined by rejection. Arenas are now thundering in gratitude. The same fans who once booed him for being too perfect now recognize that he was a special talent the likes of which they may never see again.

    Due to the weight of the finality, Cena’s final appearances have captivated audiences. The sight of him saluting at the top of the ramp has regained its luster. They represent the final elements of a narrative that has been developing for as long as some people can recall. Every word he has spoken has landed differently, laced with subtle hints of his legacy and of his goodbye.

    WWE’s first wrestling event under the TKO banner, Wrestlepalooza, emphasizes Cena’s position within the business and the need for a franchise player who can provide continuity while Cena provides closure. The speculation over his final opponent now feels less like booking chatter and more like mythology being written in real time. The choice will mean more than just a match if Orton is the rival who determines his arc, Roman Reigns is the franchise who took his place, or if a younger talent is chosen to stand across from him. It will represent the final page and hopefully the passing of the torch to the new standard bearer and generational talent for the next 20 years.

    Cena’s final appearance on Friday Night Smackdown is on the same show and in the same building, Chicago’s Allstate Arena, where he made his debut against Kurt Angle in 2002. The setting is the final metamorphosis of a hero’s journey that’s been unfolding throughout the years. More than 20 years later, at the close of his career, his return there transforms September 5, 2025 into something more than just nostalgia. It becomes sacred ground where Cena and his audience can share in closure together.

    Cena doesn’t just leave behind championships or catchphrases as this chapter comes to a close. With his retirement ends the last great story of the PG Era, an entire era of professional wrestling that he carried through sheer stamina, consistency, and sacrifice. For once, the world appears to be ready to admit they have been watching a superhero from the beginning in both Hollywood and WWE.

    The post John Cena’s Retirement Is the Final Chapter of a Superhero Story appeared first on Den of Geek.

  • John Cena’s Retirement Is the Final Chapter of a Superhero Story

    John Cena’s Retirement Is the Final Chapter of a Superhero Story

    Since 2002, John Cena has built a mythology typically reserved for comic book characters. His wrestling retirement run has felt less like a goodbye and more like the final act of a story that has spanned decades. It is a major event, and everyone who has traveled with him on this journey should be ready for it.

    The post John Cena’s Retirement Is the Final Chapter of a Superhero Story appeared first on Den of Geek.

    Since 2002, John Cena has embodied a mythology typically reserved for comic book characters. His wrestling retirement run has felt less like a goodbye and more like the final act of a story that has spanned decades. As WWE continues to reinvent itself, it’s a major event that invites everyone who has traveled with him to appreciate everything he has done and everything he has meant to the professional wrestling industry.

    What makes the timing poetic is the irony of Cena closing the door on his run as WWE’s enduring hero while beginning a fresh start as another one in an entirely new universe. When James Gunn cast the wrestler as Chris Smith, aka Peacemaker, in The Suicide Squad, he placed Cena in a role he had already been preparing for. That’s because his wrestling career mirrored the very character he was tasked with portraying. In WWE, he embodied the same paradox and richness that Peacemaker carries through the DC Universe.


    Cena’s on-screen rise in Hollywood reinforced what wrestling had already revealed about him. For WWE, it was Cena holding up the company on his back when longtime fans were turning away from the product. For DC, it was Christopher Smith’s relentless pursuit of justice that ultimately led to his redemption. And like WWE, James Gunn saw in Cena the ability to be a lead character as he set out to reshape the DC Universe, helping elevate Cena into the kind of critically acclaimed performer that had been just out of reach as he pursued Hollywood.

    Peacemaker’s contradictions, his sincerity and earnestness clashing with violence and absurdity, mirror the very dilemma that defined Cena’s in-ring career. His retirement run and his Peacemaker arc aren’t independent stories. They are the same story told in different media. He was WWE’s misunderstood warrior and is now the DC Universe’s misunderstood warrior too. Both have origin stories. Cena’s began in a squared circle a few short years after the world entered the new millennium.

    The New Millennium’s Superhero

    Careers in the wrestling business are as unpredictable as any outside of it. Stars can rise or fall almost overnight. Cena’s June 2002 debut against Kurt Angle could have been pulled straight from a comic book’s splash page. Fans watched a confident rookie declare “ruthless aggression” before putting on a now-legendary match with one of the company’s most decorated performers. Even if the moment all but signaled the company’s faith in him, nobody could have predicted that Cena would become its most recognizable standard-bearer for the next 20 years. It was a powerful debut.

    What made his debut more poignant was its timing. It came less than a year after the September 11 attacks, when wrestling, like much of the United States, was looking for hope and a sense of national spirit. WWE leaned heavily on patriotic storytelling throughout 2001 and 2002, and is notably remembered for holding the first large public assembly after the attacks. Amid the country’s collective grief, the company was searching for a fresh face who could become its vessel of American resilience.

    Cena fit that casting. His clean-cut look was paired with a military salute, the motto “Hustle, Loyalty, Respect” that embodied idealized American values, and the persona of an underdog who would “Never Give Up.” He soon began regularly wearing camouflage gear and was booked in roles that cast him as the face of American perseverance. In many ways, he was WWE’s company man. He stood in stark contrast to the dying Attitude Era, as the organization looked to tone down its edge in the midst of crisis and rebrand into a company centered on stability and hope, built around a lead character.

    Cena continues to serve as the gravitational center of WWE to this day. He survived shifts in leadership, cultural changes, and fan rebellions, carrying the company on his back during its most fragile transitions. He was their anchor, their lightning rod, and much more than a trustworthy hand. He was the hand.

    Cena’s retirement arc is bigger than nostalgia. It is a resolution. It reads like the final issue of a long-running comic book, where every single moment suddenly connects as one. This completes the superhero story wrestling had been writing about him in plain sight since the beginning.

    Hero from the PG Era

    In 2008, when WWE fully embraced PG programming, the company needed more than a figurehead. They needed someone who could withstand scrutiny and embody stability. Cena became that figure, wholesome and endlessly promotable, the brand’s most visible face.

    That change was significant. The Attitude Era had taken liberties with raunchiness, blood, swearing, and antiheroes like Stone Cold Steve Austin, and now that excess was gone. In its place came more family-friendly storytelling and a partnership-focused vision from the top of the business, which brought in collaborations with companies like Mattel and Post Consumer Brands. Sponsorship deals and a sudden rise in mainstream appeal triggered one of the loudest fan rebellions in wrestling history. Die-hards rejected the tamer product Cena typified, and crowds responded in person. They threw his shirts back at him and carried signs that read, “If Cena Wins, We Riot.” They cheered the bad guys, hijacked his promos, and filled arenas with chants of “You can’t wrestle.” The man they regarded as the herald of this new direction became the target of their frustration.

    Anti-Cena sentiment wasn’t fringe. It was widespread, and he took the brunt of every grievance that had been building between the audience and the business. The fans wanted rebellion, but they were given respectability instead. Beneath the surface, this was a chaotic time, but Cena was still there, standing as a constant through it all.

    Very few people in modern entertainment can say they worked seven days a week. Cena did. For more than a decade, he lived a punishing schedule that saw him wrestle more than 250 nights a year, travel internationally, appear at live events, headline television and pay-per-views, and still manage early morning talk shows and late-night interviews. He also became Make-A-Wish’s most requested celebrity, granting more than 650 wishes and setting a record no one has come close to matching. His life was relentless, structured almost entirely around WWE’s demands, and yet he carried that responsibility without faltering. He never once complained. He always showed up. He worked through injuries and gave his best day in and day out.

    His presence was not only cultural but financial. Even working part-time in 2018, Cena was second only to Roman Reigns in merchandise sales. He headlined more pay-per-views than anyone else in company history and was a ratings stabilizer when Monday Night Raw was averaging around 3.5 million weekly viewers. During that run, WWE’s annual revenue climbed from roughly $485 million in 2007 to $729 million in 2016, with live event gates and merchandise sales strongly tied to Cena’s drawing power.

    Streaming numbers proved his drawing power too. His 2021 return to face Roman Reigns in the main event of SummerSlam made it the most-watched SummerSlam on Peacock at the time. His September 2023 SmackDown comeback produced one of the brand’s largest viewership surges of that year. Cena’s presence delivered measurable business gains.

    Fans, however, didn’t care how those numbers translated into their experience, and often didn’t care to see the superhero either. What was happening on camera lacked the edge and the shock and awe they demanded. Antiheroes were dominating pop culture at the time, and Cena’s squeaky-clean persona felt out of step. Arenas rang with “Cena sucks” chants. He faced constant criticism of his wrestling ability, his infamous “Five Moves of Doom,” and promos that sounded almost too wholesome. He never swayed. Every night, he walked into the fire, in the midst of a brutal and grueling schedule that preceded his appearances and waited for him afterwards. When the world believed it wanted him to change and be something different, he became Superman made flesh, onscreen and off.

    His rivalry with Randy Orton became the backbone of the PG Era, a battle of morality against rebellion that proved WWE could endure when anchored by one man who refused to compromise. He was an essential part of the brand. He took up a mantle, one that looked desirable but carried an incredible cost. This is why his retirement isn’t ordinary. It’s the end of one of the greatest stories ever told in professional sports. Few have ever walked in shoes like these and had the chance to leave gracefully.

    John Cena’s Comic Book Saga

    Cena’s career has been full of larger-than-life matchups. His rivalry with Edge was chaos against order. His conflict with CM Punk was establishment against revolution. His battles with The Rock were staged like crossover summer blockbusters. Each pairing carried significant stakes, and all of them contributed to his larger mythology.

    The nickname “Super Cena” was not wrong. Even as they resisted it, fans were correctly identifying the archetype Cena portrayed. They wanted him, but only on their terms. Still, Cena’s presence kept the company intact. He was a consummate performer shouldering heavy burdens: headlining pay-per-views, filming charity spots, handling endless media, and wrestling night after night with the same effort. He mattered not just for his championships, but for his endurance too. And he never grew bitter or resentful after fulfilling his duties time and time again. He simply kept going. That is the essence of a hero, and of someone who deserves his endless flowers.

    For years, Cena was defined by rejection. Now, arenas thunder in gratitude. Fans who once booed him for being too perfect recognize that he is a singular talent, the likes of which they may never see again.

    Cena’s final appearances have electrified crowds because of the weight of finality. The image of him saluting at the top of the ramp has regained its luster. These are the last frames of a story that has been building for as long as some fans can remember. Every word he speaks lands differently, tinged with hints of both his legacy and his farewell.

    Wrestlepalooza, WWE’s first event under the TKO banner, highlights Cena’s place in the company and the need for a franchise player who can provide continuity while Cena provides closure. Speculation over his final opponent now feels less like booking chatter and more like mythology being written in real time. Whether Orton is the rival who defines his arc, Roman Reigns is the franchise player who succeeded him, or a younger talent is chosen to stand across from him, the choice will symbolize more than a match. It will mark the final page and, hopefully, the passing of the torch to the new standard-bearer and generational talent for the next 20 years.

    Cena’s last appearance on the Friday Night SmackDown brand comes on the same show and in the same building where he made his debut against Kurt Angle in 2002: Chicago’s Allstate Arena. The setting marks the final stage of a hero’s journey that has been unfolding for more than two decades. His return there at the close of his career transforms September 5, 2025 into something larger than nostalgia. It becomes sacred ground where Cena and his audience can share in closure together.

    As this chapter closes, Cena doesn’t just leave behind championships or catchphrases. With his retirement ends the last great story of the PG Era, an entire chapter of professional wrestling that he carried through sheer endurance, consistency, and sacrifice. And for once, in both WWE and Hollywood, the world seems ready to admit it has been watching a superhero all along.

    The post John Cena’s Retirement Is the Final Chapter of a Superhero Story appeared first on Den of Geek.

  • Asynchronous Design Critique: Giving Feedback

    Asynchronous Design Critique: Giving Feedback

    The ability to collaborate to improve our designs while growing our own skills and perspectives, in whatever form it takes and whatever it may be called, is one of the most powerful soft skills at our disposal.

    Feedback is also one of the most underestimated tools, and often, by assuming that we’re already good at it, we settle, forgetting that it’s a skill that can be trained, grown, and improved. Poor feedback can create friction on projects, lower morale, and, in the long term, undermine trust and collaboration. Quality feedback can be a transformative force.

    Practice is certainly a good way to improve, but the learning gets even faster when it’s paired with a solid foundation that structures and focuses the practice. What are some foundational elements of giving effective feedback? And how can feedback be adapted for remote and distributed work environments?

    On the web, we can find a long history of asynchronous feedback: code has been written and discussed on mailing lists since the beginning of open source. Today, engineers collaborate on pull requests, designers comment in their favorite design tools, project managers and scrum masters exchange ideas on tickets, and so on.

    Design critique is often the name given to the kind of feedback that’s provided to improve our work collectively. As such, it follows many of the same principles as feedback in general, but it also has some differences.

    The content

    The content of the feedback is the foundation of every effective critique, so we need to start there. There are many models you can use to shape your content. The one I personally like best, because it’s clear and actionable, is this one from Lara Hogan.

    While this formula is usually applied to giving feedback to people, it fits really well in a design critique too, because it ultimately addresses the main questions we work on: What? Where? Why? How? Imagine that you’re giving feedback on some design work that spans several screens, like an onboarding flow: there are a few screens shown, a flow diagram, and an outline of the decisions made. You notice an inconsistency in the flow. If you keep the three elements of the formula in mind (observation, impact, question), you’ll have a mental model that can help you be more precise and effective.

    Here is an example of a comment that could be part of some feedback, and at first glance it might seem reasonable because it loosely fits the formula. But does it?

    Not sure about the buttons’ styles and hierarchy—it feels off. Can they be changed?

    Observation in design feedback doesn’t just mean pointing out which part of the interface your feedback refers to; it also means offering a perspective that’s as specific as possible. Are you speaking from the user’s point of view? Your expert perspective? A business perspective? The project manager’s perspective? A first-time user’s perspective?

    When I see these two buttons, I anticipate one to go forward and the other to go back.

    Impact is about the why. Just pointing out a UI element might sometimes be enough if the issue is obvious, but more often than not, you should add an explanation of what you’re pointing out.

    When I see these two buttons, I anticipate one to go forward and the other to go back. But this is the only screen where this happens, as before we just used a single button and an “×” to close. This seems to be breaking the consistency in the flow.

    The question approach is meant to provide open guidance by eliciting critical thinking from the designer receiving the feedback. Notably, Lara’s formula offers a second approach: request, which instead provides guidance toward a specific solution. While that’s a viable option for general feedback, in my experience the question approach typically leads to the best solutions, because designers are generally more at ease when they have an open space to experiment in.

    The difference between the two can be illustrated with an example. For the question approach:

    When I see these two buttons, I anticipate one to go forward and the other to go back. But this is the only screen where this happens, as before we just used a single button and an “×” to close. This seems to be breaking the consistency in the flow. Would it make sense to unify them?

    Or, for the request approach:

    When I see these two buttons, I anticipate one to go forward and the other to go back. But this is the only screen where this happens, as before we just used a single button and an “×” to close. This seems to be breaking the consistency in the flow. Let’s make sure that all screens have the same pair of forward and back buttons.

    In some situations, it might be useful to add an extra why: an explanation of why you consider the suggested solution to be better.

    When I see these two buttons, I anticipate one to go forward and the other to go back. But this is the only screen where this happens, as before we just used a single button and an “×” to close. This seems to be breaking the consistency in the flow. Let’s make sure that all screens have the same two forward and back buttons so that users don’t get confused.
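    For readers who think in code, the shape of the formula can be sketched as a tiny checklist. This is purely an illustration under invented names (the `Critique` class, its fields, and `compose` are mine, not anything from Lara Hogan’s writing): a comment needs an observation and an impact, plus either a question or a request.

    ```python
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Critique:
        """One piece of design feedback in the observation/impact/question-or-request shape."""
        observation: str                 # the what and where, from a stated perspective
        impact: str                      # the why
        question: Optional[str] = None   # open-ended guidance...
        request: Optional[str] = None    # ...or a specific suggestion

        def compose(self) -> str:
            # Refuse to produce a comment when a required part is missing.
            if not (self.observation and self.impact):
                raise ValueError("feedback needs both an observation and its impact")
            opening = self.question or self.request
            if not opening:
                raise ValueError("feedback needs either a question or a request")
            return f"{self.observation} {self.impact} {opening}"

    comment = Critique(
        observation="When I see these two buttons, I expect one to go forward and one to go back.",
        impact="This is the only screen where that happens, which breaks the consistency of the flow.",
        question="Would it make sense to unify them?",
    )
    print(comment.compose())
    ```

    Real critique is written prose, of course; the point of the sketch is only that skipping the impact or the opening leaves the comment incomplete.
    
    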

    Choosing the question approach or the request approach can also at times be a matter of personal preference. I spent a while working on improving my feedback, conducting anonymous feedback reviews and sharing feedback with others. After a few rounds of this work and a year later, I got a positive response: my feedback came across as effective and grounded. Until I changed teams. Surprise surprise, one particular person gave me a lot of negative feedback. The reason is that I had previously tried not to be prescriptive in my advice—because the people who I was previously working with preferred the open-ended question format over the request style of suggestions. However, there was a member of this other team who preferred specific guidance. So I adapted my feedback for them to include requests.

    One comment that I heard come up a few times is that this kind of feedback is quite long, and it doesn’t seem very efficient. Yes, but no. Let’s explore both sides.

    No, because while this kind of feedback is longer, it is effective and provides just enough information for a sound fix. And if we zoom out, it can reduce future back-and-forth conversations and misunderstandings, improving the overall efficiency and effectiveness of collaboration beyond the single comment. Imagine that in the example above the feedback were instead just, “Let’s make sure that all screens have the same two forward and back buttons.” The designer receiving this feedback wouldn’t have much to go by, so they might just implement the change. In later iterations, the interface might change or new features might be introduced, and maybe that change wouldn’t make sense anymore. Without the why, the designer might assume the change was about consistency, but what if it wasn’t? There could then be an underlying concern that changing the buttons would be perceived as a regression.

    Yes, because this style of feedback is not always necessary: comments don’t always need to be exhaustive. Sometimes certain changes are obvious (“The font used doesn’t follow our guidelines”), and sometimes the team has enough shared internal knowledge that some of the whys can be left implied.

    The formula above is not meant to be a rigid template for feedback, but rather a mnemonic to help reflect on and improve the practice. Even after years of actively working on my critiques, I still go back to it from time to time and check whether what I just wrote is effective.

    The tone

    Well-grounded content is the foundation of feedback, but it isn’t enough on its own. The soft skills of the person giving the critique can multiply the chances that the feedback will be well received and understood. Tone alone can determine whether the same content is rejected or welcomed, and feedback delivered poorly rarely leads to sustained change.

    Since our goal is to be understood and to have a positive working environment, tone is essential to work on. I’ve tried to summarize the necessary soft skills over the years using a formula that resembles the one for content: the receptivity equation.

    Respectful feedback comes across as grounded, solid, and constructive. It’s the kind of feedback that, whether it’s positive or negative, is perceived as useful and fair.

    Timing refers to the moment when the feedback is given. Even the most precise feedback doesn’t have much hope of being well received if it arrives at the wrong time. Questioning the entire high-level information architecture of a new feature that’s about to go live might still be relevant if it raises a significant blocker that no one saw, but otherwise those concerns will likely have to wait for a later revision. So, in general, attune your feedback to the stage of the project. Early iteration? Late iteration? Final polish? Each has different needs. The right timing will make it more likely that your feedback is well received.

    Attitude is the equivalent of intent, and in the context of person-to-person feedback it can be likened to radical candor. It means checking, before writing, whether what we have in mind will actually help the person and improve the project overall. This can be a hard reflection at times, because maybe we don’t want to admit that we don’t really appreciate that person. Hopefully that’s not the case, but it can happen, and it’s human. Acknowledging and owning it can help you compensate: how would I write this if I really cared about them? How can I avoid being passive-aggressive? How can I encourage constructive behavior?

    Form is especially relevant in diverse and cross-cultural work environments, because great content, perfect timing, and the right attitude might not come across if the way we write creates misunderstandings. There can be many reasons for this: certain words may trigger particular reactions, non-native speakers may not grasp all the nuances of some sentences, our brains may simply work differently, and we may perceive the world differently, so neurodiversity must be taken into account. Whatever the reason, it’s important to review not just what we write but how we write it.

    A few years back, I asked for some feedback on how I give feedback. I received some helpful advice, but one comment surprised me. They pointed out that when I wrote “Oh, […]”, I made them feel stupid. That’s not what I meant to say! I felt really bad, realizing that I had been giving them feedback for months and might have made them feel stupid every time. I was horrified… but also thankful. I immediately fixed the habit by adding “oh” to my list of replaced words (your choice between aText, TextExpander, or others) so that when I typed “oh,” it was instantly deleted.
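    The word-replacement trick is easy to prototype outside a dedicated expansion tool. As a hedged sketch (the flagged word list and the `soften` function are invented here for illustration; they are not aText’s or TextExpander’s actual configuration), a small filter can strip a flagged opener like “oh” from a drafted comment before it’s posted:

    ```python
    import re

    # Hypothetical list of words to delete: openers that can read as condescending.
    FLAGGED_OPENERS = ("oh", "obviously", "actually")

    # Matches any flagged word (plus an optional trailing comma and whitespace),
    # anywhere in the draft, case-insensitively.
    _PATTERN = re.compile(
        r"\b(?:" + "|".join(FLAGGED_OPENERS) + r")\b,?\s*",
        flags=re.IGNORECASE,
    )

    def soften(draft: str) -> str:
        """Remove flagged filler words from a drafted feedback comment."""
        return _PATTERN.sub("", draft)

    print(soften("Oh, this button placement breaks the flow."))
    # The "Oh, " opener is stripped before the comment is posted.
    ```

    A real text expander works at typing time rather than as a post-processing step, but the effect on the posted comment is the same.
    
    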

    Something to highlight, because it’s quite common, especially in teams with a strong group spirit, is that people tend to beat around the bush. A positive attitude doesn’t mean softening or withholding criticism; it means delivering criticism, positive or negative, in a respectful and constructive manner. The kindest thing that you can do for someone is to help them grow.

    Giving feedback in written form has a great advantage: it can be reviewed by another person who isn’t directly involved, which can help reduce or remove any bias that might be present. Whenever I’ve shared a comment with someone and asked, “How does this sound?”, “How can I make it better?”, or even “How would you have written it?”, I’ve been surprised by how different the revised version could be.

    The format

    Asynchronous feedback also has a significant inherent advantage: it gives us extra time to make sure our suggestions meet two main objectives, clarity and actionability.

    Let’s imagine that someone shared a design iteration for a project. You are reviewing it and leaving a comment. There are many ways to accomplish this, and context is of course important, but let’s try to think about some things that might be worthwhile to take into account.

    In terms of clarity, start by grounding the critique that you’re about to give by providing context. This includes specifically describing where you’re coming from: do you have a thorough understanding of the project, or is this your first time seeing it? Are you coming from a high-level perspective, or are you figuring out the details? Are there regressions? Which user’s point of view are you addressing when offering your feedback? Is the design iteration at a point where it would be okay to ship this, or are there major things that need to be addressed first?

    Even if you’re giving feedback to a team that already has some background information on the project, providing context is helpful. And context is absolutely essential when giving cross-team feedback. If I were to review a design that might be indirectly related to my work, and if I had no knowledge about how the project arrived at that point, I would say so, highlighting my take as external.

    We frequently concentrate on the negatives and attempt to list every possible improvement. That’s of course important, but it’s just as important—if not more—to focus on the positives, especially if you saw progress from the previous iteration. Although this may seem superfluous, it’s important to remember that design has a number of possible solutions to each problem. So pointing out that the design solution that was chosen is good and explaining why it’s good has two major benefits: it confirms that the approach taken was solid, and it helps to ground your negative feedback. In the longer term, sharing positive feedback can help prevent regressions on things that are going well because those things will have been highlighted as important. Positive feedback can also help to lessen impostor syndrome as an added bonus.

    There’s one powerful approach that combines both context and a focus on the positives: frame how the design is better than the status quo ( compared to a previous iteration, competitors, or benchmarks ) and why, and then, on that foundation, add what could be improved. This is powerful because there’s a big difference between critiquing a design that’s already in good shape and critiquing a design that isn’t quite there yet.

    Another way that you can improve your feedback is to depersonalize it: the comments should always be about the work, never about the person who made it. It’s “This button isn’t well aligned” versus “You haven’t aligned this button well.” This can be fixed very quickly by reviewing your writing just before sending.

    In terms of actionability, one of the best approaches to help the designer who’s reading through your feedback is to split it into bullet points or paragraphs, which are easier to review and analyze one by one. You might also think about breaking up the feedback into sections or even across multiple comments if it is longer. Of course, adding screenshots or signifying markers of the specific part of the interface you’re referring to can also be especially useful.

    One approach that I’ve personally used effectively in some contexts is to enhance the bullet points with four markers using emojis. A red square 🟥 marks something that I consider blocking, a yellow diamond 🔶 marks something that needs to be changed, and a green circle 🟢 marks a positive note or confirmation. I also use a blue spiral 🌀 for anything that’s an open question, an exploration, an alternative, or just a note. However, I’d only use this system on teams where I’ve already established a high level of trust, because delivering a lot of red squares could turn out to be quite demoralizing, and I’d have to reframe how I communicate that.

    Let’s see how this works by reusing the earlier example as the first bullet point in this list:

    • 🔶 Navigation—When I see these two buttons, I anticipate one to go forward and the other to go back. But this is the only screen where this happens, as before we just used a single button and an “×” to close. This seems to be breaking the consistency in the flow. Let’s make sure that all screens have the same two forward and back buttons so that users don’t get confused.
    • 🟢 Overall—I think the page is solid, and this is good enough to be our release candidate for a version 1.0.
    • 🟢 Metrics—Good improvement in the buttons on the metrics area; the improved contrast and new focus style make them more accessible.
    • 🌀 Button style—The green accent in this context reads as a positive action, because green is typically seen as a confirmation color. Do we need to explore a different color?
    • 🔶 Tiles—Given the number of items on the page and the overall page hierarchy, it seems to me that the tiles should use Subtitle 2 instead of Subtitle 1. This will keep the visual hierarchy more consistent.
    • 🌀 Background—Using a light texture works well, but I wonder whether it adds too much noise on this kind of page. What’s the purpose of using it?

    What about giving feedback directly in Figma or another design tool that allows in-place comments? In general, these can conceal discussions and be harder to follow, but in the right setting they can be very effective. Just make sure that each comment is separate, similar to the idea of splitting mentioned above, so that it’s easier to match each discussion to a single task.

    One final note: say the obvious. Sometimes we don’t say something because we think it’s obvious that it’s good or wrong. Or sometimes we hold back a doubt because the question might sound stupid. Say it, that’s fine. You might have to reword it a little to make the reader feel more comfortable, but don’t hold it back. Good feedback is transparent, even when it may seem obvious.

    Another benefit of asynchronous feedback is that written feedback automatically tracks decisions. Especially in large projects, “Why did we do this?” is a question that comes up from time to time, and there’s nothing better than open, transparent discussions that can be reviewed at any time. For this reason, I recommend using software that saves these discussions without hiding them once they’re resolved.

    Content, tone, and format. Each of these topics offers a useful model, but improving eight focus points at once (observation, impact, question, timing, attitude, form, clarity, and actionability) is a lot of work to take on. One effective approach is to tackle them one by one: first identify the area where you’re weakest (either from your own perspective or from feedback from others) and start there. Then move on to the second, the third, and so on. At first you’ll have to put in extra time for every piece of feedback that you give, but after a while it’ll become second nature, and your impact on the work will multiply.

    Thanks to Brie Anne Demkiw and Mike Shelton for reviewing the first draft of this article.

  • Asynchronous Design Critique: Getting Feedback

    Asynchronous Design Critique: Getting Feedback

    “Any feedback?” is perhaps one of the worst ways to ask for opinions. It’s vague and unfocused, and it doesn’t give a clear picture of what we’re looking for. Getting good feedback starts earlier than we might expect: it starts with the request.

    Starting the process of receiving feedback with a question may seem counterintuitive, but it makes sense if we consider that receiving feedback can be seen as a form of design research. In the same way that we wouldn’t run a study without the right questions to get the insights that we need, the best way to ask for feedback is to craft sharp questions.

    Design critique is not a one-off process. Sure, any good feedback loop continues until the project is done, but this is especially true for design, because design work proceeds iteration after iteration, from a high level down to the finest details. Each stage requires its own set of questions.

    And finally, as with any good research, we need to review what we got back, get to the core of its insights, and take action. The question, the iteration, and the review. Let’s look at each of those.

    The question

    Being open to feedback is essential, but we need to be specific about what we’re looking for. “Any feedback?”, “What do you think?”, or “I’d love to hear your thoughts” at the end of a presentation is likely to garner a lot of scattered opinions or, worse, make everyone follow the lead of the first person who speaks up. And then… we get frustrated because vague questions like those turn a high-level flows review into people commenting on the borders of buttons. The topic we care about may be important, yet it becomes difficult to get the team to focus on it.

    But how do we get into this situation? It’s a combination of factors. One is that we don’t usually consider the question as part of the feedback process. Another is that it’s easy to assume that everyone is on the same page and that the problem is self-evident. Another is that in informal conversation there’s usually no need to be that precise. In short, we tend to underestimate the importance of the questions, and we don’t work on improving them.

    The act of asking good questions guides and focuses the critique. It’s also a form of consent: it makes clear that you’re open to comments and outlines the kind of comments you’d like to receive. It puts people in the right mental state, especially in situations where they weren’t expecting to give feedback.

    There isn’t a single best way to ask for feedback. It just needs to be specific, and specificity can take many shapes. A model for design critique that I’ve found particularly useful in my coaching is the one of stage versus depth.

    “Stage” refers to each of the steps of the process—in our case, the design process. The type of feedback changes as the project moves from user research to the final design. But even within a single stage, you might want to check whether some assumptions are correct, or whether there’s been a proper translation of the accumulated feedback into updated designs as the work has evolved. The layers of user experience can serve as a starting point for potential questions. What do you want to know: Project objectives? User needs? Functionality? Content? Interaction design? Information architecture? UI design? Navigation design? Visual design? Branding?

    Here are a few example questions that are precise and to the point, each referring to a different layer:

    • Functionality: Is it desirable to automate account creation?
    • Interaction design: Take a look through the updated flow and let me know whether you see any steps or error states that I might’ve missed.
    • Information architecture: This page contains two competing pieces of information. Is the structure effective in communicating them both?
    • User interface design: What do you think about the error counter at the top of the page, which makes sure you see the next error even if it is outside the viewport?
    • Navigation design: From research, we identified these second-level navigation items, but once you’re on the page, the list feels too long and hard to navigate. Are there any ways to deal with this?
    • Visual design: Are the sticky notifications in the bottom-right corner visible enough?

    The other axis of specificity is depth: how deep you’d like to go on what’s being presented. For example, you might have just introduced a new end-to-end flow, but there was a specific view that you found particularly challenging and you’d like a detailed review of that. This can be especially helpful from one iteration to the next when it’s important to highlight the areas that have changed.

    There are other things that we can consider to make our questions more specific—and more effective.

    A quick win is to remove generic qualifiers like “good,” “well,” “nice,” “bad,” “okay,” and “cool” from your questions. For example, asking, “When the block opens and the buttons appear, is this interaction good?” might seem specific, but the “good” qualifier is vague; a better question is, “When the block opens and the buttons appear, is it clear what the next action is?”

    Sometimes we actually do want broad feedback. That’s rare, but it happens. In that case, still make it explicit that you’re looking for a wide range of opinions, whether at a high level or in the details. Or you might simply ask, “At first glance, what do you think?” so that it’s clear the question is open ended but focused on someone’s impression after their first five seconds of looking at it.

    Sometimes the project is particularly broad, and some areas may have already been thoroughly explored. In these situations, it might be useful to explicitly say that some parts are locked in and aren’t open to feedback. Although it’s not something I’d recommend in general, I’ve found it helpful in avoiding rabbit holes that could lead to further refinement of parts that aren’t important right now.

    Asking specific questions can completely change the quality of the feedback that you receive. People who have less refined critique abilities will now be able to provide more useful feedback, and even experienced designers will appreciate the clarity and effectiveness gained from concentrating solely on what is required. It can save a lot of time and frustration.

    The iteration

    Design iterations are probably the most visible part of the design work, and they provide a natural checkpoint for feedback. Many design tools include inline commenting, but most of them show changes as a single fluid stream in the same file: conversations disappear once they’re resolved, shared UI components get updated automatically, and the design always shows the latest version by default, unless these would-be useful features are turned off manually. The implied goal of these tools seems to be to arrive at a single final copy with all discussions closed, probably because they inherited their patterns from collaborative editing of written documents. That may work well for some teams, but it’s probably not the best approach for design critiques—though I don’t want to be too prescriptive here.

    The asynchronous design-critique approach that I find most effective is to create explicit checkpoints for discussion. For this, I’m going to use the term iteration post. It refers to a write-up or presentation of the design iteration followed by a discussion thread of some kind. This can be used on any platform that can accommodate this structure. By the way, when I refer to a “write-up or presentation,” I’m including video recordings or other media too: as long as it’s asynchronous, it works.

    Using iteration posts has a number of benefits:

    • It creates a rhythm in the design work so that the designer can review feedback from each iteration and prepare for the next.
    • Decisions are always available, and conversations are also made accessible for future review.
    • It creates a record of how the design changed over time.
    • Depending on the tool, it might also make it simpler to collect and act on feedback.

    These posts of course don’t mean that no other feedback approach should be used, just that iteration posts can be the primary rhythm for a remote design team. Other feedback techniques ( such as live critique, pair designing, or inline comments ) can then build on top of that.

    I don’t think there’s a standard format for iteration posts. However, there are a few high-level components that make sense to include as a baseline:

    1. The goal
    2. The design
    3. The list of changes
    4. The questions

    Each project is likely to have a goal, and hopefully it’s something that’s already been summarized in a single sentence somewhere else, such as the client brief, the product manager’s outline, or the project owner’s request. Wherever it comes from, I’d copy and paste it into every iteration post. The idea is to provide context and to repeat what’s essential so that each iteration post is complete and there’s no need to hunt for information spread across multiple posts. If I want to know about the latest design, the latest iteration post will have everything I need.

    This copy-and-paste part introduces another relevant concept: alignment comes from repetition. Therefore, repeating information in posts helps to ensure that everyone is on the same page.

    The design is then the actual series of information-architecture outlines, diagrams, flows, maps, wireframes, screens, visuals, and any other kind of design work that’s been done. In essence, it’s any design work. For the final stages of work, I prefer the term blueprint to emphasize that I’ll be showing full flows instead of individual screens to make it easier to understand the bigger picture.

    It might also be helpful to give the artifacts clear labels, since that makes them easier to refer to. Overall, write the post in a way that helps people understand the work. It’s not very different from preparing a strong live presentation.

    For an efficient discussion, also include a bullet list of the changes from the previous iteration, so that people can focus on what’s new. This can be especially useful for larger pieces of work where keeping track, iteration after iteration, becomes a challenge.

    Finally, as mentioned earlier, include a list of the questions that will guide the design critique. Numbering the list also makes it easier to refer to each question by its number.
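    Putting the four components together, a minimal iteration post might be sketched like this. This is only an illustration of the structure; the project, goal, and questions here are invented:

    ```markdown
    # Checkout flow — i3

    **Goal:** Let customers complete checkout in under a minute
    (copied from the project brief).

    **Design:** [link to the blueprint or embedded screens]

    **Changes since i2:**
    - Merged the shipping and billing steps into one screen
    - Added inline validation to the payment form

    **Questions:**
    1. Interaction design: did I miss any error states in the payment step?
    2. On the combined screen, is it clear which address is which?
    ```

    The same skeleton works as a document, a thread starter, or the narration of a recorded walkthrough.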

    Not every iteration is the same. Earlier iterations don’t need to be as tightly focused—they can be more exploratory and experimental, maybe even breaking some of the design-language guidelines to see what’s possible. Later iterations then start converging on a solution and refining it until the design is complete and ready to be implemented.

    I want to highlight that even if these iteration posts are written and conceived as checkpoints, by no means do they need to be exhaustive. A post might be a draft, just a concept to start a discussion, or it might be a cumulative list of all the features that have been added over the course of each iteration until the full picture is achieved.

    Over time, I also started using specific labels for incremental iterations: i1, i2, i3, and so on. Although this may seem like a minor labeling tip, it can be useful in many ways:

    • Unique—It’s a clear, unique marker. Everyone knows where to go to review things, and within each project it’s easy to say “This was discussed in i4.”
    • Unassuming—It works like versions ( such as v1, v2, and v3 ), but in contrast, versions create the impression of something that’s big, exhaustive, and complete. Iterations can be exploratory, incomplete, or partial.
    • Future proof—It solves the “final” naming problem that you can run into with versions. No more files titled “final final complete no-really-its-done.” Within each project, the largest number always represents the latest iteration.

    The wording release candidate (RC) can be used to indicate that a design is complete enough to be implemented, even if some areas still need refinement and, in turn, more iterations—for example, “with i8 we reached RC” or “i12 is an RC.”

    The review

    What usually happens during a design critique is an open discussion, with a back-and-forth between people that can be very productive. This approach is particularly effective during live, synchronous feedback. But when we work asynchronously, it’s more effective to adopt a different strategy: a user-research mindset. Written feedback from teammates, stakeholders, or others can be treated as if it were the result of user interviews and surveys, and we can analyze it accordingly.

    This shift has significant benefits that make asynchronous feedback particularly effective, especially around these friction points:

    1. It removes the pressure to reply to everyone.
    2. It lessens the frustration of swoop-by comments.
    3. It lessens our personal stake.

    The first friction point is the pressure to respond to each and every comment. Sometimes we write the iteration post and get a few replies from our team; it’s simple, and there isn’t much of a problem. But other times, some solutions might require more in-depth discussion, and the number of replies can grow quickly, creating tension between trying to be a good team player by replying to everyone and getting on with the next design iteration. This is especially true when the person replying is a stakeholder or someone directly involved in the project. We need to accept that this pressure is absolutely normal; it’s human nature to try to accommodate people we care about. But when we treat a design critique more like user research, we realize that we don’t need to respond to every comment. In asynchronous spaces, there are alternatives:

    • One is to let the next iteration speak for itself. When the design changes and we publish a follow-up iteration, that is the response. You might tag all the people who were involved in the previous discussion, but even that’s a choice, not a requirement.
    • Another option is to respond briefly to acknowledge each comment, such as “Understood. Thank you,” “Good points—I’ll review,” or “Thanks. These will be included in the next iteration.” In some cases, this could also be a single top-level comment along the lines of “Thanks for all the feedback, everyone—the next iteration is coming soon!”
    • Another option is to quickly summarize the comments before moving on. Depending on your workflow, this can be particularly useful as it can provide a simplified checklist that you can then use for the next iteration.

    The second friction point is the swoop-by comment: the kind of feedback that comes from a member of the project or team who might not be aware of the context, constraints, decisions, or requirements, or of the discussions from earlier iterations. Swoop-by comments often trigger the frustration of having to repeat the same answer over and over. On their side, there’s something that one can hope the commenters might learn: to acknowledge that they’re swooping in and to be more conscientious about outlining where they’re coming from.

    Let’s begin by acknowledging, again, that there’s no need to reply to every comment. But if replying to a point that’s already been settled might be useful, a short response with a link to the previous discussion for more details is usually enough. Remember, alignment comes from repetition, so it’s okay to repeat things sometimes!

    Swoop-by commenters can also be helpful in two ways: they might point out something that isn’t clear, and they stand in for the point of view of someone seeing the design for the first time. Sure, you’ll still be frustrated, but knowing that might at least help in dealing with it.

    The third friction point is the personal stake we might have in the design, which can make us feel defensive if the review turns into a discussion. Treating feedback as user research helps us create a healthy distance between the people giving us feedback and our ego ( because yes, even if we don’t want to admit it, it’s there ). And in the end, looking at everything in aggregate form helps us prioritize our work better.

    Always remember that while you need to listen to stakeholders, project owners, and any specific advice, you don’t have to accept every piece of feedback. You must evaluate it and come to a decision that you can justify, but sometimes “no” is the right call.

    As the designer leading the project, you’re in charge of that decision. In the end, everyone has their area of specialization, and the designer is the one with the most background and knowledge to make the right choice. And by listening to the feedback that you’ve received, you’re making sure that it’s also the best and most balanced decision.

    Thanks to Mike Shelton and Brie Anne Demkiw for their contributions to the initial draft of this article.

  • Voice Content and Usability

    Voice Content and Usability

    We’ve been having conversations for a very long time. Whether to convey information, conduct transactions, or simply to check in on one another, people have yammered away, chattering and gesticulating, through spoken conversation for countless generations. Only relatively recently have we begun committing our conversations to writing, and only more recently still have we outsourced them to the computer, a machine that shows far more affinity for written correspondence than for the vernacular rigors of spoken language.

    Computers struggle because, between spoken and written language, speech is the more primordial. To converse with us, machines must wrestle with the messiness of human speech: the disfluencies and pauses, the gestures and body language, and the variations in word choice and spoken dialect that can stymie even the most carefully crafted human-computer interaction. In the human-to-human scenario, spoken language also has the benefit of face-to-face contact, where we can readily interpret nonverbal social cues.

    In contrast, written language develops its own fossil record of dated terms and phrases, because committing words to the record keeps usages around long after they’ve faded from spoken communication ( for example, the salutation “To whom it may concern” ). Because it tends to be more consistent, polished, and formal, written text is fundamentally much easier for machines to parse and understand.

    Spoken language has no such luxury. Besides the nonverbal cues that decorate conversations with emphasis and emotional context, there are also verbal cues and vocal behaviors that modulate conversation in nuanced ways: how something is said, not what is said. Whether it’s rapid-fire, low-pitched, high-decibel, sarcastic, stilted, or sighing, our spoken language conveys much more than the written word can ever contain. So when it comes to voice interfaces—the machines we conduct spoken conversations with—we face exciting challenges as designers and content strategists.

    Voice interactions

    We interact with voice interfaces for a variety of reasons, but according to Michael McTear, Zoraida Callejas, and David Griol in The Conversational Interface, those motivations by and large mirror the reasons we initiate conversations with other people, too. We typically strike up a conversation because:

    • we need something done ( such as a transaction ),
    • we seek knowledge of something ( some kind of information ), or
    • we are social beings and want someone to talk to ( conversation for conversation’s sake ).

    These three categories—which I refer to as transactional, informational, and prosocial—also describe voice interactions: a single conversation from beginning to end that realizes some outcome for the user, starting with the voice interface’s first greeting and ending with the user exiting the interface. Note here that a conversation in our human sense, a chat between people that leads to some result and lasts an arbitrary length of time, could encompass multiple transactional, informational, and prosocial voice interactions in succession. In other words, a voice interaction is a conversation, but a conversation is not necessarily a single voice interaction.

    Purely prosocial conversations are more gimmicky than captivating in most voice interfaces, because machines don’t yet have the capacity to really want to know how we’re doing and to do the sort of glad-handing humans crave. In addition, the jury is still out on whether users actually prefer the sort of organic human conversation that begins with a prosocial voice interaction and shifts seamlessly into other types. In fact, in Voice User Interface Design, Michael Cohen, James Giangola, and Jennifer Balogh recommend sticking to users’ expectations by mimicking how they interact with other voice interfaces rather than trying too hard to be human—and potentially alienating them in the process.

    That leaves two genres of conversations we can have with one another that a voice interface can easily have with us, too: a transactional voice interaction realizing some outcome, and an informational voice interaction teaching us something new ( “discuss a musical” ).

    Transactional voice interactions

    When you order a Hawaiian pizza with extra pineapple, you’re having a conversation, and it’s a voice interaction as well—unless you’re tapping buttons on a food delivery app. Even when we walk up to the counter and place an order, the conversation quickly pivots from an initial smattering of neighborly small talk to the real mission at hand: ordering a pizza ( generously topped with pineapple, as it should be ).

    Alison: Hey, how are things going?

    Burhan: Hi, welcome to Crust Deluxe! It’s chilly outside. How can I help you?

    Alison: Can I get a Hawaiian pizza with extra pineapple?

    Burhan: Sure, what size?

    Alison: Large.

    Burhan: Anything else?

    Alison: No, that’s it.

    Burhan: Something to drink?

    Alison: I’ll have a bottle of Coke.

    Burhan: You got it. It will be about $15 and take about fifteen minutes.

    Each progressive disclosure in this transactional conversation reveals more and more of the desired outcome of the transaction: a service rendered or a product delivered. Transactional conversations share certain characteristics: they are direct, concise, and economical, dispensing quickly with pleasantries.

    Informational voice interactions

    Meanwhile, some conversations are primarily about obtaining information. Alison might visit Crust Deluxe with no intention of placing an order at all; she might be just as interested in whether they serve halal or kosher dishes, gluten-free options, or something else. Though we again have a prosocial mini-conversation at the beginning to practice politeness, we’re after much more.

    Alison: Hey, how are things going?

    Burhan: Hi, welcome to Crust Deluxe! It’s chilly outside. How can I help you?

    Alison: Can I ask a few questions?

    Burhan: Of course! Go right ahead.

    Alison: Do you have any halal options on the menu?

    Burhan: Absolutely! On request, we can make any pie halal. We also have lots of vegetarian, ovo-lacto, and vegan options. Are you considering any additional dietary restrictions?

    Alison: What about gluten-free pizzas?

    Burhan: For both our deep-dish and thin-crust pizzas, we can definitely make a gluten-free crust for you, without a problem. Anything else I can answer for you?

    Alison: That’s it for now. Good to know. Thank you!

    Burhan: Anytime, come back soon!

    This dialogue is entirely different. Here, the goal is to get a certain set of facts. Informational conversations are investigative quests for the truth—research expeditions to gather information. Informational voice interactions are, by necessity, more long-winded than transactional conversations. Responses are typically longer, more in-depth, and carefully communicated so that the customer understands the key takeaways.

    Voice Interfaces

    Voice-based user interfaces use speech at their core to help users accomplish their objectives. But simply because an interface has a voice component doesn't mean that every user interaction with it is mediated through voice. Because multimodal voice interfaces can lean on visual components like screens as crutches, we're most concerned in this book with pure voice interfaces, which are completely dependent on spoken conversation and lack any visual component, making them much more nuanced and challenging to tackle.

    Though voice interfaces have long been integral to the imagined future of humanity in science fiction, only recently have those lofty visions become fully realized in genuine voice interfaces.

    IVR ( interactive voice response ) systems

    Though written conversational interfaces have been fixtures of computing for many decades, voice interfaces first emerged in the early 1990s with text-to-speech (TTS) dictation programs that recited written text aloud, as well as speech-enabled in-car systems that gave directions to a user-provided address. But the first true voice interfaces, ones we could actually speak with, arrived in the form of interactive voice response (IVR) systems, developed as an alternative to overburdened customer service representatives.

    IVR systems allowed organizations to reduce their reliance on call centers, but they soon became notorious for their clunkiness. In the corporate world, these systems were primarily created as metaphorical switchboards to direct customers to a real phone agent ("Say Reservations to book a flight or check an itinerary"); chances are you'll have a conversation with one the next time you call an airline or hotel conglomerate. Despite their functional issues and users' frustration at being unable to reach an actual human right away, IVR systems proliferated in the early 1990s across a variety of industries.

    IVR systems' conversations are extremely repetitive and monotonous, rarely veering from a single format, and they've earned a reputation for being far less scintillating than the conversations we're used to in real life (or even in science fiction).

    Screen readers

    In parallel with the evolution of IVR systems came the invention of the screen reader, a tool that transcodes visual content into synthesized speech. For Blind or visually impaired website users, it's the predominant method of interacting with text, multimedia, or form elements. Screen readers represent perhaps the closest thing we have today to an out-of-the-box delivery of content via voice.

    Among the first screen readers known by that moniker was the Screen Reader for the BBC Micro and NEC Portable, developed by the Research Centre for the Education of the Visually Handicapped (RCEVH) at the University of Birmingham in 1986. In the same year, Jim Thatcher created the first IBM Screen Reader for text-based computers, later reworked for computers with graphical user interfaces (GUIs).

    With the rapid growth of the web in the 1990s, the demand for accessible tools for websites exploded. Thanks to semantic HTML and especially ARIA roles, introduced in 2008, screen readers can facilitate speedy interactions with web pages, ostensibly allowing disabled users to traverse the page as an aural and temporal space rather than a visual and physical one. In other words, screen readers for the web “provide mechanisms that translate visual design constructs—proximity, proportion, etc.—into useful information,” writes Aaron Gustafson in A List Apart. “At least they do when documents are authored thoughtfully.”

    But screen readers have a big drawback: they're challenging to use and relentlessly verbose, even though they're incredibly instructive for voice interface designers. The visual structures of websites and web navigation don't translate well to speech, sometimes resulting in unwieldy pronouncements that name every manipulable HTML element and announce every formatting change. For many screen reader users, working with web-based interfaces exacts a significant cognitive toll.

    In Wired, accessibility advocate and voice engineer Chris Maury considers why the screen reader experience is ill-suited to users relying on voice:

    From the beginning, I hated the way screen readers work. Why are they designed the way they are? It makes no sense to present information visually and only then translate that into audio. All of the time and effort that goes into creating the perfect user experience for an app is wasted, or even worse, adversely impacts the experience for blind users.

    In many cases, well-designed voice interfaces can speed users to their destination better than long-winded screen reader monologues. After all, sighted users of a visual interface have the luxury of scanning the viewport freely, attending only to the information they care about. Blind users, meanwhile, are obligated to listen to every utterance synthesized into speech and therefore prize brevity and efficiency. Users with disabilities who have long had no choice but to use clumsy screen readers stand to benefit from more streamlined voice interfaces, especially more capable voice assistants.

    Voice assistants

    When many of us think of voice assistants, the popular subset of voice interfaces found in living rooms, smart homes, and offices, we immediately think of Star Trek and Majel Barrett's voice as the omniscient ship's computer. Voice assistants are akin to personal concierges that can answer questions, schedule appointments, conduct searches, and perform other common day-to-day tasks. And they're quickly gaining more attention from accessibility advocates for their assistive potential.

    Before the earliest IVR systems found success in the enterprise, Apple published a demonstration video in 1987 depicting the Knowledge Navigator, a voice assistant that could transcribe spoken words and recognize human speech to a great degree of accuracy. Then, in 2001, Tim Berners-Lee and others outlined their vision of a Semantic Web “agent” that would carry out routine tasks like “checking calendars, making appointments, and finding locations.” It wasn't until 2011, when Apple's Siri arrived, that voice assistants became a tangible reality for consumers.

    Voice assistants vary widely in how programmable and customizable they are, a reflection of the breadth of voice assistants available today (Fig. 1). At one extreme, everything except vendor-provided functionality is locked down: at the time of their release, for example, the core features of Apple's Siri and Microsoft's Cortana couldn't be extended beyond their existing capabilities. Even today, aside from predefined categories of tasks like messaging, hailing rideshares, and making restaurant reservations, there is no way for developers to interact with Siri at a low level.

    At the opposite end of the spectrum, voice assistants like Amazon Alexa and Google Home offer a core foundation on which developers can build custom voice interfaces. For this reason, developers who feel stifled by the limitations of Siri and Cortana are increasingly using programmable voice assistants that are capable of customization and extensibility. Amazon offers the Alexa Skills Kit, a developer framework for building custom voice interfaces for Amazon Alexa, while Google Home offers the ability to program arbitrary Google Assistant skills. Users of the Amazon Alexa and Google Assistant ecosystems can choose from among the thousands of custom-built skills available today.

    As companies like Amazon, Apple, Microsoft, and Google jockey for position, they're also selling and open-sourcing an unprecedented array of tools and frameworks for designers and developers, aiming to make creating voice interfaces as simple as possible, even without code.

    Often by necessity, voice assistants like Amazon Alexa tend to be monochannel—they’re tightly coupled to a device and can’t be accessed on a computer or smartphone instead. In contrast, many development platforms, such as Google’s Dialogflow, have omnichannel capabilities that allow users to create a single conversational interface that then manifests as a voice interface, textual chatbot, and IVR system upon deployment. I don’t prescribe any specific implementation approaches in this design-focused book, but in Chapter 4 we’ll get into some of the implications these variables might have on the way you build out your design artifacts.

    Voice content

    Simply put, voice content is content delivered through voice. To preserve what makes human conversation so compelling in the first place, voice content must be free-flowing and organic, contextless and concise—everything written content isn't.

    Our world is replete with voice content in various forms: screen readers reciting website content, voice assistants rattling off a weather forecast, and automated phone hotline responses governed by IVR systems. In this book, we're most concerned with content delivered aurally, not as an option but as a necessity.

    For many of us, our first foray into informational voice interfaces will be to deliver content to users. There's only one problem: the content we already have isn't remotely suitable for this new medium. So how do we make the content trapped on our websites more conversational? And how do we write new copy that lends itself to voice delivery?

    Lately, we've begun slicing and dicing our content in unprecedented ways. Websites are, in many ways, colossal vaults of what I call macrocontent: lengthy prose that can stretch for miles in a browser window, like the microfilm reels of newspaper archives. Back in 2002, well before the present-day ubiquity of voice assistants, technologist Anil Dash defined microcontent as permalinked pieces of content that stay legible regardless of environment, such as email or text messages:

    A day's weather forcast [sic], the arrival and departure times for an airplane flight, an abstract from a long publication, or a single instant message can all be examples of microcontent.

    I'd update Dash's definition of microcontent to include all examples of bite-sized content that go beyond written communiqués. After all, today we encounter microcontent in interfaces where a small snippet of copy is displayed alone, unmoored from the browser, like a chatbot's confirmation of a restaurant reservation. Microcontent is the best way to examine how we can stretch our content to the very limits of its potential, informing delivery channels both established and novel.

    As microcontent, voice content is unique because it's an example of how content is experienced in time rather than in space. We can glance at a digital sign for an instant to know when the next train is arriving, but voice interfaces hold our attention captive for as long as they're speaking, with no easy way to skim or skip ahead—something screen reader users know all too well.

    Because microcontent is fundamentally made up of isolated blobs with no relation to the channels where they’ll eventually end up, we need to ensure that our microcontent truly performs well as voice content—and that means focusing on the two most important traits of robust voice content: voice content legibility and voice content discoverability.

    Our voice content’s legibility and discoverability in general both depend on how it manifests in terms of perceived space and time.


    Designing for the Unexpected

    Although I'm not certain when I first heard this quote, it has stuck with me over the years. How do you design for situations you can't imagine? Or create designs for devices that haven't been invented yet?

    Flash, Photoshop, and responsive design

    When I first started designing websites, my go-to software was Photoshop. I'd create a 960px canvas and set about producing a layout that I would later drop content into. The development phase was all about attaining pixel-perfect precision using fixed widths, fixed heights, and absolute positioning.

    All of this changed with Ethan Marcotte's talk at An Event Apart and his subsequent article "Responsive Web Design" in A List Apart in 2010. I was sold on responsive design as soon as I heard about it, but I was also terrified. The pixel-perfect designs full of magic numbers that I had previously prided myself on producing were no longer good enough.

    My first encounter with responsive design didn't help my fear. An early project of mine was to take an existing fixed-width website and make it responsive. I quickly learned that you can't simply bolt responsiveness on at the end of a project. To create fluid layouts, you need to plan throughout the design phase.

    A new way to design

    Designing responsive or fluid sites has always been about removing limitations and presenting content to as many devices as possible. This relies on percentage-based layouts, which I initially achieved with native CSS and utility classes:

    .column-span-6 {
      width: 49%;
      float: left;
      margin-right: 0.5%;
      margin-left: 0.5%;
    }

    .column-span-4 {
      width: 32%;
      float: left;
      margin-right: 0.5%;
      margin-left: 0.5%;
    }

    .column-span-3 {
      width: 24%;
      float: left;
      margin-right: 0.5%;
      margin-left: 0.5%;
    }

    Then I moved on to using Sass, which let me reuse repeated blocks of code and move to more semantic markup:

    .logo {
      @include colSpan(6);
    }

    .search {
      @include colSpan(3);
    }

    .social-share {
      @include colSpan(3);
    }

    Media queries

    The next ingredient for responsive design is media queries. Without them, content would shrink to fit the available space regardless of whether it remained readable (designing mobile-first created the exact opposite issue).

    Media queries prevented this by allowing us to add breakpoints where the design could adapt. Like most people, I started out with three breakpoints: one for desktop, one for tablets, and one for mobile. Over the years, I added more and more breakpoints for phablets, wide screens, and so on. 

    For years, I happily worked this way and improved both my design and front-end skills in the process. The only problem I encountered was making changes to content, since with our Sass grid system in place, there was no way for the site owners to add content without amending the markup—something a small business owner might struggle with. This is because each row in the grid was defined using a div as a container. Adding content meant creating new row markup, which requires a level of HTML knowledge.

    Row markup was a mainstay of early responsive design, present in all the widely used frameworks like Bootstrap and Skeleton.

    <div class="row">
      <div class="column">1 of 7</div>
      <div class="column">2 of 7</div>
      <div class="column">3 of 7</div>
      <div class="column">4 of 7</div>
      <div class="column">5 of 7</div>
      <div class="column">6 of 7</div>
      <div class="column">7 of 7</div>
    </div>

    Another difficulty arose as I moved from a design agency building websites for small to medium-sized businesses to larger in-house teams, where I worked across a collection of related sites. In those roles, I began to work much more with reusable components.

    Our reliance on media queries resulted in components that were tied to common viewport sizes. If the goal of component libraries is reuse, this is a real problem: you can only use these components if the devices you're designing for match the viewport sizes used in the pattern library, never really hitting that "devices that don't yet exist" goal.

    Then there's the problem of space. Media queries allow components to adapt based on the viewport size, but what happens when I drop a component into a sidebar, like in the figure below?

    Container queries: our savior or a false dawn?

    Container queries have long been touted as an improvement upon media queries, but at the time of writing they are unsupported in most browsers. JavaScript workarounds exist, but they can introduce dependencies and compatibility issues. The basic theory underlying container queries is that elements should change based on the size of their parent container, not the viewport width, as seen in the following illustrations.

    One of the biggest arguments in favor of container queries is that they help us create components or design patterns that are truly reusable because they can be picked up and placed anywhere in a layout. This is an important step in moving toward a form of component-based design that works at any size on any device.

    In other words, responsive elements should be used to replace responsive layouts.

    Container queries will help us move from designing pages that respond to the browser or device size to designing components that can be placed in a sidebar or in the main content, and respond accordingly.
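    As a rough sketch of the idea, here's what a container query might look like using the @container syntax (which was still in flux at the time of writing; the class names are illustrative):

    ```css
    /* The sidebar becomes a query container for its children */
    .sidebar {
      container-type: inline-size;
    }

    /* The card responds to its container's width, not the viewport's */
    @container (min-width: 400px) {
      .card {
        display: flex;
        gap: 10px;
      }
    }
    ```

    The same .card component could then sit in the main content area or in a narrow sidebar and adapt to each without any extra breakpoints.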

    My issue is that layout is still used to determine when a design needs to adapt. This approach will always be restrictive, as we will still need pre-defined breakpoints. For this reason, my main question with container queries is, How would we decide when to change the CSS used by a component?

    In all likelihood, we'd be making that choice in a component library, removed from context and real content.

    As the diagrams below illustrate, we can use container queries to create designs for specific container widths, but what if I want to change the design based on the image size or ratio?

    In this example, it isn't the dimensions of the container that should dictate the design; it's the image.

    Without having strong cross-browser support for them, it’s difficult to say for certain whether container queries will be a success story. Responsive component libraries would definitely evolve how we design and would improve the possibilities for reuse and design at scale. However, we might always need to modify these elements to fit our content.

    CSS is changing

    Whilst the container query debate rumbles on, there have been numerous advances in CSS that change the way we think about design. The days of fixed-width elements measured in pixels and floated div elements used to cobble layouts together are long gone, consigned to history along with table layouts. Flexbox and CSS Grid have revolutionized layouts for the web. We can now create elements that wrap onto new rows when they run out of space, not when the device changes.

    .wrapper {
      display: grid;
      grid-template-columns: repeat(auto-fit, 450px);
      gap: 10px;
    }

    The repeat() function paired with auto-fit or auto-fill allows us to specify how much space each column should use while leaving it up to the browser to decide when to spill the columns onto a new line. Similar things can be achieved with Flexbox, as elements can wrap over multiple rows and “flex” to fill available space. 

    .wrapper {
      display: flex;
      flex-wrap: wrap;
      justify-content: space-between;
    }

    .child {
      flex-basis: 32%;
      margin-bottom: 20px;
    }

    The biggest benefit of all of this is that you don't need to wrap elements in container rows. Without rows, content isn't tied to page markup in quite the same way, allowing for removals or additions of content without additional development.

    This is a big step forward when it comes to creating designs that allow for evolving content, but the real game changer for flexible designs is CSS Subgrid.

    Remember the days of crafting perfectly aligned interfaces, only for the customer to add an unbelievably long header almost as soon as they’re given CMS access, like the illustration below?

    Subgrid allows elements to respond to adjustments in their own content and in the content of sibling elements, helping us create designs more resilient to change.

    .wrapper {
      display: grid;
      grid-template-columns: repeat(auto-fit, minmax(150px, 1fr));
      grid-template-rows: auto 1fr auto;
      gap: 10px;
    }

    .sub-grid {
      display: grid;
      grid-row: span 3;
      grid-template-rows: subgrid; /* sets rows to parent grid */
    }

    CSS Grid allows us to separate layout and content, thereby enabling flexible designs, while Subgrid lets us create designs that can adapt to morphing content. Although Firefox is the only browser that supports Subgrid at the time of writing, the above code can be implemented behind an @supports feature query.
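    As a sketch, such a feature query might wrap the Subgrid declaration from the example above, leaving other browsers with a usable fallback grid:

    ```css
    /* Only apply Subgrid where the browser understands it;
       other browsers keep their own implicit rows */
    @supports (grid-template-rows: subgrid) {
      .sub-grid {
        grid-template-rows: subgrid;
      }
    }
    ```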

    Intrinsic layouts

    I’d be remiss not to mention intrinsic layouts, a term used by Jen Simmons to describe a mix of contemporary and traditional CSS features used to create layouts that respond to available space.

    Responsive layouts have flexible columns using percentages. Intrinsic layouts, on the other hand, use the fr unit to create flexible columns that won’t ever shrink so much that they render the content illegible.

    The fr unit is a way of saying, I want you to distribute the extra space in this way, but never make it smaller than the content that's inside of it.

    —Jen Simmons,” Designing Intrinsic Layouts”

    Intrinsic layouts can also make use of a mix of fixed and flexible units, letting the content choose how much space it occupies.
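    For example, a sketch of a layout mixing fixed and flexible units (the values here are purely illustrative):

    ```css
    .wrapper {
      display: grid;
      /* a fixed 200px sidebar, a flexible main column,
         and a column sized by its own content */
      grid-template-columns: 200px 1fr max-content;
      gap: 10px;
    }
    ```

    The 1fr column absorbs leftover space, while the max-content column never shrinks below what its content needs.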

    What makes intrinsic design stand out is that it not only creates designs that can withstand future devices but also helps scale design without losing flexibility. Components and patterns can be lifted and reused without needing the same breakpoints or the same amount of content as in the previous implementation.

    We can now create designs that adapt to the space they have, the content within them, and the content around them. We can create responsive components using an intrinsic approach without relying on container queries.

    Another 2010 moment?

    In my view, this intrinsic approach should be every bit as groundbreaking as responsive web design was ten years ago. To me, it's another "everything changed" moment.

    But it doesn't seem to be moving quite as fast. Despite the brilliant, widely shared talk that brought intrinsic layouts to my attention, I haven't yet had the same career-changing moment I had with responsive design.

    One possible explanation for that is that I now work for a sizable company, which is quite different from the design agency position I held in 2010. In my agency days, every new project was a clean slate, a chance to try something new. Nowadays, projects use existing tools and frameworks and are often improvements to existing websites with an existing codebase.

    Another possibility is that I'm now better prepared for change. In 2010 I was new to design in general; the shift was frightening and required a lot of learning. An intrinsic approach isn't exactly new, either: it's about applying existing skills and CSS knowledge in a different way.

    You can’t framework your way out of a content problem

    Another reason for the slightly slower adoption of intrinsic design could be the lack of quick-fix framework solutions available to kick-start the change.

    Ten years ago, responsive grid systems were everywhere. With a framework like Bootstrap or Skeleton, you had a responsive design template at your fingertips.

    Intrinsic design and frameworks don't go hand in hand quite as well, because the very benefit of having a selection of units becomes a hindrance when creating layout templates. The beauty of intrinsic design lies in combining different units and experimenting with techniques to get the best fit for your content.

    And then there are design tools. At some point in our careers, we probably all used Photoshop templates for desktop, tablet, and mobile devices, dropping designs in to demonstrate how the site would look at each of the three sizes.

    How do you do that now, with each component responding to its content and layouts flexing as and when they need to? This kind of design has to take place in the browser, something I'm a big fan of.

    The debate about “whether designers should code” is another that has rumbled on for years. When designing a digital product, we should, at the very least, design for a best- and worst-case scenario when it comes to content. It’s not ideal to do this in a graphics-based software package. In code, we can add longer sentences, more radio buttons, and extra tabs, and watch in real time as the design adapts. Does it continue to function? Is the design too reliant on the current content?

    Personally, I look forward to the day intrinsic design is the standard for design, when a design component can be truly flexible and adapt to both its space and content with no reliance on device or container dimensions.

    Content should come first

    Content is not constant. To design for the unanticipated or unexpected, we must account for changes in content, as in our earlier Subgrid card illustration, where the cards adapted to changes both in their own content and in the content of their sibling components.

    Thankfully, there’s more to CSS than layout, and plenty of properties and values can help us put content first. Subgrid and pseudo-elements like ::first-line and ::first-letter help to separate design from markup so we can create designs that allow for changes.

    Instead of the dated markup tricks below,

    <p>
      <span class="first-line">First line of text with different styling...</span>
    </p>

    —we can target content based on where it appears.

    .element::first-line {
      font-size: 1.4em;
    }

    .element::first-letter {
      color: red;
    }

    Much bigger additions to CSS include logical properties, which change the way we construct designs using logical dimensions (start and end) instead of physical ones (left and right). CSS math functions like min(), max(), and clamp() bring similar flexibility to sizing, as we'll see shortly.

    This flexibility allows for directional changes according to content, a common requirement when we need to present content in multiple languages. In the past, this was often achieved with Sass mixins, but it was usually limited to switching from left-to-right to right-to-left orientation.

    In the Sass version, directional variables had to be specified up front:

    $direction: rtl;
    $opposite-direction: ltr;
    $start-direction: right;
    $end-direction: left;

    These variables can be used as values—

    body {
      direction: $direction;
      text-align: $start-direction;
    }

    —or as properties.

    margin-#{$end-direction}: 10px;
    padding-#{$start-direction}: 10px;

    Now, however, we have native logical properties, removing the reliance on both Sass (or a similar tool) and the pre-planning that required scattering variables throughout a codebase. These properties also start to break apart the tight coupling between a design and strict physical dimensions, creating more flexibility for changes in language and in direction.

    margin-inline-end: 10px;
    padding-inline-start: 10px;

    There are also native start and end values for properties like text-align, which means we can replace text-align: right with text-align: start.

    Like the earlier examples, these properties help build designs that aren't constrained to one language; the design will reflect the content's needs.

    Fluid and fixed

    We briefly covered the power of combining fixed widths with fluid widths with intrinsic layouts. The min() and max() functions are a similar concept, allowing you to specify a fixed value with a flexible alternative. 

    For min(), this means pairing a fluid value with a fixed maximum.

    .element {
      width: min(50%, 300px);
    }

    The element in the figure above will be 50% of its container, as long as 50% of the container doesn't exceed 300px.

    For max(), we can pair a fluid value with a fixed minimum.

    .element {
      width: max(50%, 300px);
    }

    Now the element will be 50% of its container, as long as that works out to at least 300px. This means we can set limits but still allow content to react to the available space.

    The clamp() function builds on this by allowing us to set a preferred value with a third parameter. Now we can allow the element to shrink or grow if it needs to without getting to a point where it becomes unusable.

    .element {
      width: clamp(300px, 50%, 600px);
    }

    This time, the element's width will be 50% of its container (the preferred value), but it will never shrink below 300px or grow beyond 600px.

    With these techniques, we have a content-first approach to responsive design. We can separate content from markup, meaning the changes users make will not affect the design. By making plans for unanticipated changes in language or direction, we can begin to future-proof designs. And we can increase flexibility by setting desired dimensions alongside flexible alternatives, allowing for more or less content to be displayed correctly.

    Situation first

    Thanks to what we've discussed so far, we can cover device flexibility by changing our approach: designing around content and space instead of catering to devices. But what about the last bit of Jeffrey Zeldman's quote, "…situations you haven't imagined"?

    Someone using a desktop computer at home is in a very different situation from someone using a mobile phone while moving through a crowded street in glaring sunshine. Situations and environments are hard to plan for or predict because they change as people react to their own unique challenges and tasks.

    This is why choice is so crucial. One size never fits all, so we need to design for multiple scenarios to create equal experiences for all our users.

    Thankfully, there is a lot we can do to provide choice.

    Responsible design

    "There are parts of the world where mobile data is prohibitively expensive, and where there is little or no broadband infrastructure."

    —Chris Ashton, "I Used the Web for a Day on a 50 MB Budget"

    One of the biggest assumptions we make is that people interacting with our designs have a good wifi connection and a wide screen monitor. However, in the real world, our users may be commuters using smaller mobile devices that may experience drops in connectivity while traveling on trains or other modes of transportation. There is nothing more frustrating than a web page that won’t load, but there are ways we can help users use less data or deal with sporadic connectivity.

    The srcset attribute allows the browser to decide which image file to serve. This means we can create smaller, cropped images to display on mobile devices, in turn using less bandwidth and less data.

    <img src="image-small.jpg"
         srcset="image-small.jpg 500w,
                 image-large.jpg 1500w"
         sizes="(max-width: 600px) 500px, 1500px"
         alt="Image alt text">

    Preloading can also help us think about how and when media is downloaded. A preload link tells the browser about critical assets that need to be downloaded with high priority, improving perceived performance and the user experience. 

    <link rel="preload" href="font.woff2" as="font" type="font/woff2" crossorigin>

    There's also native lazy loading, which indicates assets that should only be downloaded when they are needed.

    <img src="image.jpg" loading="lazy" alt="Image alt text">

    With srcset, preload, and lazy loading, we can start to tailor a user’s experience based on the situation they find themselves in. What none of this does, however, is allow the user themselves to decide what they want downloaded, as the decision is usually the browser’s to make. 

    So how can we put users in control?

    The return of media queries

    Media queries have always been about much more than device sizes. They allow content to adapt to different situations, with screen size being just one of them.

    We've long been able to check for media types like print and speech and features such as hover, resolution, and color. These checks allow us to provide options that suit more than one scenario; it's less about one-size-fits-all and more about serving adaptable content.
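    For instance, a couple of those long-supported checks might look like this (the selectors are illustrative):

    ```css
    /* Hide navigation chrome when the page is printed */
    @media print {
      nav {
        display: none;
      }
    }

    /* Only show hover affordances on devices that can actually hover */
    @media (hover: hover) {
      .card:hover {
        box-shadow: 0 2px 8px rgba(0, 0, 0, 0.2);
      }
    }
    ```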

    The Level 5 spec for Media Queries is still being developed at this writing. It introduces some really exciting queries that in the future will help us design for multiple other unexpected situations.

    For instance, there is a light-level feature that lets us adapt styles to whether users are in bright sunlight or a dim environment. Paired with custom properties, these features allow us to quickly create designs or themes for specific environments.

    @media (light-level: normal) {
      :root {
        --background-color: #fff;
        --text-color: #0b0c0c;
      }
    }

    @media (light-level: dim) {
      :root {
        --background-color: #efd226;
        --text-color: #0b0c0c;
      }
    }

    Another key feature of the Level 5 spec is personalization. Instead of creating designs that are the same for everyone, users can choose what works for them. This is achieved by using features like prefers-reduced-data, prefers-color-scheme, and prefers-reduced-motion, the latter two of which already enjoy broad browser support. These features tap into preferences set via the operating system or browser so people don’t have to spend time making each site they visit more usable. 
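    For example, pairing these preference queries with the custom-property approach shown earlier (the color values and durations are arbitrary):

```css
/* Respect preferences set at the OS or browser level so users
   don't have to configure every site they visit. */
@media (prefers-color-scheme: dark) {
  :root {
    --background-color: #0b0c0c;
    --text-color: #fff;
  }
}

@media (prefers-reduced-motion: reduce) {
  * {
    animation-duration: 0.01ms;
    transition-duration: 0.01ms;
  }
}
```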

    Media queries like these go beyond choices made by the browser to grant more control to the user.

    Expect the unexpected

    In the end, we should always anticipate that things will change. Devices in particular change faster than we can keep up, with foldable screens already on the market.

    We can design for content, but we can’t design for every device in this constantly changing landscape. By putting content first and allowing that content to adapt to whatever space surrounds it, we can create more robust, flexible designs that increase the longevity of our products.

    A lot of the CSS discussed here is about moving away from layouts and putting content at the heart of design. There is a lot more we can do to adopt a more intrinsic approach, from responsive components to fixed and fluid units. Even better, we can test these techniques during the design phase by designing in-browser and watching how our designs adapt in real-time.

    When it comes to unexpected circumstances, we need to make sure our products are usable when people need them, whenever and wherever that may be. We can move closer to achieving this by involving users in our design decisions, by creating choice via browsers, and by giving control to our users with user-preference-based media queries.

    Good design for the unexpected should allow for change, provide choice, and give control to those we serve: our users themselves.

  • Sustainable Web Design, An Excerpt

    Sustainable Web Design, An Excerpt

    In the 1950s, some in the elite running community were beginning to think it was impossible to run a mile in less than four minutes. Runners had been attempting it since the late nineteenth century and were coming to the conclusion that the human body simply wasn’t built for it.

    Then, on May 6, 1954, Roger Bannister proved everyone wrong. On a cold, damp morning in Oxford, England, in conditions no one expected to lend themselves to record-setting, Bannister did just that, running a mile in 3:59.4 and becoming the first person in recorded history to run a mile in under four minutes.

    This shift in the benchmark changed what people believed was possible: the world now knew that the four-minute mile could be done. Bannister’s record lasted just forty-six days before it was taken by Australian runner John Landy. Soon after, in a single race, three runners all broke the four-minute barrier. Since then, over 1,400 runners have officially run a mile in under four minutes; the current record of 3:43.13 is held by Moroccan athlete Hicham El Guerrouj.

    We achieve far more when we believe something is possible, and we often only believe it’s possible once we see someone else do it. Just as with human running speed, we tend to assume there are fixed limits to how a website can perform.

    Establishing standards for a sustainable web

    In most major industries, the key indicators of environmental performance are fairly well established, such as energy per square metre for buildings and miles per gallon for cars. The tools and methods for calculating those measures are standardized as well, which keeps everyone on the same page when making environmental assessments. In the world of websites and apps, however, we aren’t held to any specific environmental standards, and we have only recently gained access to the tools and techniques we need to even measure our impact.

    The primary goal in sustainable web design is to reduce carbon emissions. However, it’s nearly impossible to accurately measure the amount of CO2 that a web product produces. We can’t monitor emissions coming out of exhaust pipes on our laptops; the pollution our websites cause is emitted far away, out of sight, at coal- and gas-burning power stations. We have no way of tracing the electrons from a website or app back to the power station where the electricity is generated, so we can’t know the exact amount of greenhouse gas produced. So what do we do?

    If we can’t measure the actual carbon emissions, then we need to estimate what we can. The two main factors that can be used as indicators of carbon emissions are:

    1. Data transfer
    2. Carbon intensity of electricity

    Let’s take a look at how we can use these indicators to calculate the energy use, and in turn the carbon footprint, of the sites and web applications we create.

    Data transfer

    When a website or application is used, a measurable amount of data is transferred over the internet, and most researchers use kilowatt-hours per gigabyte (kWh/GB) as the metric of energy efficiency. This serves as a useful proxy for how much energy is consumed and, in turn, how much carbon is emitted. As a rule of thumb, the more data transferred, the more electricity is used in the data center, the telecoms networks, and the end user’s devices.

    For web pages, the easiest way to estimate the data transfer of a single visit is to measure the page weight, which is the page’s transfer size in kilobytes when someone first visits it. This is fairly easy to measure using the developer tools in any modern web browser. Often, your web hosting account will also include overall data transfer statistics for a web application (Fig 2.1).

    The nice thing about page weight as a metric is that it allows us to compare the efficiency of web pages on a level playing field without confusing the issue with constantly changing traffic volumes.

    There is plenty of scope to reduce page weight. By early 2020, the median page weight was 1.97 MB for setups the HTTP Archive classifies as “desktop” and 1.77 MB for “mobile,” with desktop up 36 percent since January 2016 and mobile page weights nearly doubling over the same period (Fig 2.2). Image files account for roughly half of this data transfer, making them the single biggest contributor to carbon emissions on the typical website.

    History clearly shows that our web pages can be smaller, if only we set our minds to it. While the web’s underlying technologies, such as data centers and transmission networks, are becoming more and more energy efficient, websites themselves are becoming less efficient as time goes on.

    You may already be familiar with the concept of a performance budget as a way of guiding a project team to deliver faster user experiences. For example, we might specify that the website must load in a maximum of one second on a broadband connection and three seconds on a 3G connection. Like speed limits when driving, performance budgets are upper limits rather than vague suggestions, so the goal should always be to come in well under budget.

    Designing for fast performance does often lead to reduced data transfer and emissions, but it isn’t always the case. Page weight and transfer size are more objective and reliable benchmarks for sustainable web design, whereas web performance is frequently more about the subjective perception of load times than it is about the underlying system’s actual efficiency.

    We can set a page weight budget in reference to a benchmark of industry averages, using data from sources like HTTP Archive, or we can benchmark against competitors’ pages or against the previous version of the website being redesigned. For example, we might set a maximum page weight budget equal to that of our most efficient competitor, or we could set the benchmark lower to ensure we are best in class.
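    If the team already tracks performance budgets in tooling, a page weight budget can live there too. As a sketch, Lighthouse accepts a budget file along these lines (the path and kilobyte figures are illustrative assumptions, not recommendations):

```json
[
  {
    "path": "/*",
    "resourceSizes": [
      { "resourceType": "total", "budget": 500 },
      { "resourceType": "image", "budget": 250 },
      { "resourceType": "script", "budget": 125 }
    ]
  }
]
```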

    If we want to take it a step further, we can start looking at the transfer size of our web pages for repeat visitors. Although page weight on a first visit is the easiest thing to measure, and easy to compare on a like-for-like basis, we can learn even more if we look at transfer size in other scenarios too. For instance, repeat visitors who load the same page frequently will likely have a high percentage of its files cached in their browser, which means they won’t need to download all of the files again on subsequent visits. Likewise, a visitor who navigates to new pages on the same website will likely not need to load the full page each time, as some global assets from areas like the header and footer may already be cached. Measuring page weight budgets at this level of detail, beyond the first visit, can teach us even more about how to optimize efficiency for users who regularly visit our pages.

    Page weight budgets are easy to track throughout a design and development process. Although they don’t directly disclose carbon emissions and energy consumption data, they do provide a clear indicator of efficiency in comparison to other websites. And as transfer size is an effective analog for energy consumption, we can actually use it to estimate energy consumption too.

    In summary, less data transfer leads to more energy efficiency, which is a crucial component of reducing web product carbon emissions. The more efficient our products, the less electricity they use, and the less fossil fuels need to be burned to produce the electricity to power them. However, as we’ll see next, it’s important to take into account the source of that electricity because all web products require some.

    Carbon intensity of electricity

    Regardless of energy efficiency, the level of pollution caused by digital products depends on the carbon intensity of the energy used to power them. Carbon intensity (gCO2/kWh) describes how many grams of carbon dioxide are produced for each kilowatt-hour of electricity generated. This varies widely, with renewable energy sources and nuclear having an extremely low carbon intensity of less than 10 gCO2/kWh (even when factoring in their construction), whereas fossil fuels have a very high carbon intensity of approximately 200–400 gCO2/kWh.

    The majority of electricity is produced by national or state grids, which mix energy from a variety of sources, each with a different carbon intensity. The distributed nature of the internet means that a single user of a website or app might be drawing on energy from multiple grids simultaneously: a website user in Paris uses electricity from the French national grid to power their home internet and devices, but the website’s data center could be in Dallas, USA, pulling electricity from the Texas grid, while the telecoms networks in between use energy from everywhere along the way.

    Although we don’t have complete control over the energy supply of web services, we do have some control over where our projects are hosted. With the data center using a significant proportion of any website’s energy, locating it in an area with low-carbon energy will tangibly reduce its carbon emissions. Danish startup Tomorrow reports and maps this data, and a look at their map shows how, for instance, choosing a data center in France will result in significantly lower carbon emissions than choosing one in the Netherlands (Fig 2.3).

    That said, we don’t want to locate our servers too far away from our users either; it takes energy to transmit data through the telecoms networks, and the farther the data travels, the more energy is used. Just like food miles, we can think of the distance from the data center to the website’s core user base as “megabyte miles,” and we want it to be as small as possible.

    We can use website analytics to determine the country, state, or even city where our core user group is located, measure the distance from that location to the data center used by our hosting company, and use that distance as a benchmark. This will be a somewhat fuzzy metric, as we don’t know the precise center of mass of our users or the exact location of the data center, but it at least gives us a rough idea.

    For instance, if a website is hosted in London but its main audience is on the United States’ West Coast, we could calculate the distance from San Francisco to London at roughly 5,300 miles. That’s a long way! Hosting the site somewhere in North America, ideally on the West Coast, would significantly shorten the distance and the amount of energy needed to transmit the data. In addition, locating our servers closer to our visitors reduces latency and delivers a better user experience, so it’s a win-win.
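    As a sketch of that rough calculation, a great-circle (haversine) estimate is plenty for a fuzzy benchmark like megabyte miles; the coordinates below are approximate city centers:

```javascript
// Rough "megabyte miles" estimate: great-circle distance between the
// core user base and the data center. This is a fuzzy benchmark,
// not a precise measurement of network path length.
function distanceMiles([lat1, lon1], [lat2, lon2]) {
  const toRad = (deg) => (deg * Math.PI) / 180;
  const R = 3959; // mean Earth radius in miles
  const dLat = toRad(lat2 - lat1);
  const dLon = toRad(lon2 - lon1);
  const a =
    Math.sin(dLat / 2) ** 2 +
    Math.cos(toRad(lat1)) * Math.cos(toRad(lat2)) * Math.sin(dLon / 2) ** 2;
  return 2 * R * Math.asin(Math.sqrt(a));
}

const sanFrancisco = [37.77, -122.42];
const london = [51.51, -0.13];
console.log(Math.round(distanceMiles(sanFrancisco, london))); // roughly 5,350
```

    Swapping London for a West Coast data center location drops the figure to near zero, which is exactly the comparison the benchmark is for.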

    Converting to carbon emissions

    If we combine carbon intensity with a calculation for energy consumption, we can calculate the carbon emissions of our websites and apps. A tool my team created accomplishes this by measuring the data transfer over the wire when a web page is loaded, calculating the associated electricity consumption, and then converting that data into a CO2 figure (Fig 2.4). It also factors in whether or not the web hosting is powered by renewable energy.
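    As a sketch of the arithmetic involved (the energy and carbon-intensity constants below are illustrative assumptions for a simple model; real tools use more nuanced, regularly updated figures):

```javascript
// Back-of-the-envelope CO2 estimate for a single page view.
// ASSUMED figures for illustration only.
const KWH_PER_GB = 0.81;        // assumed energy per gigabyte transferred
const GRID_GCO2_PER_KWH = 442;  // assumed average grid carbon intensity

function estimateCO2Grams(pageWeightMB, gridIntensity = GRID_GCO2_PER_KWH) {
  const gb = pageWeightMB / 1024;   // convert MB to GB
  const kwh = gb * KWH_PER_GB;      // energy attributed to the transfer
  return kwh * gridIntensity;       // grams of CO2 emitted
}

// A 2 MB page on an average grid vs. low-carbon hosting (~50 gCO2/kWh):
console.log(estimateCO2Grams(2).toFixed(2));      // prints 0.70
console.log(estimateCO2Grams(2, 50).toFixed(2));  // prints 0.08
```

    Even this crude model shows the two levers clearly: shrink the page weight, or host on a lower-carbon grid.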

    The Energy and Emissions Worksheet that accompanies this book shows you how to refine these estimates and tailor them to your project’s specific characteristics.

    With the ability to calculate carbon emissions for our projects, we can go beyond page weight budgets and establish carbon budgets as well. CO2 is not a metric commonly used in web projects; we’re more familiar with kilobytes and megabytes, and we can fairly easily look at design options and files to assess how big they are. Translating that into carbon adds a layer of abstraction, but carbon budgets focus our minds on the issue we’re actually trying to reduce, which supports the core goal of sustainable web design: reducing carbon emissions.

    Browser Energy

    Data transfer might be the simplest and most complete analog for energy consumption in our digital projects, but because it gives us one number to represent the energy used in the data center, the telecoms networks, and the end user’s devices, it can’t offer insight into the efficiency of any specific part of the system.

    One part of the system we can look at in more detail is the energy used by end users’ devices. As front-end web technologies advance, the computational load is increasingly shifting from the data center to users’ devices, whether they are phones, tablets, laptops, desktops, or even smart TVs. Modern web browsers allow us to implement more complex styling and animation on the fly using CSS and JavaScript, and JavaScript libraries like Angular and React allow us to create applications where the “thinking” is performed partially or entirely in the browser.

    All of these advances are exciting and open up new possibilities for what the web can do to serve society and create positive experiences. However, the more processing we do in the web browser, the more energy is used by the user’s device. This has implications not just environmentally, but also for user experience and inclusivity. Applications that put a heavy processing load on a user’s device unintentionally exclude those with older, slower devices and drain the batteries of phones and laptops more quickly. And if we build web applications that require up-to-date, powerful devices, we encourage people to throw away old devices far more frequently. That’s not just bad for the environment; it also places a disproportionate financial burden on the poorest members of society.

    Partly because the tools are limited, and partly because there are so many different models of devices, it’s difficult to measure website energy consumption on end users’ devices. One of the tools we currently have is the Energy Impact monitor inside the developer console of the Safari browser (Fig 2.5).

    You know that moment when you load a website and your computer’s cooling fans start spinning so frantically that you think it might take off? That’s essentially what this tool is measuring.

    It creates an energy impact rating based on the percentage of CPU used and the duration of CPU usage when the web page is loaded. It doesn’t give us a precise figure for the amount of electricity used, but the information it does provide can be used to benchmark how efficiently your websites use energy and to set targets for improvement.

  • A Content Model Is Not a Design System

    A Content Model Is Not a Design System

    Do you remember when having a great website was enough? Today, people are getting answers from Siri, Google search snippets, and mobile apps, not just our websites. Forward-thinking organizations have adopted an omnichannel content strategy, whose mission is to reach audiences across multiple digital channels and platforms.

    But how do you set up a content management system (CMS) to reach your audience now and in the future? I learned the hard way that creating a content model (a definition of content types, attributes, and relationships that lets people and systems understand content) with my more familiar design-system thinking would capsize my client’s omnichannel content strategy. You can avoid that outcome by creating content models that are semantic and that also connect related content.

    A Fortune 500 company recently tapped me to lead its CMS implementation. The client was excited by the benefits of an omnichannel content strategy, including content reuse, multichannel marketing, and robot delivery: designing content to be intelligible to bots, Google knowledge panels, snippets, and voice user interfaces.

    For our content to be understood by multiple systems, the model needed semantic types: types named according to their meaning rather than their presentation. This is crucial to an omnichannel content strategy. Our goal was to let authors create content once and reuse it wherever it was relevant. But as the project proceeded, I realized that supporting content reuse at the scale my client needed required the whole team to recognize a new pattern.

    Despite our best intentions, we kept falling back on what we were more familiar with: design systems. Unlike web-focused content strategies, an omnichannel content strategy can’t rely on WYSIWYG tools for design and layout. Our tendency to approach the content model with our familiar design-system thinking repeatedly pulled us away from one of the main purposes of a content model: delivering content to audiences across multiple channels.

    Two essential principles for an effective content model

    Our designers, developers, and stakeholders had learned from previous web projects to treat content as visual building blocks that fit into layouts. That approach was not only more familiar but also more intuitive, at least at first, because it made the designs feel tangible. We discovered two guiding principles that helped the team understand how a content model differs from the design systems we were used to:

    1. Content models must define semantics instead of layout.
    2. Content models should connect content that belongs together.

    Semantic content models

    A semantic content model uses type and attribute names that reflect the content’s intended purpose rather than its intended display. For example, in a nonsemantic model, teams might create types like teasers, media blocks, and cards. These types may make it easy to lay out content, but they do nothing to help delivery channels understand the content’s meaning. In contrast, a semantic content model uses type names like “product,” “service,” and “testimonial,” allowing each delivery channel to interpret and use the content as it sees fit.

    When you’re creating a semantic content model, a great place to start is to look over the types and properties defined by Schema.org, a curated resource of type definitions that are intelligible to platforms like Google search.

    Benefits of a semantic content model include:

    • Even if your team doesn’t care about omnichannel content, a semantic content model decouples content from its presentation so that teams can evolve the website’s design without needing to refactor its content. In this way, content can withstand disruptive website redesigns.
    • A semantic content model also gives you an advantage in the marketplace. By adding structured data based on Schema.org, a website can help Google understand its content, display it in search snippets or knowledge panels, and use it to answer voice-interface queries. Potential visitors could encounter your content without ever visiting your website.
    • Beyond those practical benefits, you’ll need a semantic content model if you want to deliver omnichannel content. To use the same content across multiple delivery channels, those channels must be able to understand it. For instance, if your content model provided a list of questions and answers, it could easily be displayed on a frequently asked questions (FAQ) page and also be used by a bot to answer common questions.
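    The FAQ example above can be sketched in Schema.org’s JSON-LD vocabulary (the question and answer text here are invented for illustration); content modeled this way is legible to a web page, a search engine, and a bot alike:

```json
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [
    {
      "@type": "Question",
      "name": "Can content survive a website redesign?",
      "acceptedAnswer": {
        "@type": "Answer",
        "text": "Yes. A semantic content model decouples content from presentation."
      }
    }
  ]
}
```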

    For example, using a semantic content model for articles, events, people, and locations lets A List Apart provide cleanly structured data for search engines so that users can read the content on the website, in Google knowledge panels, and even with hypothetical voice interfaces in the future.

    Connected content models

    I’ve come to realize that the best content models are those that are semantic and that also connect related content (such as a FAQ item’s question and answer pair), instead of slicing related content across disparate components. A good content model keeps together the content that belongs together so that multiple delivery channels can use it without needing to first reassemble the pieces.

    Think of an article, for example. The meaning and usefulness of an article’s parts depend on their unity. Would one of its headings or paragraphs be meaningful on its own, without the context of the full article? On our project, our familiar design-system thinking frequently led us toward content models that would divide content into distinct chunks to fit a web-centric layout. The effect was similar to an article with its headline removed. Because we were slicing content into standalone pieces based on layout, content that belonged together became difficult to manage and nearly impossible for multiple delivery channels to understand.

    To illustrate, let’s look at how connecting related content works in a real-world setting. The client’s design team presented a complex layout for a software product page that included multiple tabs and sections. Our instinct was to follow suit with the content model. Shouldn’t it be as simple and flexible as possible to add any number of tabs in the future?

    Because our design-system instincts were so ingrained, it seemed that we needed a “tab section” content type so that multiple tab sections could be added to a page. Each tab section would display a different kind of content. One tab might hold the software’s overview or specifications. Another might hold a list of resources.

    Our inclination to break the content model into “tab section” pieces would have led to an unnecessarily complex model and a cumbersome editing experience, and it would have created content that other delivery channels couldn’t understand. How would a different system have determined which “tab section” held a product’s specifications or its resource list, for instance? Would that system have had to parse tab sections and content blocks to infer their meaning? That approach would have prevented the tabs from ever being rearranged, and it would have required adding logic to every other delivery channel just to interpret the design system’s layout. Furthermore, if the client later decided to stop displaying this content in tabs, migrating to a new content model to reflect the redesign would have been tedious.

    The breakthrough came when we realized that each tab’s content, such as the software product’s overview, specifications, related resources, and pricing, was intended to serve a specific purpose. Once implementation began, our inclination to focus on what was visual and familiar had obscured the intent of the designs. With a little digging, it became clear that the idea of tabs wasn’t relevant to the content model at all. What mattered was the meaning of the content the tabs were intended to display.

    In fact, the client might later decide to display this content in a different way, without tabs, somewhere else. In response to this realization, we created content types for the software product based on the meaningful attributes the client wanted to convey. There were obvious semantic attributes like name and description as well as rich attributes like screenshots, software requirements, and feature lists. The software’s product information stayed together because it wasn’t sliced across separate components like “tab sections” derived from the content’s presentation. Any delivery channel, including future ones, could understand and display this content.
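    A sketch of the resulting semantic type (the type and attribute names here are hypothetical, chosen to echo the attributes described above) shows that no trace of tabs remains, only the meaning of the content:

```json
{
  "type": "softwareProduct",
  "name": "Example Analytics Suite",
  "description": "A hypothetical product used for illustration.",
  "specifications": ["Runs in the browser", "REST API"],
  "featureList": ["Dashboards", "Alerts"],
  "screenshots": ["dashboard.png"],
  "relatedResources": ["getting-started-guide"]
}
```

    Whether a channel renders these attributes as tabs, a single page, or a voice answer is now purely a presentation decision.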

    Conclusion

    In this omnichannel marketing project, we discovered that the best way to keep our content model on track was to ensure that it was semantic (with type and attribute names that reflected the meaning of the content) and that it kept together content that belonged together (instead of fragmenting it). These two principles kept us from shaping the content model around the design. If you’re developing a content model to support an omnichannel content strategy, or even if you just want to make sure Google and other interfaces understand your content, remember:

    • A design system isn’t a content model. Team members may be tempted to conflate them and force your content model to resemble your design system, so protect the semantic value and contextual structure of the content throughout the entire implementation process. That way, every delivery channel will be able to consume the content without a magic decoder ring.
    • If your team is struggling to make this transition, you can still reap some of the benefits by using Schema.org structured data on your website. The search engine optimization benefit is a compelling argument on its own, even if additional delivery channels aren’t on the horizon yet.
    • Also remind the team that decoupling the content model from the design will let them update the designs more easily, because they won’t be held back by the cost of content migrations. They’ll be able to create new designs without compromising the content, and they’ll be ready for the next big thing.

    By firmly defending these ideas, you’ll help your team view content as the most important component of your user experience and as the most effective way to engage with your audience.