Blog

  • Personalization Pyramid: A Framework for Designing with User Data

    As a UX professional in today’s data-driven landscape, it’s increasingly likely that you’ve been asked to design a personalized digital experience, whether it’s a public website, user portal, or native application. Yet while there continues to be no shortage of marketing hype around personalization platforms, we still have very few standardized approaches for implementing personalized UX.

    That’s where we come in. After completing dozens of personalization projects over the past few years, we gave ourselves a goal: could we create a systematic personalization framework specifically for UX practitioners? The Personalization Pyramid is a designer-centric model for standing up human-centered personalization programs, spanning data, segmentation, content delivery, and overall goals. By using this approach, you will be able to understand the core components of a contemporary, UX-driven personalization program (or at the very least understand enough to get started).

    Getting Started

    For the sake of this article, we’ll assume you’re already familiar with the basics of digital personalization. A good primer can be found here: Website Personalization Planning. While UX projects in this area can take on many different forms, they often stem from similar starting points.

    Common scenarios for starting a personalization project:

    • Your organization or client purchased a content management system (CMS) or marketing automation platform (MAP) or related technology that supports personalization
    • The CMO, CDO, or CIO has identified personalization as a goal
    • Customer data is disjointed or siloed
    • You are running some isolated targeting campaigns or A/B tests
    • Stakeholders disagree on personalization approach
    • Mandate of customer privacy regulations (e.g. GDPR) requires revisiting existing user targeting practices

    Regardless of where you begin, a successful personalization program will require the same core building blocks. We’ve captured these as the “levels” of the pyramid. Whether you are a UX designer, researcher, or strategist, understanding the core components can help make your contribution successful.

    From top to bottom, the levels include:

    1. North Star: What larger strategic objective is driving the personalization program?
    2. Goals: What are the specific, measurable outcomes of the program?
    3. Touchpoints: Where will the personalized experience be served?
    4. Contexts and Campaigns: What personalization content will the user see?
    5. User Segments: What constitutes a unique, addressable audience?
    6. Actionable Data: What reliable and trustworthy data is captured by our technical platform to drive personalization?
    7. Raw Data: What wider set of data is conceivably available (already in our environment) allowing you to personalize?
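    The levels can also be treated as a top-down checklist when auditing an existing program. As a minimal sketch (the data structure and function below are our own illustration, not part of the framework itself):

```python
# The seven pyramid levels, top (strategy) down to bottom (data).
# Names are illustrative labels for the levels described in the article.
PYRAMID_LEVELS = [
    "north_star",
    "goals",
    "touchpoints",
    "contexts_and_campaigns",
    "user_segments",
    "actionable_data",
    "raw_data",
]

def missing_levels(defined: set[str]) -> list[str]:
    """Return the pyramid levels not yet defined, in top-down order."""
    return [level for level in PYRAMID_LEVELS if level not in defined]
```

    Walking the list from the top surfaces the highest-leverage gap first: a program with goals and raw data but no north star reports the north star as missing before anything else.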

    We’ll go through each of these levels in turn. To help make this actionable, we created an accompanying deck of cards to illustrate specific examples from each level. We’ve found them helpful in personalization brainstorming sessions, and will include examples for you here.

    Starting at the Top

    The levels of the pyramid are as follows:

    North Star

    A north star is what you are aiming for overall with your personalization program (big or small). The North Star defines the (one) overall mission of the personalization program. What do you want to achieve? North Stars cast a shadow. The bigger the star, the bigger the shadow. Examples of North Stars might include:

    1. Function: Personalize based on basic user inputs. Examples: “Raw” notifications, basic search results, system user settings and configuration options, general customization, basic optimizations
    2. Feature: Self-contained personalization componentry. Examples: “Cooked” notifications, advanced optimizations (geolocation), basic dynamic messaging, customized modules, automations, recommenders
    3. Experience: Personalized user experiences across multiple interactions and user flows. Examples: Email campaigns, landing pages, advanced messaging (i.e. C2C chat) or conversational interfaces, larger user flows and content-intensive optimizations (localization)
    4. Solution: Highly differentiating personalized product experiences. Examples: Standalone, branded experiences with personalization at their core, like the “algotorial” playlists by Spotify such as Discover Weekly

    Goals

    As in any good UX design, personalization should be driven by user intentions. Goals are the tactical and measurable metrics that will prove the overall program is successful. A good place to start is with your current analytics and measurement program and metrics you can benchmark against. In some cases, new goals may be appropriate. The key thing to remember is that personalization itself is not a goal; rather, it is a means to an end. Common goals include:

    • Conversion
    • Time on task
    • Net promoter score (NPS)
    • Customer satisfaction

    Touchpoints

    Touchpoints are where the personalization happens. As a UX designer, this will be one of your largest areas of responsibility. The touchpoints available to you will depend on how your personalization and associated technology capabilities are instrumented, and should be rooted in improving a user’s experience at a particular point in the journey. Touchpoints can be multi-device (mobile, in-store, website) but also more granular (web banner, web pop-up, etc.). Here are some examples:

    Channel-level Touchpoints

    • Email: Role
    • Email: Time of open
    • In-store display ( JSON endpoint )
    • Native app
    • Search

    Wireframe-level Touchpoints

    • Web overlay
    • Web alert bar
    • Web banner
    • Web content block
    • Web menu

    If you’re designing for web interfaces, for example, you will likely need to include personalized “zones” in your wireframes. The content for these can be presented programmatically in touchpoints based on our next step, contexts and campaigns.
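    As a hypothetical sketch of how such a zone might be filled at render time (the registry, names, and strings below are invented for illustration, not the API of any particular CMS or personalization platform):

```python
# Hypothetical campaign registry: (touchpoint, segment) -> personalized content.
# A real CMS or MAP would supply this mapping; every name here is invented.
CAMPAIGNS = {
    ("web_banner", "unknown"): "Welcome! New here? Take the tour.",
    ("web_banner", "authenticated"): "Welcome back. Pick up where you left off.",
}

def render_zone(touchpoint: str, segment: str, default: str = "") -> str:
    """Fill a personalized zone, falling back to the default (non-personalized) content."""
    return CAMPAIGNS.get((touchpoint, segment), default)
```

    The design point worth noting is the fallback: every personalized zone still needs sensible default content for segments no campaign targets.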

    Contexts and Campaigns

    Once you’ve outlined some touchpoints, you can consider the actual personalized content a user will receive. Many personalization tools will refer to these as “campaigns” (so, for example, a campaign on a web banner for new visitors to the website). These will programmatically be shown at certain touchpoints to certain user segments, as defined by user data. At this stage, we find it helpful to consider two separate models: a context model and a content model. The context helps you consider the level of engagement of the user at the personalization moment, for example a user casually browsing information vs. doing a deep-dive. Think of it in terms of information retrieval behaviors. The content model can then help you determine what type of personalization to serve based on the context (for example, an “Enrich” campaign that shows related articles may be a suitable supplement to extant content).

    Personalization Context Model:

    1. Browse
    2. Skim
    3. Nudge
    4. Feast

    Personalization Content Model:

    1. Alert
    2. Make Easier
    3. Cross-Sell
    4. Enrich
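    One way to make the two models concrete is a lookup from engagement context to the content campaign types that tend to suit it. The pairing below is our own illustration (the article’s card deck does not prescribe this mapping):

```python
# Illustrative pairing of engagement context (how deeply the user is engaged
# at the personalization moment) with content campaign types that tend to fit.
CONTEXT_TO_CONTENT = {
    "browse": ["alert", "cross_sell"],  # casual scanning: lightweight prompts
    "skim": ["make_easier"],            # shortcut the user toward likely goals
    "nudge": ["alert", "make_easier"],  # the user is close to an action
    "feast": ["enrich"],                # deep-dive: related, supplementary content
}

def candidate_campaigns(context: str) -> list[str]:
    """Content campaign types worth considering for a given engagement context."""
    return CONTEXT_TO_CONTENT.get(context, [])
```

    A deep-diving (“feast”) reader gets enrichment rather than an interruption; an unrecognized context yields no candidates rather than a wrong guess.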

    We’ve written extensively about each of these models elsewhere, so if you’d like to read more you can check out Colin’s Personalization Content Model and Jeff’s Personalization Context Model.

    User Segments

    User segments can be created prescriptively or adaptively, based on user research (e.g. via rules and logic tied to set user behaviors or via A/B testing). At a minimum you will likely need to consider how to treat the unknown or first-time visitor, the guest or returning visitor for whom you may have a stateful cookie (or equivalent post-cookie identifier), or the authenticated visitor who is logged in. Here are some examples from the personalization pyramid:

    • Unknown
    • Guest
    • Authenticated
    • Default
    • Referred
    • Role
    • Cohort
    • Unique ID
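    For the first three segments above, resolution often reduces to a simple precedence check on visitor state. A minimal sketch, assuming invented field names rather than any particular platform’s schema:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Visitor:
    """Minimal visitor state; field names are illustrative only."""
    auth_user_id: Optional[str] = None    # set when the visitor is logged in
    visitor_cookie: Optional[str] = None  # stateful cookie or post-cookie identifier

def resolve_segment(visitor: Visitor) -> str:
    """Map visitor state to the coarsest segments: authenticated > guest > unknown."""
    if visitor.auth_user_id:
        return "authenticated"
    if visitor.visitor_cookie:
        return "guest"
    return "unknown"
```

    The precedence order matters: a logged-in user also carries a cookie, so the strongest identifier must be checked first.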

    Actionable Data

    Every organization with any digital presence has data. It’s a matter of asking what data you can ethically collect from users, how inherently reliable and valuable it is, and how you can put it to use (sometimes known as “data activation”). Fortunately, the tide is turning to first-party data: a recent study by Twilio estimates some 80% of businesses are using at least some type of first-party data to personalize the customer experience.

    First-party data offers multiple advantages on the UX front, including being relatively simple to collect, more likely to be accurate, and less susceptible to the “creep factor” of third-party data. So a key part of your UX strategy should be to determine the best form of data collection for your audiences. Here are some examples:

    There is a progression of profiling when it comes to recognizing and making decisions about different audiences and their signals. It tends to move towards more granular constructs about smaller and smaller cohorts of users as time, confidence, and data volume grow.

    While some combination of implicit and explicit data is generally a prerequisite for any implementation (more commonly referred to as first-party and third-party data), ML efforts are typically not cost-effective directly out of the box. This is because a strong data backbone and content repository are prerequisites for optimization. But these approaches should be considered as part of the larger roadmap and may indeed help accelerate the organization’s overall progress. Typically at this point you will partner with key stakeholders and product owners to design a profiling model. The profiling model includes defining the approach to configuring profiles, profile keys, profile cards, and pattern cards: a multi-faceted approach to profiling that makes it scalable.

    Pulling it Together

    While the cards comprise the starting point to an inventory of sorts (we provide blanks for you to tailor your own)—a set of potential levers and motivations for the style of personalization activities you aspire to deliver—they are most valuable when considered as a grouping.

    In assembling a card “hand”, one can begin to trace the entire trajectory from leadership focus down through a strategic and tactical execution. It is also at the heart of the way both co-authors have conducted workshops in assembling a program backlog—which is a fine subject for another article.

    In the meantime, what is important to note is that each colored class of card is helpful to survey in understanding the range of choices potentially at your disposal; the real work is threading through them and making concrete decisions about for whom this decisioning will be made, and where, when, and how.

    Lay Down Your Cards

    Any sustainable personalization strategy must consider near-, mid-, and long-term goals. Even with the leading CMS platforms like Sitecore and Adobe or the most exciting composable CMS DXP out there, there is simply no “easy button” wherein a personalization program can be stood up and immediately yield meaningful results. That said, there is a common grammar to all personalization activities, just like every sentence has nouns and verbs. These cards attempt to map that territory.

  • Humility: An Essential Value

    Humility, a designer’s essential value—that has a nice ring to it. What about humility, an office manager’s essential value? Or a dentist’s? Or a librarian’s? They all sound great. When humility is our guiding light, the path is always open for fulfillment, growth, connection, and engagement. In this chapter, we’re going to talk about why.

    That said, this is a book for designers, and to that end, I’d like to start with a story—well, a journey, actually. It’s a personal one, and I’m going to make myself a little vulnerable along the way. I call it:

    The Tale of Justin’s Preposterous Pate

    When I was coming out of art school, a long-haired, goateed neophyte, print was a known quantity to me; design on the web, however, was rife with complexities to understand and learn, a problem to be solved. Though I had been formally trained in graphic design, typography, and layout, what fascinated me was how these traditional skills might be applied to a fledgling digital landscape. This theme would ultimately shape the rest of my career.

    So rather than graduate and go into print like many of my peers, I devoured HTML and JavaScript books into the wee hours of the morning and taught myself how to code during my senior year. I wanted—nay, needed—to better understand the underlying implications of what my design decisions would mean when rendered in a browser.

    The late ’90s and early 2000s were the so-called “Wild West” of web design. Designers at the time were all figuring out how to apply layout and visual communication to the digital landscape. What were the rules? How could we break them and still engage, entertain, and convey information? At a more micro level, how would my values, inclusive of humility, respect, and connection, coincide with all of that? I was eager to find out.

    Though I’m talking about a different era, those are timeless connections between non-career passions and the world of design. What are your core passions, or values, that transcend medium? It’s essentially the same concept we discussed previously on the core parallels between what fulfills you, independent of the physical or digital realms; the core elements are one and the same.

    First with tables, animated GIFs, and Flash, then with web standards, divs, and CSS, there was personality, raw unbridled creativity, and unique methods of presentation that often defied any semblance of a visible grid. Splash screens and “browser requirement” pages aplenty. Usability and accessibility were typically casualties of such creation, but such paramount facets of any digital design were largely (and, in hindsight, unfairly) disregarded in favor of experimentation.

    For example, this iteration of my personal portfolio site (“the pseudoroom”) from that era was experimental, if not a bit heavy-handed, in the visual communication of the concept of a living sketchbook. Very skeuomorphic. I collaborated with fellow designer and dear friend Marc Clancy (now a co-founder of the creative project organizing app Milanote) on this one, where we’d first sketch and then pass a Photoshop file back and forth to trick things out and play with varied user interactions. Then, I’d break it down and code it into a digital layout.

    Along with design folio pieces, the site also offered free downloads for Mac OS customizations: desktop wallpapers that were effectively design experimentation, custom-designed typefaces, and desktop icons.

    From around the same time, GUI Galaxy was a design, pixel art, and Mac-centric news portal some graphic designer friends and I conceived, designed, developed, and deployed.

    Design news portals were incredibly popular during this period, featuring ( what would now be considered ) Tweet-size, small-format snippets of pertinent news from the categories I previously mentioned. If you took Twitter, curated it to a few categories, and wrapped it in a custom-branded experience, you’d have a design news portal from the late 90s / early 2000s.

    We as designers had evolved and created a bandwidth-sensitive, web standards award-winning, much more accessibility-conscious website. Still ripe with experimentation, yet more mindful of equitable engagement. You can see a couple of content panes here, noting general news (tech, design ) and Mac-centric news below. We also offered many of the custom downloads I cited before as present on my folio site but branded and themed to GUI Galaxy.

    The site’s backbone was a homegrown CMS, with the presentation layer consisting of global design + illustration + news author collaboration. And the collaboration effort here, in addition to experimentation on a ‘brand ’ and content delivery, was hitting my core. We were designing something bigger than any single one of us and connecting with a global audience.

    Collaboration and connection transcend medium in their impact, immensely fulfilling me as a designer.

    Now, why am I taking you down this trip of design memory lane? Two reasons.

    First, there’s a reason for the nostalgia for that design era ( the “Wild West ” era, as I called it earlier ): the inherent exploration, personality, and creativity that saturated many design portals and personal portfolio sites. Ultra-finely detailed pixel art UI, custom illustration, bespoke vector graphics, all underpinned by a strong design community.

    Today’s web design has been in a period of stagnation. I suspect there’s a strong chance you’ve seen a site whose structure looks something like this: a hero image/banner with text overlaid, perhaps with a lovely rotating carousel of images (laying the snark on heavy there), a call to action, and three columns of sub-content directly beneath. Maybe an icon library is employed with selections that vaguely relate to their respective content.

    Design, as it’s applied to the digital landscape, is in dire need of thoughtful layout, typography, and visual engagement that goes hand-in-hand with all the modern considerations we now know are paramount: usability. Accessibility. Load times and bandwidth-sensitive content delivery. A responsive presentation that meets human beings wherever they’re engaging from. We must be mindful of, and respectful toward, those concerns—but not at the expense of creativity of visual communication or via replicating cookie-cutter layouts.

    Pixel Problems

    Websites during this period were often designed and built on Macs whose OS and desktops looked something like this. This is Mac OS 7.5, but 8 and 9 weren’t that different.

    Desktop icons fascinated me: how could any single one, at any given point, stand out to get my attention? In this example, the user’s desktop is tidy, but think of a more realistic example with icon pandemonium. Or, say an icon was part of a larger system grouping (fonts, extensions, control panels)—how did it also maintain cohesion amongst a group?

    These were 32 x 32 pixel creations, utilizing a 256-color palette, designed pixel-by-pixel as mini mosaics. To me, this was the embodiment of digital visual communication under such ridiculous constraints. And often, ridiculous restrictions can yield the purification of concept and theme.

    So I began to research and do my homework. I was a student of this new medium, hungry to dissect, process, discover, and make it my own.

    Expanding upon the notion of exploration, I wanted to see how I could push the limits of a 32×32 pixel grid with that 256-color palette. Those ridiculous constraints forced a clarity of concept and presentation that I found incredibly appealing. The digital gauntlet had been tossed, and that challenge fueled me. And so, in my dorm room into the wee hours of the morning, I toiled away, bringing conceptual sketches into mini mosaic fruition.

    These are some of my creations, made with the only tool available at the time for creating icons: ResEdit. ResEdit was a clunky, built-in Mac OS utility not really made for exactly what we were using it for. At the core of all of this work: Research. Challenge. Problem-solving. Again, these core connection-based values are agnostic of medium.

    There’s one more design portal I want to talk about, which also serves as the second reason for my story to bring this all together.

    This is K10k, short for Kaliber 1000. K10k was founded in 1998 by Michael Schmidt and Toke Nygaard, and was the design news portal on the web during this period. With its pixel art-fueled presentation, ultra-focused care given to every facet and detail, and with many of the more influential designers of the time who were invited to be news authors on the site, well… it was the place to be, my friend. With respect where respect is due, GUI Galaxy’s concept was inspired by what these folks were doing.

    For my part, the combination of my web design work and pixel art exploration began to get me some notoriety in the design scene. Eventually, K10k noticed and added me as one of their very select group of news authors to contribute content to the site.

    Amongst my personal work and side projects—and now with this inclusion in the design community—this put me on the map. My design work also began to be published in various printed collections, in magazines domestically and overseas, and featured on other design news portals. With that degree of success while in my early twenties, something else happened:

    I evolved—devolved, really—into a colossal asshole ( and in just about a year out of art school, no less ). The press and the praise became what fulfilled me, and they went straight to my head. They inflated my ego. I actually felt somewhat superior to my fellow designers.

    The casualties? My design stagnated. Its evolution—my evolution—stagnated.

    I felt so supremely confident in my abilities that I effectively stopped researching and discovering. When previously sketching concepts or iterating ideas in pencil was my automatic step one, I instead leaped right into Photoshop. I drew my inspiration from the smallest of sources (and with blinders on). Any critique of my work from my peers was often vehemently dismissed. The most tragic loss: I had lost touch with my values.

    My ego almost cost me some of my friendships and burgeoning professional relationships. I was toxic in talking about design and in collaboration. But thankfully, those same friends gave me a priceless gift: candor. They called me out on my unhealthy behavior.

    Admittedly, it was a gift I initially did not accept but ultimately was able to deeply reflect upon. I was soon able to accept, and process, and course correct. The realization laid me low, but the re-awakening was essential. I let go of the “reward ” of adulation and re-centered upon what stoked the fire for me in art school. Most importantly: I got back to my core values.

    Always Students

    Following that short-term regression, I was able to push forward in my personal design and career. And I could self-reflect as I got older to facilitate further growth and course correction as needed.

    As an example, let’s talk about the Large Hadron Collider. The LHC was designed “to help answer some of the fundamental open questions in physics, which concern the basic laws governing the interactions and forces among the elementary objects, the deep structure of space and time, and in particular the interrelation between quantum mechanics and general relativity. ” Thanks, Wikipedia.

    Around fifteen years ago, in one of my earlier professional roles, I designed the interface for the application that generated the LHC’s particle collision diagrams. These diagrams are the rendering of what’s actually happening inside the Collider during any given particle collision event and are often considered works of art unto themselves.

    Designing the interface for this application was a fascinating process for me, in that I worked with Fermilab physicists to understand what the application was trying to achieve, but also how the physicists themselves would be using it. To that end, in this role, I cut my teeth on usability testing, working with the Fermilab team to iterate and improve the interface. How they spoke and what they spoke about was like an alien language to me. And by making myself humble and working under the mindset that I was but a student, I made myself available to be a part of their world to generate that vital connection.

    I also had my first ethnographic observation experience: going to the Fermilab location and observing how the physicists used the tool in their actual environment, on their actual terminals. For example, one takeaway was that due to the level of ambient light-driven contrast within the facility, the data columns ended up using white text on a dark gray background instead of black text-on-white. This enabled them to pore over reams of data during the day and ease their eye strain. And Fermilab and CERN are government entities with rigorous accessibility standards, so my knowledge in that realm also grew. The barrier-free design was another essential form of connection.

    So to those core drivers of my visual problem-solving soul and ultimate fulfillment: discovery, exposure to new media, observation, human connection, and evolution. What opened the door for those values was me checking my ego before I walked through it.

    An evergreen willingness to listen, learn, understand, grow, evolve, and connect yields our best work. In particular, I want to focus on the words ‘grow ’ and ‘evolve’ in that statement. If we are always students of our craft, we are also continually making ourselves available to evolve. Yes, we have years of applicable design study under our belt. Or the focused lab sessions from a UX bootcamp. Or the monogrammed portfolio of our work. Or, ultimately, decades of a career behind us.

    But all that said: experience does not equal “expert. ”

    As soon as we close our minds via an inner monologue of ‘knowing it all ’ or branding ourselves a “#thoughtleader ” on social media, the designer we are is our final form. The designer we can be will never exist.

  • I am a creative.

    I am a creative. What I do is alchemy. It is a mystery. I do not so much do it, as let it be done through me.

    I am a creative. Not all creative people like this label. Not all see themselves this way. Some creative people see science in what they do. That is their truth, and I respect it. Sometimes I even envy them, a little. But my process is different—my being is different.

    Apologizing and qualifying in advance is a distraction. That’s what my mind does to sabotage me. I set it aside for today. I may come back to it later. After I’ve said what I came to say. Which is hard enough.

    Except when it is easy and flows like a river of wine.

    Sometimes it does come that way. Sometimes what I need to create comes in an instant. I have learned not to say it at that moment, because if you admit that sometimes the idea just comes and it is the best idea and you know it is the best idea, they think you don’t work hard enough.

    Sometimes I work and work and work until the idea comes. Sometimes it comes instantly and I don’t tell anyone for three weeks. Sometimes I’m so excited by the idea that came instantly that I blurt it out, can’t help myself. Like a child who found a prize in his Cracker Jacks. Sometimes I get away with this. Sometimes other people agree: yes, that is the best idea. Most times they don’t and I regret having given way to enthusiasm.

    Enthusiasm is best saved for the meeting where it will make a difference. Not the daily get-together that precedes that meeting by two other meetings. Nobody knows why we have all these meetings. We keep saying we’re doing away with them, but then we just find new ways to have them. Sometimes they are even good. But other times they are a distraction from the actual work. The percentages of when meetings are useful, and when they are a sad distraction, vary, depending on what you do and where you do it. And who you are and how you do it. But I digress. I am a creative. That is the way.

    Sometimes many hours of hard and frustrating work produce something that is barely usable. Sometimes I have to accept that and move on to the next thing.

    Don’t ask about process. I am a creative.

    I am a creative. I don’t control my process. And I don’t control my best ideas.

    I can surround myself with research or images, and sometimes that works. I can go for a walk, and sometimes that works. I may be making breakfast and there’s a Eureka having nothing to do with sizzling oil and boiling pots. Sometimes I know what to do the instant I wake up. And then, almost as often, as I become awake and part of the world again, the idea that might have saved me turns to vanishing sand in a meaningless storm of nothingness. For creativity, I believe, comes from that other world. The one we enter in dreams, and perhaps, before birth and after death. But that’s for poets to know, and I am hardly a poet. I am a creative. And it’s for theologians to speculate about in their creative way that they insist is truth. But that is another distraction. And a sad one. Even on a much more important subject than whether I am a creative or not. But still a distraction from what I came here to say.

    Sometimes the process is procrastination. And suffering. You know the cliché about the tortured artist? It’s true, even when the artist (and let’s put that noun in quotes) is trying to write a soft drink jingle, a line in a tired sitcom, a budget request.

    Some people who hate being called creatives may be closeted creatives, but that’s between them and their angels. No offense meant. Your truth is true, too. But mine is mine.

    Creatives recognize creatives.

    Creatives recognize creatives like queers recognize queers, like real rappers recognize real rappers, like cons know cons. Creatives feel huge respect for creatives. We love, respect, emulate, and practically deify the great ones. To deify anyone is, of course, a terrible mistake. We have been warned. We know better. We know people are just people. They argue, they get depressed, they regret their most important decisions, they can be weak and hungry, they can be cruel, they can be just as bad as we can, because, like us, they are clay. But. But. But they make this amazing thing. They birth something that did not exist before them, and could not exist without them. They are the mothers of ideas. And I suppose, since it’s just lying there, I have to add that they are the mothers of invention. Ba dum bum! Okay, that’s done. Continue.

    Creatives disparage our own small successes, because we compare them to those of the great ones. Great film! But I’m no Miyazaki. Now THAT is greatness. That is greatness straight from the mind of God. This half-starved little thing that I made? It more or less fell off the back of the turnip truck. And the turnips weren’t even fresh.

    Creatives know that, at best, they are Salieri. Even the creatives who are Mozart think that.

    I am a creative. I haven’t worked in advertising in 30 years, but in my nightmares, it’s my old creative directors who judge me. And they are right to do so. I am too lazy, too superficial, and when it really counts, my mind goes blank. There is no cure for creative block.

    I am a creative. Every deadline I make is an adventure that makes Indiana Jones look like a pensioner snoring in a balcony seat. The longer I remain a creative, the faster I am when I finally do my work and the longer I brood and walk in circles and stare blankly before I do that work.

    I am also 10 times faster than people who are not creative, or people who have only been creative a short while, or people who have only been professionally creative a short while. It’s just that, before I work 10 times as fast as they do, I spend twice as long as they do putting the work off. I am that confident in my ability to do a great job when I put my mind to it. I am that addicted to the adrenaline rush of procrastination. And I am that afraid of the climb.

    I am not an artist.

    I am a creative. Not an artist. Though I dreamed, as a child, of someday being that. Some of us disparage our gifts and flagellate ourselves because we are not Michelangelos and Warhols. That is narcissism—but at least we aren’t in politics.

    I am a creative. Though I believe in reason and science, I decide by intuition and desire. And live with what follows—the calamities as well as the successes.

    I am a creative. Every word I’ve said here will offend some other creative, who sees things differently. Ask two creatives a question, get three opinions. Our disagreement, our passion about it, and our commitment to our own truth are, at least to me, proof that we are creatives, no matter how we may feel about the label.

    I am a creative. I doubt my taste in the areas about which I know very little, which is to say almost all areas of human knowledge. And I trust my taste above all other things in the areas closest to my heart, or perhaps, more precisely, to my passions. Without my passions, I would probably have to spend my time looking life in the eye, and almost none of us can do that for long. No, really. No, truly. Because much of existence, if you really look at it, is terrible.

    I am a creative. I believe, as a parent believes, that when I am gone, some small good part of me will live on in the mind of at least one other person.

    Working saves me from worrying about work.

    I am a creative. I live in dread of my little gift suddenly going away.

    I am a creative. I am too busy making the next thing to spend much time dwelling on the fact that almost nothing I make comes anywhere near the glory I clumsily aspire to.

    I am a creative. I believe in the ultimate mystery of process. I believe in it so much, I am even foolish enough to submit an article that I dictated into a small device and didn’t take time to review or revise. I won’t do this often, I promise. But I did it just now because, as frightened as I might be of your seeing through my sad gestures toward the great, I was even more frightened of forgetting what I came to say.

    There. I think I’ve said it.

  • Opportunities for AI in Accessibility

    Opportunities for AI in Accessibility

    In reading Joe Dolson’s recent piece on the intersection of AI and accessibility, I absolutely appreciated the skepticism that he has for AI in general as well as for the ways that many have been using it. In fact, I’m quite wary of AI myself, despite my role at Microsoft as an accessibility innovation strategist who helps run the AI for Accessibility grant program. As with any tool, AI can be used in very constructive, inclusive, and accessible ways; and it can also be used in destructive, exclusionary, and harmful ones. And there are a ton of uses somewhere in the mediocre middle as well.

    I’d like you to consider this a “yes … and ” piece to complement Joe’s post. I’m not trying to refute any of what he’s saying but rather to provide some visibility into projects and opportunities where AI can make meaningful differences for people with disabilities. To be clear, I’m not saying that there aren’t real concerns or pressing issues with AI that need to be addressed—there are, and we’ve needed to address them, like, yesterday—but I want to take a little time to talk about what’s possible in hopes that we’ll get there one day.

    Alternative text

    Joe’s piece spends a lot of time talking about computer-vision models generating alternative text. He highlights a ton of real issues with the current state of things. And while computer-vision models continue to improve in the quality and richness of detail in their descriptions, their results aren’t great. As he rightly points out, the current state of image analysis is pretty poor—especially for certain image types—in large part because current AI systems examine images in isolation rather than within the contexts that they’re in ( which is a consequence of having separate “foundation ” models for text analysis and image analysis ). Today’s models aren’t trained to distinguish between images that are contextually relevant ( that should probably have descriptions ) and those that are purely decorative ( which might not need a description ) either. Still, I think there’s real potential in this area.

    As Joe mentions, human-in-the-loop authoring of alt text should absolutely be a thing. And if AI can pop in to offer a starting point for alt text—even if that starting point is a rough draft that a human author has to correct—I think that’s a win.

    Taking things a step further, if we can specifically train a model to evaluate image usage in context, it could help us more quickly identify which images are likely to be decorative and which ones likely require a description. That will help reinforce which contexts call for image descriptions, and it’ll improve authors’ efficiency toward making their pages more accessible.
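    As a rough illustration of what “evaluating image usage in context” might look like, here’s a minimal heuristic sketch rather than a trained model; the attribute names, rules, and thresholds are all assumptions for demonstration only.

```python
# A toy heuristic for guessing whether an image is decorative, based on
# its markup context. This is an illustrative sketch, not a trained
# model: the attribute names, rules, and thresholds are assumptions.

def likely_decorative(img: dict) -> bool:
    """img holds attributes pulled from the DOM, e.g.
    {"alt": "", "width": 1, "height": 1, "role": "", "in_figure": False}."""
    # Authors can mark images as decorative explicitly.
    if img.get("role") == "presentation" or img.get("aria-hidden") == "true":
        return True
    # Spacer-sized images are almost certainly decorative.
    if 0 < img.get("width", 0) <= 4 and 0 < img.get("height", 0) <= 4:
        return True
    # Images inside <figure> elements usually carry meaning.
    if img.get("in_figure"):
        return False
    # An empty alt is an author signal that leans decorative.
    return img.get("alt", "") == ""
```

    A real system would feed signals like these (plus the surrounding text) into a trained classifier, but even simple rules show how context, not pixels alone, drives the decision.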

    While complex images—like graphs and charts—are challenging to describe in any sort of succinct way ( even for humans ), the image example shared in the GPT-4 announcement points to an interesting opportunity as well. Let’s suppose that you came across a chart whose description was simply the title of the chart and the kind of visualization it was, such as: Pie chart comparing smartphone usage to feature phone usage among US households making under $30,000 a year. ( That would be a pretty awful alt text for a chart since that would tend to leave many questions about the data unanswered, but then again, let’s suppose that that was the description that was in place. ) If your browser knew that that image was a pie chart ( because an onboard model concluded this ), imagine a world where users could ask questions like these about the graphic:

    • Do more people use smartphones or feature phones?
    • How many more?
    • Is there a group of people that don’t fall into either of these buckets?
    • How many is that?

    Setting aside the realities of large language model ( LLM) hallucinations—where a model just makes up plausible-sounding “facts ”—for a moment, the opportunity to learn more about images and data in this way could be revolutionary for blind and low-vision folks as well as for people with various forms of color blindness, cognitive disabilities, and so on. It could also be useful in educational contexts to help people who can see these charts, as is, to understand the data in the charts.

    Taking things a step further: What if you could ask your browser to simplify a complex chart? What if you could ask it to isolate a single line on a line graph? What if you could ask your browser to transpose the colors of the different lines to work better for the form of color blindness you have? What if you could ask it to swap colors for patterns? Given these tools’ chat-based interfaces and our existing ability to manipulate images in today’s AI tools, that seems like a possibility.

    Now imagine a purpose-built model that could extract the information from that chart and convert it to another format. For example, perhaps it could turn that pie chart ( or better yet, a series of pie charts ) into more accessible ( and useful ) formats, like spreadsheets. That would be amazing!
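    To make the chart-to-spreadsheet idea concrete, here’s a minimal sketch that assumes a vision model has already extracted the chart’s data into (label, value) pairs; the extraction step itself is the hard, hypothetical part, and the numbers are made up.

```python
import csv
import io

# Suppose a vision model has already extracted (label, value) pairs from
# the pie chart; the numbers below are made up for illustration.
extracted = [("Smartphone", 62), ("Feature phone", 31), ("Neither", 7)]

def chart_to_csv(pairs):
    """Convert extracted chart data into CSV text a spreadsheet can open."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["Category", "Percent of households"])
    writer.writerows(pairs)
    return buf.getvalue()

csv_text = chart_to_csv(extracted)
```

    Once the data exists in a structured form like this, screen readers, spreadsheets, and question-answering interfaces can all work from the same source.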

    Matching algorithms

    Safiya Umoja Noble absolutely hit the nail on the head when she titled her book Algorithms of Oppression. While her book was focused on the ways that search engines reinforce racism, I think that it’s equally true that all computer models have the potential to amplify conflict, bias, and intolerance. Whether it’s Twitter always showing you the latest tweet from a bored billionaire, YouTube sending us into a Q-hole, or Instagram warping our ideas of what natural bodies look like, we know that poorly authored and maintained algorithms are incredibly harmful. A lot of this stems from a lack of diversity among the people who shape and build them. When these platforms are built with inclusivity baked in, however, there’s real potential for algorithm development to help people with disabilities.

    Take Mentra, for example. They are an employment network for neurodivergent people. They use an algorithm to match job seekers with potential employers based on over 75 data points. On the job-seeker side of things, it considers each candidate’s strengths, their necessary and preferred workplace accommodations, environmental sensitivities, and so on. On the employer side, it considers each work environment, communication factors related to each job, and the like. As a company run by neurodivergent folks, Mentra made the decision to flip the script when it came to typical employment sites. They use their algorithm to propose available candidates to companies, who can then connect with job seekers that they are interested in; reducing the emotional and physical labor on the job-seeker side of things.

    When more people with disabilities are involved in the creation of algorithms, that can reduce the chances that these algorithms will inflict harm on their communities. That’s why diverse teams are so important.

    Imagine that a social media company’s recommendation engine was tuned to analyze who you’re following and if it was tuned to prioritize follow recommendations for people who talked about similar things but who were different in some key ways from your existing sphere of influence. For example, if you were to follow a bunch of nondisabled white male academics who talk about AI, it could suggest that you follow academics who are disabled or aren’t white or aren’t male who also talk about AI. If you took its recommendations, perhaps you’d get a more holistic and nuanced understanding of what’s happening in the AI field. These same systems should also use their understanding of biases about particular communities—including, for instance, the disability community—to make sure that they aren’t recommending that any of their users follow accounts that perpetuate biases against ( or, worse, spew hate toward ) those groups.

    Other ways that AI can help people with disabilities

    If I weren’t trying to put this together between other tasks, I’m sure that I could go on and on, providing all kinds of examples of how AI could be used to help people with disabilities, but I’m going to make this last section into a bit of a lightning round. In no particular order:

    • Voice preservation. You may have seen the VALL-E paper or Apple’s Global Accessibility Awareness Day announcement or you may be familiar with the voice-preservation offerings from Microsoft, Acapela, or others. It’s possible to train an AI model to replicate your voice, which can be a tremendous boon for people who have ALS ( Lou Gehrig’s disease ) or motor-neuron disease or other medical conditions that can lead to an inability to talk. This is, of course, the same tech that can also be used to create audio deepfakes, so it’s something that we need to approach responsibly, but the tech has truly transformative potential.
    • Voice recognition. Researchers like those in the Speech Accessibility Project are paying people with disabilities for their help in collecting recordings of people with atypical speech. As I type, they are actively recruiting people with Parkinson’s and related conditions, and they have plans to expand this to other conditions as the project progresses. This research will result in more inclusive data sets that will let more people with disabilities use voice assistants, dictation software, and voice-response services as well as control their computers and other devices more easily, using only their voice.
    • Text transformation. The current generation of LLMs is quite capable of adjusting existing text content without injecting hallucinations. This is hugely empowering for people with cognitive disabilities who may benefit from text summaries or simplified versions of text or even text that’s prepped for Bionic Reading.
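    As a sketch of the text-transformation idea, here’s one way to assemble a simplification prompt for an LLM. The prompt wording is an illustrative assumption, and the actual model call (through whichever API you use) is deliberately left out.

```python
# Sketch: assemble a text-simplification prompt for an LLM. The prompt
# wording is an illustrative assumption; the model call itself is
# omitted because it depends on your vendor's API.

def build_simplification_prompt(text: str, grade_level: int = 6) -> str:
    return (
        f"Rewrite the following text at roughly a grade-{grade_level} "
        "reading level. Keep every fact intact, do not add new claims, "
        "and prefer short sentences and common words.\n\n"
        f"Text:\n{text}"
    )

prompt = build_simplification_prompt(
    "The current generation of LLMs is quite capable of "
    "adjusting existing text content."
)
```

    Constraining the model explicitly ( keep facts, add nothing ) is what makes this kind of transformation safer than open-ended generation.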

    The importance of diverse teams and data

    We need to recognize that our differences matter. Our lived experiences are influenced by the intersections of the identities that we exist in. These lived experiences—with all their complexities ( and joys and pain )—are valuable inputs to the software, services, and societies that we shape. Our differences need to be represented in the data that we use to train new models, and the folks who contribute that valuable information need to be compensated for sharing it with us. Inclusive data sets yield more robust models that foster more equitable outcomes.

    Want a model that doesn’t demean or patronize or objectify people with disabilities? Make sure that you have content about disabilities that’s authored by people with a range of disabilities, and make sure that that’s well represented in the training data.

    Want a model that doesn’t use ableist language? You may be able to use existing data sets to build a filter that can intercept and remediate ableist language before it reaches readers. That being said, when it comes to sensitivity reading, AI models won’t be replacing human copy editors anytime soon.
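    A filter like that could start as simply as a wordlist lookup; this is a minimal sketch whose term list is a tiny illustrative sample, and a real filter would draw on vetted data sets and still route flagged text to a human editor.

```python
# Sketch of a wordlist-based filter that flags ableist phrasing and
# suggests alternatives. The term list is a tiny illustrative sample,
# not a vetted data set; flagged text should still go to a human editor.

REPLACEMENTS = {
    "wheelchair-bound": "wheelchair user",
    "suffers from": "has",
    "crippled by": "severely affected by",
}

def flag_ableist_language(text: str):
    """Return (term, suggestion) pairs for any flagged phrases found."""
    lowered = text.lower()
    return [(term, fix) for term, fix in REPLACEMENTS.items() if term in lowered]
```

    Surfacing suggestions, rather than silently rewriting, keeps the human author in the loop.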

    Want a coding copilot that gives you accessible recommendations from the jump? Train it on code that you know to be accessible.


    I have no doubt that AI can and will harm people … today, tomorrow, and well into the future. But I also believe that we can acknowledge that and, with an eye towards accessibility ( and, more broadly, inclusion ), make thoughtful, considerate, and intentional changes in our approaches to AI that will reduce harm over time as well. Today, tomorrow, and well into the future.


    Many thanks to Kartik Sawhney for helping me with the development of this piece, Ashley Bischoff for her invaluable editorial assistance, and, of course, Joe Dolson for the prompt.

  • The Wax and the Wane of the Web

    The Wax and the Wane of the Web

    There’s a standard bit of advice I offer to friends and family when they become new parents: When you start to think that you’ve got everything figured out, everything will change. Just as you start to get the hang of feedings, diapers, and regular naps, it’s time for solid food, potty training, and overnight sleep. When you figure those out, it’s time for school and yet another sleep schedule. The cycle goes on and on.

    The same applies for those of us working in design and development these days. Having worked on the web for about three decades at this point, I’ve seen the typical wax and wane of ideas, techniques, and technologies. Just as we as developers and designers get into a regular rhythm, some new idea or technology comes along to shake things up and remake our world.

    How we got here

    I built my first website in the mid-’90s. Design and development on the web back then was a free-for-all, with few established norms. For any layout aside from a single column, we used table elements, often with empty cells containing a single pixel spacer GIF to add empty space. We styled text with numerous font tags, nesting the tags every time we wanted to vary the font style. And we had only three or four typefaces to choose from: Arial, Courier, or Times New Roman. When Verdana and Georgia came out in 1996, we rejoiced because our options had nearly doubled. The only safe colors to choose from were the 216 “web safe” colors known to work across platforms. The few interactive elements (like contact forms, guest books, and counters) were mostly powered by CGI scripts (predominantly written in Perl at the time). Achieving any kind of unique look involved a pile of hacks all the way down. Interaction was often limited to specific pages in a site.

    The beginning of website standards

    At the turn of the century, a new cycle started. Crufty code littered with table layouts and font tags waned, and a push for web standards waxed. Newer technologies like CSS got more widespread adoption by browsers makers, developers, and designers. This shift toward standards didn’t happen accidentally or overnight. It took active engagement between the W3C and browser vendors and heavy evangelism from folks like the Web Standards Project to build standards. A List Apart and books like Designing with Web Standards by Jeffrey Zeldman played key roles in teaching developers and designers why standards are important, how to implement them, and how to sell them to their organizations. And approaches like progressive enhancement introduced the idea that content should be available for all browsers—with additional enhancements available for more advanced browsers. Meanwhile, sites like the CSS Zen Garden showcased just how powerful and versatile CSS can be when combined with a solid semantic HTML structure.

    Server-side languages like PHP, Java, and .NET overtook Perl as the predominant back-end processors, and the cgi-bin was tossed in the trash bin. With these better server-side tools came the first era of web applications, starting with content-management systems ( particularly in the blogging space with tools like Blogger, Grey Matter, Movable Type, and WordPress ). In the mid-2000s, AJAX opened doors for asynchronous interaction between the front end and back end. Suddenly, pages could update their content without needing to reload. A crop of JavaScript frameworks like Prototype, YUI, and jQuery arose to help developers build more reliable client-side interaction across browsers that had wildly varying levels of standards support. Techniques like image replacement let crafty designers and developers display fonts of their choosing. And technologies like Flash made it possible to add animations, games, and even more interactivity.

    These new technologies, standards, and techniques reinvigorated the industry in many ways. Web design flourished as designers and developers explored more diverse styles and layouts. But we still relied on tons of hacks. Early CSS was a huge improvement over table-based layouts when it came to basic layout and text styling, but its limitations at the time meant that designers and developers still relied heavily on images for complex shapes ( such as rounded or angled corners ) and tiled backgrounds for the appearance of full-length columns (among other hacks ). Complicated layouts required all manner of nested floats or absolute positioning ( or both ). Flash and image replacement for custom fonts was a great start toward varying the typefaces from the big five, but both hacks introduced accessibility and performance problems. And JavaScript libraries made it easy for anyone to add a dash of interaction to pages, although at the cost of doubling or even quadrupling the download size of simple websites.

    The web as software platform

    The symbiosis between the front end and back end continued to improve, and that led to the current era of modern web applications. Between expanded server-side programming languages ( which kept growing to include Ruby, Python, Go, and others ) and newer front-end tools like React, Vue, and Angular, we could build fully capable software on the web. Alongside these tools came others, including collaborative version control, build automation, and shared package libraries. What was once primarily an environment for linked documents became a realm of infinite possibilities.

    At the same time, mobile devices became more capable, and they gave us internet access in our pockets. Mobile apps and responsive design opened up opportunities for new interactions anywhere and any time.

    This combination of capable mobile devices and powerful development tools contributed to the waxing of social media and other centralized tools for people to connect and consume. As it became easier and more common to connect with others directly on Twitter, Facebook, and even Slack, the desire for hosted personal sites waned. Social media offered connections on a global scale, with both the good and bad that that entails.

    Want a much more extensive history of how we got here, with some other takes on ways that we can improve? Jeremy Keith wrote “Of Time and the Web. ” Or check out the “Web Design History Timeline ” at the Web Design Museum. Neal Agarwal also has a fun tour through “Internet Artifacts. ”

    Where we are now

    In the last couple of years, it’s felt like we’ve begun to reach another major inflection point. As social-media platforms fracture and wane, there’s been a growing interest in owning our own content again. There are many different ways to make a website, from the tried-and-true classic of hosting plain HTML files to static site generators to content management systems of all flavors. The fracturing of social media also comes with a cost: we lose crucial infrastructure for discovery and connection. Webmentions, RSS, ActivityPub, and other tools of the IndieWeb can help with this, but they’re still relatively underimplemented and hard to use for the less nerdy. We can build amazing personal websites and add to them regularly, but without discovery and connection, it can sometimes feel like we may as well be shouting into the void.

    Browser support for CSS, JavaScript, and other standards like web components has accelerated, especially through efforts like Interop. New technologies gain support across the board in a fraction of the time that they used to. I often learn about a new feature and check its browser support only to find that its coverage is already above 80 percent. Nowadays, the barrier to using newer techniques often isn’t browser support but simply the limits of how quickly designers and developers can learn what’s available and how to adopt it.

    Today, with a few commands and a couple of lines of code, we can prototype almost any idea. All the tools that we now have available make it easier than ever to start something new. But the upfront cost that these frameworks may save in initial delivery eventually comes due as upgrading and maintaining them becomes a part of our technical debt.

    If we rely on third-party frameworks, adopting new standards can sometimes take longer since we may have to wait for those frameworks to adopt those standards. These frameworks—which used to let us adopt new techniques sooner—have now become hindrances instead. These same frameworks often come with performance costs too, forcing users to wait for scripts to load before they can read or interact with pages. And when scripts fail ( whether through poor code, network issues, or other environmental factors ), there’s often no alternative, leaving users with blank or broken pages.

    Where do we go from here?

    Today’s hacks help to shape tomorrow’s standards. And there’s nothing inherently wrong with embracing hacks—for now—to move the present forward. Problems only arise when we’re unwilling to admit that they’re hacks or we hesitate to replace them. So what can we do to create the future we want for the web?

    Build for the long haul. Optimize for performance, for accessibility, and for the user. Weigh the costs of those developer-friendly tools. They may make your job a little easier today, but how do they affect everything else? What’s the cost to users? To future developers? To standards adoption? Sometimes the convenience may be worth it. Sometimes it’s just a hack that you’ve grown accustomed to. And sometimes it’s holding you back from even better options.

    Start from standards. Standards continue to evolve over time, but browsers have done a remarkably good job of continuing to support older standards. The same isn’t always true of third-party frameworks. Sites built with even the hackiest of HTML from the ’90s still work just fine today. The same can’t always be said of sites built with frameworks even after just a couple years.

    Design with care. Whether your craft is code, pixels, or processes, consider the impacts of each decision. The convenience of many a modern tool comes at the cost of not always understanding the underlying decisions that have led to its design and not always considering the impact that those decisions can have. Rather than rushing headlong to “move fast and break things, ” use the time saved by modern tools to consider more carefully and design with deliberation.

    Always be learning. If you’re always learning, you’re also growing. Sometimes it may be hard to pinpoint what’s worth learning and what’s just today’s hack. You might end up focusing on something that won’t matter next year, even if you were to focus solely on learning standards. ( Remember XHTML? ) But constant learning opens up new connections in your brain, and the hacks that you learn one day may help to inform different experiments another day.

    Play, experiment, and be weird! This web that we’ve built is the ultimate experiment. It’s the single largest human endeavor in history, and yet each of us can create our own pocket within it. Be courageous and try new things. Build a playground for ideas. Make goofy experiments in your own mad science lab. Start your own small business. There has never been a more empowering place to be creative, take risks, and explore what we’re capable of.

    Share and amplify. As you experiment, play, and learn, share what’s worked for you. Write on your own website, post on whichever social media site you prefer, or shout it from a TikTok. Write something for A List Apart! But take the time to amplify others too: find new voices, learn from them, and share what they’ve taught you.

    Go forth and make

    As designers and developers for the web ( and beyond ), we’re responsible for building the future every day, whether that may take the shape of personal websites, social media tools used by billions, or anything in between. Let’s imbue our values into the things that we create, and let’s make the web a better place for everyone. Create that thing that only you are uniquely qualified to make. Then share it, make it better, make it again, or make something new. Learn. Make. Share. Grow. Rinse and repeat. Every time you think that you’ve mastered the web, everything will change.

  • To Ignite a Personalization Practice, Run this Prepersonalization Workshop

    To Ignite a Personalization Practice, Run this Prepersonalization Workshop

    Picture this. You’ve joined a team at your company that’s designing new product features with an emphasis on automation or AI. Or your company has just implemented a personalization engine. Either way, you’re designing with data. Now what? When it comes to designing for personalization, there are many cautionary tales, no overnight successes, and few guides for the perplexed.

    Between the dream of getting it right and the fear of it going wrong—like when we encounter “persofails ” in the vein of a company repeatedly imploring everyday customers to buy additional toilet seats—the personalization gap is real. It’s an especially confusing place to be a digital professional without a chart, a compass, or a plan.

    For those of you venturing into personalization, there’s no Lonely Planet and few tour guides because effective personalization is so specific to each organization’s talent, technology, and market position.

    But you can ensure that your team has packed its bags sensibly.

    There’s a DIY way to improve your chances of success. At the very least, you’ll temper your boss’s irrational exuberance. Before the deep dive, you’ll need to prepare.

    We call it prepersonalization.

    Behind the music

    Take Spotify’s DJ feature, which debuted this past year.

    We’re used to seeing the polished final result of a personalization feature. But before the year-end awards, the making-of story, or the behind-the-scenes victory lap, a personalized feature had to be conceived, budgeted, and prioritized. Before any personalization feature goes live in your product or service, it lives amid a backlog of worthy ideas for expressing customer experiences more dynamically.

    So how do you know where to place your personalization bets? How do you design consistent interactions that won’t trip up users or—worse—breed mistrust? We’ve found that for many budgeted programs to justify their ongoing investments, they first needed one or more workshops to convene key stakeholders and internal customers of the technology. Make yours count.

    From Big Tech to fledgling startups, we’ve seen the same evolution up close with our clients. In our experiences with working on small and large personalization efforts, a program’s ultimate track record—and its ability to weather tough questions, work steadily toward shared answers, and organize its design and technology efforts—turns on how effectively these prepersonalization activities play out.

    Time and again, we’ve seen effective workshops separate future success stories from unsuccessful efforts, saving countless time, resources, and collective well-being in the process.

    A personalization practice involves a multiyear effort of testing and feature development. It’s not a switch-flip moment in your tech stack. It’s best managed as a backlog that often evolves through three steps:

    1. customer experience optimization ( CXO, also known as A/B testing or experimentation )
    2. always-on automations ( whether rules-based or machine-generated )
    3. mature features or standalone product development ( such as Spotify’s DJ experience )

    This is why we created our progressive personalization framework and why we’re field-testing an accompanying deck of cards: we believe that there’s a base grammar, a set of “nouns and verbs ” that your organization can use to design experiences that are customized, personalized, or automated. You won’t need these cards. But we strongly recommend that you create something similar, whether that might be digital or physical.

    Set your kitchen timer

    How long does it take to cook up a prepersonalization workshop? The surrounding assessment activities that we recommend including can ( and often do ) span weeks. For the core workshop, we recommend aiming for two to three days. Here’s a summary of our broader approach along with details on the essential first-day activities.

    The full arc of the wider workshop is threefold:

    1. Kickstart: This sets the terms of engagement as you focus on the opportunity as well as the readiness and drive of your team and your leadership.
    2. Plan your work: This is the heart of the card-based workshop activities where you specify a plan of attack and the scope of work.
    3. Work your plan: This phase is all about creating a competitive environment for team participants to individually pitch their own pilots that each contain a proof-of-concept project, its business case, and its operating model.

    Give yourself at least a day, split into two large time blocks, to power through a concentrated version of those first two phases.

    Kickstart: Whet your appetite

    We call the first session the “landscape of connected experience.” It explores the personalization possibilities in your organization. A connected experience, in our parlance, is any UX requiring the orchestration of multiple systems of record on the backend. This could be a content-management system combined with a marketing-automation platform. It could be a digital-asset manager combined with a customer-data platform.

    Spark conversation by naming consumer examples and business-to-business examples of connected experience interactions that you admire, find familiar, or even dislike. This should cover a representative range of personalization patterns, including automated app-based interactions ( such as onboarding sequences or wizards ), notifications, and recommenders. We have a catalog of these in the cards. Here’s a list of 142 different interactions to jog your thinking.

    This is all about setting the table. What are the possible paths for the practice in your organization? If you want a broader view, here’s a long-form primer and a strategic framework.

    Assess each example that you discuss for its complexity and the level of effort that you estimate that it would take for your team to deliver that feature ( or something similar ). In our cards, we divide connected experiences into five levels: functions, features, experiences, complete products, and portfolios. Size your own build here. This will help to focus the conversation on the merits of ongoing investment as well as the gap between what you deliver today and what you want to deliver in the future.

    Next, have your team plot each idea on the following 2×2 grid, which lays out the four enduring arguments for a personalized experience. This is critical because it emphasizes how personalization can not only help your external customers but also affect your own ways of working. It’s also a reminder ( which is why we used the word argument earlier ) of the broader effort beyond these tactical interventions.

    Each team member should vote on where they see your product or service putting its emphasis. Naturally, you can’t prioritize all of them. The intention here is to flesh out how different departments may view their own upsides to the effort, which can vary from one to the next. Documenting your desired outcomes lets you know how the team internally aligns across representatives from different departments or functional areas.

    The third and final kickstart activity is about naming your personalization gap. Is your customer journey well documented? Will data and privacy compliance be too big of a challenge? Do you have content metadata needs that you have to address? ( We’re pretty sure that you do: it ’s just a matter of recognizing the relative size of that need and its remedy. ) In our cards, we’ve noted a number of program risks, including common team dispositions. Our Detractor card, for example, lists six stakeholder behaviors that hinder progress.

    Effectively collaborating and managing expectations is critical to your success. Consider the potential barriers to your future progress. Press the participants to name specific steps to overcome or mitigate those barriers in your organization. As studies have shown, personalization efforts face many common barriers.

    At this point, you’ve hopefully discussed sample interactions, emphasized a key area of benefit, and flagged key gaps. Good—you’re ready to continue.

    Hit that test kitchen

    Next, let’s look at what you’ll need to bring your personalization recipes to life. Personalization engines, which are robust software suites for automating and expressing dynamic content, can intimidate new customers. Their capabilities are sweeping and powerful, and they present broad options for how your organization can conduct its activities. This presents the question: Where do you begin when you’re configuring a connected experience?

    What’s important here is to avoid treating the installed software as if it were a dream kitchen from some fantasy remodeling project (as one of our client executives memorably put it). These software engines are more like test kitchens where your team can begin devising, tasting, and refining the snacks and meals that will become a part of your personalization program’s regularly evolving menu.

    The ultimate menu of the prioritized backlog will come together over the course of the workshop. And creating “dishes ” is the way that you’ll have individual team stakeholders construct personalized interactions that serve their needs or the needs of others.

    The dishes will come from recipes, and those recipes have set ingredients.

    Verify your ingredients

    Like a good product manager, you’ll make sure—and you’ll validate with the right stakeholders present—that you have all the ingredients on hand to cook up your desired interaction (or that you can work out what needs to be added to your pantry). These ingredients include the audience that you’re targeting, content and design elements, the context for the interaction, and your measure for how it’ll come together.

    This isn’t just about discovering requirements. Documenting your personalizations as a series of if-then statements lets the team:

    1. compare findings toward a unified approach for developing features, not unlike when artists paint with the same palette;
    2. specify a consistent set of interactions that users find uniform or familiar;
    3. and develop parity across performance measurements and key performance indicators too.

    This helps you streamline your designs and your technical efforts while you deliver a shared palette of core motifs of your personalized or automated experience.
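    To make the if-then idea concrete, such statements can be captured as plain data plus predicates. This is a minimal sketch in Python; the rule names, audience fields, and action strings are hypothetical illustrations, not part of any particular personalization engine.

    ```python
    # Minimal sketch of personalization "if-then" rules as data.
    # All field names and rules here are hypothetical examples.

    def winback_rule(user):
        """If a subscription is lapsing, offer a renewal promotion."""
        if user.get("days_until_renewal", 999) <= 7 and not user.get("auto_renew"):
            return "show_renewal_offer"
        return None

    def nurture_rule(user):
        """If a visitor is an unknown guest, surface related titles."""
        if user.get("segment") == "guest":
            return "show_related_titles_banner"
        return None

    RULES = [winback_rule, nurture_rule]

    def evaluate(user):
        """Return the actions triggered for this user, in rule order."""
        return [action for rule in RULES if (action := rule(user))]

    print(evaluate({"segment": "guest"}))
    # → ['show_related_titles_banner']
    ```

    Writing the rules in one shared shape is what makes the comparisons and consistency described above possible: every stakeholder’s recipe can be reviewed, tested, and measured the same way.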

    Compose your recipe

    What ingredients are important to you? Think of a who-what-when-why construct:

    • Who are your key audience segments or groups?
    • What kind of content will you give them, in what design elements, and under what circumstances?
    • And for which business and user benefits?

    We first developed these cards and card categories five years ago. We regularly play-test their fit with conference audiences and clients. And we still encounter new possibilities. But they all follow an underlying who-what-when-why logic.

    Here are three examples for a subscription-based reading app, which you can generally follow along with right to left in the cards in the accompanying photo below.

    1. Nurture personalization: When a guest or an unknown visitor interacts with a product title, a banner or alert bar appears that makes it easier for them to encounter a related title they may want to read, saving them time.
    2. Welcome automation: When there’s a newly registered user, an email is generated to call out the breadth of the content catalog and to make them a happier subscriber.
    3. Winback automation: Before their subscription lapses or after a recent failed renewal, a user is sent an email that gives them a promotional offer to suggest that they reconsider renewing or to remind them to renew.
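    One way to make the who-what-when-why structure concrete is to capture each recipe as a small record. Here is a hedged Python sketch using the winback example; the class and field names are our own illustration, not the authors’ official card categories.

    ```python
    from dataclasses import dataclass

    # Illustrative encoding of a who-what-when-why recipe card.
    # Field names are our own; they are not the official card deck.
    @dataclass
    class Recipe:
        who: str    # audience segment or group
        what: str   # content and design element delivered
        when: str   # triggering context or circumstance
        why: str    # business and user benefit

    winback = Recipe(
        who="subscriber with a lapsing or recently failed renewal",
        what="email with a promotional offer",
        when="before the subscription lapses or after a failed renewal",
        why="encourage the user to reconsider renewing",
    )

    print(f"When {winback.who}: send {winback.what} ({winback.why}).")
    ```

    Keeping every recipe in the same four-field shape makes the later pitching and prioritization steps easier to compare side by side.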

    A useful preworkshop activity may be to think through a first draft of what these cards might be for your organization, although we’ve also found that this process sometimes flows best through cocreating the recipes themselves. Start with a set of blank cards, and begin labeling and grouping them through the design process, eventually distilling them to a refined subset of highly useful candidate cards.

    You can think of the later stages of the workshop as moving from recipes toward a cookbook in focus—like a more nuanced customer-journey mapping. Individual “cooks” will pitch their recipes to the team, using a common jobs-to-be-done format so that measurability and results are baked in, and from there, the resulting collection will be prioritized for finished design and delivery to production.

    Better kitchens require better architecture

    Simplifying a customer experience is a complicated effort for those who are inside delivering it. Beware anyone who says otherwise. With that being said, “Complicated problems can be hard to solve, but they are addressable with rules and recipes. ”

    When personalization becomes a laugh line, it ’s because a team is overfitting: they aren’t designing with their best data. Like a sparse pantry, every organization has metadata debt to go along with its technical debt, and this creates a drag on personalization effectiveness. Your AI’s output quality, for example, is indeed limited by your IA. Spotify’s poster-child prowess today was unfathomable before they acquired a seemingly modest metadata startup that now powers its underlying information architecture.

    You can definitely stand the heat …

    Personalization technology opens a doorway into a confounding ocean of possible designs. Only a disciplined and highly collaborative approach will bring about the necessary focus and intention to succeed. So banish the dream kitchen. Instead, hit the test kitchen to save time, preserve job satisfaction and security, and safely dispense with the fanciful ideas that originate upstairs of the doers in your organization. There are meals to serve and mouths to feed.

    This workshop framework gives you a fighting shot at lasting success as well as sound beginnings. Wiring up your information layer isn’t an overnight affair. But if you use the same cookbook and shared recipes, you’ll have solid footing for success. We designed these activities to make your organization’s needs concrete and clear, long before the hazards pile up.

    While there are costs associated with investing in this kind of technology and product design, your ability to size up and confront your unique situation and your digital capabilities is time well spent. Don’t squander it. The proof, as they say, is in the pudding.

  • User Research Is Storytelling

    User Research Is Storytelling

    Ever since I was a child, I’ve been fascinated with movies. I loved the characters and the action—but most of all the stories. I wanted to be an actor, and I believed that I’d get to do the things that Indiana Jones did and go on exciting adventures. I even dreamed up ideas for movies that my friends and I could make and star in. But they never went any farther. I did, however, end up working in user experience (UX). Today, I realize that there’s an element of drama to UX—I hadn’t really considered it before, but user research is storytelling. And to get the most out of user research, you need to tell a good story where you bring stakeholders—the product team and decision-makers—along, getting them interested in learning more.

    Think of your favorite film. More than likely it follows a three-act structure that’s common in storytelling: the setup, the conflict, and the resolution. The first act shows what exists now, and it helps you get to know the characters and the challenges and problems that they face. Act two introduces the conflict, where the action is. Here, difficulties grow or get worse. And the third and final act is the resolution. This is where the issues are resolved and the characters learn and change. I believe that this structure is also a great way to think about user research, and I think that it can be particularly helpful in explaining user research to others.

    Use storytelling as a framework for conducting research

    It’s sad to say, but many have come to view research as inconsequential. If budgets or timelines are tight, research tends to be one of the first things to go. Instead of investing in research, some product managers rely on designers or—worse—their own opinion to make the “right” choices for users based on their experience or accepted best practices. That may get teams some of the way, but that approach can easily miss out on solving users’ real problems. To be user-centered, it’s something we should avoid. User research informs design. It keeps it on track, pointing to problems and opportunities. Being aware of the issues with your product and reacting to them can help you stay ahead of your competition.

    In the three-act structure, each act corresponds to a part of the research process, and each part is important to telling the whole story. Let’s look at the different acts and how they align with user research.

    Act one: setup

    The setup is all about understanding the background, and that’s where foundational research comes in. Foundational research (also called generative, discovery, or exploratory research) helps you understand users and identify their problems. You’re learning about what exists now, the obstacles that people have, and how those problems affect them—just like in the movies. To do foundational research, you might conduct contextual inquiries or diary studies (or both!), which can help you start to identify issues as well as opportunities. It doesn’t need to be a great expense in time or money.

    Erika Hall writes about minimum viable ethnography, which can be as simple as this: “Spend 15 minutes with a customer and ask them one thing: ‘Walk me through your day yesterday.’ That’s it. Ask that one question. Shut up and listen to them for 15 minutes. Do your damndest to keep yourself and your interests out of it. Bam, you’re doing ethnography.” According to Hall, “[This] will probably prove quite illuminating. In the highly unlikely case that you didn’t learn anything new or useful, carry on with enhanced confidence in your direction.”

    This makes total sense to me. And I love that this makes user research so accessible. You don’t need to prepare a lot of documentation; you can just recruit participants and do it! This can yield a wealth of information about your users, and it’ll help you better understand them and what’s going on in their lives. That’s really what act one is all about: understanding where users are coming from.

    Jared Spool talks about the importance of foundational research and how it should form the bulk of your research. If you can draw from any additional user data that you can get your hands on, such as surveys or analytics, that can supplement what you ’ve heard in the foundational studies or even point to areas that need further investigation. Together, all this data paints a clearer picture of the state of things and all its shortcomings. And that ’s the beginning of a compelling story. It’s the point in the plot where you realize that the main characters—or the users in this case—are facing challenges that they need to overcome. Like in the movies, this is where you start to build empathy for the characters and root for them to succeed. And hopefully stakeholders are now doing the same. Their sympathy may be with their business, which could be losing money because users can’t complete certain tasks. Or maybe they do empathize with users ’ struggles. Either way, act one is your initial hook to get the stakeholders interested and invested.

    Once stakeholders begin to understand the value of foundational research, that can open doors to more opportunities that involve users in the decision-making process. And that can guide product teams toward being more user-centered. This benefits everyone—users, the product, and stakeholders. It’s like winning an Oscar in movie terms—it often leads to your product being well received and successful. And this can be an incentive for stakeholders to repeat this process with other products. Storytelling is the key to this process, and knowing how to tell a good story is the only way to get stakeholders to really care about doing more research.

    This brings us to act two, where you iteratively evaluate a design or concept to see whether it addresses the issues.

    Act two: conflict

    Act two is all about digging deeper into the problems that you identified in act one. This usually involves directional research, such as usability tests, where you assess a potential solution ( such as a design ) to see whether it addresses the issues that you found. The issues could include unmet needs or problems with a flow or process that ’s tripping users up. Like act two in a movie, more issues will crop up along the way. It’s here that you learn more about the characters as they grow and develop through this act.

    Usability tests should typically include around five participants according to Jakob Nielsen, who found that that number of users can usually identify most of the problems: “As you add more and more users, you learn less and less because you will keep seeing the same things again and again… After the fifth user, you are wasting your time by observing the same findings repeatedly but not learning much new. ”
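    Nielsen’s claim rests on a simple model: if each participant independently uncovers a given problem with probability λ (about 31% on average in Nielsen and Landauer’s data), then n participants find a share of roughly 1 − (1 − λ)ⁿ of the problems. A quick sketch, assuming that published average (real detection rates vary by product and task):

    ```python
    # Expected share of usability problems found by n test participants,
    # per Nielsen and Landauer's model: found(n) = 1 - (1 - lam)^n.
    # lam = 0.31 is their published average detection rate; treat it as
    # an assumption, since it varies across products and tasks.

    def share_found(n, lam=0.31):
        return 1 - (1 - lam) ** n

    for n in (1, 3, 5, 15):
        print(f"{n:2d} participants -> {share_found(n):.1%} of problems")
    ```

    With λ = 0.31, five participants surface roughly 84% of the problems, which is why the returns diminish so quickly after that.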

    There are parallels with storytelling here too; if you try to tell a story with too many characters, the plot may get lost. Having fewer participants means that each user’s struggles will be more memorable and easier to relay to other stakeholders when talking about the research. This can help convey the issues that need to be addressed while also highlighting the value of doing the research in the first place.

    Researchers have run usability tests in person for decades, but you can also conduct usability tests remotely using tools like Microsoft Teams, Zoom, or other teleconferencing software. This approach has become increasingly popular since the beginning of the pandemic, and it works well. You can think of in-person usability tests like going to a play and remote sessions as more like watching a movie. There are advantages and disadvantages to each. In-person usability research is a much richer experience. Stakeholders can experience the sessions with other stakeholders. You also get real-time reactions—including surprise, agreement, disagreement, and discussions about what they’re seeing. Much like going to a play, where audiences get to take in the stage, the costumes, the lighting, and the actors ’ interactions, in-person research lets you see users up close, including their body language, how they interact with the moderator, and how the scene is set up.

    If in-person usability testing is like watching a play—staged and controlled—then conducting usability testing in the field is like immersive theater, where any two sessions might be very different from one another. You can take usability testing into the field by creating a replica of the space where users interact with the product and then conducting your research there. Or you can go out to meet users at their location to do your research. With either option, you get to see how things work in context, things come up that wouldn’t have in a lab environment—and conversations can shift in entirely different directions. As researchers, you have less control over how these sessions go, but this can sometimes help you understand users even better. Meeting users where they are can provide clues to the external forces that could be affecting how they use your product. In-person usability tests provide another level of detail that’s often missing from remote usability tests.

    That’s not to say that the “movies ”—remote sessions—aren’t a good option. Remote sessions can reach a wider audience. They allow a lot more stakeholders to be involved in the research and to see what’s going on. And they open the doors to a much wider geographical pool of users. But with any remote session there is the potential of time wasted if participants can’t log in or get their microphone working.

    The benefit of usability testing, whether remote or in person, is that you get to see real users interact with the designs in real time, and you can ask them questions to understand their thought processes and grasp of the solution. This can help you not only identify problems but also glean why they’re problems in the first place. Furthermore, you can test hypotheses and gauge whether your thinking is correct. By the end of the sessions, you’ll have a much clearer picture of how usable the designs are and whether they work for their intended purposes. Act two is the heart of the story—where the excitement is—but there can be surprises too. This is equally true of usability tests. Often, participants will say unexpected things, which change the way that you look at things —and these twists in the story can move things in new directions.

    Unfortunately, user research is sometimes seen as expendable. And too often usability testing is the only research process that some stakeholders think that they ever need. In fact, if the designs that you’re evaluating in the usability test aren’t grounded in a solid understanding of your users (foundational research), there’s not much to be gained by doing usability testing in the first place. That’s because you’re narrowing the focus of what you’re getting feedback on without understanding the users’ needs. As a result, there’s no way of knowing whether the designs might solve a problem that users have. It’s only feedback on a particular design in the context of a usability test.

    On the other hand, if you only do foundational research, while you might have set out to solve the right problem, you won’t know whether the thing that you’re building will actually solve that. This illustrates the importance of doing both foundational and directional research.

    In act two, stakeholders will—hopefully—get to watch the story unfold in the user sessions, which creates the conflict and tension in the current design by surfacing their highs and lows. And in turn, this can help motivate stakeholders to address the issues that come up.

    Act three: resolution

    While the first two acts are about understanding the background and the tensions that can propel stakeholders into action, the third part is about resolving the problems from the first two acts. While it ’s important to have an audience for the first two acts, it ’s crucial that they stick around for the final act. That means the whole product team, including developers, UX practitioners, business analysts, delivery managers, product managers, and any other stakeholders that have a say in the next steps. It allows the whole team to hear users ’ feedback together, ask questions, and discuss what’s possible within the project’s constraints. And it lets the UX research and design teams clarify, suggest alternatives, or give more context behind their decisions. So you can get everyone on the same page and get agreement on the way forward.

    This act is mostly told in voiceover with some audience participation. The researcher is the narrator, who paints a picture of the issues and what the future of the product could look like given the things that the team has learned. They give the stakeholders their recommendations and their guidance on creating this vision.

    Nancy Duarte in the Harvard Business Review offers an approach to structuring presentations that follow a persuasive story. “The most effective presenters use the same techniques as great storytellers: By reminding people of the status quo and then revealing the path to a better way, they set up a conflict that needs to be resolved, ” writes Duarte. “That tension helps them persuade the audience to adopt a new mindset or behave differently. ”

    This type of structure aligns well with research results, particularly results from usability tests. It provides evidence for “what is”—the problems that you’ve identified—and “what could be”—your recommendations on how to address them.

    You can reinforce your recommendations with examples of things that competitors are doing that could address these issues or with examples where competitors are gaining an edge. Or they can be visual, like quick mockups of how a new design could look that solves a problem. These can help generate conversation and momentum. And this continues until the end of the session when you ’ve wrapped everything up in the conclusion by summarizing the main issues and suggesting a way forward. This is the part where you reiterate the main themes or problems and what they mean for the product—the denouement of the story. This stage gives stakeholders the next steps and hopefully the momentum to take those steps!

    While we are nearly at the end of this story, let’s reflect on the idea that user research is storytelling. All the elements of a good story are there in the three-act structure of user research:

    • Act one: You meet the protagonists (the users) and the antagonists (the problems affecting users). This is the beginning of the plot. In act one, researchers might use methods including contextual inquiry, ethnography, diary studies, surveys, and analytics. The output of these methods can include personas, empathy maps, user journeys, and analytics dashboards.
    • Act two: Next, there’s character development. There’s conflict and tension as the protagonists encounter problems and challenges, which they must overcome. In act two, researchers might use methods including usability testing, competitive benchmarking, and heuristics evaluation. The output of these can include usability findings reports, UX strategy documents, usability guidelines, and best practices.
    • Act three: The protagonists triumph and you see what a better future looks like. In act three, researchers may use methods including presentation decks, storytelling, and digital media. The output of these can include presentation decks, video clips, audio clips, and pictures.

    The researcher has multiple roles: they’re the storyteller, the director, and the producer. The participants have a small role, but they are significant characters ( in the research ). And the stakeholders are the audience. But the most important thing is to get the story right and to use storytelling to tell users ’ stories through research. By the end, the stakeholders should walk away with a purpose and an eagerness to resolve the product’s ills.

    So the next time that you’re planning research with clients or you’re speaking to stakeholders about research that you ’ve done, think about how you can weave in some storytelling. Ultimately, user research is a win-win for everyone, and you just need to get stakeholders interested in how the story ends.

  • From Beta to Bedrock: Build Products that Stick.

    From Beta to Bedrock: Build Products that Stick.

    As a product builder for more years than I care to admit, I’ve lost count of the number of times I’ve seen promising ideas go from zero to hero in a few days, only to fizzle out within months.

    Financial products, which is the area I work in, are no exception. With people’s real hard-earned money on the line, user expectations running high, and a crowded market, it’s tempting to throw as many features at the wall as possible and hope something sticks. But this strategy is a recipe for disaster. Here’s why:

    The perils of feature-first development

    When you start building a financial product from the ground up, or are migrating existing customer journeys from paper or phone channels onto online banking or mobile apps, it’s easy to get caught up in the excitement of creating new features. You might think, “If I can only add one more thing that solves this particular user problem, they’ll love me!” But what happens when you eventually hit a wall because the naysayers (your security team!) don’t like it? When a hard-fought feature isn’t as popular as you thought, or it breaks due to unforeseen complexity?

    This is where the concept of the Minimum Viable Product (MVP) comes in. Jason Fried’s book Getting Real and his podcast Rework often touch on this idea, even if he doesn’t always call it that. An MVP is a product that provides just enough value to your users to keep them engaged, but not so much that it becomes overwhelming or difficult to maintain. It sounds like an easy concept, but it requires a razor-sharp eye, a ruthless edge, and the courage to stick by your convictions, because it’s easy to be seduced by “the Columbo effect”… when there’s always “just one more thing…” that someone wants to add.

    The problem with most finance apps, however, is that they often become a mirror of the internal politics of the business rather than an experience designed entirely around the customer. This means that the focus is on delivering as many features and functionalities as possible to satisfy the needs and desires of competing internal departments, rather than providing a clear value proposition that is focused on what the people out there in the real world want. As a result, these products can very easily bloat into a mixed bag of confusing, unrelated, and ultimately unlovable customer experiences—a feature salad, you might say.

    The importance of bedrock

    So what’s a better approach? How can we build products that are stable, user-friendly, and—most importantly—stick?

    That’s where the concept of “bedrock” comes in. Bedrock is the core element of your product that truly matters to users. It’s the fundamental building block that provides value and stays relevant over time.

    In the world of retail banking, which is where I work, the bedrock has got to be in and around the regular servicing journeys. People open their current account once in a blue moon but they look at it every day. They sign up for a credit card every year or two, but they check their balance and pay their bill at least once a month.

    Identifying the core tasks that people want to do and then relentlessly striving to make them easy to do, dependable, and trustworthy is where the gravy’s at.

    But how do you get to bedrock? By focusing on the “MVP” approach, prioritizing simplicity, and iterating towards a clear value proposition. This means cutting out unnecessary features and focusing on delivering real value to your users.

    It also means having some guts, because your colleagues might not always share your vision to start with. And controversially, sometimes it can even mean making it clear to customers that you’re not going to come to their house and make their dinner. The occasional “opinionated user interface design” (i.e., a clunky workaround for edge cases) might sometimes be what you need to test a concept or buy you space to work on something more important.

    Practical strategies for building financial products that stick

    So what are the key strategies I’ve learned from my own experience and research?

    1. Start with a clear “why”: What problem are you trying to solve? For whom? Make sure your mission is crystal clear before building anything. Make sure it aligns with your company ’s objectives, too.
    2. Focus on a single, core feature and obsess on getting that right before moving on to something else: Resist the temptation to add too many features at once. Instead, choose one that delivers real value and iterate from there.
    3. Prioritize simplicity over complexity: Less is often more when it comes to financial products. Cut out unnecessary bells and whistles and keep the focus on what matters most.
    4. Embrace continuous iteration: Bedrock isn’t a fixed destination—it’s a dynamic process. Continuously gather user feedback, refine your product, and iterate towards that bedrock state.
    5. Stop, look and listen: Don’t just test your product as part of your delivery process—test it repeatedly in the field. Use it yourself. Run A/B tests. Gather user feedback. Talk to people who use it, and refine accordingly.
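    The “run A/B tests” step in strategy 5 can be sketched in a few lines. Here is a minimal two-proportion z-test for comparing conversion between an A and a B variant; the traffic and conversion numbers are invented, and this helper is our own illustration rather than any particular analytics library.

    ```python
    from math import sqrt, erf

    def two_proportion_z(conv_a, n_a, conv_b, n_b):
        """Two-sided z-test for a difference in two conversion rates."""
        p_a, p_b = conv_a / n_a, conv_b / n_b
        pooled = (conv_a + conv_b) / (n_a + n_b)
        se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
        z = (p_b - p_a) / se
        # Two-sided p-value from the standard normal CDF.
        p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
        return z, p_value

    # Hypothetical traffic split: 5.0% vs 6.5% conversion on 2,400 users each.
    z, p = two_proportion_z(conv_a=120, n_a=2400, conv_b=156, n_b=2400)
    print(f"z = {z:.2f}, p = {p:.3f}")
    ```

    A p-value below your chosen threshold (commonly 0.05) suggests the difference is unlikely to be noise, but remember that statistics only tell you what happened; the “talk to people” half of strategy 5 tells you why.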

    The bedrock paradox

    There’s an interesting paradox at play here: building towards bedrock means sacrificing some short-term growth potential in favour of long-term stability. But the payoff is worth it—products built with a focus on bedrock will outlast and outperform their competitors, and deliver sustained value to users over time.

    So, how do you start your journey towards bedrock? Take it one step at a time. Start by identifying those core elements that truly matter to your users. Focus on building and refining a single, powerful feature that delivers real value. And above all, test obsessively—for, in the words of Abraham Lincoln, Alan Kay, or Peter Drucker (whomever you believe!), “The best way to predict the future is to create it.”

  • The Last of Us Season 2 Episode 3 Review: Picking Up the Pieces

    The Last of Us Season 2 Episode 3 Review: Picking Up the Pieces

    This review contains spoilers for The Last of Us season 2 episode 3. Last week, The Last of Us delivered an episode full of season-finale-level emotional stress. Abby (Kaitlyn Dever) got her revenge on Joel (Pedro Pascal) and Jackson survived an attack by the infected, all within one action-packed episode. It may seem daunting to [ … ]

    The post The Last of Us Season 2 Episode 3 Review: Picking Up the Pieces appeared first on Den of Geek.

    Warning: contains spoilers for Doctor Who series 15 episode 3, “The Well”.

    The first time the Doctor survived the malevolent entity on Planet Midnight, he wasn’t able to tell Donna Noble exactly what it was, or whether it was still alive. Thousands of centuries later, after his second encounter with the mysterious force in new episode “The Well”, the Doctor told Belinda unequivocally that it had gone and that they were safe.

    Well, the Doctor lies, but apparently here he just wasn’t paying enough attention to know that he was wrong. One tiny, very-easy-to-miss detail in “The Well” reveals a secret about the “Midnight” monster (listed as “It Has No Name” and played by Who creature actor Paul Kasey on IMDb), namely: there’s more than one of them.


    We know that the creature attaches itself parasitically to a host, and when that host is killed, it jumps to the killer. That’s how colony base cook Aliss (Rose Ayling-Ellis) ended up with the entity permanently at her back, after she killed her infected best friend in self-defence. The Doctor’s mercury-reflection plan got the monster off Aliss’ back without killing her, after which point we saw that it chased the group and attached itself to Belinda, who could hear its malevolent whispers at her ear.

    Troop leader Shayla (Caoilfhionn Dunne) heroically volunteered to ‘kill’ Belinda with a tricky shot through the heart that, if carried out with enough precision, would prove non-fatal, and thereby attract the creature to attach itself to her. The plan worked, and Shayla sacrificed herself by jumping with the monster into the miles-deep well, killing them both. Job done?

    Mostly. Just before Belinda’s infection was discovered, a small detail suggests that one of the creatures had already escaped. Look at the lift scene when Aliss and Troopers Seven and Nine are about to be transported to safety. Shayla ordered two troopers to accompany Aliss, meaning there should be only three people in the lift, but the screen counts four.

    [Image: lift panel read-out from Doctor Who episode “The Well”]
    [Image: space troopers running, and three people in a lift, from Doctor Who episode “The Well”]

    Is one of those Troopers infected with an unseen creature that the lift’s weight system is able to detect? (It can’t be Aliss, because the Troopers are standing directly behind her, which would make them vulnerable to attack.) Are they unable to hear its whispers because of their hi-tech space helmets, just as Aliss being deaf meant she couldn’t hear it on the colony base?

    In the episode’s closing scene, after Trooper Mo (Bethany Antonia) has been debriefed by Mrs Flood disguised as a space mining corp executive, Mo’s partner appears to catch a glimpse of something identical behind her, just like Belinda saw behind Aliss. That suggests at least one creature left Planet 6-7-6-7 after Shayla’s sacrifice. If another went up in the lift riding on a Trooper, then there could already be two out there.

    Unless, you know, the lift screen reading is just a production error caused by brain freeze during the extreme cold of the episode’s Welsh quarry night shoot, but surely that would have been fixed in post? If there was a creature in that lift, let’s hope that Aliss really did get home safely to her daughter…

    Doctor Who continues with “Lucky Day” on Saturday May 3 on BBC One and iPlayer in the UK and on Disney+ around the world.

    The post Doctor Who: Blink-And-You’ll-Miss-It Clue Feeds Theory About the Monster in “The Well” appeared first on Den of Geek.

  • The Last of Us Season 2: Who Are the Seraphites?

    The Last of Us Season 2: Who Are the Seraphites?

    This article contains spoilers for HBO’s The Last of Us season 2 episode 3 and The Last of Us Part II. Episode 3 of The Last of Us season 2 has introduced viewers to yet another faction in this world, providing a look at what kinds of people call post-apocalyptic Seattle home. We know [ … ]

    The post The Last of Us Season 2: Who Are the Seraphites? appeared first on Den of Geek.
