Blog

  • Opportunities for AI in Accessibility


    I thoroughly enjoyed reading Joe Dolson’s recent piece on the intersection of AI and accessibility precisely because of his skepticism of AI in general and of the ways that many people have been using it. In fact, I’m quite skeptical of AI myself, despite my role at Microsoft as an accessibility technology strategist who helps run the AI for Accessibility grant program. As with any tool, AI can be used in very constructive, inclusive, and accessible ways, and it can also be used in destructive, exclusive, and harmful ones. And there are a ton of uses somewhere in the mediocre middle as well.

    Consider this a “yes … and” piece to complement Joe’s post. I’m not trying to refute what he’s saying; rather, I want to offer some visibility into projects and opportunities where AI can make a difference for people with disabilities. To be clear, I’m not saying that there aren’t real concerns (there are, and we’ve needed to address them, like, yesterday), but I want to take some time to talk about what’s possible, in hopes that we’ll get there one day.

    Alt text

    Joe’s piece spends a lot of time on computer-vision models’ ability to generate alternative text. He raises a number of valid points about the current state of things. And while computer-vision models continue to improve in the quality and richness of the information in their descriptions, their results aren’t great. As he rightly points out, the current state of image analysis is pretty poor, especially for certain image types, in large part because of the absence of the surrounding context in which to interpret images (a consequence of having separate “foundation” models for text analysis and image analysis). Today’s models also aren’t trained to distinguish between images that are contextually relevant (and should probably have descriptions) and those that are purely decorative (and may not need a description at all). Still, I think there’s potential in this space.

    As Joe points out, human-in-the-loop authoring of alt text should absolutely be a thing. And if AI can pop in to offer a starting point for alt text, even if our first reaction is “What is this BS? That’s not right at all … Let me fix it,” I think that’s a win.

    If we can specifically train a model to analyze image usage in context, it could help us more quickly determine which images are likely to be decorative and which likely require a description. That will clarify which contexts call for image descriptions, and it will improve authors’ efficiency in making their sites more accessible.
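    To make this concrete, here’s a minimal sketch of the kind of context-based guesswork that such a model might formalize. This is a toy heuristic, not a trained model; the attribute names mirror HTML, but the rules and thresholds are my own assumptions.

```python
def likely_decorative(img_attrs):
    """Guess whether an image is likely decorative from its markup context.

    `img_attrs` is a dict of an <img> element's attributes. This is a
    stand-in for a context-aware model; the thresholds are assumptions.
    """
    # Authors can mark decorative images explicitly.
    if img_attrs.get("role") == "presentation":
        return True
    if img_attrs.get("alt") == "":
        return True
    # Tiny images (spacers, bullets, icons) are often decorative.
    w, h = img_attrs.get("width"), img_attrs.get("height")
    if w is not None and h is not None:
        try:
            if int(w) <= 16 and int(h) <= 16:
                return True
        except ValueError:
            pass
    return False

print(likely_decorative({"role": "presentation"}))  # -> True
print(likely_decorative({"alt": "Pie chart", "width": "640", "height": "480"}))  # -> False
```

    A real system would weigh far richer context (surrounding text, page structure, and learned patterns), but even simple signals like these can triage images for human review.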

    While complex images—like graphs and charts—are challenging to describe in any sort of succinct way (even for humans), the image example shared in the GPT-4 announcement points to an interesting opportunity as well. Let’s say you came across a chart whose alt text was simply the title of the chart and the kind of visualization it was: Pie chart comparing smartphone use to feature phone use among US households making under $30,000 annually. (That would be a pretty bad alt text for a chart because it would tend to leave many questions about the data unanswered, but let’s assume that that was the description in place.) If your browser knew that the image was a pie chart (because an onboard model concluded this), imagine a world where users could ask questions like these about the image:

    • Are there more smartphone users than feature phone users?
    • How many more?
    • Is there a group of people that don’t fall into either of these buckets?
    • What number is that?

    The opportunity to interrogate images and data in this way could be revolutionary for blind and low-vision folks as well as for people with various forms of color blindness, cognitive disabilities, and so on. It could also be useful in educational contexts to help people who can see these charts, as is, understand the data in them.

    What if you could ask your browser to simplify a complex chart? What if you could ask it to isolate a single line in a line graph? What if you could ask your browser to adjust the colors of the different lines to work better for the form of color blindness you have? What if you could ask it to swap colors for patterns? Given the chat-based interfaces and the image-manipulation abilities of modern AI tools, that seems like a real possibility.

    Now imagine a purpose-built model that could extract the information from that chart and convert it to another format. Perhaps it could convert that pie chart (or, better yet, a series of pie charts) into more accessible (and useful) formats, like spreadsheets. That would be incredible!
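    As a sketch of that last idea, here’s what the extraction-to-spreadsheet step might look like once a model has pulled labels and values out of a chart. The segment data below is hypothetical (what a chart-analysis model might extract from the pie chart described above), and the function simply renders it as CSV.

```python
import csv
import io

def pie_chart_to_csv(segments):
    """Render extracted pie-chart segments (label -> percent) as CSV text."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["segment", "percent"])
    for label, percent in segments.items():
        writer.writerow([label, percent])
    return buf.getvalue()

# Hypothetical values a model might extract from the chart described above.
data = {"Smartphone": 58, "Feature phone": 33, "Neither": 9}
print(pie_chart_to_csv(data))
```

    From there, the data could be handed to a spreadsheet, a screen reader, or a chat interface that answers the kinds of questions listed earlier.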

    Matching algorithms

    When Safiya Umoja Noble titled her book Algorithms of Oppression, she hit the nail on the head. Although her book focused on how search engines can reinforce racism, I believe it’s equally true that all computer models have the potential to amplify conflict, bias, and intolerance. Whether it’s Twitter always showing you the latest tweet from a bored billionaire, YouTube sending us into a Q-hole, or Instagram warping our ideas of what natural bodies look like, we know that poorly authored and maintained algorithms are incredibly harmful. Much of this stems from a lack of diversity among the people who shape and build them. However, when these platforms are built with inclusion in mind, there is real potential for algorithm development to help people with disabilities.

    Take Mentra, for example. It is an employment network for neurodivergent people. It matches job seekers with potential employers using an algorithm that considers more than 75 data points. On the job-seeker side, it considers each candidate’s strengths, their necessary and preferred workplace accommodations, environmental sensitivities, and so on. On the employer side, it considers each workplace, its communication environment, and other factors. Being led by neurodivergent people, Mentra decided to flip the script of typical employment sites. It uses its algorithm to propose available candidates to companies, who can then reach out to job seekers that they are interested in, reducing the emotional and physical labor on the job-seeker side of things.

    Involving more people with disabilities in the development of algorithms can lower the likelihood that those algorithms will harm their communities. That’s why diverse teams are so crucial.

    Imagine that a social media company’s recommendation engine were tuned to analyze who you’re following and to prioritize follow recommendations for people who talk about similar things but who are different in some key ways from your existing sphere of influence. For example, if you follow a bunch of nondisabled white men who talk about AI, it could suggest that you follow people who also talk about AI but who are disabled or who aren’t white. If you followed its recommendations, perhaps you’d get a more holistic and nuanced understanding of what’s happening in the AI field. These same systems should also use their understanding of biases about particular communities—including, for instance, the disability community—to make sure that they aren’t recommending that their users follow accounts that perpetuate biases against (or, worse, spew hate toward) those groups.

    Other ways that AI can assist people with disabilities

    I’m sure that I could go on and on about ways that AI can be used to help people with disabilities, but I’m going to turn this last section into a bit of a lightning round, since I’m putting this together in between other tasks. In no particular order:

    • Voice preservation. You might have heard about the voice-preservation offerings from Microsoft, Acapela, or others, or seen the VALL-E paper or Apple’s Global Accessibility Awareness Day announcement. It’s possible to train an AI model to replicate your voice, which can be a tremendous boon for people who have ALS (Lou Gehrig’s disease), motor-neuron disease, or other medical conditions that can lead to an inability to talk. This technology could also be used to create audio deepfakes, so it’s something that we need to approach responsibly, but the potential is truly transformative.
    • Voice recognition. Researchers like those on the Speech Accessibility Project are paying people with disabilities for their help in collecting recordings of atypical speech. As I type this, they are recruiting people with Parkinson’s and related conditions, and they intend to expand the program as it progresses. This research will enable more people with disabilities to use voice assistants, dictation software, and voice-response services, as well as to control computers and other devices using only their voices.
    • Text transformation. Current-generation LLMs are quite capable of adjusting existing text without introducing hallucinations. This is incredibly empowering for people with cognitive disabilities, who may benefit from text summaries, simplified versions of text, or even text that’s been prepared for bionic reading.
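    As a small illustration of that last idea, here’s a sketch of a bionic-reading-style transform that emphasizes the leading portion of each word. The half-word split is my own simplification; real tools tune the split per word and per language.

```python
def bionic(text):
    """Wrap the leading half of each word in **bold** markers,
    in the spirit of bionic reading."""
    out = []
    for word in text.split():
        cut = (len(word) + 1) // 2  # leading half, rounded up
        out.append(f"**{word[:cut]}**{word[cut:]}")
    return " ".join(out)

print(bionic("Reading can be easier"))  # -> **Read**ing **ca**n **b**e **eas**ier
```

    Summarization and simplification would lean on an LLM instead, but transforms like this one show how mechanical some reading aids can be.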

    The importance of diverse teams and data

    Our differences need to be treated as the important inputs that they are. The intersections of the identities that we inhabit shape our lived experiences. And these lived experiences—with all their complexities (and joys and pains)—are valuable inputs to the software, services, and societies that we shape. Our differences need to be represented in the data that we use to train new models, and the people who contribute that valuable information need to be compensated for sharing it. Inclusive data sets yield stronger models that foster more equitable outcomes.

    Want a model that doesn’t demean or patronize or objectify people with disabilities? Make sure that the training data includes information about disabilities written by people with a range of disabilities.

    Want a model that doesn’t speak in ableist language? You may be able to use existing data sets to build a filter that can intercept and remediate ableist language before it reaches readers. That said, when it comes to sensitivity reading, AI models won’t be replacing human copy editors anytime soon.
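    Here’s a minimal sketch of what such an intercept-and-remediate filter could look like. The lexicon and suggested replacements below are deliberately tiny and purely illustrative; a real filter would draw on vetted data sets and keep a human reviewer in the loop.

```python
import re

# Illustrative only: a real lexicon would come from vetted, community-reviewed
# data sets, and the replacements would be reviewed by humans.
ABLEIST_TERMS = {
    "crippled": "severely limited",
    "crazy": "wild",
}

def flag_ableist_language(text):
    """Return (rewritten text, list of flagged terms) for human review."""
    flagged = []
    for term, suggestion in ABLEIST_TERMS.items():
        pattern = re.compile(re.escape(term), re.IGNORECASE)
        if pattern.search(text):
            flagged.append(term)
            text = pattern.sub(suggestion, text)
    return text, flagged

rewritten, flagged = flag_ableist_language("The outage crippled our servers.")
print(rewritten)  # -> The outage severely limited our servers.
print(flagged)    # -> ['crippled']
```

    The point isn’t the word list; it’s the pipeline shape: intercept, suggest, and surface to a human before publication.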

    Want a coding copilot that gives you accessible recommendations from the jump? Train it on code that you know to be accessible.


    I have no doubts about how dangerous AI can be and will be for some people. But I also believe that we can acknowledge that and make thoughtful, intentional changes to our approaches to AI that reduce harm over time. Today, tomorrow, and well into the future.


    Many thanks to Kartik Sawhney for supporting the development of this article, Ashley Bischoff for providing me with invaluable editorial support, and, of course, Joe Dolson for the prompt.

  • The Wax and the Wane of the Web


    In my experience, just when you think you’ve got everything figured out, everything changes. Just as you start to get the hang of feedings, diapers, and regular naps, it’s time for solid foods, potty training, and sleeping through the night. And just when you’ve got those sorted, school and sleepovers come along. The cycle goes on and on.

    The same is true for those of us working in design and development. Having worked on the web for around three decades at this point, I’ve seen the regular wax and wane of ideas, techniques, and technologies. Just when we as developers and designers settle into a comfortable pattern, a brand-new concept or technology emerges to shake things up and remake our world.

    How we got here

    I built my first website in the mid-’90s. Design and development on the web back then was a free-for-all, with few established norms. For any layout aside from a single column, we used table elements, often with empty cells containing a single pixel spacer GIF to add empty space. We styled text with numerous font tags, nesting the tags every time we wanted to vary the font style. And we had only three or four typefaces to choose from: Arial, Courier, or Times New Roman. When Verdana and Georgia came out in 1996, we rejoiced because our options had nearly doubled. The only safe colors to choose from were the 216 “web safe” colors known to work across platforms. The few interactive elements (like contact forms, guest books, and counters) were mostly powered by CGI scripts (predominantly written in Perl at the time). Achieving any kind of unique look involved a pile of hacks all the way down. Interaction was often limited to specific pages in a site.

    Web standards were born

    At the turn of the century, a new cycle began. Crufty code littered with table layouts and font tags waned, and a push for web standards waxed. Newer technologies like CSS gained more widespread adoption by browser makers, developers, and designers. This shift toward standards didn’t happen accidentally or overnight. It took active engagement between the W3C and browser vendors and heavy evangelism from folks like the Web Standards Project to promote standards. A List Apart and books like Designing with Web Standards by Jeffrey Zeldman played key roles in teaching developers and designers why standards matter, how to implement them, and how to sell them to their organizations. And approaches like progressive enhancement introduced the idea that content should be available to all browsers, with additional enhancements available to more capable ones. Meanwhile, sites like the CSS Zen Garden showcased just how powerful and versatile CSS can be when combined with a solid semantic HTML structure.

    Server-side languages like PHP, Java, and .NET overtook Perl as the primary back-end languages, and the cgi-bin was tossed in the trash bin. With these improved server-side tools came the first era of web applications, starting with content-management systems (particularly in the blogging world, with tools like Blogger, Grey Matter, Movable Type, and WordPress). In the mid-2000s, AJAX opened doors for asynchronous interaction between the front end and the back end. Pages could now update their content without needing to reload. A crop of JavaScript frameworks like Prototype, YUI, and jQuery arose to help developers build more reliable client-side interaction across browsers that had wildly varying levels of standards support. Techniques like image replacement let crafty designers and developers use the fonts of their choosing. And technologies like Flash made it possible to add animations, games, and even more interactivity.

    These new techniques, standards, and technologies reenergized the industry in huge ways. Web design flourished as designers and developers explored more diverse styles and layouts. But we still relied on loads of hacks. Early CSS was a huge improvement over table-based layouts when it came to basic layout and text styling, but its limitations at the time meant that designers and developers still relied heavily on images for complex shapes (such as rounded or angled corners) and tiled backgrounds for the appearance of full-length columns (among other hacks). Complicated layouts required all manner of nested floats or absolute positioning (or both). Flash and image replacement for custom fonts were great stopgaps toward varying the typefaces beyond the big five, but both hacks introduced accessibility and performance problems. And JavaScript libraries made it easy for anyone to add a dash of interaction to pages, although at the cost of doubling or even quadrupling the download size of simple websites.

    The web as software platform

    The interplay between the front end and the back end continued to grow, which led to the development of the current era of modern web applications. Between expanded server-side programming languages ( which kept growing to include Ruby, Python, Go, and others ) and newer front-end tools like React, Vue, and Angular, we could build fully capable software on the web. Along with these tools, there were additional options, such as shared package libraries, build automation, and collaborative version control. What was once primarily an environment for linked documents became a realm of infinite possibilities.

    Mobile devices grew in capability as well, giving us internet access in our pockets. Mobile apps and responsive design opened up opportunities for new interactions anywhere and at any time.

    This fusion of potent mobile devices and potent development tools contributed to the growth of social media and other centralized tools for people to use and interact with. As it became easier and more common to connect with others directly on Twitter, Facebook, and even Slack, the desire for hosted personal sites waned. Social media made connections on a global scale, with both positive and negative outcomes.

    Want a much more extensive history of how we got here, with some other thoughts on how we can improve? Check out Jeremy Keith’s “Of Time and the Web.” Or take a look at the “Web Design History Timeline” at the Web Design Museum. Neal Agarwal also has a fun tour through “Internet Artifacts.”

    Where we are now

    It seems like we’ve been at a new significant inflection point over the past couple of years. As social-media platforms fracture and wane, there’s been a growing interest in owning our own content again. There are many different ways to create a website, from the tried-and-true classic of hosting plain HTML files to static site generators to content management systems of all varieties. The fracturing of social media also comes with a cost: we lose crucial infrastructure for discovery and connection. Webmentions, RSS, ActivityPub, and other IndieWeb tools can be useful in this regard, but they’re still largely underdeveloped and difficult to use for the less geeky. We can build amazing personal websites and add to them regularly, but without discovery and connection, it can sometimes feel like we may as well be shouting into the void.

    Browser support for standards like CSS, JavaScript, and web components has increased, particularly with efforts like Interop. New technologies gain support across the board in a fraction of the time that they used to. I often learn about a new feature and check its browser support only to discover that its coverage is already above 80%. Nowadays, the barrier to using newer techniques often isn’t browser support but simply the limits of how quickly designers and developers can learn what’s available and how to adopt it.

    We can prototype almost any idea today with just a few commands and a few lines of code. All the tools that we now have available make it easier than ever to start something new. But the costs that these frameworks save us upfront eventually come due, as their upkeep and maintenance become part of our technical debt.

    If we rely on third-party frameworks, adopting new standards can sometimes take longer, since we may have to wait for those frameworks to adopt them first. Frameworks that once helped us adopt new techniques sooner have, in these cases, become hindrances instead. These same frameworks often come with performance costs too, forcing users to wait for scripts to load before they can read or interact with pages. And when scripts fail (whether due to poor code, network issues, or other environmental factors), users are often left with blank or broken pages.

    Where do we go from here?

    Hacks of today help to shape standards for tomorrow. And there’s nothing inherently wrong with embracing hacks —for now—to move the present forward. Problems only arise when we refuse to acknowledge that they are hacks or when we choose not to replace them. So what can we do to create the future we want for the web?

    Build for the long haul. Optimize for performance, for accessibility, and for the user. Weigh the costs of those developer-friendly tools. They may make your job a little easier today, but how do they affect everything else? What’s the cost to users? To future developers? To the adoption of standards? Sometimes the convenience is worth it. Sometimes it’s just a hack that you’ve grown accustomed to. And sometimes it’s holding you back from even better options.

    Start with the basics. Standards continue to evolve over time, but browsers have done a remarkably good job of continuing to support older ones. Sites built with even the hackiest of HTML from the ’90s still work just fine today. The same can’t always be said of sites built with third-party frameworks, even after just a few years.

    Design with care. Consider the effects of each choice, whether your craft is code, pixels, or processes. The convenience of many a modern tool comes at the cost of not always understanding the underlying decisions that have led to its design and not always considering the impact that those decisions can have. Use the time saved by modern tools to think more carefully and make decisions with care rather than rushing to “move fast and break things.”

    Always be learning. If the web is always changing, we should always be learning too. Sometimes it’s hard to pinpoint what’s worth learning and what’s just today’s hack. You might even focus solely on learning standards and still end up studying something that won’t matter next year. (Remember XHTML?) But ongoing learning opens up new connections in your brain, and the techniques that you learn one day may inform different experiments down the road.

    Play, experiment, and be weird! The web is the most incredible experiment that we’ve ever built. It’s the single largest human endeavor in history, and yet each of us can create our own pocket within it. Be bold and make new things. Build a playground for ideas. Perform bizarre experiments in your own weird science lab. Start your own small business. There has never been a place where we’ve had more room to be creative, take risks, and discover our potential.

    Share and amplify. Share what you think has worked for you as you experiment, play, and learn. Write on your own website, post on whichever social media site you prefer, or shout it from a TikTok. Write something for A List Apart! But take the time to amplify others too: find new voices, learn from them, and share what they’ve taught you.

    Go ahead and create a masterpiece.

    As designers and developers for the web (and beyond), we’re responsible for building the future every day, whether that takes the shape of personal websites, social media tools used by billions, or anything in between. Let’s bring our values to the things that we make, and let’s make the world better for everyone. Create that thing that only you are uniquely qualified to make. Then share it, make it better, make it again, or make something new from it. Learn. Make. Share. Grow. Rinse and repeat. Because just when you think you’ve got the web all figured out, everything will change.

  • To Ignite a Personalization Practice, Run this Prepersonalization Workshop


    Picture this. You’ve joined a team at your company that’s designing new product features with an emphasis on automation or AI. Or maybe your company has just purchased a personalization engine. Either way, you’re designing with data. Now what? When it comes to designing for personalization, there are many cautionary tales, no overnight successes, and few guides for the perplexed.

    The personalization puzzle is real: between the dream of getting it right and the fear of it going wrong (like when we encounter “persofails” in the vein of a company that keeps asking you to buy more toilet seats), it’s an especially confusing place to be a digital professional without a compass, a map, or a plan.

    Because successful personalization depends so heavily on each organization’s skills, technology, and market position, there are no Lonely Planet guides and few tour guides for those of you who want to personalize.

    But you can make sure that your team has packed its bags sensibly.

    There’s a DIY approach that can boost your chances of success, or at the very least disarm your boss’s irrational exuberance. Before you embark, you’ll need to plan ahead.

    It’s known as prepersonalization.

    Behind the music

    Consider Spotify’s DJ feature, which debuted this past year.

    We’re used to seeing the polished final result of a personalization feature. But before the year-end awards, the making-of backstory, or the behind-the-scenes victory lap, that feature had to be conceived, budgeted, and prioritized. And before any personalization feature goes live in your product or service, it lives amid a backlog of worthy ideas for expressing customer experiences more dynamically.

    So how do you decide where to place your personalization bets? How do you design touchpoints that don’t trip up users or, worse, breed mistrust? We’ve found that many budgeted programs eventually require one or more workshops to align key stakeholders and internal customers of the technology in order to justify their ongoing investment. Make it count.

    We’ve seen the same evolution up close with our clients, from enterprise software companies to startups. In our experience working on personalization efforts large and small, a program’s track record (and its ability to weather tough questions, work steadily toward shared answers, and organize its design and engineering efforts) turns on how well these prepersonalization activities play out.

    Time and again, effective workshops separate future success stories from unsuccessful efforts, saving countless hours of time, resources, and collective well-being in the process.

    A personalization practice is a multiyear effort of testing and feature development. It’s not a switch that you flip in your tech stack. It’s best managed as a backlog that typically evolves through three steps:

    1. customer experience optimization ( CXO, also known as A/B testing or experimentation )
    2. always-on automations ( whether rules-based or machine-generated )
    3. mature features or standalone product development (like Spotify’s DJ experience)

    This is why we created our progressive personalization framework and why we’re field-testing an accompanying deck of cards: we believe that there’s a base grammar, a set of “nouns and verbs,” that your organization can use to design experiences that are customized, personalized, or automated. You won’t need our cards to get started, but we strongly recommend that you create something similar, whether digital or physical.

    Set your kitchen timer

    How long does it take to cook up a prepersonalization workshop? The assessment activities that we suggest can (and often do) span a number of weeks. For the core workshop itself, we recommend aiming for two to three days. Here’s an overview of our broader approach along with details on the critical day-one activities.

    The full arc of the wider workshop is threefold:

    1. Kickstart: This sets the terms of your engagement as you focus on your organization’s and your team’s readiness and drive.
    2. Plan your work: This is the heart of the card-based workshop activities, where you specify a plan of attack and the scope of work.
    3. Work your plan: This phase consists of having team members individually pitch their own pilots, each with a proof-of-concept project, a business case, and an operating model.

    Give yourself at least a day, split into two large time blocks, to power through a concentrated version of those first two phases.

    Kickstart: Whet your appetite

    We call the first activity the “landscape of connected experiences.” It explores the possibilities for personalization at your company. A connected experience, in our parlance, is any UX that requires the orchestration of multiple systems of record on the back end. That could be a content-management system paired with a marketing-automation platform. Or it could be a digital-asset manager paired with a customer-data platform.

    Spark a conversation by citing consumer and business-to-business examples of connected-experience interactions that you admire, find familiar, or even dislike. These should cover a representative range of personalization patterns, including automated app-based interactions (such as onboarding sequences or wizards), notifications, and recommenders. We have a list of these in the cards, along with a list of 142 different interactions to jog your thinking.

    It’s all about setting the tone. What are the possible paths for the practice in your organization? Here’s a long-form primer and a strategic framework for a broader perspective.

    Assess each example that you discuss for its complexity and the level of effort that you estimate it would take for your team to deliver that feature (or something similar). In our cards, we break down connected experiences into five tiers: functions, features, experiences, complete products, and portfolios. Size up your own builds against these. This will help to highlight both the value of ongoing investment and the gap between what you offer today and what you aim to deliver in the future.

    Next, have your team plot each idea on the following 2×2 grid, which lays out the four enduring arguments for a personalized experience. This is crucial because it emphasizes how personalization can affect your own methods of working as well as your external customers. It’s also a reminder ( which is why we used the word argument earlier ) of the broader effort beyond these tactical interventions.

    Each team member should decide where their focus should be placed for your product or service. Naturally, you can’t prioritize all of them. Here, the goal is to show how various departments may view their own benefits from the effort, which can vary from one department to the next. Documenting your desired outcomes lets you know how the team internally aligns across representatives from different departments or functional areas.

    The third and final kickstart activity is about surfacing your personalization gaps. Is your customer journey well documented? Are data and privacy compliance likely to be major hurdles? Do you have content-metadata needs that you have to address? (We’re pretty sure that you do; it’s just a matter of acknowledging the magnitude of that need and finding a solution.) In our cards, we’ve noted a number of program risks, including common team dispositions. For instance, our Detractor card lists six recurring behaviors that can hinder a program’s development.

    Effectively collaborating and managing expectations is critical to your success. Consider the potential obstacles to your upcoming progress. Press the participants to name specific steps to overcome or mitigate those barriers in your organization. According to research, personalization initiatives face a number of common obstacles.

    At this point, you’ve hopefully discussed sample interactions, emphasized a key area of benefit, and flagged key gaps. Good: you’re ready to move on.

    Hit that test kitchen

    What will you need next to bring your personalization recipes to life? Personalization engines, which are robust software suites for automating and expressing dynamic content, can intimidate newcomers. Their broad and powerful capabilities present you with many options for how your organization might run its activities. This raises the question: where do you begin when you’re configuring a connected experience?

    The key here is to avoid treating the installed software like some imagined kitchen from a fantasy remodeling project ( as one of our client executives humorously put it ). These software engines are more like test kitchens where your team can begin devising, tasting, and refining the snacks and meals that will become a part of your personalization program’s regularly evolving menu.

    Over the course of the workshop, you’ll create the final menu: a prioritized backlog. And creating “dishes” is how individual team stakeholders will construct personalized interactions that serve their needs or the needs of others.

    The dishes will be made from recipes, which have predetermined ingredients.

    Verify your ingredients

    Like a good product manager, you’ll make sure you have everything you need to make your desired interaction ( or that you can figure out what needs to be added to your pantry ) and that you validate with the right stakeholders present. These ingredients include the audience that you’re targeting, content and design elements, the context for the interaction, and your measure for how it’ll come together.

    This isn’t just requirements discovery. Documenting your personalizations as a series of if-then statements lets the team:

    1. conform to a unified approach for developing features, similar to how artists paint from the same color palette,
    2. specify a consistent set of interactions that users find uniform or familiar,
    3. and establish a consistent set of key performance indicators and metrics.

    This helps you streamline your designs and your technical efforts while you deliver a shared palette of core motifs of your personalized or automated experience.
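The if-then framing above translates directly into code. Here is a minimal sketch of what such rules can look like once implemented; the segment names and banner variants are hypothetical illustrations, not part of any particular personalization engine’s API:

```python
# A minimal sketch of personalization rules expressed as if-then statements.
# The audience conditions and banner variants are hypothetical examples.

def choose_banner(visitor):
    """Return a banner variant for a visitor by walking if-then rules in order."""
    # IF the visitor is a lapsing subscriber THEN show a winback offer.
    if visitor.get("subscription") == "lapsing":
        return "winback-offer"
    # IF the visitor registered within the last week THEN highlight the catalog.
    if visitor.get("tenure_days", 0) < 7:
        return "welcome-tour"
    # Otherwise, fall through to the default (control) experience.
    return "default"

print(choose_banner({"subscription": "lapsing"}))  # winback-offer
print(choose_banner({"tenure_days": 2}))           # welcome-tour
```

Writing the rules this way also makes them easy to review with stakeholders, since each branch reads as a plain-language statement.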

    Create a recipe

    What ingredients are important to you? Consider the construct of a who-what-when-why:

    • Who are your key audience segments or groups?
    • What kind of content will you provide for them, what design elements, and under what circumstances?
    • And for which business and user benefits?

    Five years ago, we created these cards and card categories. We regularly play-test their fit with conference audiences and clients. And we still come across fresh possibilities. But they all follow an underlying who-what-when-why logic.

    In the cards in the accompanying photo below, you can follow along from left to right with three examples for subscription-based reading apps.

    1. Nurture personalization: When a guest or an unknown visitor interacts with a product title, a banner or alert bar appears that makes it easier for them to encounter a related title they may want to read, saving them time.
    2. Welcome automation: When a newly registered user becomes a subscriber, an email is sent that highlights the breadth of the content catalog.
    3. Winback automation: Before their subscription lapses or after a recent failed renewal, a user is sent an email that gives them a promotional offer to suggest that they reconsider renewing or to remind them to renew.
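One lightweight way to document recipe cards like these is as structured who-what-when-why data. The sketch below captures the three examples above; the field names are our own illustration, not a prescribed schema:

```python
# Hypothetical sketch: each recipe card captured as who/what/when/why fields.
# Field names are illustrative, not a required format.
recipes = [
    {
        "name": "Nurture personalization",
        "who": "guest or unknown visitor",
        "what": "banner surfacing a related title",
        "when": "on interacting with a product title",
        "why": "saves the reader time finding the next read",
    },
    {
        "name": "Welcome automation",
        "who": "newly registered subscriber",
        "what": "email highlighting the breadth of the catalog",
        "when": "shortly after registration",
        "why": "drives early engagement",
    },
    {
        "name": "Winback automation",
        "who": "user with a lapsing subscription or failed renewal",
        "what": "email with a promotional offer",
        "when": "before lapse or after a failed renewal",
        "why": "recovers at-risk subscription revenue",
    },
]

# Print a one-line summary of each recipe card.
for r in recipes:
    print(f"{r['name']}: {r['who']} / {r['what']} / {r['when']}")
```

Keeping every card in the same shape makes it easy to sort, filter, and prioritize the backlog later in the workshop.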

    We’ve also found that sometimes this process comes together more effectively by cocreating the recipes themselves, so a good preworkshop activity might be to think about what these cards might be for your organization. Start with a set of blank cards, and begin labeling and grouping them through the design process, eventually distilling them to a refined subset of highly useful candidate cards.

    The workshop’s later stages could be characterized as shifting from a cookbook focus to more nuanced customer-journey mapping. Individual “cooks” will pitch their recipes to the team, using a common jobs-to-be-done format so that measurability and results are baked in, and from there, the resulting collection will be prioritized for finished design and delivery to production.

    Better kitchens require better architecture

    Simplifying a customer experience is a complicated effort for those inside delivering it. Beware of anyone who says otherwise. That said, complicated problems can be hard to solve, but they are addressable with rules and recipes.

    When a team isn’t designing with its best data, personalization turns into a punch line. Like a sparse pantry, every organization has metadata debt to go along with its technical debt, and this creates a drag on personalization effectiveness. For instance, your AI’s output quality is in fact impacted by your IA. Spotify’s poster-child prowess today was unfathomable before it acquired a seemingly modest metadata startup that now powers its underlying information architecture.

    You can stand the heat

    Personalization technology opens a doorway into a confounding ocean of possible designs. Only a deliberate and cooperative approach will produce the desired outcome. So banish the dream kitchen. Instead, head to the test kitchen to put to the test the ideas that the doers in your organization have in store. This saves time, preserves job satisfaction and security, and avoids unnecessary distractions. There are meals to serve and mouths to feed.

    This organizational framework gives you a fighting chance at long-term success. Wiring up your information layer isn’t an overnight affair, but if everyone works from the same cookbook and the same recipes, you’ll have solid ground to build on. We designed these activities to make your organization’s needs concrete and clear, long before the hazards pile up.

    Despite the costs associated with investing in this kind of technology and product design, time spent assessing your unique situation and digital capabilities is time well spent. Don’t squander it. The proof is in the pudding, as they say.

  • User Research Is Storytelling

    User Research Is Storytelling

    Ever since I was a boy, I’ve been fascinated with movies. I loved the characters and the excitement—but most of all the stories. I wanted to be an actor. And I believed that I’d get to do the things that Indiana Jones did and go on exciting adventures. I even dreamed up ideas for movies that my friends and I could make and star in. But they never went any further. I did, however, end up working in user experience (UX). Now, I realize that there’s an element of theater to UX—I hadn’t really considered it before, but user research is storytelling. And to get the most out of user research, you need to tell a good story where you bring stakeholders—the product team and decision makers—along and get them interested in learning more.

    Think of your favorite movie. More than likely it follows a three-act structure that’s commonly seen in storytelling: the setup, the conflict, and the resolution. The first act shows what exists today, and it helps you get to know the characters and the challenges and problems that they face. Act two introduces the conflict, where the action is. Here, problems grow or get worse. And the third and final act is the resolution. This is where the issues are resolved and the characters learn and change. I believe that this structure is also a great way to think about user research, and I think that it can be especially helpful in explaining user research to others.

    Use storytelling as a structure to do research

    It’s sad to say, but many have come to see research as being expendable. If budgets or timelines are tight, research tends to be one of the first things to go. Instead of investing in research, some product managers rely on designers or—worse—their own opinion to make the “right” choices for users based on their experience or accepted best practices. That may get teams some of the way, but that approach can so easily miss out on solving users’ real problems. To remain user-centered, this is something we should avoid. User research elevates design. It keeps it on track, pointing to problems and opportunities. Being aware of the issues with your product and reacting to them can help you stay ahead of your competitors.

    In the three-act structure, each act corresponds to a part of the process, and each part is critical to telling the whole story. Let’s look at the different acts and how they align with user research.

    Act one: setup

    The setup is all about understanding the background, and that’s where foundational research comes in. Foundational research (also called generative, discovery, or initial research) helps you understand users and identify their problems. You’re learning about what exists today, the challenges users have, and how the challenges affect them—just like in the movies. To do foundational research, you can conduct contextual inquiries or diary studies (or both!), which can help you start to identify problems as well as opportunities. It doesn’t need to be a huge investment in time or money.

    Erika Hall writes about minimum viable ethnography, which can be as simple as spending 15 minutes with a user and asking them one thing: “‘Walk me through your day yesterday.’ That’s it. Present that one request. Shut up and listen to them for 15 minutes. Do your damndest to keep yourself and your interests out of it. Bam, you’re doing ethnography.” According to Hall, “[This] will probably prove quite illuminating. In the highly unlikely case that you didn’t learn anything new or useful, carry on with enhanced confidence in your direction.”

    This makes total sense to me. And I love that this makes user research so accessible. You don’t need to prepare a lot of documentation; you can just recruit participants and do it! This can yield a wealth of information about your users, and it’ll help you better understand them and what’s going on in their lives. That’s really what act one is all about: understanding where users are coming from. 

    Jared Spool talks about the importance of foundational research and how it should form the bulk of your research. If you can draw from any additional user data that you can get your hands on, such as surveys or analytics, that can supplement what you’ve heard in the foundational studies or even point to areas that need further investigation. Together, all this data paints a clearer picture of the state of things and all its shortcomings. And that’s the beginning of a compelling story. It’s the point in the plot where you realize that the main characters—or the users in this case—are facing challenges that they need to overcome. Like in the movies, this is where you start to build empathy for the characters and root for them to succeed. And hopefully stakeholders are now doing the same. Their sympathy may be with their business, which could be losing money because users can’t complete certain tasks. Or maybe they do empathize with users’ struggles. Either way, act one is your initial hook to get the stakeholders interested and invested.

    Once stakeholders begin to understand the value of foundational research, that can open doors to more opportunities that involve users in the decision-making process. And that can guide product teams toward being more user-centered. This benefits everyone—users, the product, and stakeholders. It’s like winning an Oscar in movie terms—it often leads to your product being well received and successful. And this can be an incentive for stakeholders to repeat this process with other products. Storytelling is the key to this process, and knowing how to tell a good story is the only way to get stakeholders to really care about doing more research. 

    This brings us to act two, where you iteratively evaluate a design or concept to see whether it addresses the issues.

    Act two: conflict

    Act two is all about digging deeper into the problems that you identified in act one. This usually involves directional research, such as usability tests, where you assess a potential solution (such as a design) to see whether it addresses the issues that you found. The issues could include unmet needs or problems with a flow or process that’s tripping users up. Like act two in a movie, more issues will crop up along the way. It’s here that you learn more about the characters as they grow and develop through this act. 

    Usability tests should typically include around five participants according to Jakob Nielsen, who found that that number of users can usually identify most of the problems: “As you add more and more users, you learn less and less because you will keep seeing the same things again and again… After the fifth user, you are wasting your time by observing the same findings repeatedly but not learning much new.” 
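Nielsen’s five-user guidance comes from a simple model he developed with Tom Landauer: the share of problems found with n users is 1 − (1 − L)^n, where L is the probability that a single user surfaces a given problem (about 31% in their data). A quick sketch shows the diminishing returns:

```python
# Proportion of usability problems found after n test users, using the
# Nielsen-Landauer model: found(n) = 1 - (1 - L)**n, with L = 0.31
# (the average per-user detection rate reported in their research).

L = 0.31

def problems_found(n, rate=L):
    """Expected fraction of problems uncovered by n users."""
    return 1 - (1 - rate) ** n

for n in range(1, 9):
    print(f"{n} users: {problems_found(n):.0%} of problems found")
```

With these numbers, five users uncover roughly 84% of the problems, and each additional user adds less than the one before, which is exactly the "wasting your time" effect Nielsen describes.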

    There are parallels with storytelling here too; if you try to tell a story with too many characters, the plot may get lost. Having fewer participants means that each user’s struggles will be more memorable and easier to relay to other stakeholders when talking about the research. This can help convey the issues that need to be addressed while also highlighting the value of doing the research in the first place.

    Researchers have run usability tests in person for decades, but you can also conduct usability tests remotely using tools like Microsoft Teams, Zoom, or other teleconferencing software. This approach has become increasingly popular since the beginning of the pandemic, and it works well. You can think of in-person usability tests like going to a play and remote sessions as more like watching a movie. There are advantages and disadvantages to each. In-person usability research is a much richer experience. Stakeholders can experience the sessions with other stakeholders. You also get real-time reactions—including surprise, agreement, disagreement, and discussions about what they’re seeing. Much like going to a play, where audiences get to take in the stage, the costumes, the lighting, and the actors’ interactions, in-person research lets you see users up close, including their body language, how they interact with the moderator, and how the scene is set up.

    If in-person usability testing is like watching a play—staged and controlled—then conducting usability testing in the field is like immersive theater where any two sessions might be very different from one another. You can take usability testing into the field by creating a replica of the space where users interact with the product and then conduct your research there. Or you can go out to meet users at their location to do your research. With either option, you get to see how things work in context, things come up that wouldn’t have in a lab environment—and conversations can shift in entirely different directions. As researchers, you have less control over how these sessions go, but this can sometimes help you understand users even better. Meeting users where they are can provide clues to the external forces that could be affecting how they use your product. In-person usability tests provide another level of detail that’s often missing from remote usability tests. 

    That’s not to say that the “movies”—remote sessions—aren’t a good option. Remote sessions can reach a wider audience. They allow a lot more stakeholders to be involved in the research and to see what’s going on. And they open the doors to a much wider geographical pool of users. But with any remote session there is the potential of time wasted if participants can’t log in or get their microphone working. 

    The benefit of usability testing, whether remote or in person, is that you get to see real users interact with the designs in real time, and you can ask them questions to understand their thought processes and grasp of the solution. This can help you not only identify problems but also glean why they’re problems in the first place. Furthermore, you can test hypotheses and gauge whether your thinking is correct. By the end of the sessions, you’ll have a much clearer picture of how usable the designs are and whether they work for their intended purposes. Act two is the heart of the story—where the excitement is—but there can be surprises too. This is equally true of usability tests. Often, participants will say unexpected things, which change the way that you look at things—and these twists in the story can move things in new directions. 

    Unfortunately, user research is sometimes seen as expendable. And too often usability testing is the only research process that some stakeholders think that they ever need. In fact, if the designs that you’re evaluating in the usability test aren’t grounded in a solid understanding of your users (foundational research), there’s not much to be gained by doing usability testing in the first place. That’s because you’re narrowing the focus of what you’re getting feedback on, without understanding the users’ needs. As a result, there’s no way of knowing whether the designs might solve a problem that users have. It’s only feedback on a particular design in the context of a usability test.  

    On the other hand, if you only do foundational research, while you might have set out to solve the right problem, you won’t know whether the thing that you’re building will actually solve that. This illustrates the importance of doing both foundational and directional research. 

    In act two, stakeholders will—hopefully—get to watch the story unfold in the user sessions, which creates the conflict and tension in the current design by surfacing their highs and lows. And in turn, this can help motivate stakeholders to address the issues that come up.

    Act three: resolution

    While the first two acts are about understanding the background and the tensions that can propel stakeholders into action, the third part is about resolving the problems from the first two acts. While it’s important to have an audience for the first two acts, it’s crucial that they stick around for the final act. That means the whole product team, including developers, UX practitioners, business analysts, delivery managers, product managers, and any other stakeholders that have a say in the next steps. It allows the whole team to hear users’ feedback together, ask questions, and discuss what’s possible within the project’s constraints. And it lets the UX research and design teams clarify, suggest alternatives, or give more context behind their decisions. So you can get everyone on the same page and get agreement on the way forward.

    This act is mostly told in voiceover with some audience participation. The researcher is the narrator, who paints a picture of the issues and what the future of the product could look like given the things that the team has learned. They give the stakeholders their recommendations and their guidance on creating this vision.

    Nancy Duarte in the Harvard Business Review offers an approach to structuring presentations that follow a persuasive story. “The most effective presenters use the same techniques as great storytellers: By reminding people of the status quo and then revealing the path to a better way, they set up a conflict that needs to be resolved,” writes Duarte. “That tension helps them persuade the audience to adopt a new mindset or behave differently.”

    This type of structure aligns well with research results, and particularly results from usability tests. It provides evidence for “what is”—the problems that you’ve identified—and “what could be”—your recommendations on how to address them.

    You can reinforce your recommendations with examples of things that competitors are doing that could address these issues or with examples where competitors are gaining an edge. Or they can be visual, like quick mockups of how a new design could look that solves a problem. These can help generate conversation and momentum. And this continues until the end of the session when you’ve wrapped everything up in the conclusion by summarizing the main issues and suggesting a way forward. This is the part where you reiterate the main themes or problems and what they mean for the product—the denouement of the story. This stage gives stakeholders the next steps and hopefully the momentum to take those steps!

    While we are nearly at the end of this story, let’s reflect on the idea that user research is storytelling. All the elements of a good story are there in the three-act structure of user research: 

    • Act one: You meet the protagonists (the users) and the antagonists (the problems affecting users). This is the beginning of the plot. In act one, researchers might use methods including contextual inquiry, ethnography, diary studies, surveys, and analytics. The output of these methods can include personas, empathy maps, user journeys, and analytics dashboards.
    • Act two: Next, there’s character development. There’s conflict and tension as the protagonists encounter problems and challenges, which they must overcome. In act two, researchers might use methods including usability testing, competitive benchmarking, and heuristics evaluation. The output of these can include usability findings reports, UX strategy documents, usability guidelines, and best practices.
    • Act three: The protagonists triumph and you see what a better future looks like. In act three, researchers may use methods including presentation decks, storytelling, and digital media. The output of these can be: presentation decks, video clips, audio clips, and pictures. 

    The researcher has multiple roles: they’re the storyteller, the director, and the producer. The participants have a small role, but they are significant characters (in the research). And the stakeholders are the audience. But the most important thing is to get the story right and to use storytelling to tell users’ stories through research. By the end, the stakeholders should walk away with a purpose and an eagerness to resolve the product’s ills. 

    So the next time that you’re planning research with clients or you’re speaking to stakeholders about research that you’ve done, think about how you can weave in some storytelling. Ultimately, user research is a win-win for everyone, and you just need to get stakeholders interested in how the story ends.

  • Why AI Isn’t Replacing You—It’s Freeing You

    Why AI Isn’t Replacing You—It’s Freeing You

    Why AI Isn’t Replacing You—It’s Freeing You written by Jarret Redding read more at Duct Tape Marketing

    The Duct Tape Marketing Podcast with Keith Lauver

    In this episode of the Duct Tape Marketing Podcast, I interviewed Keith Lauver, a serial entrepreneur, product launch expert, and founder of Atomic Elevator—an AI-powered marketing company behind Ella, a high-definition marketing platform. With six startups and over $34 million in product launches under his belt, Keith brings a sharp, practical lens to how AI can be used to transform marketing and business operations—especially for small business owners and agencies.

    During our conversation, Keith broke down the real-world applications of AI marketing and how it’s not here to replace people—but to remove bottlenecks, automate repetitive tasks, and unlock creativity. By shifting the way we think about tools like ChatGPT and agent-based workflows, Keith challenges small businesses to stop treating AI like search and start viewing it as a team of collaborators. He also shares how his own company operates without a traditional org chart—thanks to the power of strategic marketing tools and automation.

    Whether you’re leading a team, launching a new product, or running a solo consultancy, this episode offers a practical look at how AI and marketing automation can help you grow smarter, leaner, and more focused.

    Key Takeaways:

    • AI is an amplifier, not a replacement. It removes low-value tasks so entrepreneurs can focus on strategy, creativity, and relationships.
    • Small businesses are underusing AI tools. Many still treat AI like a search engine instead of leveraging its full potential for automation and productivity.
    • High-definition marketing creates clarity. Tools like Ella reduce “fuzzy” marketing by integrating proven marketing frameworks and better data.
    • Agent-based AI is coming. The future involves task-specific agents collaborating in workflows—streamlining execution across teams.
    • Forget the org chart. Keith’s company operates around tasks, not job titles—powered by AI and fractional expertise.
    • Personalization needs data. AI in business thrives when it can access behavior, style, and preferences—delivering truly tailored content.
    • AI unlocks your superpower. By automating what you’re not great at, it helps you focus on the work that energizes you and drives business growth.

    Chapters:

    • [00:09] Introducing Keith Lauver
    • [01:52] Understanding the Practical Uses of AI
    • [04:17] What are AI Agents?
    • [07:45] How Does AI Affect Organizational Structure
    • [11:46] AI Doesn’t Change Human Value
    • [16:22] Personalized Marketing
    • [17:37] Ella AI
    • [21:22] Privacy Concerns with AI

    More About Keith Lauver: 

    Check out Keith Lauver’s Website
    Connect with Keith Lauver on LinkedIn

    This episode of the Duct Tape Marketing Podcast is brought to you by

    Want to elevate your marketing game? AdCritter pairs Connected TV ads with precise digital retargeting to drive real results. Discover how their full-funnel strategy can help your business grow smarter. Let them know Duct Tape Marketing sent you, and you’ll get a dollar-for-dollar match on your first campaign! Learn more at adcritter.com.

     

    John Jantsch (00:00.923)

    Hello and welcome to another episode of the Duct Tape Marketing Podcast. This is John Jantsch. My guest today is Keith Lauver. He is a serial entrepreneur and marketing expert who has founded six companies, raised over $34 million for product launches and now leads Atomic Elevator. His team specializes in product launch support and created Ella, a pioneering tool for high-definition marketing.

    He started his entrepreneurial journey at 14. He secured clients like Trader Joe’s and Whole Foods, and inspires others as a speaker and mentor. We were just talking about it. He lives in Red Lodge, Montana, and is active in community service through Young Life. So Keith, welcome to the show.

    Keith Lauver (00:44.526)

    Thanks so much, John. Good to be here.

    John Jantsch (00:46.172)

    So what did you do at 14?

    Keith Lauver (00:48.026)

    Oh my gosh. So I had the opportunity to build a software platform for an airport in Billings. I was painting pipes in the summer, and they found out I knew something about computers. During the regular smoke-break time, I started creating a database to keep track of the paper towels and other inventory, and got invited upstairs. That turned into an invitation to build the software.

    John Jantsch (00:56.883)

    Ha

    John Jantsch (01:09.907)

    Ha

    Keith Lauver (01:14.862)

    And apparently KPMG had offered him a bid for about $20,000. I said I’ll do it for two and they took it. So that was the very first commercial client I had.

    John Jantsch (01:26.547)

    Well, I think I started my first business when I was 16. It was not nearly as glamorous. I was going door to door convincing people to let me seal their driveway. I paid my way through high school and college doing similar things.

    Keith Lauver (01:32.723)

    my gosh.

    Keith Lauver (01:38.016)

    That’s.

    Keith Lauver (01:44.072)

    I think the idea of asphalt going down and paint going up, we do what we have to do. I just caught a lucky break that day, right?

    John Jantsch (01:47.347)

    So we’re going to talk about AI a lot today. I think it’s a hot topic. It’s probably the hottest topic going right now. In fact, I’ve started a group I call Practical AI for Marketing, because with any technology, there’s all this futuristic talk of what it can do,

    or what it, you know, is going to do someday. And I always like to bring it down to: okay, that’s great, but what should it do? So in terms of your conversations with smaller businesses, how do you help them see the practical uses of AI and not sort of the robots-running-the-world, you know, future?

    Keith Lauver (02:25.271)

    Yeah

    Keith Lauver (02:40.046)

    You know, one of the things that I like to do is separate the application side of things from the construction side of things. And I think there’s a lot of people that are confused about that, John. I think, you know, I’m reminded of a workshop that was being done for business owners in Montana a couple of weeks ago, and they brought in a prompt engineer and machine learning expert for the day to teach them how to do stuff that most of them really didn’t care about and frankly didn’t understand

    John Jantsch (03:06.557)

    Right?

    Keith Lauver (03:08.738)

    that that was going to be the topic. So I don’t think people even know what this beast called AI is. So there’s people who are building tools and then there’s people who are actually using tools. And those two probably need to be separated before I could even answer the next part of your question.

    John Jantsch (03:25.029)

    Yeah. Well, first off, then let’s back up a little bit. What percentage of businesses, business owners, people working for businesses, do you think are actually using even a simple interface like ChatGPT?

    Keith Lauver (03:37.888)

    I think every business owner I’ve talked to has at least experimented with and tried ChatGPT. When we take forms on our website, we ask them how frequently they’re using it. And I would say that probably a quarter of them aren’t using it more than once a week. And that’s surprising to me. They still haven’t found that thing. And if I might offer a hypothesis about why: I think we are used to something like Google where

    John Jantsch (03:55.847)

    Yeah, yeah.

    Keith Lauver (04:06.978)

    You type in a search and a computer gives an answer. And AI’s potential is so much different than that. But most people are sitting down and thinking about this as a search tool and maybe a little bit smarter search tool. And they’re just not sure what’s beyond that even at the application layer.

    John Jantsch (04:25.757)

    So one of the things that, I don’t know… you talked about bringing in this large language model expert to talk about things, and that just goes nowhere with the business owners. So I’m going to bring up agents, which, you know, maybe we have to kind of break down a little bit, but that’s one of the areas where people are like, the future’s coming. You’re going to have, you know, agents replacing all of your people. We don’t actually have agents yet. Not really.

    Because there’s a lot of things that I think are going to happen over… I mean, I think we’re going to have some simple task bots. But the one that people throw out: tell your agent to book me the best ticket on this flight, you know, blah, blah, blah. Well, they’ve got to have access to all the data, all the airline things. And those people aren’t going to share that information. Or if they do, some big tech company will be the one that does the interface, and we’ll just be a product of theirs, like Facebook,

    Keith Lauver (05:20.994)

    Okay.

    John Jantsch (05:23.973)

and not a user. I just went and rambled all over the place there, but talk a little bit about, you know, where we are now with agents, what agents are, I guess, where we are now and really what is going to be a hurdle to this large scale adoption.

    Keith Lauver (05:41.516)

Yeah. So as I understand and use agents, they basically are bots, if you will, programs that can perform a discrete task and do so in repetition and kind of string those tasks, perhaps one to another, to another. And instead of like right now, if you sit down in ChatGPT and say, hey,


    Keith Lauver (06:02.926)

you know, can you give me input about a story, or can you review this website and tell me the pros and the cons of it, or whatever the query might be, an agent can actually do something that’s much more complex, in a series of steps. So it might be, can you build me an entire website? Right? And step one is this and step two is this and step three is this. I think where agents are today,

    John Jantsch (06:18.621)

    Yeah, yeah, yeah, yeah.

    Keith Lauver (06:27.538)

is still very much in the experimental world. I love the fact that as a company that’s created a platform, we now can begin to move our entire architecture into what they’re calling agentic. So we’re able to take what we were doing in other ways, and we can now do it better and easier, because most of the things that we need to have done are complex and require more than one step, and agents will help us do that.

    John Jantsch (06:40.883)

    Mm-hmm.

    John Jantsch (06:53.757)

    Yeah, no, there’ll be a lot of stringing these things together too, right? You complete this task and then go give your output to this agent who then has been trained to do X, right? I mean, is that kind of another way to look at it?

    Keith Lauver (06:57.55)

    for sure.

    Keith Lauver (07:02.487)

    Yeah.

I love that vision, John, that real interoperability of agents. It’s like, why not have the thing that’s really good at X talk to the other thing that’s really good at Y, and talk to the other thing that’s really good at Z? In the field of marketing, the analog metaphor, if you will, would be the branding person who is just the wizard in the marketing world, right? They’re able to just say, this is the emotional state that we’re going to evoke for people,

    John Jantsch (07:12.179)

    Right.

    John Jantsch (07:31.123)

    Mm-hmm.

    Keith Lauver (07:34.254)

    pontificate on that. And then you’ve got the designer who tries to interpret that. And then you’ve got the copywriter who actually puts words to it. And then you’ve got the HTML person who has to construct it. And then maybe you’ve got somebody that needs to be the messaging architect that’s thinking about it. And then the performance person, we get all these different things. Wouldn’t it be great if those could all be strung together?

    John Jantsch (07:56.731)

Yeah. And I think that’s maybe a little bit of the dilemma of how people are thinking about embracing AI in general. One vision I’ve seen is the org chart that has maybe those analog managers, if you will. Is that what we’re going to call people now? Analog managers. But then each of those people will have three agents that help them do their function,

    Keith Lauver (08:14.83)

    I hope so.

    John Jantsch (08:24.371)

and they’ve all been maybe specifically trained on a thing. But then I’ve also seen people say, no, we’re going to have the data analysis agent that’s going to go across departments. You know, how do you see the org chart of the future?

    Keith Lauver (08:39.266)

You know, I think the org chart of the future is probably going to be as diverse as organizations of the future. I think what’s beautiful about what’s happening in this world is the models can be completely novel. They can create things that have never before been seen. An example is, you know, we have been building our go-to-market plans using our software itself. We haven’t really needed a marketing department. We haven’t even had to do

    John Jantsch (08:40.305)

    Peace.

    John Jantsch (08:56.466)

    Yes.

    Keith Lauver (09:08.352)

advertising in a traditional way. Most of our team is fully fractional, and we can all cooperate and actually perform at a much higher level for a lot less money. And I don’t even know what an org chart is. We had a potential investor ask us to build one and I’m like, we don’t even have one for our company. It’s just not the way we operate. We kind of collect around tasks and bring expertise to those tasks and then perform those tasks.

    So it’s just a very different organizational model that we’ve chosen. And I think there’s a lot of freedom in how people are going to build the company.

    John Jantsch (09:43.827)

But see, I hear an org chart in there. It’s just way different than anything we’ve been taught. Because an org chart to me is not people doing jobs. An org chart is what functions need to be done. And so I think that’s kind of what you’re describing, but we’re all just used to, this is our head of that and this is our VP of that. And that’s what’s interesting about it. I think what’s going on is it’s not just like,

    Keith Lauver (09:48.878)

    Yeah, yeah.

    Keith Lauver (09:57.516)

    Ooh, I love that. Different, yeah.

    John Jantsch (10:13.405)

    How do we augment what we’re already doing? It’s how do we rethink everything, right?

    Keith Lauver (10:18.772)

I love the freedom. I think the moment when we accidentally discovered this idea that turned into this platform for marketing, called Ella, this was not an intentional discovery, it was pure accident. And in that moment, every single neuron in my brain fired, every pattern from that 14-year-old kid who wrote the software for that airport in Billings to the guy who’d been a student of marketing for the last decade, fired and said, wait a minute.

    I can do this differently now and I can ask this question in this way and get a completely different perspective than the old model was go to the expert. If we invert and put all the experts into a model, it shifts and everything changes. And I’m addicted to that innovation. So I think it’s wonderful.

    John Jantsch (11:07.633)

Yeah. Yeah. So one of the themes that is certainly prevalent is that this technology is going to replace a lot of people. I mean, every technology does, right? Or at least changes, you know, what those people do. Where do you fall on the, it’s going to revolutionize industries, replace a lot of people, augment, you know, a lot of the value we can bring? I mean, where do you fall on that?


    John Jantsch (11:37.867)

    continue.

    Keith Lauver (11:39.342)

So yes, yes, and yes. I do think that AI is going to transform, to augment, to replace. But I don’t think that changes our sense of self. I don’t think that changes our value, in fact. If anything, for me, what it’s done is created more freedom around that. I talk to so many people on our team. We’re avid minute-by-minute users of AI.

    John Jantsch (11:41.391)

    Yeah, OK.

    John Jantsch (11:50.034)

    Yeah.

    Keith Lauver (12:08.974)

We’re more confident in what we can do and in the gifts that we’ve been created to bring to the world, because we augment the things that maybe we’re not as good at. I’m a visionary, I’m not an integrator. So I see big ideas, and when you ask me to actually turn that into, you know, a set of sequential steps, my brain hurts. I don’t like that work and I don’t have to do that work anymore. So.

    John Jantsch (12:13.939)

    Mm-hmm. Mm-hmm.

    John Jantsch (12:35.795)

    Yeah. Peace.

    Keith Lauver (12:36.246)

    I think it’s not replacing people, but it’s replacing some of the things that we as people have done. And what that does is gives us the freedom to go back to what is our zone of genius? What is our superpower? What is it that we love to do? And I don’t think AI will ever replace humanity. I think it’s just bringing us up to be the very best versions of ourselves.

    John Jantsch (12:41.317)

    Yeah.

    John Jantsch (12:57.395)

Well, it’s interesting because, you know, from a marketing standpoint, our mantra has always been strategy before tactics. And I think that in a lot of ways that makes the strategic thinker, who can also master AI, who also understands marketing operations… that’s the job of the future, isn’t it? As opposed to the agency that comes in and does the stuff.

    Keith Lauver (13:22.06)

    I think that’s right. I would say our focus has actually been trying to go in and provide even greater effectiveness and efficiency for the strategists. And so because of that, I see a world where AI can actually do a lot of the strategy when well-guided and augmented by humans through that. I would say for me, as I’ve contemplated kind of my own work shift in the last, say, year, most of my time is now relational.

    And that can’t ever be replaced by AI. Most of my time is getting to understand people and their problems and then finding a way to bring that in. But I’m not spending time on strategies so much as I am building relationships that allow my tools to build that strategy. So I think that’s a higher level.

    John Jantsch (14:12.413)

    Yeah.

Well, even though it’s more one-to-one, there is such a brand aspect to that. There is such a trust aspect to that. And I think those are the things that, if there’s going to be winners and losers, the people that get that are going to side on the win.

    Keith Lauver (14:23.063)

    Ooh, I love that. Yeah.

    Keith Lauver (14:33.326)

You know what I love about what you said there too is it kind of reminds me of the benefit of AI in getting us out of ourselves. If we’re going to be able to establish trust, one of the ways that I do this today that I didn’t 12 months ago is I talk about the fact that I run everything I do through a blind spot and bias detector. I run everything I do through the lens of our software,

    that can look at 100 different marketing people’s perspectives. And that actually increases my trustworthiness, my credibility with somebody because I’m actually admitting my own limitations.

    John Jantsch (15:05.811)

    Mm-hmm.

    John Jantsch (15:16.339)

    Yeah, yeah, that’s one of my favorite prompts is like, what should I be asking you? Or what am I not asking you? You know, that kind of thing, you know, or, or I sometimes have to say, stop agreeing with me. That’s a brilliant idea.

    Keith Lauver (15:24.43)

    Yeah.

Yes. I like to think that my AI is sometimes a little bit too puppy-like. You know, it just wants to wag its tail and say, yes, Keith, I love you. Will you rub my belly? Exactly. It’s like, no, no, no. Or even when I ask AI to go do something and, on the end result, I say, is this biased? And she says, yes. I’m like, well, why did you do it that way in the first place? So.

    John Jantsch (15:54.021)

Right. So I have one more question, but I really do want to spend some time on what you’re doing specifically with Ella, because it relates to everything we’re talking about. But one of the things, for anybody who lists the five things that are coming this year, you know, personalization in marketing is certainly a buzzword that’s going to be on that list all the time. And it seems AI can help with that.

But I also don’t see a lot of people doing it yet. And is the real missing ingredient that it can’t personalize without access to data?

    Keith Lauver (16:33.304)

I think that’s a great insight. I think I would challenge how much data we can give it access to. I would say, in general, I’ll give you an example. I love what a tool called Crystal Knows has done, which is they’ve used AI to, you know, essentially determine somebody’s personality. And that gives you a degree of personalization to present information in a particular style. So for example, anytime I do a sales follow-up,

I run it first through Crystal, and I have Ella rewrite it to that person’s DISC profile. And that gives me a level of personalization that’s not just, this was the conversation we had, but, this is who you are and how you probably prefer to receive information. So I think we’re getting closer to it.

    John Jantsch (17:17.169)

    Yeah. Yeah. And it might just be bullet points and short sentences as opposed to, you know, necessarily, hi, John. Exactly. Right. Right. So talk a little bit about Ella. If somebody came to you and said, what’s Ella?

    Keith Lauver (17:26.321)

    Yes. Exactly. These are the three things we talked about. Sign here.

    Keith Lauver (17:38.86)

    Yeah, so we describe Ella as a high definition marketing machine. And the reason that we’ve chosen to describe her that way is we have found as professional marketers that most marketing has historically been very fuzzy. The fuzziness has been caused by specializations and fragmentation, right? The fuzziness has been caused by shifts in tactics and expectations. And the fuzziness is the fact that at the end of the day,

    Most marketing is really a hypothesis that needs to be tested out there anyway. So it’s social science, it’s behavioral science. And so what we’ve said is let’s try to provide more pixels to the picture. Let’s take frameworks and connect them. Let’s take pictures and define them in greater resolution. Let’s interconnect them so that when somebody says, I want to talk to John about duct tape marketing,

    John Jantsch (18:10.515)

    Yeah.

    Keith Lauver (18:34.37)

    they’re able to do so with just a high degree of precision. So Ella is a tool that enables better messaging, more discrete personas, and essentially better results because of this high definition process.

    John Jantsch (18:49.907)

Yeah, boy, I will say, you used one of my favorite words: frameworks. You know, one of the best things you can do, if you’re trying to get some sort of output out of an AI tool, is to say, use this framework that’s well defined. I think at least it gives it some guardrails to say, okay, I’m not just going to write something that hopefully sounds good. You’re going to write something that

    Keith Lauver (19:04.216)

    Yes.

    John Jantsch (19:14.875)

maybe is using a proven framework. And so it’s going to be more effective right off the bat. Whether or not the output is, you know, word for word what you’re going to use, at least the structure will be there.

    Keith Lauver (19:18.093)

    Yeah.

    Keith Lauver (19:25.538)

Yeah, I think frameworks, you know, I got drawn into the idea of frameworks because I was a computer guy who fell into the field of marketing, right? I’m used to, here is a subroutine. If you’re going to tell a story, here’s seven blanks to fill in. Donald Miller, thank you for giving me the seven blanks to fill in. I need that kind of thing. And what has been true about all the frameworks, at least that I’ve experienced, is while fantastic, they’ve always been discrete

    John Jantsch (19:32.541)

    Yeah. Yeah, yeah.

    John Jantsch (19:44.381)

    Yes, exactly.

    Keith Lauver (19:55.008)

and probably more unitaskers. So they’re fantastic for one thing, but they are often missing another thing. And what, at least in my mind, has been the missing link to all of this is a unifying, almost marketing operating system that pulls all those frameworks together. And that’s the big aspiration for what we’re trying to build.

    John Jantsch (19:59.675)

    Yeah, yeah.

    John Jantsch (20:16.817)

Now, are you staying very focused on the niche market of, and I thought I read this, SaaS go-to-market? Or are you really putting yourself out there for any type of business or industry?

    Keith Lauver (20:27.21)

    yeah.

    Keith Lauver (20:31.756)

Yeah, we started in the field of SaaS. Obviously, we are a SaaS product too and understand those frameworks very well. But as Ella has so quickly grown, people are contributing their own frameworks. We’ve got authors who are saying, use mine. Or we’ve got practitioners who say, have you heard about this amazing system called duct tape? And I’m like, yes. Yes, I have. And so we’re trying to integrate those. And so

    John Jantsch (20:36.147)

    Sure.

    John Jantsch (20:47.251)

    Sure. Right.

    John Jantsch (20:52.999)

    Hehehehehe

    Keith Lauver (20:58.988)

    The idea of Ella is she can help with B2B, B2C, really across industries. And she’s getting smarter every single time somebody uses her and at least volunteers their feedback to Ella.

    John Jantsch (21:13.619)

So one question that comes up a lot of times and will probably continue to be debated forever: are there privacy concerns? You know, I’m sharing all of my personal company data. Is that something… or, you know, as an agency, I’m sharing my clients’ data. Is that an issue with a model, or a tool, like Ella?

    Keith Lauver (21:36.64)

It is an issue for all AI, and Ella has decided to respond to that with kind of a very clear privacy policy, a very clear non-disclosure agreement that we enter into, and also very clear technical parameters where we have opted out. Underlying our tool is OpenAI, but we have basically disallowed OpenAI from using any prompt

    for training purposes, any prompt for storage purposes. And so we can say with confidence that we are protecting the confidentiality of that information. And I think it is important that we do that.

    John Jantsch (22:15.461)

Awesome. Yeah. I think that’s going to be, you know, a raging debate for some time. And won’t we end up having the same thing that happened to the search engines? All the privacy stuff that they’ve been doing and not telling anybody will probably come back in lawsuits.

    Keith Lauver (22:35.31)

    I am excited to see how intellectual property will continue to evolve around all of this. But in the meantime, we’re going to let people do great work and keep what they’re doing private.

    John Jantsch (22:38.291)

    Yeah. Yeah.

    John Jantsch (22:47.155)

    Well, Keith, I appreciate you taking a moment to stop by the Duct Tape Marketing Podcast. Is there someplace you’d send folks to learn more about Atomic Elevator and your work?

    Keith Lauver (22:56.652)

You bet, AtomicElevator.com, and we’ve got free trials available. We’d love to sign up anybody. Let them take Ella for a spin for a couple of weeks and see what kind of impact they can make for their clients.

    John Jantsch (23:08.627)

    Again, appreciate you taking a moment and maybe I’ll run into you one of these days next time I’m up in Montana.

    Keith Lauver (23:15.713)

    I hope that would be the case.


  • The Woman in the Yard: Inside a Blumhouse Ghost Story Shattering Every Stereotype

    The Woman in the Yard: Inside a Blumhouse Ghost Story Shattering Every Stereotype

It is broad daylight. In the yard of a farmhouse sits a tall woman dressed all in black, a terrifying black veil covering her face. In the distance, acres upon acres of land sit vacant. This is the first image we can see from The Woman in the Yard, Blumhouse Productions’ newest horror film.

The post The Woman in the Yard: Inside a Blumhouse Ghost Story Shattering Every Stereotype appeared first on Den of Geek.

Marvel shocked everyone at San Diego Comic-Con last month when Robert Downey Jr. emerged as Doctor Doom, set to appear in Avengers: Doomsday. And now Marvel is doing something equally bold to reveal the rest of the Avengers: Doomsday cast. Marvel Studios is revealing… chairs on their main Twitter feed.

Okay, yes, they’re chairs with names on them! But they’re still chairs.

Every 10-15 minutes, we see a new chair with a new cast member’s name on it. Which is a little annoying if you’re a regular person who doesn’t have a lot of time on a Wednesday to watch shots of chairs all day. Thankfully, we at Den of Geek are not just regular folks. We’ll keep an eye on these chair reveals and make updates as they occur! This article will be updated with each new name added; check back often to see if it’s changed.

    Pedro Pascal

Channing Tatum as Gambit in Deadpool & Wolverine

Channing Tatum

    James Marsden as Cyclops

    James Marsden

Rebecca Romijn as Mystique

Rebecca Romijn

    Alan Cumming in X2

    Alan Cumming

    Ian McKellen as Magneto

    Ian McKellen

    Patrick Stewart in X-Men poster

    Patrick Stewart

    Chris Hemsworth in Thor Love and Thunder Art

    Chris Hemsworth

    Vanessa Kirby

    Joseph Quinn

    Bucky Barnes in The Falcon and the Winter Soldier

    Sebastian Stan

    Anthony Mackie

    Shuri in Black Panther 2 Ending

    Letitia Wright

    Paul Rudd in Ant-Man and the Wasp: Quantumania

    Paul Rudd

    Loki (Tom Hiddleston) stabilizes the Temporal Loom and achieves his Glorious Purpose

    Tom Hiddleston

    Winston Duke in Black Panther 2

    Winston Duke

    Tenoch Huerta as Namor in Black Panther: Wakanda Forever

Tenoch Huerta Mejía

    Ebon Moss-Bacharach

    Shang-Chi dominates Box Office for Marvel

    Simu Liu

    Florence Pugh and Scarlett Johansson in Black Widow

    Florence Pugh

    Beast in the Marvels

    Kelsey Grammer

    David Harbour as Red Guardian in Marvel's Black Widow

    David Harbour

    Hannah John-Kamen as Ghost in Marvel

    Hannah John-Kamen

Wyatt Russell as John Walker in The Falcon and the Winter Soldier

Wyatt Russell

    Lewis Pullman in Thunderbolts

    Lewis Pullman

    Danny Ramirez in Captain America Brave New World

    Danny Ramirez

The post Avengers: Doomsday Cast Announced – Updated Live appeared first on Den of Geek.

  • Ike Barinholtz Did Not Care for Seth Rogen’s Driving in Apple TV+’s The Studio

    Ike Barinholtz Did Not Care for Seth Rogen’s Driving in Apple TV+’s The Studio

The Studio takes the notion that imitation is the sincerest form of flattery to a whole new level. The Apple TV+ comedy series, created by legendary comedic collaborators Seth Rogen and Evan Goldberg (along with Peter Huyck, Alex Gregory, and Frida Perez), delves so deeply into its satire of Hollywood that it defies definition as a major Hollywood production itself. Rogen […]

The post Ike Barinholtz Did Not Care for Seth Rogen’s Driving in Apple TV+’s The Studio appeared first on Den of Geek.


  • How The Office Redefined the Mockumentary Format For Modern Television

    How The Office Redefined the Mockumentary Format For Modern Television

Every generation has a television program that leaves a lasting impact on the pop culture landscape. In the 1980s, it was the feel-good humor of Cheers, and in the 1990s, it was Seinfeld’s wonderfully commonplace world. And in the 2000s, it was the tongue-in-cheek mockumentary style of The Office, undoubtedly one of the most adored comedy series ever.

The post How The Office Redefined the Mockumentary Format For Modern Television appeared first on Den of Geek.


  • New British TV Series for 2025: BBC, Netflix, ITV, Channel 4, Disney+, Prime Video, Sky

    New British TV Series for 2025: BBC, Netflix, ITV, Channel 4, Disney+, Prime Video, Sky

British TV hit the ground running in 2025 with Missing You, the most recent installment of Netflix’s now-traditional New Year’s Day Harlan Coben adaptations. Following that come crime dramas Patience and Prime Target, period drama A Thousand Blows, 1980s relaunch Bergerac, and numerous others that are still awaiting an air date announcement. Browse […]

The post New British TV Series for 2025: BBC, Netflix, ITV, Channel 4, Disney+, Prime Video, Sky appeared first on Den of Geek.

    When Doctor Doom posed as Robert Downey Jr., who would be appearing in Avengers: Doomsday, at San Diego Comic-Con next month, Marvel shocked everyone. Marvel is presently revealing the rest of the Avengers: Doomsday cast in an equally bold manner. Marvel Studios is revealing seats on their main Instagram supply.

    Okay, yes, they’re seats with names on them! But they &#8217, re still chair.

    We see a new couch and a fresh cast member &#8217, s name on it at intervals of 10-15 days. Which is a little annoying if you’re a regular person who doesn’t have a lot of time on Wednesday to enjoy just shots of chairs all time. Thankfully, we are not just regular folks around at Den of Geek. We’ll keep an eye on these chair’ show and make updates as they occur! With each fresh brand added, this article will be updated; check back often to see if it’s changed.

    Pedro Pascal

    Channing Turner as Gambit in Deadpool & Wolverine

    Channing Turner

    James Marsden as Cyclops

    James Marsden

    Rebecca Romjin as Mystique copy

    Rachel Romijn

    Alan Cumming in X2

    Alan Cumming

    Ian McKellen as Magneto

    Ian McKellen

    Stewart, Patrick in X-Men poster

    Stewart, Patrick

    Chris Hemsworth in Thor Love and Thunder Art

    Chris Hemsworth

    Vanessa Kirby

    Joseph Quinn

    Bucky Barnes in The Falcon and the Winter Soldier

    Sebastian Stan

    Anthony Mackie

    Shuri in Black Panther 2 Ending

    Letitia Wright

    Paul Rudd in Ant-Man and the Wasp: Quantumania

    Paul Rudd

    Loki (Tom Hiddleston) stabilizes the Temporal Loom and achieves his Glorious Purpose

    Tom Hiddleston

    Winston Duke in Black Panther 2

    Winston Duke

    Tenoch Huerta as Namor in Black Panther: Wakanda Forever

    Tenoch Huerta Mejária

    Ebon Moss-Bacharach

    Shang-Chi dominates Box Office for Marvel
    cnx. powershell. push ( function ( ) {cnx ( {playerId:” 106e33c0-3911-473c-b599-b1426db57530″, }). render ( “0270c398a82f44f49c23c16122516796” ), }),

    Simu Liu

    Florence Pugh and Scarlett Johansson in Black Widow

    Florence Pugh

    Beast in The Marvels

    Kelsey Grammer

    David Harbour as Red Guardian in Marvel's Black Widow

    David Harbour

    Hannah John-Kamen as Ghost in Marvel

    Hannah John-Kamen

    Wyatt Russell as John Walker in The Falcon and the Winter Soldier

    Wyatt Russell

    Lewis Pullman in Thunderbolts

    Lewis Pullman

    Danny Ramirez in Captain America Brave New World

    Danny Ramirez

    The post Avengers: Doomsday Cast Announced – Updated Live appeared first on Den of Geek.

  • Daredevil: Born Again Just Hinted at Ms. Marvel’s Next Adventure

    Daredevil: Born Again Just Hinted at Ms. Marvel’s Next Adventure

    This post contains spoilers for episode 5 of Daredevil: Born Again. It’s difficult to imagine any two street-level heroes more different from one another than Daredevil and Ms. Marvel. The former is a blind man driven by grief and anger toward vigilantism, no matter how harshly it punishes his head, body, or heart. The latter is an […]

    The post Daredevil: Born Again Just Hinted at Ms. Marvel’s Next Adventure appeared first on Den of Geek.
