Voice Content and Usability

We’ve been conversing for many thousands of years. Whether to convey information, conduct transactions, or simply check in on one another, people have yammered away, chattering and gesticulating, through spoken conversation for countless generations. Only recently have we begun to commit our conversations to writing, and only more recently still have we outsourced them to the computer, a machine that shows far more affinity for written correspondence than for the vernacular rigors of spoken language.

Computers have trouble with conversation because, between spoken and written language, speech is by far the messier medium. To have productive conversations with us, machines must grapple with the messiness of human speech: the disfluencies and pauses, the gestures and body language, and the variations in word choice and spoken dialect that can stymie even the most carefully crafted human-computer interaction. In the human-to-human scenario, spoken language also has the advantage of face-to-face contact, where we can readily interpret nonverbal social cues.

In contrast, written language develops its own fossil record of dated terms and phrases, because we commit it to record and retain usages long after they fall out of spoken communication (for example, the salutation “To whom it may concern”). Because it tends to be more consistent, polished, and formal, written text is fundamentally much easier for machines to parse and understand.

Spoken language has no such luxury. Besides the nonverbal cues that decorate conversations with emphasis and emotional context, there are also verbal cues and vocal behaviors that modulate conversation in nuanced ways: how something is said, not what. Whether rapid-fire, low-pitched, or high-decibel, whether sarcastic, stilted, or sighing, our spoken language conveys much more than the written word could ever muster. So as designers and content strategists, we face exciting challenges when it comes to voice interfaces: the machines we use to conduct spoken conversations.

Voice Interactions

We interact with voice interfaces for a variety of reasons, but according to Michael McTear, Zoraida Callejas, and David Griol in The Conversational Interface, those motivations by and large mirror the reasons we initiate conversations with other people, too. We typically strike up a conversation because:

  • we need something done (such as a transaction),
  • we want to know something (information of some sort), or
  • we are social beings and want someone to talk to.

These three categories, which I refer to as transactional, informational, and prosocial, also apply to virtually every voice interaction: a single conversation that begins with the voice interface’s first greeting and ends with the user leaving the interface. Note here that a conversation in our human sense—a chat between people that leads to some result and lasts an arbitrary length of time—could encompass multiple transactional, informational, and prosocial voice interactions in succession. In other words, a voice interaction is a conversation, but a conversation is not necessarily a single voice interaction.

In most voice interfaces, purely prosocial exchanges are more gimmicky than captivating, because machines don’t yet have the capability to genuinely understand how we’re doing or to engage in the kind of glad-handing people crave. There’s also ongoing debate as to whether users actually prefer the sort of organic human conversation that begins prosocially and shifts seamlessly into other types. In fact, Michael Cohen, James Giangola, and Jennifer Balogh advise sticking to user expectations by mimicking how users interact with other voice interfaces rather than trying too hard to be human, which risks alienating them.

That leaves two genres of conversation we have with one another that a voice interface can have with us, too: transactional conversations that realize some outcome, and informational conversations that teach us something new (“discuss a musical”).

Transactional voice interactions

Unless you’re tapping buttons on a food delivery app, you’re generally having a conversation, and therefore a voice interaction, when you order a Hawaiian pizza with extra pineapple. The conversation quickly shifts from an initial smattering of neighborly small talk to the actual task at hand: ordering a pizza (generously topped with pineapple, as it should be).

Alison: Hey, how’s it going?

Burhan: Hello and welcome to Crust Deluxe! It’s chilly outside. How can I help you?

Alison: Can I get a Hawaiian pizza with extra pineapple?

Burhan: Sure, what size?

Alison: Large.

Burhan: Anything else?

Alison: No thanks, that’s it.

Burhan: Something to drink?

Alison: I’ll have a bottle of Coke.

Burhan: You got it. That’ll be $13.55 and about fifteen minutes.

Each incremental disclosure in this transactional conversation reveals more and more of the desired outcome of the transaction: a service rendered or a product delivered. Transactional conversations have certain characteristics: they’re direct, concise, and economical. They quickly dispense with pleasantries.

Informational voice interactions

Meanwhile, some conversations are primarily about obtaining information. Though Alison might visit Crust Deluxe intending to place an order, she might not want to walk out with a pizza at all. She might be interested in whether they offer kosher or halal options, gluten-free dishes, or something else entirely. Though we again have a prosocial mini-conversation at the start to establish politeness, we’re after much more here.

Alison: Hey, how’s it going?

Burhan: Hello and welcome to Crust Deluxe! It’s chilly outside. How can I help you?

Alison: Can I ask a few questions?

Burhan: Of course! Go right ahead.

Alison: Do you have any halal options on the menu?

Burhan: Absolutely! On request, we can make any pie halal. We also have lots of vegetarian, ovo-lacto, and vegan options. Are you considering any additional dietary restrictions?

Alison: What about pizzas that are gluten-free?

Burhan: We can definitely do a gluten-free crust for you, no problem, for both our deep-dish and thin-crust pizzas. Anything else I can help you with?

Alison: Good to know. That’s it for now. Thank you!

Burhan: Anytime, come back soon!

This is a very different dialogue. Here, the goal is to obtain a particular set of facts. Informational conversations are investigative quests for the truth: research expeditions to gather data, news, or facts. Informational voice interactions may be lengthier than transactional conversations by necessity. Responses are typically longer, more in-depth, and carefully communicated so the customer understands the key takeaways.

Voice Interfaces

At their core, voice interfaces employ speech to support users in reaching their goals. However, just because an interface has a voice component doesn’t mean that every user interaction with it is mediated through voice. We’re most concerned with pure voice interfaces, which depend entirely on spoken conversation and lack any visual component; they’re much more nuanced and challenging to deal with than multimodal voice interfaces, which can lean on visual components like screens as crutches.

Though voice interfaces have long been integral to the imagined future of humanity in science fiction, only recently have those lofty visions become fully realized in genuine voice interfaces.

IVR (interactive voice response) systems

Written conversational interfaces have been a part of computing for many decades, but voice interfaces first started to appear in the early 1990s with text-to-speech (TTS) dictation programs that recited written text aloud, as well as speech-enabled in-car systems that gave directions to a user-provided address. With the advent of interactive voice response (IVR) systems, intended as an alternative to overburdened customer service representatives, we became acquainted with the first true voice interfaces that engaged in authentic conversation.

IVR systems made it easier for businesses to cut down on call centers, but they soon gained notoriety for their clunkiness. Commonplace in the corporate world, these systems were primarily designed as metaphorical switchboards to direct customers to a real phone agent (“Say Reservations to book a flight or check an itinerary”); chances are you’ll have a conversation with one when you call an airline or hotel conglomerate. Despite their functional issues and users’ frustration at being unable to speak to an actual human right away, IVR systems proliferated across a variety of industries in the early 1990s.

IVR systems are known for conversations far less scintillating than those we’re used to in real life (or even in science fiction): their exchanges are extremely repetitive and rote, and they rarely veer from a single prescribed format.

Screen readers

Parallel to the evolution of IVR systems was the invention of the screen reader, a tool that transcribes visual content into synthesized speech. For website users who are blind or visually impaired, it’s the predominant way to interact with text, multimedia, or form elements. Screen readers are perhaps the closest thing we have today to an out-of-the-box implementation of content delivered through voice.

Among the first screen readers known by that moniker was the Screen Reader for the BBC Micro and NEC Portable, developed by the Research Centre for the Education of the Visually Handicapped (RCEVH) at the University of Birmingham in 1986. That same year, Jim Thatcher created the first IBM Screen Reader for text-based computers, which was later reworked for computers with graphical user interfaces (GUIs).

With the explosive growth of the web in the 1990s, demand for accessible website tools grew as well. Thanks to the introduction of semantic HTML and especially ARIA roles beginning in 2008, screen readers began facilitating speedy interactions with web pages, allowing disabled users to traverse the page as an aural and temporal space rather than a visual and physical one. In other words, web screen readers “provide mechanisms that translate visual design constructs—proximity, proportion, etc.—into useful information,” as Aaron Gustafson wrote in A List Apart. “At least they do when documents are authored thoughtfully.”
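To make Gustafson’s observation concrete, here’s a minimal, hypothetical sketch of what thoughtfully authored markup looks like: semantic landmarks and ARIA attributes give a screen reader a structure it can announce and jump between, rather than a wall of undifferentiated text. (The element names, labels, and copy are illustrative, not prescribed by any particular screen reader.)

```html
<!-- Hypothetical markup: semantic landmarks let screen readers jump
     between regions instead of reading the page top to bottom. -->
<header>
  <nav aria-label="Main menu">
    <ul>
      <li><a href="/menu">Menu</a></li>
      <li><a href="/order" aria-current="page">Order</a></li>
    </ul>
  </nav>
</header>
<main>
  <h1>Order a pizza</h1>
  <!-- role="alert" surfaces a status change audibly, with no visual cue needed. -->
  <p role="alert">Your order has been received.</p>
</main>
```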

Screen readers are incredibly instructive for voice interface designers, but they come with a big rub: they’re challenging to use and relentlessly verbose. Because the visual structures of websites and web navigation don’t translate well into speech, screen readers can produce unwieldy pronouncements that name every manipulable HTML element and announce every formatting change. For many screen reader users, working with web-based interfaces exacts a cognitive toll.

In Wired, accessibility advocate and voice engineer Chris Maury examines why the screen reader experience is ill-suited to users who rely on voice:

From the beginning, I hated the way screen readers work. Why are they designed the way they are? It makes no sense to present information visually and only then translate that into audio. All of the time and effort that goes into creating the perfect user experience for an app is wasted, or even worse, adversely impacts the experience for blind users.

Well-designed voice interfaces can often guide users to their destination more effectively than long-winded screen reader monologues. After all, users of visual interfaces have the luxury of darting their eyes around the viewport, freely ignoring areas irrelevant to them. Blind users, meanwhile, are obligated to listen to every utterance synthesized into speech and therefore prize brevity and efficiency. Users with disabilities who have long had no choice but to use clumsy screen readers might find that voice interfaces, especially more contemporary voice assistants, provide a more streamlined experience.

Voice assistants

When we think of voice assistants (the subset of voice interfaces now commonplace in living rooms, smart homes, and offices), many of us immediately picture HAL from 2001: A Space Odyssey or hear Majel Barrett’s voice as the omniscient computer in Star Trek. Voice assistants are akin to personal concierges that can answer questions, schedule appointments, conduct searches, and perform other common day-to-day tasks. And they’re rapidly gaining more attention from accessibility advocates for their assistive potential.

Even before the earliest IVR systems found success in the enterprise, Apple published a demonstration video in 1987 depicting the Knowledge Navigator, a voice assistant that could transcribe spoken words and recognize human speech to a great degree of accuracy. Then, in 2001, Tim Berners-Lee and others articulated their vision for a “semantic web agent” that would carry out routine tasks like “checking calendars, making appointments, and finding locations” (behind paywall). But voice assistants didn’t become a reality for consumers until 2011, when Apple’s Siri arrived.

Among the plethora of voice assistants available today, there is considerable variation in how programmable and customizable some voice assistants are over others (Fig 1.1). At one extreme, everything but vendor-provided features is locked down. For instance, at the time of their release, the core functionality of Apple’s Siri and Microsoft’s Cortana couldn’t be extended beyond their existing capabilities. Even today, Siri limits developers to predefined categories of tasks, like messaging, hailing rideshares, and making restaurant reservations; there’s no way to communicate with it at a lower level.

At the opposite end of the spectrum, voice assistants like Amazon Alexa and Google Home offer a core foundation on which developers can build custom voice interfaces. For this reason, developers who feel stifled by the limitations of Siri and Cortana are increasingly turning to programmable voice assistants that allow for customization and extensibility. Amazon offers the Alexa Skills Kit, a developer framework for creating custom voice interfaces for Amazon Alexa, while Google Home supports arbitrary programmable Google Assistant skills. Today, users can choose from among thousands of custom-built skills within both the Amazon Alexa and Google Assistant ecosystems.
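For a sense of what that extensibility looks like in practice, here’s a minimal sketch of a custom Alexa skill handler, assuming the Alexa Skills Kit SDK for Node.js (ask-sdk-core). The intent name and the spoken copy are hypothetical; a real skill would also define an interaction model and handle launch, help, and error requests.

```typescript
// A minimal sketch of an Alexa custom skill, assuming the Alexa Skills Kit
// SDK for Node.js (ask-sdk-core). "MenuIntent" and the reply copy are
// hypothetical stand-ins, not part of any vendor-provided skill.
import * as Alexa from 'ask-sdk-core';

const MenuIntentHandler: Alexa.RequestHandler = {
  canHandle(handlerInput) {
    return (
      Alexa.getRequestType(handlerInput.requestEnvelope) === 'IntentRequest' &&
      Alexa.getIntentName(handlerInput.requestEnvelope) === 'MenuIntent'
    );
  },
  handle(handlerInput) {
    // Speak a short, self-contained answer, then keep the session open
    // for a follow-up question (the reprompt).
    return handlerInput.responseBuilder
      .speak('We can make any pie halal on request, and gluten-free crusts are available.')
      .reprompt('Anything else I can help you with?')
      .getResponse();
  },
};

// Wire the handler into a Lambda-compatible entry point.
export const handler = Alexa.SkillBuilders.custom()
  .addRequestHandlers(MenuIntentHandler)
  .lambda();
```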

As businesses like Amazon, Apple, Microsoft, and Google vie to dominate their markets, they’re also releasing and open-sourcing an unmatched range of tools and frameworks for designers and developers, all aiming to make creating voice interfaces as simple as possible, even without writing any code.

Often by necessity, voice assistants like Amazon Alexa tend to be monochannel: they’re tightly coupled to a device and can’t be accessed from a computer or smartphone instead. In contrast, many development platforms, such as Google’s Dialogflow, offer omnichannel capabilities that let you create a single conversational interface and then deploy it as a voice interface, textual chatbot, and IVR system. Because this is a design-focused book, I don’t recommend any particular implementation strategies, but in Chapter 4 we’ll discuss some of the effects these variables might have on how you construct your design artifacts.
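As a rough illustration of that omnichannel idea, here’s a hedged sketch of a Dialogflow ES fulfillment webhook: one piece of response logic serves whichever channel (voice, chatbot, or IVR) the agent is deployed to. The Express setup, endpoint path, intent name, and copy are all assumptions for the example.

```typescript
// A rough sketch of a Dialogflow ES fulfillment webhook using Express.
// Dialogflow posts the matched intent to this endpoint and renders the
// returned fulfillmentText on whatever channel the agent is deployed to.
import express from 'express';

const app = express();
app.use(express.json());

// The "/webhook" path and "HoursIntent" name are hypothetical.
app.post('/webhook', (req, res) => {
  const intent = req.body?.queryResult?.intent?.displayName;

  if (intent === 'HoursIntent') {
    // One response, reusable as synthesized speech, chat text, or an IVR prompt.
    res.json({ fulfillmentText: 'We are open from eleven to ten, every day.' });
  } else {
    res.json({ fulfillmentText: 'Sorry, I did not catch that.' });
  }
});

app.listen(3000);
```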

Voice Content

Simply put, voice content is content that is delivered through voice. Voice content must be free-flowing, organic, contextless, and concise in order to preserve what makes human conversation so compelling in the first place.

Our world is replete with voice content in various forms: screen readers reciting website content, voice assistants rattling off a weather forecast, and automated phone hotline responses governed by IVR systems. In this book, we’re most concerned with content delivered auditorily, not as an option but as a necessity.

For many of us, our first foray into informational voice interfaces will be to deliver content to users. There’s only one problem: the content we already have isn’t in any way ready for this new habitat. How do we make the content on our websites more conversational? And how do we write fresh copy that lends itself to voice interactions?

Lately, we’ve begun slicing and dicing our content in unprecedented ways. Websites are, in many ways, massive vaults of what I call macrocontent: lengthy prose that can scroll on for miles in a browser window, like newspaper archives viewed on microfilm. Back in 2002, well before the present-day ubiquity of voice assistants, Anil Dash defined microcontent as permalinked pieces of content that could be read in any environment, such as email or text messages.

A day’s weather forcast [sic], the arrival and departure times for an airplane flight, an abstract from a long publication, or a single instant message can all be examples of microcontent.

I would update Dash’s definition of microcontent to include all instances of bite-sized content that transcend written communiqués. After all, today we encounter microcontent in interfaces where a small snippet of copy is displayed alone, unmoored from the browser, like a textbot confirmation of a restaurant reservation. Microcontent offers the best way to gauge how your content can be stretched to the limits of its potential, informing both established and novel delivery channels.

Voice content is unique in that it’s experienced in time rather than in space. We can glance at a digital sign in the subway for an instant and know when the next train is arriving, but voice interfaces hold our attention captive for stretches of time that we can’t easily escape or skip, something screen reader users are all too familiar with.

Because microcontent is fundamentally made up of isolated blobs with no relation to the channels where they’ll eventually end up, we need to ensure that it truly performs well as voice content. That means focusing on the two most crucial characteristics of robust voice content: voice content legibility and voice content discoverability.

Fundamentally, both the legibility and discoverability of our voice content have to do with how it manifests in perceived time and space.
