2. Operable
2.5 Input Modalities (Level A)
Guideline 2.5 Input Modalities
WCAG 2.1
Make it easier for users to operate functionality through various inputs beyond keyboard.
Why is it important to allow for multiple input modes?
The keyboard was the first input mode for accessing a computer. The mouse was then introduced, greatly improving navigation through web content, at least for those who could operate a mouse. A range of other mouse-like input devices soon followed, such as trackpads, trackballs, eye-gaze pointers, speech control, and joysticks, all of which mimic mouse functions.
Then, mobile devices with touch screens came along and introduced a new range of input methods — these are typically referred to as gestures. Gestures include physical movements like tapping, double tapping, pressing and holding, swiping, dragging, and so on.
Needless to say, limiting input modes can greatly reduce the number of people who are able to use a site. Fortunately, in many cases, designing for both mouse and keyboard access will also accommodate other input modes. A tap, for instance, is possible on any element that can be clicked with a mouse, and a swipe works much like Tab key navigation. Input modes tend to become limited, however, when developers script custom elements that respond only to particular mouse or keyboard events.
Success Criterion 2.5.1 Pointer Gestures
WCAG 2.1
Level A
All functionality that uses multipoint or path-based gestures for operation can be operated with a single pointer without a path-based gesture, unless a multipoint or path-based gesture is essential.
Note: This requirement applies to web content that interprets pointer actions (i.e., this does not apply to actions that are required to operate the user agent or assistive technology).
Pointer Gestures Explained
With the introduction of smartphones and touchpads, multipoint gestures have become fairly commonplace. A multipoint gesture typically requires two or three fingers to perform. One example is a pinch zoom: placing two fingers on a device screen and spreading them apart to magnify the screen. Some people will not be able to perform such a gesture, so some other means of zooming is needed, such as [-] or [+] buttons that zoom with a single tap or click. Single-point gestures will be more accessible to some users and can often be mimicked with a keypress.
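As a rough sketch of such an alternative, the markup and script below provide single-click zoom buttons. The element IDs, the scale limits, and the use of a CSS transform to magnify the content are all assumptions for illustration, not a definitive implementation:

<button id="zoom-out" aria-label="Zoom out">-</button>
<button id="zoom-in" aria-label="Zoom in">+</button>
<div id="content">Zoomable content here</div>
<script>
// Track the current scale and clamp it to a sensible range
var scale = 1;
function zoom(delta) {
  scale = Math.min(3, Math.max(0.5, scale + delta));
  document.getElementById('content').style.transform = 'scale(' + scale + ')';
}
// Each button zooms with a single tap or click; no pinch gesture is required
document.getElementById('zoom-in').addEventListener('click', function () { zoom(0.25); });
document.getElementById('zoom-out').addEventListener('click', function () { zoom(-0.25); });
</script>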
These are some examples of multipoint and single-point gestures:
Multipoint gestures:
- Two-finger pinch
- Split tap
- Two- or three-finger taps and swipes
Single-point gestures:
- Tap
- Double tap
- Press and hold
- Focus and gaze (eye tracking)
Another type of gesture is a path-based gesture. In the case of a path-based gesture, users draw a pattern to unlock a screen, or they drag a slider thumb to select a value along a particular range. In both cases, some people will not be able to click-and-drag or point-and-drag, so an alternative will be required. For a slider, users should be able to click on any spot along the slider bar and, with that single click, move the slider thumb to the selected position. Users should also be able to control the slider thumb using a keyboard, though keyboard access is covered by SC 2.1.1 Keyboard (Level A).
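A minimal sketch of a custom slider that supports both behaviours might look like the following. The markup, IDs, and step size are hypothetical, and it assumes CSS positions the thumb absolutely within the bar; note that a native input type="range" provides this behaviour without any scripting:

<div id="volume" role="slider" tabindex="0" aria-label="Volume"
     aria-valuemin="0" aria-valuemax="100" aria-valuenow="50">
  <div id="thumb"></div>
</div>
<script>
var slider = document.getElementById('volume');
function setValue(value) {
  value = Math.max(0, Math.min(100, Math.round(value)));
  slider.setAttribute('aria-valuenow', value);
  // Assumes #thumb is absolutely positioned inside #volume via CSS
  document.getElementById('thumb').style.left = value + '%';
}
// A single click anywhere on the slider bar moves the thumb to that spot
slider.addEventListener('click', function (e) {
  var rect = slider.getBoundingClientRect();
  setValue(((e.clientX - rect.left) / rect.width) * 100);
});
// Arrow keys move the thumb for keyboard users (SC 2.1.1)
slider.addEventListener('keydown', function (e) {
  var value = Number(slider.getAttribute('aria-valuenow'));
  if (e.key === 'ArrowRight' || e.key === 'ArrowUp') setValue(value + 1);
  if (e.key === 'ArrowLeft' || e.key === 'ArrowDown') setValue(value - 1);
});
</script>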
Swipe gestures can also be difficult for some people. For example, a photo gallery may use a swipe to navigate from one image to the next. Next and Previous buttons or links can be added to the gallery viewer so those who are unable to swipe can click. With some scripting, those buttons can also be associated with the left and right arrow keys, so those unable to swipe or point and click can operate the gallery with a keypress.
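A hedged sketch of such a gallery follows; the image file names and element IDs are hypothetical:

<button id="prev">Previous</button>
<img id="photo" src="photo1.jpg" alt="Gallery photo 1 of 3">
<button id="next">Next</button>
<script>
var photos = ['photo1.jpg', 'photo2.jpg', 'photo3.jpg'];
var index = 0;
function show(i) {
  index = (i + photos.length) % photos.length; // wrap around at either end
  var img = document.getElementById('photo');
  img.src = photos[index];
  img.alt = 'Gallery photo ' + (index + 1) + ' of ' + photos.length;
}
// Buttons offer a single-pointer alternative to swiping
document.getElementById('next').addEventListener('click', function () { show(index + 1); });
document.getElementById('prev').addEventListener('click', function () { show(index - 1); });
// Arrow keys offer a keyboard alternative for those unable to swipe or click
document.addEventListener('keydown', function (e) {
  if (e.key === 'ArrowRight') show(index + 1);
  if (e.key === 'ArrowLeft') show(index - 1);
});
</script>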
This success criterion applies to web content rather than operating system–level gestures. For example, Android phones may have users draw a pattern to unlock the phone, but a preference setting is available to switch to a code or a fingerprint scanner instead of drawing the pattern. Here, an operating system–level path-based gesture can be replaced with a single-point method of unlocking the phone.
Success Criterion 2.5.2 Pointer Cancellation
WCAG 2.1
Level A
For functionality that can be operated using a single pointer, at least one of the following is true:
- No Down-Event: The down-event of the pointer is not used to execute any part of the function;
- Abort or Undo: Completion of the function is on the up-event, and a mechanism is available to abort the function before completion or to undo the function after completion;
- Up Reversal: The up-event reverses any outcome of the preceding down-event;
- Essential: Completing the function on the down-event is essential.
Note: Functions that emulate a keyboard or numeric-keypad keypress are considered essential.
Note: This requirement applies to web content that interprets pointer actions (i.e., this does not apply to actions that are required to operate the user agent or assistive technology).
Pointer Cancellation Explained
The aim of this success criterion is to prevent accidental pointer input, whether from a mouse click or from a touch gesture, from activating web content. By default, activation of a link or button occurs when the mouse click is released or when a finger is lifted from the screen. In both cases, this gives users an opportunity to abort the click or press by moving the pointer away from the element before releasing it.
In most cases, activation should not occur on the down action (e.g., mousedown, touchstart); rather, it should occur when the action is released (e.g., mouseup, touchend). One exception is a control that pops open a dialog on the down-event and closes it when the up-event occurs. Drag-and-drop elements also activate on the down action, i.e., holding down on the element while it is moved, with the up action occurring when the element reaches its new location. In this case, users should be able to release outside the allowable drop zone to abort, returning the element to its initial location.
There are other conventional behaviours that also rely on a down-event, such as typing letters into a form field or clicking a key on a piano app. In such cases, it would be counter-intuitive to have the action occur on the up-event.
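The sketch below illustrates the up-event and undo approaches. The buttons, IDs, and the deletePhoto/restorePhoto functions are hypothetical placeholders standing in for a real action:

<button id="delete">Delete photo</button>
<button id="undo" hidden>Undo delete</button>
<script>
function deletePhoto() { /* remove the photo; hypothetical placeholder */ }
function restorePhoto() { /* bring the photo back; hypothetical placeholder */ }

// The click event fires on the up-event, so a user can abort by sliding the
// pointer off the button before releasing it. Binding the action to mousedown
// or touchstart instead would execute it on the down-event.
document.getElementById('delete').addEventListener('click', function () {
  deletePhoto();
  document.getElementById('undo').hidden = false;
});
// Offering an undo after completion also satisfies the criterion
document.getElementById('undo').addEventListener('click', function (e) {
  restorePhoto();
  e.currentTarget.hidden = true;
});
</script>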
Success Criterion 2.5.3 Label in Name
WCAG 2.1
Level A
For user interface components with labels that include text or images of text, the name contains the text that is presented visually.
Note: A best practice is to have the text of the label at the start of the name.
Label in Name Explained
This success criterion ensures that people who are using speech input or text-to-speech output are able to draw a connection between what they see on the screen and what their assistive technology reads to them. Assistive technologies read the “accessible name” associated with interface elements, which can be assembled from text gathered from a number of sources, such as a role (e.g., menuitem), state (e.g., enabled), and properties (e.g., haspopup menu) associated with the element, in addition to the text displayed on the screen. The W3C defines “accessible name” as follows:
Definition
Accessible Name: The accessible name is the name of a user interface element. Each platform accessibility API provides the accessible name property. The value of the accessible name may be derived from a visible (e.g., the visible text on a button) or invisible (e.g., the text alternative that describes an icon) property of the user interface element. See related accessible description.
A simple use for the accessible name property may be illustrated by an “OK” button. The text “OK” is the accessible name. When the button receives focus, assistive technologies may concatenate the platform’s role description with the accessible name. For example, a screen reader may speak “push-button OK” or “OK button.” The order of concatenation and specifics of the role description (e.g., “button”, “push-button”, “clickable button”) are determined by platform accessibility APIs or assistive technologies.
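To illustrate, here is a hedged example of a button whose accessible name first conflicts with, and then contains, its visible label. The aria-label attribute is used here as one of several possible sources of the accessible name, and the wording is hypothetical:

<!-- Fails SC 2.5.3: a speech input user who says "Search" cannot activate
     this button because "Search" does not appear in the accessible name -->
<button aria-label="Submit query">Search</button>

<!-- Passes: the accessible name contains (and starts with) the visible text -->
<button aria-label="Search this site">Search</button>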
Someone using speech input through voice recognition software may encounter a form button created using an image, with the word “search” displayed in that image. They can speak the word “search” to activate the button, but only if the alt text for the image matches the text in the button image. In that case, the alt text is the accessible name, joined with the role “button.”
Key Point: By default, screen readers will read the longer of either the text of an element (or its alt text, as in the case above) or the text of the title attribute. As a result, any text content associated with an element should be included in the title text if a title is being used.
On the other hand, a developer might also include additional information in a title attribute on the button, with words such as “enter keywords or phrases.” This text is hidden by default but displays when a mouse pointer hovers over the button. A speech input user still sees the word “search” in the image, but the accessible name is now “enter keywords or phrases.”
Because the visible button text (“search”) and the accessible name (“enter keywords or phrases”) differ, a speech input user who says “search” may be unable to activate the button. In this case, the text in the button image should be prefixed to the title text, producing an accessible name like “Search: enter keywords or phrases.” The user can then speak the word “search” to send focus to the button via its title text.
In the markup for this button, the text associated with the image would appear as follows:
<button title="Search: enter keywords or phrases">
  <img src="search_icon.png" alt="Search">
</button>
Key Point: Common screen readers handle title text in different ways. In the current version of JAWS (version 18 at the time of this book’s release), title text is no longer read. In the past, JAWS would read the longer of the link text or title text by default, and users could choose to have it read the title text, the link text, or the longer of the two through a preference setting. JAWS now reads the link text by default, and although the options to read the title text or the longer of the two are still present, these settings no longer function in the current version.
NVDA, on the other hand, reads both the link text and the title text by default. ChromeVox reads link text but does not read title text on links.
As a result of this inconsistent support for the title attribute, do not use title text on links to convey critical information; apart from NVDA, screen readers will generally not read it.
Screen reader support for HTML title attribute
| Screen reader | Link text | Link title | Link image alt | Link image title |
| --- | --- | --- | --- | --- |
| JAWS | Yes | No* | Yes | No* |
| NVDA | Yes | Yes | Yes | Yes |
| ChromeVox | Yes | No | No** | Yes |
*Despite a setting to read title text, title text is not read by JAWS 18
**When title text is present for linked images, title is read, and alt is not read.
Success Criterion 2.5.4 Motion Actuation
WCAG 2.1
Level A
Functionality that can be operated by device motion or user motion can also be operated by user interface components and responding to the motion can be disabled to prevent accidental actuation, except when:
- Supported Interface: The motion is used to operate functionality through an accessibility supported interface.
- Essential: The motion is essential for the function and doing so would invalidate the activity.
Motion Actuation Explained
Most smartphones today include sensors, such as an accelerometer or gyroscope, that detect motion of the device. For example, shaking the device might act as an undo function. Some phones can also detect user gestures through the device’s camera, such as a hand wave to turn the page of an electronic book.
Some people, however, are unable to move the device (e.g., if it is attached to a wheelchair) or to produce gestures (e.g., if they cannot use their hands), and they will need alternative means of activating these functions. In the case of the undo function, a button alternative can be provided that is pressed while the device remains stationary. In the case of an electronic book, the page might instead be turned by speaking a command, tapping the right side of the screen, or swiping left.
Likewise, it must be possible to disable motion actuation. A user with a shaky hand, for instance, may inadvertently activate motion-based functions and may need to turn motion activation off. For any function activated via motion, an alternative means of activating the function is required that does not involve moving the device or gesturing at it.
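A minimal sketch of these requirements follows, assuming a shake-to-undo feature. The element IDs, the shake threshold, and the undo placeholder are hypothetical, and real shake detection would need to be considerably more robust:

<label><input type="checkbox" id="motion-toggle" checked> Shake to undo</label>
<button id="undo-button">Undo</button>
<script>
var lastX = null;
function undo() { /* reverse the last action; hypothetical placeholder */ }

// Shake detection through the devicemotion event (greatly simplified);
// some browsers require user permission before delivering motion events
window.addEventListener('devicemotion', function (e) {
  // Respect the user's preference: motion actuation can be disabled
  if (!document.getElementById('motion-toggle').checked) return;
  var accel = e.accelerationIncludingGravity;
  if (!accel || accel.x === null) return; // not all devices report acceleration
  if (lastX !== null && Math.abs(accel.x - lastX) > 25) undo();
  lastX = accel.x;
});

// The button is an equivalent that requires no device movement at all
document.getElementById('undo-button').addEventListener('click', undo);
</script>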