2. Operable
Activity 4: Understanding the Limitations of Automated Accessibility Checkers
There are a variety of tools available on the Internet, as well as plugins or add-ons for web browsers, that can be used to test the accessibility of web content. But it is important to know that these tools differ in the accuracy and coverage of what they test.
While automated accessibility checkers are a great way to get a quick review of a website’s accessibility, they cannot be relied upon to identify all potential barriers or even to identify them accurately. Some barriers, particularly those that involve meaning in one way or another, can’t be measured with automated checkers (at least, not with the current state of the technology). Checkers are also not able to determine whether some types of inaccessible content have accessible alternatives.
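To see why "meaning" resists automation, here is a minimal sketch in TypeScript. It is not taken from any actual checker, and the markup samples and helper name are invented for illustration: both images pass a mechanical "has alt text" check, yet only a human can tell that the second alt text is useless.

```typescript
// A minimal sketch (not from any actual checker) of the gap between a
// mechanical check and a judgement about meaning.

// Both images have a non-empty alt attribute, so a typical automated check
// ("does every <img> have alt text?") treats them the same.
const samples: string[] = [
  '<img src="chart.png" alt="Quarterly sales rose 12% from Q1 to Q2">',
  '<img src="chart.png" alt="chart.png">', // passes the check, but tells a screen reader user nothing useful
];

// What a checker can verify mechanically: the attribute exists and is non-empty.
function hasAltText(imgTag: string): boolean {
  return /\balt\s*=\s*"[^"]+"/i.test(imgTag);
}

// Prints "true" twice; judging whether the alt text actually conveys the
// image's meaning still requires a human reviewer.
samples.forEach((tag) => console.log(hasAltText(tag)));
```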
Activity
In this activity, you will look at two popular automated accessibility testing tools, AChecker and WAVE, plus another one of your choosing, and describe what each does and does not identify as a barrier. Some tools are quite clear about what they test and list the checks they run; others do not disclose their checks to the end user, making it difficult to know exactly what is being tested. Some run hundreds of checks, others just a handful. Some checkers can be customized to the needs of each user; others offer little or no customization. The bottom line is that accessibility checkers are not created equal.
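Some checkers also expose their results programmatically, which makes the distinction between automatic and manual checks easy to see. The sketch below uses the open-source axe-core library purely as an illustration (it is not one of the checkers this activity asks you to compare), and it assumes the axe-core script has already been loaded into the page being tested, for example from the browser console.

```typescript
// Illustration only: axe-core is assumed to be loaded on the page under test.
declare const axe: {
  run: () => Promise<{
    violations: { id: string; description: string }[];
    incomplete: { id: string; description: string }[];
  }>;
};

axe.run().then((results) => {
  // Issues the checker reports with confidence ("known" issues).
  console.log(`Violations: ${results.violations.length}`);
  results.violations.forEach((v) => console.log(`- ${v.id}: ${v.description}`));

  // Results the checker could not decide on its own; these correspond to the
  // manual checks a human reviewer still has to perform.
  console.log(`Needs manual review: ${results.incomplete.length}`);
});
```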
Accessibility Checkers
Test Sites (Homepage only)
Other Accessibility Checkers
- Top 25 Awesome Accessibility Testing Tools for Websites
- How do automated accessibility checkers compare?
Requirements
Using the homepages of the two test sites listed above (i.e., Showcase and Lulu’s), compare the reports generated by AChecker, WAVE, and an accessibility checker of your choice (be sure to name it). One way to record your findings is sketched after the questions below.
Answer the following questions:
- How many known accessibility issues does each checker identify on each of the test sites’ homepages?
- Comparing the reports, what did each checker miss that one of the others caught? Provide specific examples.
- How many manual checks did each checker suggest? (Manual checks would be checks a human needs to make.)
- Were there any false positives? (Examples of false positives include: identifying barriers that are not barriers or identifying barriers that have an accessible alternative available.)
- Does the checker list the checks it runs? (This may take a little research or digging around in the settings or options of the application. Also, see the note above that describes what a check is, as opposed to a guideline or success criterion.)
- Based on your experience here with the three checkers, what are your overall thoughts on their accuracy and coverage?
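As mentioned above, here is one optional way to keep your comparison organized, sketched in TypeScript. Every field name and value is hypothetical; replace the placeholder numbers with figures from your own reports.

```typescript
// Optional record-keeping sketch; all fields and values are hypothetical.
interface CheckerReport {
  checker: string;         // e.g. "AChecker", "WAVE", or your third choice
  site: string;            // "Showcase" or "Lulu's"
  knownIssues: number;     // issues the checker flags with certainty
  manualChecks: number;    // checks it defers to a human reviewer
  falsePositives: number;  // flags that, on inspection, were not real barriers
  listsItsChecks: boolean; // does the tool document the checks it runs?
}

const comparison: CheckerReport[] = [
  // Add one entry per checker per homepage as you work through the questions.
  { checker: "AChecker", site: "Showcase", knownIssues: 0, manualChecks: 0, falsePositives: 0, listsItsChecks: true },
];

// A quick side-by-side view once the entries are filled in.
console.table(comparison);
```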