TY - GEN
T1 - SaIL: saliency-driven injection of ARIA landmarks
AU - Aydin, Ali Selman
AU - Feiz, Shirin
AU - Ashok, Vikas
AU - Ramakrishnan, I. V.
N1 - Publisher Copyright: © ACM.
PY - 2020/3/17
Y1 - 2020/3/17
N2 - Navigating webpages with screen readers is a challenge even with recent improvements in screen reader technologies and the increased adoption of web standards for accessibility, namely ARIA. ARIA landmarks, an important aspect of ARIA, let screen reader users access different sections of a webpage quickly by enabling them to skip over blocks of irrelevant or redundant content. However, these landmarks are used sporadically and inconsistently by web developers, and they are absent altogether from numerous web pages. Therefore, we propose SaIL, a scalable approach that automatically detects the important sections of a web page and then injects ARIA landmarks into the corresponding HTML markup to facilitate quick access to these sections. The central concept underlying SaIL is visual saliency, which is determined using a state-of-the-art deep learning model trained on gaze-tracking data collected from sighted users in the context of web browsing. We present the findings of a pilot study that demonstrated the potential of SaIL in reducing both the time and effort spent navigating webpages with screen readers.
AB - Navigating webpages with screen readers is a challenge even with recent improvements in screen reader technologies and the increased adoption of web standards for accessibility, namely ARIA. ARIA landmarks, an important aspect of ARIA, let screen reader users access different sections of a webpage quickly by enabling them to skip over blocks of irrelevant or redundant content. However, these landmarks are used sporadically and inconsistently by web developers, and they are absent altogether from numerous web pages. Therefore, we propose SaIL, a scalable approach that automatically detects the important sections of a web page and then injects ARIA landmarks into the corresponding HTML markup to facilitate quick access to these sections. The central concept underlying SaIL is visual saliency, which is determined using a state-of-the-art deep learning model trained on gaze-tracking data collected from sighted users in the context of web browsing. We present the findings of a pilot study that demonstrated the potential of SaIL in reducing both the time and effort spent navigating webpages with screen readers.
KW - WAI-ARIA
KW - landmarks
KW - screen reader
KW - web accessibility
UR - https://www.scopus.com/pages/publications/85082472741
U2 - 10.1145/3377325.3377540
DO - 10.1145/3377325.3377540
M3 - Conference contribution
T3 - International Conference on Intelligent User Interfaces, Proceedings IUI
SP - 111
EP - 115
BT - Proceedings of the 25th International Conference on Intelligent User Interfaces, IUI 2020
PB - Association for Computing Machinery
T2 - 25th ACM International Conference on Intelligent User Interfaces, IUI 2020
Y2 - 17 March 2020 through 20 March 2020
ER -