In Axi – Accessibility Intelligence, an interview with David O'Neill, we can read:
There is a lot of optimism, enthusiasm and trepidation on the interwebs about the transformative effect Artificial Intelligence (AI) will have on everything from marketing to the creative arts.
I often say that I am not an expert, but I know people who are. This is very much the case in terms of AI. Fortuitously, I have direct access to someone who is steeped in both accessibility knowledge and AI — or should I say Machine Learning (ML)? I am referring to David O’Neill, a Research Fellow at Vispero (the parent company of TPGi). He has a low social media profile but has quietly been driving improvements in automated accessibility testing, and accessibility testing in general, for decades.
In the following interview, we will learn about David, and how he envisions AI/ML to be a force for major improvements in how we tackle the thorny problems of making technology work better for people — all people.
And he explains a concrete case:
First off, it is a common misconception that AI/ML can solve all problems. That simply is not the case. The efficacy of AI/ML is a function of the use-case, availability of applicable models and tasks, and the volume/quality of available data for training and/or semantic search. So, evaluating these use-cases involves fitting them to proven ML tasks and inventorying your data.
Detecting Accessibility Issues is largely a classification task. Today, we perform accessibility issue detection in a highly deterministic manner. We have functions that accept code as an input and use rule-based logic to assert a “pass or fail” outcome. We can say that the following code snippet is not accessible because it is an image with no ALT text attribute: <img src="someimage.jpg">. The lack of ALT text is easy to detect, which allows us to classify the <img> element as “Inaccessible” with 100% confidence.
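To make the deterministic approach concrete, here is a minimal sketch of such a rule-based check, written with Python's standard html.parser. The class name, the rule it encodes, and the sample input are our own illustration, not TPGi's actual implementation:

```python
from html.parser import HTMLParser

class MissingAltChecker(HTMLParser):
    """Rule-based check: flag <img> elements that lack an alt attribute."""

    def __init__(self):
        super().__init__()
        self.violations = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attr_names = {name for name, _ in attrs}
            if "alt" not in attr_names:
                # The rule fires: we can classify this element as
                # "Inaccessible" with 100% confidence, no model required.
                self.violations.append(dict(attrs))

checker = MissingAltChecker()
checker.feed('<p>Hello</p><img src="someimage.jpg">')
print(checker.violations)  # [{'src': 'someimage.jpg'}]
```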
Can we do that specific test better with ML using a probabilistic model? And if so, is it worth it? The answer is probably not. We can train a classifier on a million code examples of images that are not accessible, but we have no guarantee that it will learn enough to predict “Inaccessible” on future examples with 100% accuracy. A simple rule of thumb is to use conventional deterministic algorithms whenever you can – provided the algorithms work, of course! The reasoning is simple: why trade a sure, easy thing for a hard and costly result that only has some “probability” of being correct?
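For contrast, here is a hedged sketch of what the probabilistic alternative might look like: a toy text classifier built with scikit-learn (our choice of library; the interview names no tooling) trained on a few invented snippets. Unlike the rule above, it can only ever return a probability:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Toy, invented training data: 1 = inaccessible, 0 = accessible.
snippets = [
    '<img src="a.jpg">',
    '<img src="b.png">',
    '<img src="c.gif" alt="A chart of quarterly sales">',
    '<img src="d.jpg" alt="Company logo">',
]
labels = [1, 1, 0, 0]

# Character n-grams let the model pick up fragments like 'alt='.
vectorizer = CountVectorizer(analyzer="char", ngram_range=(3, 5))
X = vectorizer.fit_transform(snippets)

model = LogisticRegression().fit(X, labels)

test = vectorizer.transform(['<img src="someimage.jpg">'])
# predict_proba returns a probability of being inaccessible,
# never the 100% certainty the deterministic rule gives for free.
print(model.predict_proba(test)[0][1])
```

Even on this trivially easy example, the model's output is a confidence score rather than a guarantee, which is precisely the trade-off the interview warns against.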
The key point here is that there is no benefit in using ML with fuzzy results and <100% accuracy on tasks that already have 100% accuracy with a traditionally programmed, non-learned algorithm.