AI’s black box myth

Monica Spisar
Mar 6, 2018

AI doesn’t just ‘happen’. Data is fed in, and an algorithm deterministically transforms that input into an output. There is no magic.

So why the confusion? Even Geoffrey Hinton has called AI a black box.

I’m not an expert (yet), but AI algorithms pass input through systems that always produce the same result for a given input. There’s no quantum-mechanics-like uncertainty at play here.
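
To make that concrete, here is a minimal sketch of the point, assuming nothing more than numpy; the weights, bias, and input below are invented for illustration, not taken from any real model. Evaluate the same tiny network twice on the same input and you get identical outputs.

```python
import numpy as np

# A toy one-layer "network" with fixed, made-up weights.
W = np.array([[0.2, -0.5],
              [0.8,  0.1]])   # weight matrix (illustrative values)
b = np.array([0.1, -0.3])     # bias vector (illustrative values)

def forward(x):
    # One dense layer followed by a ReLU: max(0, Wx + b)
    return np.maximum(0.0, W @ x + b)

x = np.array([1.0, 2.0])
print(forward(x))                              # [0.  0.7]
print(np.array_equal(forward(x), forward(x)))  # True: same input, same output
```

Run it as often as you like; with the weights fixed, the mapping from input to output never changes.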

The systems do respond to inputs and adjust their behavior accordingly. That’s not necessarily mysterious. True, there’s more complexity than a passing glance can untangle, but the underlying math is (as I believe Google’s Francois Chollet has emphasized) no more complicated than high school calculus & algebra.
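
And the ‘high school math’ claim isn’t an exaggeration. Here’s a sketch of the core idea in the smallest possible case; the single-neuron setup and all the numbers are mine, invented for illustration. The model itself is a weighted sum (algebra), and the learning step is one application of the chain rule (calculus).

```python
# One neuron, one training example; all values are illustrative.
w, b = 0.5, 0.0      # parameters
x, y = 2.0, 3.0      # input and target

pred = w * x + b         # the model: a line (algebra)
loss = (pred - y) ** 2   # squared error

# Chain rule (calculus): dL/dw = 2*(pred - y)*x, dL/db = 2*(pred - y)
dw = 2 * (pred - y) * x
db = 2 * (pred - y)

# One gradient-descent step: nudge the parameters downhill.
lr = 0.1
w -= lr * dw   # 0.5 - 0.1*(-8) = 1.3
b -= lr * db   # 0.0 - 0.1*(-4) = 0.4
print(w * x + b)  # 3.0: one step lands on the target in this toy case
```

Scale that neuron up into millions of them and the arithmetic stays the same; only the bookkeeping grows.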

The pieces linked below don’t quite address the fallacy of the black box narrative head-on. Rather, they focus on the reality that a result is often useful only when accompanied by context and justification, i.e., an explanation of the reasoning that informed the conclusion. Whether that reasoning is acceptable is another matter.

Google attempts to get AI to explain itself (the actual work)

The Next Web Article: Bye Bye Black Box

Multimodal Explanations: Justifying Decisions and Pointing to the Evidence
