Using machine learning hides what the software is really doing, but maybe that’s nothing new…

Artificial Intelligence (AI), and in particular machine learning, is getting no shortage of mainstream media coverage these days. A perfect storm of hype, mystery, Hollywood treatment, and quirky billionaires is discombobulating the technology-consuming public. Will a robot baby missing part of its skull take your job? Is that utopia just around the corner, or is it the day you-know-what becomes self-aware?

MIT Technology Review and Scientific American both recently ran articles on the “black box” of machine learning. They make a fair point: machine learning by its very nature hides how the software is solving problems. Instead of thinking really hard about exactly how an algorithm should work and encoding exact rules into a chunk of software, the developer picks a set of general models and algorithms and “shows” the computer roughly what should happen. From there, it’s all extrapolation and guesswork on the computer’s part, usually with a bit of a nudge in the right direction from the humans. The MIT article ends with a warning: “If it can’t do better than us at explaining what it’s doing,” the author says, “then don’t trust it.”
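To make that contrast concrete, here is a minimal, hypothetical sketch of the two approaches applied to a toy spam filter. It is not taken from either article; the example messages, labels, and the use of scikit-learn are all assumptions made purely for illustration.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# 1. "Encode exact rules": every decision is spelled out by hand and easy to inspect.
def is_spam_by_rules(message: str) -> bool:
    text = message.lower()
    return "winner" in text or "free money" in text or text.count("!") > 3

# 2. "Show the computer roughly what should happen": supply labelled examples
#    and let a general-purpose model extrapolate from them.
examples = [
    ("You are a WINNER, claim your free money!!!", 1),
    ("Lunch at 12 tomorrow?", 0),
    ("Free money is waiting for you, act now", 1),
    ("Can you review my pull request today?", 0),
]
texts, labels = zip(*examples)

model = make_pipeline(CountVectorizer(), LogisticRegression())
model.fit(texts, labels)  # the human "nudge" lives in the labels, not in explicit rules

test_message = "winner! claim your free money now"
print(is_spam_by_rules(test_message))    # True, and we can point to the exact rule that fired
print(model.predict([test_message])[0])  # most likely 1, but the "why" is buried in learned weights
```

The rule-based version can always explain itself; the learned version extrapolates from examples, which is precisely the opacity the articles are worried about.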

The good news (and bad news) is that I’m pretty sure we’ve already got to that place without any help from AI! I’ve been building software for a while, and by “building”, I mean working with software that for the most part other people wrote, often a long time ago. Trying to determine exactly how it’s going to behave in every circumstance is a prohibitively expensive (i.e. impossible) task. Layers of complexity within systems and between systems create something that is far from deterministic and predictable. Back at the dawn of software, it was sometimes cost-effective and practical to understand exactly how a piece of software was working (reading through printouts and punch cards). Today, in safety-critical applications that do very specific jobs, it is still possible to verify exactly what a program is doing. But the large, complex applications we use online today are just as opaque to the people creating them as any machine learning masterpiece. Black boxes are everywhere – not just in AI.

At Ambit we’re building conversational user experiences, taking advantage of machine learning to do natural language processing. We don’t have to concern ourselves with the details of how our models figure out what users are saying, and that gives us a big advantage in reducing the effort (and the cost to our customers) of building the bots.
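For readers curious what that pattern looks like from a developer’s side, here is a minimal, hypothetical sketch of intent classification. It is deliberately not Ambit’s actual stack; the intent names, example utterances, and the scikit-learn pipeline are all invented for illustration.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Example utterances the bot builder writes for each intent (all invented).
training_utterances = {
    "check_balance": [
        "what's my balance",
        "how much money do I have",
        "show me my account balance",
    ],
    "opening_hours": [
        "when are you open",
        "what are your opening hours",
        "are you open on Sunday",
    ],
    "talk_to_human": [
        "let me speak to a person",
        "I want to talk to a human",
        "transfer me to an agent",
    ],
}

texts, intents = [], []
for intent, utterances in training_utterances.items():
    texts.extend(utterances)
    intents.extend([intent] * len(utterances))

# A general-purpose model does the "figuring out what users are saying".
classifier = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
classifier.fit(texts, intents)

print(classifier.predict(["are you open tomorrow morning"])[0])  # expected: opening_hours
```

In a sketch like this, the bot builder’s job reduces to writing example utterances and responses for each intent; the model’s internals stay a black box, which is exactly the trade-off the rest of this post is about.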
The robots are coming – but there’s nothing to be scared of – they just like to chat. The humans at Ambit like to chat too, see below!


By Gareth Cronin, CTO of Ambit

Gareth Cronin is Ambit’s CTO and is currently the GM Product at Xero. He is a former consultant to Air New Zealand, Vista Entertainment, Wynyard Group, Orion Health and others. To find out more about Gareth Cronin, visit his LinkedIn page here.




2 Comments

Akhil · August 23, 2017 at 4:56 pm

Nice comparison between existing/legacy software and AI. At least with AI the data can often be modelled and reasoned about, which is sometimes tough with complex legacy systems.

Steve · October 25, 2017 at 12:29 pm

Hi Gareth, thanks for writing the article above. However, I do think there is too much of an inclination to dismiss the “SKYNET” scenario without an in-depth discussion of the issues and facts. Personally, I am concerned about the implications of AI and Lethal Autonomous Weapons Systems (LAWS). The UN is concerned enough about LAWS for it to initiate debate on the subject (https://www.un.org/disarmament/geneva/ccw/background-on-lethal-autonomous-weapons-systems/). Likewise, the Belfer Centre at Harvard University recently made some sensible recommendations in relation to AI and National Security. Worth a read – even if it’s just the 11 recommendations in the Executive Summary: https://www.belfercenter.org/publication/artificial-intelligence-and-national-security

Like any technology, AI has the potential for good and evil. We just need to ensure we have an open, informed and robust understanding and discussion of the issues and risks.

Keep up the good work, driving the discussion and debate around AI.

Cheers
Steve
