Joi Ito posted an interesting remark about the VW story on Facebook. With the increased use of machine learning algorithms, computers try to optimise results. Those results can be great for operating the machine, but they can also have side effects.
"There is a thread over email with various people right now about how just auditing the code will not be enough, since with machine learning you don't actually 'program' the rules; the machine learns them. If a machine optimizes in a way that breaks a rule, is it the programmer's fault, and how do you detect it? I think that how and with what data we train AIs is going to be an exceedingly important way to manage things, from something as relatively straightforward as breaking laws all the way to ethics."
The code used during the VW emissions check probably didn't have anything to do with machine learning, though. It's a very simple check.
The software was relatively straightforward: during an emissions test, the wheels of a car spin, but the steering wheel doesn't. No turning or jostling of the steering column indicates the car isn't out on a normal drive and that an emissions test is underway. That activated a defeat device that limited the harmful gas emitted by the car, allowing it to pass the test.
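To illustrate just how simple such a check can be, here is a minimal sketch in Python of the logic described above. This is not VW's actual code; the function names, signal names, and thresholds are all hypothetical, chosen only to show the shape of the idea: wheels turning while the steering column stays perfectly still suggests a test bench rather than a real road.

```python
# Hypothetical illustration only -- not VW's actual software.
# Assumed inputs: wheel speed in km/h and recent steering-angle
# change in degrees, both invented names for this sketch.

def emissions_test_suspected(wheel_speed_kmh: float,
                             steering_change_deg: float) -> bool:
    """Spinning wheels with a perfectly still steering wheel
    hint at a dynamometer (emissions test), not a normal drive."""
    return wheel_speed_kmh > 0 and steering_change_deg == 0.0

def engine_calibration(wheel_speed_kmh: float,
                       steering_change_deg: float) -> str:
    # Switch to a cleaner calibration only while a test is suspected.
    if emissions_test_suspected(wheel_speed_kmh, steering_change_deg):
        return "low_emissions_calibration"
    return "normal_calibration"
```

The point is that no learning is involved: a couple of sensor comparisons are enough, which is why a plain code audit could in principle have caught it.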
But with machines getting smarter and running their own optimisation tricks, who's to blame when the machine makes a choice that's probably completely rational for the machine, yet against society's values?
More on this story at Fusion as well.