When reading articles about “autonomous” vehicles, one phrase comes up repeatedly. Some bad behavior just happened, and a software source will say something to the effect that “this is just an edge case” for the self-driving platform. The implication is that nearly everything works, except for this small case on the edge of… the big picture?
Calling something an edge case gives the impression that the problem is nearly solved, except for an untidy bit here and maybe there. How realistic is that? Is an autonomous taxi in San Francisco operating in a closed system that can be characterized and solved? How about an autonomous car on an interstate highway or a rural cow path? Can nearby vehicles, pedestrians, and pets still behave in surprising ways, outside the bounds of the programming? What about Mother Nature, with changing weather, rock slides, hail, high winds, downed trees, downed power lines¹, and so on? The list is effectively endless. There are not just a handful of edge cases; there are probably a million.
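To get a feel for why “a million” is not hyperbole, here is a back-of-the-envelope sketch. Every category count below is an illustrative assumption, not measured data; the point is only that independent scenario dimensions multiply.

```python
# Back-of-the-envelope count of distinct driving scenarios.
# All category counts are illustrative assumptions, not real data.
from math import prod

dimensions = {
    "road type":      6,   # interstate, city street, rural road, ...
    "weather":        8,   # clear, rain, snow, hail, fog, high wind, ...
    "road hazard":   10,   # none, rock slide, downed tree, downed power line, ...
    "other actors":  12,   # pedestrians, pets, cyclists, emergency vehicles, ...
    "lighting":       4,   # day, night, dusk, low-sun glare
    "actor behavior": 5,   # normal, erratic, signaling, stopped, reversing
}

scenarios = prod(dimensions.values())
print(f"{scenarios:,} combinations")  # 115,200 combinations
```

Add just two more dimensions of similar size and the product passes a million, which is the point: “edge cases” form a product space, not a short punch list.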
Another thing you read in the articles about autonomous car misbehavior is that the people around them (sometimes firefighters who need the autonomous vehicle to get out of their way) first try to signal to the driver before realizing that there is no one in the driver’s seat. Think about how many of your interactions when driving involve making eye contact with another driver or a pedestrian. The situation is a little ambiguous, so you negotiate the correct move by catching someone’s eye and perhaps gesturing with your hands (and I mean constructive gesturing). This is completely missing in the case of autonomous vehicles. A similar situation is when you need to roll down your window to take instructions from an emergency worker or police officer at a roadblock. No wonder emergency workers have such qualms about the autonomous taxis that have been foisted upon them. Some have actually had to break a taxi’s window to get it to stop.
What autonomous driving needs to succeed is the reasoning power of a human being, to handle the cases that crop up and cannot all be foreseen. It needs what we now call (for marketing purposes, and for kick-the-can-down-the-road purposes) artificial general intelligence (AGI). This guy thinks that AGI is a prerequisite for satisfactory autonomous cars. This requirement is also clear from the Rebooting AI book, as well as more recent comments on Gary Marcus’s blog. In fact, I had been thinking about commenting on “edge cases” when Marcus posted a blog entry that was all about edge cases. For me, it validated that I understood the points in his book loud and clear, and they continue to be the salient points.
- As a programmer, just thinking about the downed power line case strikes me as incredibly complex. You see a cable in the road and recognize that it is not a tree limb; maybe you can see that it leads up to a utility pole. It might be sparking, but it might not. Depending on what you can see and infer, you make a decision to mitigate the risk. There might be another driver stopped in front of you who has decided to back away from it, and needs you to back up, too. I could go on and on…
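As a rough illustration of the branching the footnote describes, here is a hypothetical and drastically simplified decision sketch. Every field and predicate below is an assumption standing in for a hard perception or reasoning problem; none of this reflects any real autonomous-driving API.

```python
# Hypothetical decision sketch for the downed-cable scenario.
# Each observation field stands in for a hard, unsolved perception task.
from dataclasses import dataclass

@dataclass
class Observation:
    object_is_cable: bool        # vs. a tree limb
    leads_to_utility_pole: bool  # visible connection to a pole
    is_sparking: bool
    car_ahead_reversing: bool    # another driver backing away from it

def decide(obs: Observation) -> str:
    if not obs.object_is_cable:
        return "treat as debris: slow down, steer around if clear"
    # Looks like a cable; infer whether it is likely a live power line.
    likely_power_line = obs.leads_to_utility_pole or obs.is_sparking
    if obs.car_ahead_reversing:
        # The stopped driver ahead needs room; we must back up too.
        return "reverse to make room, then reroute"
    if likely_power_line:
        return "stop well short, reroute, report hazard"
    return "stop and wait for more information"
```

Even this toy version ignores eye contact, hand signals, emergency workers, and everything else the article raises, and it already has more branches than fit comfortably in one function.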