While suffering through the usual air travel woes recently, I felt compelled to tweet my feelings:
Although I tend to agree with Paul that an outsider probably does not have enough information to decide whether the actions he or she sees are reasonably good given the situation (especially in the incredibly complex world of airline operations), I like Matthew’s general idea of an experiment to identify whether the outcome of a black-box decision-making process is “good”.
Can that be done? Can we observe a black-box situation (or process) long enough to be able to tell whether the analytic machine inside the box could do better?
A large number of people carry smartphones these days, with constant connectivity to the internet. Twitter, Facebook, and FourSquare (to name a few) can determine our location, and other apps tell us which of our friends (and even non-friends) are close by. With all of this connectivity and location awareness, we can think of human beings as sophisticated sensors that collect and share information. We see a fire, a car accident, a traffic jam, an arrest, a fight, and immediately share that information with our network. In addition, human sensors are much better than electronic sensors because they can detect and interpret many other things, such as the mood in a room (after the airline changes your gate for the third time), the meaning of an image, and so on.
Consider a hypothetical situation in which a crowded venue has to be evacuated for whatever reason. Perhaps some exits will be blocked, and people will be directed to go to certain places or act a certain way. Human observers may notice a problem with the way security is handling the situation from multiple locations inside the venue, and from multiple points of view. The collection of such impressions (be they tweets, Facebook status updates, or something else) may contain clues to what’s wrong with the black-box evacuation procedure devised for that venue. For example: “avoid using the south exit because people exiting through there bump into those coming down the stairs from the second floor and everyone has to slow down quite a bit.”
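To make the idea a bit more concrete, here is a minimal sketch of how one might aggregate those crowd impressions to flag a trouble spot. Everything here is an illustrative assumption on my part: the report data, the keyword list, and the threshold are all made up, and a real system would obviously need far more sophisticated text analysis.

```python
from collections import Counter

# Hypothetical reports: (location, text) pairs gathered from a social feed.
reports = [
    ("south exit", "bottleneck where the stairs meet the exit, everyone slowing down"),
    ("south exit", "huge slowdown at the south exit, people bumping into each other"),
    ("north exit", "moving smoothly"),
    ("south exit", "stuck near the stairs, crowd merging badly"),
]

# A crude, assumed vocabulary of "something is wrong" words.
PROBLEM_WORDS = {"bottleneck", "slowdown", "slowing", "stuck", "bumping"}

def flag_problem_spots(reports, min_reports=2):
    """Count reports per location that mention a problem keyword and
    return the locations exceeding the threshold, most-reported first."""
    counts = Counter(
        loc for loc, text in reports
        if PROBLEM_WORDS & set(text.lower().split())
    )
    return [loc for loc, n in counts.most_common() if n >= min_reports]

print(flag_problem_spots(reports))  # the south exit stands out
```

The point is not the code itself but the pattern: many independent, localized human observations, aggregated simply, can surface a flaw in the procedure that no single observer (and perhaps no one inside the black box) would see on their own.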
In a world where Analytics and OR specialists struggle to convince companies to try new ideas, could this kind of evidence/data be used to foster collaboration? “I noticed that you did X when Y happened. It turns out that if you had done Z, you’d have achieved a better outcome, and here’s why…”
Is the airline example really too complicated to be amenable to this kind of analysis? I’m not sure. But even if it is, there may be other situations in which a social network of human sensors can collect enough information to motivate someone to open that black box and tinker with its inner workings a little bit. Those of you working in the area of social networks might be aware of existing work along the lines of what I (with inspiration from Matthew) have proposed above. If that’s the case, I’d love to read more about it. Please let me know in the comments.