Over the years I’ve heard people repeat the idea that Google is an “objective search engine” because they allow their algorithms to reflect “the voice of the web.” This sounds great in theory, but in practice it implies two things:
- There is such a thing as “the voice of the web.”
- Google can be a perfect mirror of that voice.
Here’s a video in which a few Google insiders talk about how Google Search works. At one point the moderator (Danny Sullivan) asks if Google ever manually tweaks results. He brings up the time when the top result for the query “Jew” was an antisemitic site. Amit Singhal (in charge of Google’s ranking algorithm) responds (jump to minute 41):
Singhal refers to a principle that Google holds dear: they would not manually (emphasis mine) promote, demote, or remove results even if their judgment says that their algorithms are doing the wrong thing. However, he goes on to explain how in that case the algorithm was clearly wrong, so they fixed it.
Think about this for a second. What does it mean for the algorithm to be wrong? In this case, Google was in the spotlight because of a controversial query. It so happens that they agreed with those who complained about the result. Are those people the voice of the web? He then moves on to other cases in which the judgment is not so “black and white.” In those cases, they just let the algorithm do its thing rather than privilege one point of view over another.
Here’s the catch: everything described above could be done manually. Putting the algorithms in charge is a cop-out; what it accomplishes is removing responsibility from any single human being. As we all know, though, human beings are behind the algorithms, and they (we) are able to steer them. We can justify our algorithm’s settings by announcing to the world that they were automatically optimized to maximize the satisfaction of human judges. Well then, who are those human judges, and how were they chosen? How do you choose a set of human judges who perfectly represent the “voice of the web”? Moreover, could you (even inadvertently) prime those humans to choose the parameters you agree with?
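To see why the judges matter so much, here is a minimal, hypothetical sketch (toy data, invented names; nothing here is Google’s actual pipeline) of what “automatically optimized to maximize the satisfaction of human judges” amounts to: search for the ranking weights that please the panel most.

```python
# Hypothetical sketch: "letting the algorithm decide" still means
# optimizing against whatever the chosen panel of human judges preferred.

from itertools import product

# Toy feature vectors for two candidate results on one query:
# (relevance_score, popularity_score). Names are illustrative only.
results = {
    "result_a": (0.9, 0.2),
    "result_b": (0.4, 0.8),
}

# Labels from a panel of human judges: which result each judge preferred.
# Swap in a differently composed panel and the "optimal" weights change.
judge_preferences = ["result_a", "result_a", "result_b"]

def rank_top(weights):
    """Return the result with the highest weighted score."""
    w_rel, w_pop = weights
    return max(results, key=lambda r: w_rel * results[r][0] + w_pop * results[r][1])

# Grid-search the weights that maximize judge satisfaction.
best_weights, best_satisfied = None, -1
for w_rel, w_pop in product([i / 10 for i in range(11)], repeat=2):
    top = rank_top((w_rel, w_pop))
    satisfied = sum(1 for pref in judge_preferences if pref == top)
    if satisfied > best_satisfied:
        best_weights, best_satisfied = (w_rel, w_pop), satisfied

print(f"weights {best_weights} satisfy {best_satisfied}/{len(judge_preferences)} judges")
```

The punch line is that nothing in this loop is “the voice of the web”: the output is entirely a function of who sits on the panel.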
Even if you could have a perfectly fair process to maximize the satisfaction of a representative set of individuals of all ethnicities, languages, geographies, and beliefs, it all goes out the window the moment you decide that your algorithm is “wrong” because it doesn’t agree with your core values. This raises the question: for how many queries is the algorithm wrong that nobody at Google feels strongly enough about to fix?
Google knows that their search results reflect their opinion, and they have publicly admitted it. It’s interesting to see someone like Amit Singhal being disingenuous about it many months after that admission, playing up the fact that nobody does anything manually as a dearly held principle. The implication is that if you have an algorithm to “decide what’s right,” then it’s all good, even if you have to “fix it” once in a while when it’s “obviously wrong.” I believe that Singhal has no intention of misleading the public, and that this is just a blind spot in the engineering mind.
Those of us who work in search must be honest with ourselves about issues like this one. Whether we like it or not, search results influence public opinion enormously, perhaps more than any other source of information in the world. This is the great responsibility that comes with great power.