A few days ago I saw DHH’s tweet storm about how he and his wife were given wildly different credit limits on their Apple credit cards. When they tried to find out why, the answer Apple’s support gave them was “sorry, it’s the algorithm.” The problem with this response is that it reveals nothing about the real reasons, and it highlights a relatively recent legal loophole.
If you actively enact a discriminatory policy, that is illegal: someone can point at your policy and sue you for discrimination. However, there is nothing stopping an organization from encoding discrimination into an algorithm. This is in fact fairly easy to do, as Cathy O’Neil explains in this TED Talk.
If an organization does this and gets caught, all it needs to do is act surprised and promise to look into it. Nobody is going to scrutinize its data or code. Unlike a published policy, data and algorithms are opaque and often secret. This needs to change. We do not need access to the data or the code: we can feed a variety of cases to the algorithm (for example, a million artificial person profiles) and observe what it does.
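Here is a minimal sketch of what such a black-box audit could look like. The `credit_limit_model` function below is entirely hypothetical, a stand-in for the opaque system under test (in a real audit you would call the organization’s API instead); the audit pairs artificial profiles that are identical in every field except the protected attribute, so any difference in output is attributable to that attribute alone:

```python
import random

# Hypothetical stand-in for the opaque model being audited.
# Note the deliberately encoded bias, to show what the audit can surface.
def credit_limit_model(profile):
    base = profile["income"] * 0.2 + profile["credit_score"] * 10
    if profile["gender"] == "F":
        base *= 0.5
    return round(base)

def audit(model, n=10_000, seed=0):
    """Average credit-limit gap between paired male/female profiles."""
    rng = random.Random(seed)
    total_gap = 0
    for _ in range(n):
        # Two artificial profiles, identical except for the protected attribute.
        shared = {
            "income": rng.randint(20_000, 200_000),
            "credit_score": rng.randint(300, 850),
        }
        limit_m = model({**shared, "gender": "M"})
        limit_f = model({**shared, "gender": "F"})
        total_gap += limit_m - limit_f
    return total_gap / n

print(f"average M-F limit gap: {audit(credit_limit_model):.2f}")
```

Even without seeing a single line of the model’s code, a large positive gap on otherwise identical profiles is strong evidence that the protected attribute is driving the decision.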
Until the law forces organizations to prove that their algorithms are fair, this loophole will remain open. If you run into this situation yourself, remember this post: make the situation public and ask for transparency. Ping me on Twitter if you need help!