If only all algorithmic bias were as easy to spot as this: FaceApp, a photo-editing app that uses a neural network to edit selfies in a photorealistic style, has apologized for building a racist algorithm.
The app lets users upload a selfie or a photo of a face, and offers a series of filters that can then be applied to the image to subtly or radically alter its appearance; its appearance-shifting effects include aging and even changing gender.
The problem is the app also included a so-called hotness filter, and this filter was racist. As users noticed, the filter was lightening skin tones to achieve its supposed beautifying effect. You can see the filter pictured above in a before and after shot of President Obama.
In an emailed statement apologizing for the racist algorithm, FaceApp's founder and CEO Yaroslav Goncharov told us: "We are deeply sorry for this unquestionably serious issue. It is an unfortunate side-effect of the underlying neural network caused by the training set bias, not intended behaviour. To mitigate the issue, we have renamed the effect to exclude any positive connotation associated with it. We are also working on the complete fix that should arrive soon."
As the Guardian noted earlier, the app has had a surge in popularity in recent weeks, which is perhaps what prompted FaceApp to realise the filter had a problem.
FaceApp has temporarily changed the name of the offending filter from hotness to spark, although it would have been smarter to pull it from the app altogether until a non-racist replacement was ready to ship. Perhaps they're being distracted by the app's moment of viral popularity (it's apparently adding around 700,000 users per day).
While the underlying AI tech powering FaceApp's effects includes code from some open-source libraries, such as Google's TensorFlow, Goncharov confirmed to us that the data set used to train the hotness filter is its own, not a public data set. So there's no getting away from where the blame lies here.
Frankly it would be hard to come up with a better (visual) illustration of the risks of bias being embedded within algorithms. A machine learning model is only as good as the data it's fed, and in FaceApp's case the Moscow-based team clearly did not train their algorithm on a diverse enough data set. We can at least thank them for illustrating the lurking problem of underlying algorithmic bias in such a visually impactful way.
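The "training set bias" Goncharov blames is also one of the easier failure modes to check for before shipping. As a purely illustrative sketch (this is not FaceApp's code; the function, group labels, and threshold below are all hypothetical), a few lines of Python can flag a training set whose demographic groups are badly skewed:

```python
from collections import Counter

def audit_group_balance(labels, tolerance=0.5):
    """Flag demographic groups that are badly under-represented.

    `labels` is a hypothetical list of group tags, one per training
    image. A group is flagged if its share of the data falls below
    `tolerance` times the share it would have under a uniform split.
    """
    counts = Counter(labels)
    total = sum(counts.values())
    uniform_share = 1 / len(counts)
    return {
        group: count / total
        for group, count in counts.items()
        if count / total < tolerance * uniform_share
    }

# A skewed sample: a "beautifying" filter trained on data like this
# will simply learn that the over-represented group is the target look.
sample = ["group_a"] * 900 + ["group_b"] * 80 + ["group_c"] * 20
print(audit_group_balance(sample))
# -> {'group_b': 0.08, 'group_c': 0.02}
```

A check this crude won't catch every bias a model can absorb, but skipping even basic audits of the training data is how a "hotness" filter ends up equating beauty with lighter skin.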
With AI being handed control of more and more systems, there's a pressing need for algorithmic accountability, for algorithms to be properly interrogated, and for robust processes to be developed to avoid embedding human biases into our machines. Autonomous tech does not mean immune to human flaws, and any developer that tries to claim otherwise is trying to sell you a lie.