Machine learning continues to pick up on our prejudices
Can we ever develop AI that is not sexist or racist? Sexist, racist machines are, unfortunately, something we may have to deal with. Because they are built by tech companies that rely on interactions across social platforms, machine learning technologies keep picking up our biases and reflecting them back at us.

Last month, a team of researchers from Boston University and Microsoft Research devised an algorithm capable of identifying stereotypes in writing and had it analyze a three-million-word corpus of Google News stories for signs of gender bias. The results were as bad as one would expect: occupation associations showed that women were disproportionately billed as “hairdresser”, “socialite” or “nanny”, while men were associated with “maestro”, “skipper” or “protégé”. The team analyzed the same Google News corpus for signs of racial bias and found similar results.

Features built on machine learning that studies our behaviour could help perpetuate these prejudices. A search engine might rank content written by men above content written by women, depriving users of potentially more useful material purely out of acquired sexism. And then, of course, there was Tay, Microsoft’s neo-Nazi Twitter chatbot.

Researchers from the team who spoke to Vice’s Motherboard are unfazed, and believe that work like theirs will help correct these trends in machine learning, which are much easier to fix than human prejudices.
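For readers curious how such associations can surface, here is a minimal sketch that probes a publicly available word embedding trained on Google News for gendered analogies. This is not the researchers’ actual method, only an illustration of the general idea; the use of the gensim library and the "word2vec-google-news-300" model here is an assumption made purely for the example.

# Illustrative sketch (assumed setup, not the study's own code):
# probe a pre-trained Google News word embedding for gendered
# occupation analogies such as "man is to X as woman is to ___?".
import gensim.downloader as api

# Downloads the 3-million-word Google News embedding (~1.6 GB).
model = api.load("word2vec-google-news-300")

# "man is to doctor as woman is to ___?"
print(model.most_similar(positive=["doctor", "woman"], negative=["man"], topn=3))

# "he is to programmer as she is to ___?"
print(model.most_similar(positive=["programmer", "she"], negative=["he"], topn=3))

Analogies like these are a common way to surface the kinds of stereotyped associations the study describes, since the nearest-neighbour answers reflect whatever co-occurrence patterns the news corpus contained.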