Racial bias in natural language processing


Our research into racial bias in natural language processing found that introducing the technology into government, in its current form, risks excluding the needs and opinions of people of colour.

We reviewed the current academic literature and interviewed experts in natural language processing. We concluded that, should governments widely adopt natural language processing systems, there is a risk of racial bias in three areas: racial prejudice embedded in the language of training data; weaknesses in filters designed to catch racist language; and algorithms' inability to handle linguistic variation.

Read our report here.