Microsoft Researchers Say NLP Bias Studies Must Consider Role of Social Hierarchies Like Racism


    As the recently released GPT-3 and several recent studies demonstrate, racial bias, along with bias based on gender, occupation, and religion, can be found in popular NLP language models. But a team of AI researchers wants the NLP bias research community to more closely examine the relationships between language, power, and social hierarchies like racism in its work. That’s one of three major recommendations a recent study makes for NLP bias researchers. From a report: Published last week, the work, which includes an analysis of 146 NLP bias research papers, also concludes that the field generally lacks clear descriptions of bias and fails to explain how, why, and to whom that bias is harmful. “Although these papers have laid vital groundwork by illustrating some of the ways that NLP systems can be harmful, the majority of them fail to engage critically with what constitutes ‘bias’ in the first place,” the paper reads. “We argue that such work should examine the relationships between language and social hierarchies; we call on researchers and practitioners conducting such work to articulate their conceptualizations of ‘bias’ in order to enable conversations about what kinds of system behaviors are harmful, in what ways, to whom, and why; and we recommend deeper engagements between technologists and communities affected by NLP systems.” Read more of this story at Slashdot.
