
Out-group Homogeneity Bias
The tendency to perceive members of an out-group as more similar to one another than members of one's own in-group.
In AI systems, out-group homogeneity bias appears when training data or model interpretation over-generalizes, treating individuals within an out-group as interchangeable and ignoring individual differences. This has significant implications because models trained on biased data can inadvertently absorb sociocultural biases, potentially producing unfair outcomes. For instance, an AI system used in hiring or law enforcement may fail to distinguish the diverse characteristics of individuals within an out-group if the training data lack diversity or are skewed. Addressing out-group homogeneity bias is crucial for developing equitable AI systems that make fair, differentiated decisions rather than reinforcing societal stereotypes.
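One way this bias can surface in practice is as a lack of differentiation in a model's outputs for one group relative to another. The sketch below, which is illustrative rather than drawn from any specific system, compares the within-group spread of hypothetical model scores across two groups; the toy data, group labels, and function name are assumptions for demonstration only.

```python
# Illustrative sketch (assumed data and names): probe for out-group
# homogeneity by comparing how much a model's scores vary within each group.
# A markedly smaller spread for one group can indicate the model treats its
# members as interchangeable.

import numpy as np

def within_group_dispersion(scores: np.ndarray, groups: np.ndarray) -> dict:
    """Return the standard deviation of model scores for each group label."""
    return {
        g: float(np.std(scores[groups == g]))
        for g in np.unique(groups)
    }

# Toy data: hypothetical model scores for members of two groups.
rng = np.random.default_rng(0)
scores = np.concatenate([
    rng.normal(0.6, 0.15, size=500),  # group "A": varied, individualized scores
    rng.normal(0.5, 0.03, size=500),  # group "B": near-uniform scores
])
groups = np.array(["A"] * 500 + ["B"] * 500)

print(within_group_dispersion(scores, groups))
# A much smaller spread for group "B" suggests the model is not
# differentiating among its members -- one symptom of this bias.
```

A dispersion gap of this kind is only a starting signal; confirming the bias would require examining the training data and the features the model relies on for the affected group.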
The term "out-group homogeneity bias" first appeared in psychological literature in the 1970s and gained prominence in the AI field as biases in AI became more evident, towards the early 2000s. It highlighted the importance of understanding and mitigating biases inherent in datasets used for training AI, aiming for more nuanced and inclusive AI models.
Key contributors to the recognition and analysis of out-group homogeneity in AI include AI ethicists and researchers working on interdisciplinary studies. Significant insights came from the psychologists and sociologists who first characterized the cognitive bias in human perception, whose findings AI researchers later adapted to understand and improve machine learning systems.