
ERR
Expected Reciprocal Rank
An information retrieval metric that estimates the expected reciprocal rank at which a user finds a satisfying document, combining graded relevance with rank position to give a probability-based measure of ranking quality.
In the context of AI and information retrieval, ERR evaluates a ranked list of documents under a cascade user model: the user scans results from the top and stops at the first document that satisfies their information need. ERR is the expected value of the reciprocal of that stopping rank, where each document's probability of satisfying the user is derived from its graded relevance label. This addresses a key limitation of earlier metrics such as Mean Reciprocal Rank (MRR), which assumes binary relevance and ignores everything below the first relevant result: under ERR, even a highly relevant document contributes less when it is ranked below other strong results, because the user has probably stopped before reaching it. The metric is particularly useful when documents have varying levels of relevance, since it rewards placing the most pertinent documents near the top, which directly affects user engagement and satisfaction in search engines and recommendation systems.
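The computation can be made concrete with a short sketch. In the standard formulation, each graded relevance label g is mapped to a satisfaction probability R = (2^g − 1) / 2^g_max, and ERR = Σ_r (1/r) · R_r · Π_{i<r} (1 − R_i). The following Python function (an illustrative sketch, not code from the original paper) implements this directly; the function and parameter names are assumptions chosen for readability.

from typing import Optional, Sequence

def expected_reciprocal_rank(grades: Sequence[int], max_grade: Optional[int] = None) -> float:
    """Compute ERR for a ranked list of graded relevance labels (top result first)."""
    if max_grade is None:
        max_grade = max(grades, default=0)
    err = 0.0
    prob_reaching_rank = 1.0  # probability the user is still scanning at this rank
    for rank, grade in enumerate(grades, start=1):
        # Probability that this document satisfies the user: R = (2^g - 1) / 2^g_max.
        r = (2 ** grade - 1) / (2 ** max_grade) if max_grade > 0 else 0.0
        # Contribution of stopping at this rank: reach it, be satisfied, weight by 1/rank.
        err += prob_reaching_rank * r / rank
        # Otherwise the user continues to the next result.
        prob_reaching_rank *= 1.0 - r
    return err

# Example: relevance grades on a 0-3 scale for the top five results.
print(expected_reciprocal_rank([3, 2, 0, 1, 0], max_grade=3))

Swapping the order of the grades in the example lowers or raises the score, which illustrates how ERR penalizes burying a highly relevant document beneath other strong results.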
ERR was first proposed in 2009 and gained traction as an effective evaluation tool in the 2010s, especially as search engines and recommendation systems sought more refined metrics to measure and enhance result quality.
Expected Reciprocal Rank was introduced by Olivier Chapelle, Donald Metzler, Ya Zhang, and Pierre Grinspan of Yahoo! Labs in their 2009 paper "Expected Reciprocal Rank for Graded Relevance", which formalized the metric for evaluating information retrieval systems.