Selective Preference Aggregation
Many applications in machine learning and decision-making rely on procedures to aggregate human preferences. In such tasks, individuals express ordinal preferences over a set of items through votes, ratings, or pairwise comparisons, and we summarize their collective preferences as a ranking. Standard methods for preference aggregation are designed to return rankings that arbitrate individual disagreements in ways that are faithful and fair. In this work, we introduce a paradigm for selective aggregation, in which we avoid the need to arbitrate dissent by abstaining from comparison. We summarize collective preferences as a selective ranking, i.e., a partial order that compares two items only when at least $100\cdot(1-\tau)\%$ of individuals agree on them. We develop algorithms to build selective rankings that achieve all possible trade-offs between comparability and disagreement, and we derive formal guarantees on their safety and stability. We conduct an extensive set of experiments on real-world datasets to benchmark our approach and demonstrate its functionality. Our results show that selective aggregation can promote transparency and robustness by revealing disagreement and abstaining from arbitration.
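To make the $100\cdot(1-\tau)\%$ threshold concrete, here is a minimal sketch, not the paper's algorithm, of how a selective ranking could be assembled from individual rankings. It assumes each individual supplies a complete ranking of the items; the function name `selective_ranking` and the pairwise agreement rule are illustrative assumptions.

```python
from itertools import combinations

def selective_ranking(rankings, tau):
    """Build a selective (partial) order from individual rankings.

    rankings: list of lists, each a total order over the same items
              (earlier position = more preferred).
    tau: disagreement budget in [0, 1]; a pair (a, b) is comparable
         only if at least a (1 - tau) fraction of individuals agree.
    Returns a set of directed pairs (a, b) meaning "a precedes b".
    """
    n = len(rankings)
    # Position of each item in each individual's ranking.
    positions = [{item: i for i, item in enumerate(r)} for r in rankings]
    items = rankings[0]
    edges = set()
    for a, b in combinations(items, 2):
        prefer_a = sum(pos[a] < pos[b] for pos in positions)
        if prefer_a >= (1 - tau) * n:          # enough agree: a over b
            edges.add((a, b))
        elif (n - prefer_a) >= (1 - tau) * n:  # enough agree: b over a
            edges.add((b, a))
        # otherwise abstain: a and b remain incomparable
    return edges

# Example: three individuals, tau = 0.4 (requires >= 60% agreement).
votes = [["x", "y", "z"], ["x", "z", "y"], ["y", "x", "z"]]
print(selective_ranking(votes, tau=0.4))
# {('x', 'y'), ('x', 'z'), ('y', 'z')} -- each pair backed by 2/3 of voters
```

Note that this naive pairwise filter does not enforce the formal guarantees (e.g., safety and stability) that the paper's algorithms provide; it only illustrates how abstention arises when agreement falls below the threshold.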