Stabilizing Recommendations by Rank-Preserving Fine-Tuning

ACM Transactions on Knowledge Discovery from Data (TKDD), 2024
Abstract

Modern recommender systems are trained to predict users' preferences for items by learning from historical user-item interactions. However, users and items change over time, and these changes can perturb the training data and cause a model's output recommendations to vary, even when the model itself is unchanged. In applications such as healthcare, housing, and finance, this instability can harm user experience. To address this, we propose FINEST (FIne-tuning for stable recommeNdations via rank-prEServing fine-Tuning), which stabilizes a trained recommender system against training-data perturbations. Our method addresses two key challenges: obtaining reference rankings and ensuring stability across all possible data variations. FINEST fine-tunes the model under simulated perturbation scenarios with a rank-preserving regularization term on sampled items. We evaluate FINEST on real-world datasets and demonstrate that it substantially improves recommendation stability while maintaining recommendation quality across multiple metrics.
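
As a rough illustration of the rank-preserving regularization idea, the sketch below penalizes pairs of sampled items whose relative order under the current model contradicts a reference ranking. This is a minimal sketch under assumed names (scores, ref_order, margin, and the commented training-loop identifiers are all illustrative), not the paper's actual implementation.

import torch
import torch.nn.functional as F

def rank_preserving_loss(scores, ref_order, margin=0.1):
    # scores:    (K,) current-model scores for K sampled items
    # ref_order: (K,) indices sorting those items by the reference
    #            (pre-fine-tuning) ranking, best first
    ordered = scores[ref_order]                          # scores in reference-rank order
    diffs = ordered.unsqueeze(1) - ordered.unsqueeze(0)  # diffs[i, j] = s_i - s_j
    k = ordered.shape[0]
    i, j = torch.triu_indices(k, k, offset=1)            # all pairs where i outranks j
    # hinge: an item ranked above another should keep at least `margin` score lead
    return F.relu(margin - diffs[i, j]).mean()

# Illustrative fine-tuning step under one simulated perturbation (hypothetical names):
# rec_loss = recommendation_loss(model, perturbed_batch)
# reg_loss = rank_preserving_loss(model.score(user, sampled_items), ref_order)
# loss = rec_loss + lam * reg_loss
# loss.backward(); optimizer.step()

Combining a standard recommendation loss with such a regularizer is one way to trade off stability against accuracy via the weight lam; the specific loss form and sampling scheme used by FINEST may differ.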
