Recommender systems have become indispensable tools for delivering personalized content. However, their reliance on user interaction data raises significant privacy concerns. Local Differential Privacy (LDP) offers strong privacy guarantees by perturbing user data on the client side before transmission. Yet the noise introduced to ensure privacy often substantially degrades recommendation accuracy. To address this challenge, we propose a novel LDP-based recommendation algorithm grounded in Bayesian estimation. Our approach first applies a randomized response mechanism to locally perturb users’ implicit feedback, then leverages Bayesian inference on the server side to reconstruct the underlying user–item interaction probabilities. This two-stage framework effectively mitigates the adverse impact of injected noise while preserving rigorous privacy guarantees, thereby enabling high-quality model training even under stringent privacy constraints. Extensive experiments on three public benchmark datasets demonstrate that our method consistently outperforms state-of-the-art baselines, achieving a superior trade-off between privacy protection and recommendation utility. This work presents a practical, scalable solution for privacy-preserving recommendation, particularly valuable in settings involving untrusted servers and sparse interaction data.
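To make the two-stage framework concrete, the sketch below illustrates the general pattern described above: binary implicit feedback is perturbed client-side with ε-LDP randomized response, and the server applies Bayesian inference (here, a simple grid-based posterior under a Beta prior, one of several possible estimators) to recover the underlying interaction rate. Function names, the prior, and the grid estimator are our own illustrative choices, not the paper's exact algorithm.

```python
import numpy as np

def perturb(bits, eps, rng):
    """Client side: epsilon-LDP randomized response on binary feedback.

    Each bit is reported truthfully with probability q = e^eps / (1 + e^eps)
    and flipped otherwise, which satisfies eps-local differential privacy.
    """
    q = np.exp(eps) / (1.0 + np.exp(eps))
    flip = rng.random(bits.shape) >= q
    return np.where(flip, 1 - bits, bits)

def posterior_mean(reports, eps, a=1.0, b=1.0, grid=1000):
    """Server side: Bayesian estimate of the true interaction rate pi.

    Given perturbed reports, each observed 1 occurs with probability
    q*pi + (1-q)*(1-pi). We place a Beta(a, b) prior on pi and compute
    the posterior mean on a discrete grid (an illustrative estimator).
    """
    q = np.exp(eps) / (1.0 + np.exp(eps))
    pi = np.linspace(1e-6, 1.0 - 1e-6, grid)
    p_obs = q * pi + (1.0 - q) * (1.0 - pi)      # P(report = 1 | pi)
    k, n = reports.sum(), reports.size
    log_post = (k * np.log(p_obs) + (n - k) * np.log(1.0 - p_obs)
                + (a - 1.0) * np.log(pi) + (b - 1.0) * np.log(1.0 - pi))
    w = np.exp(log_post - log_post.max())        # stabilized weights
    return float((pi * w).sum() / w.sum())
```

For example, with a true interaction rate of 0.3 and ε = 1, the raw fraction of 1s in the perturbed reports is biased toward 0.5, while the posterior mean recovers an estimate close to 0.3. This debiasing step is what allows downstream model training to proceed as if it had (approximately) the clean interaction statistics.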



