Data privacy is of paramount importance in the big-data era, and traditional centralized machine learning faces increasing scrutiny due to the risk of data leakage. Federated learning (FL) is a promising alternative: it enables collaborative model training while preserving privacy by keeping raw data on local devices. Despite these advantages, fairness in FL remains a pressing challenge, as disparities in data quality, quantity, and distribution across clients can produce inequitable outcomes, discourage participation, and even give rise to free-rider problems. Existing work on fairness in FL often lacks a holistic and precise methodology, focusing primarily on mitigating non-independent and identically distributed (non-IID) effects without adequately addressing fairness-related factors such as training efficiency and model accuracy. To fill this gap, this paper proposes a novel fair aggregation framework for FL that ensures both internal and external gradient balance while enforcing equitable resource allocation. Furthermore, a momentum decay mechanism is integrated to accelerate convergence without compromising fairness. Extensive experiments on multiple benchmark datasets validate the effectiveness of the proposed framework: compared with existing baselines, the proposed method demonstrates consistent improvements in both accuracy and fairness.
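
Since the abstract only names the two components (a fairness-weighted aggregation step and a decaying server-side momentum term) without giving their formulas, the following is a minimal illustrative sketch, not the paper's actual algorithm. The loss-based weighting rule, the decay schedule, and all function and parameter names (fair_aggregate, q, beta0, decay) are assumptions introduced purely to make the idea concrete.

```python
import numpy as np

def fair_aggregate(client_updates, client_losses, round_idx,
                   momentum_buffer, q=1.0, beta0=0.9, decay=0.99):
    """Illustrative fairness-weighted aggregation with decaying server momentum.

    client_updates : list of 1-D np.ndarray, local model deltas from each client
    client_losses  : list of float, local losses used as a simple fairness signal
    round_idx      : int, current communication round (0-based)
    momentum_buffer: np.ndarray, aggregated update carried over from the last round
    q              : exponent tilting weight toward higher-loss (worse-off) clients
    beta0, decay   : initial momentum coefficient and its per-round decay factor
    """
    losses = np.asarray(client_losses, dtype=np.float64)
    # Give larger aggregation weight to clients with larger local loss
    # (a q-FFL-style reweighting, used here as a stand-in for the paper's
    # internal/external gradient-balance criterion).
    weights = losses ** q
    weights = weights / weights.sum()

    # Fairness-weighted average of the client updates.
    aggregated = sum(w * u for w, u in zip(weights, client_updates))

    # Server-side momentum whose coefficient decays each round: early rounds
    # reuse more of the past direction for speed, later rounds rely mostly on
    # the fresh fair average so fairness is not washed out.
    beta = beta0 * (decay ** round_idx)
    update = beta * momentum_buffer + (1.0 - beta) * aggregated
    return update  # caller stores this as momentum_buffer for the next round

# Example usage with three hypothetical clients and a 5-parameter model delta.
rng = np.random.default_rng(0)
updates = [rng.normal(size=5) for _ in range(3)]
losses = [0.8, 1.5, 0.4]
buf = np.zeros(5)
buf = fair_aggregate(updates, losses, round_idx=0, momentum_buffer=buf)
# The server would then apply buf to the global model, e.g. params += lr * buf.
```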



