Fortifying Federated Learning Against Poisoning Attacks

Federated learning allows multiple clients to collaboratively train a shared model while keeping their data private, but it is vulnerable to poisoned gradient updates submitted by malicious participants. Poison-resistant gradient aggregation techniques, such as trimmed mean, coordinate-wise median, clustering, and norm-based filtering, protect model integrity by limiting how much any single client's update can influence the global model. Adaptive defenses and federated auditing further enhance security against... https://kahkaham.net/read-blog/135910
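To illustrate one of the techniques mentioned above, here is a minimal sketch of coordinate-wise trimmed-mean aggregation, assuming each client submits a flat gradient vector of the same length; the function name and the trim ratio are illustrative choices, not part of any specific library:

```python
import numpy as np

def trimmed_mean_aggregate(gradients, trim_ratio=0.2):
    """Coordinate-wise trimmed mean of client gradient vectors.

    gradients: list of equal-length 1-D numpy arrays, one per client.
    trim_ratio: fraction of extreme values dropped at EACH end,
                per coordinate, before averaging.
    """
    stacked = np.stack(gradients)                 # shape: (num_clients, dim)
    k = int(len(gradients) * trim_ratio)          # values trimmed per end
    sorted_vals = np.sort(stacked, axis=0)        # sort each coordinate across clients
    trimmed = sorted_vals[k:len(gradients) - k]   # drop k smallest and k largest
    return trimmed.mean(axis=0)

# Example: four honest clients plus one attacker sending an oversized gradient.
honest = [np.array([1.0, -0.5]), np.array([1.1, -0.4]),
          np.array([0.9, -0.6]), np.array([1.0, -0.5])]
poisoned = np.array([100.0, 100.0])
agg = trimmed_mean_aggregate(honest + [poisoned], trim_ratio=0.2)
# The attacker's extreme values fall in the trimmed tail, so the
# aggregate stays close to the honest clients' gradients.
```

With five clients and a 0.2 trim ratio, one value per end is discarded for each coordinate, so a single attacker's contribution is removed entirely; a plain mean, by contrast, would be dragged far off by the poisoned update.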
