Distributed differential privacy for federated learning

Google Research AI blog

While offering substantial additional protections, a fully malicious server might still be able to circumvent the DDP guarantees, either by manipulating the public key exchange of SecAgg or by injecting a sufficient number of "fake" malicious clients that do not add the prescribed noise into the aggregation pool.
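To see why the fake-client attack weakens the guarantee, note that in distributed DP each honest client adds only a small share of the total noise, relying on the aggregate across all clients to reach the noise level required for the central guarantee. The sketch below (a toy numpy simulation; the parameters and function names are illustrative, not from the post) measures the aggregate noise standard deviation when all clients are honest versus when most are fake clients that skip the noise step:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (not taken from the article).
n_clients = 100
target_sigma = 1.0   # noise std required for the central DP guarantee
n_trials = 5000      # repeated aggregation rounds to estimate the noise std


def aggregate(n_honest: int) -> np.ndarray:
    """SecAgg-style sum over one round: each honest client adds Gaussian
    noise with std target_sigma / sqrt(n_clients), so the sum over a fully
    honest pool carries std target_sigma.  Fake clients contribute no
    noise.  Model updates are omitted so only the noise remains."""
    local_noise = rng.normal(
        0.0, target_sigma / np.sqrt(n_clients), size=(n_trials, n_honest)
    )
    return local_noise.sum(axis=1)


honest_std = aggregate(n_clients).std()  # close to target_sigma
attacked_std = aggregate(10).std()       # 90 fake clients: roughly 0.32 * sigma
```

With 90 of 100 clients fake, the aggregate noise std drops to sqrt(10/100) of the target, so the released sum is far less noisy than the central guarantee assumes.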