
Write a PREreview

A Robust Federated Learning Against Data Poisoning Attacks: Prevention and Detection of Attacked Nodes

Published
Server
Preprints.org
DOI
10.20944/preprints202506.2218.v1

Federated Learning (FL) enables collaborative model building among a large number of participants without sharing sensitive data with a central server. Because of its distributed nature, FL has limited control over the local data and the corresponding training process, so it is susceptible to data poisoning attacks in which malicious workers train the model on corrupted data. Attackers on the worker side can easily manipulate local data to mount such attacks, for example by swapping the labels of training instances, adding noise to training instances, or injecting out-of-distribution instances into the local data. Workers under such attacks carry incorrect information to the server, poison the global model, and cause misclassifications, so preventing and detecting data poisoning attacks is crucial for building a robust federated training framework. To address this, we propose a prevention strategy in federated learning, namely Confident Federated Learning, to protect workers from such data poisoning attacks. The proposed strategy first validates the label quality of local training samples by characterizing and identifying label errors in the local training data, and then excludes the detected mislabeled samples from local training. We evaluate the approach on the MNIST, Fashion-MNIST, and CIFAR-10 datasets, and the experimental results validate the robustness of Confident Federated Learning in preventing data poisoning attacks: it detects mislabeled training samples with above 85% accuracy and excludes them from the training set, preventing the attack on the local workers. However, the prevention strategy can only neutralize the attack locally up to a certain fraction of poisoned samples; beyond that fraction it may no longer be effective, and detection of the attacked workers is needed. We therefore also propose a novel detection strategy within the federated learning framework to identify malicious workers under attack. We create a class-wise cluster representation for every participating worker from the neuron activation maps of its local model and analyze the resulting clusters to filter out attacked workers before model aggregation. We experimentally demonstrate the efficacy of this detection strategy in identifying workers affected by data poisoning attacks, along with the attack type, e.g., label flipping or dirty labeling. In addition, the experiments show that the global model cannot converge even after a large number of training rounds in the presence of malicious workers, whereas once the malicious workers are detected with our method and discarded from model aggregation, the global model converges within very few training rounds. Furthermore, the proposed approach stays robust under different data distributions and model sizes and does not require prior knowledge of the number of attackers in the system.
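To make the threat model concrete, here is a minimal sketch of the label-flipping attack the abstract describes: a compromised worker replaces a fraction of its local training labels with randomly chosen wrong classes. The function name flip_labels and the 30% poisoning rate are illustrative choices, not taken from the paper.

```python
import numpy as np

def flip_labels(y, flip_fraction, num_classes, seed=None):
    """Simulate a label-flipping poisoning attack: a random subset of
    labels is replaced with a different, randomly chosen class."""
    rng = np.random.default_rng(seed)
    y_poisoned = y.copy()
    n_flip = int(flip_fraction * len(y))
    idx = rng.choice(len(y), size=n_flip, replace=False)
    # Offset the true label by 1..num_classes-1 so the flipped label
    # is guaranteed to differ from the original.
    offsets = rng.integers(1, num_classes, size=n_flip)
    y_poisoned[idx] = (y[idx] + offsets) % num_classes
    return y_poisoned, idx

# Example: poison 30% of 1,000 MNIST-style labels (10 classes).
y = np.random.default_rng(0).integers(0, 10, size=1000)
y_bad, flipped_idx = flip_labels(y, flip_fraction=0.3, num_classes=10, seed=1)
```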
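The prevention strategy validates label quality before local training by identifying likely label errors. A common confident-learning formulation, sketched below under stated assumptions, flags a sample when its out-of-sample predicted probability for the given label falls below the average self-confidence of that class; the paper's exact criterion may differ, the helper find_label_errors is hypothetical, and the logistic-regression probe on flattened features stands in for whatever local model the authors actually use.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict

def find_label_errors(X, y, num_classes):
    """Confident-learning-style filter: flag samples whose out-of-sample
    predicted probability for their given label falls below that class's
    mean self-confidence."""
    # Out-of-sample class probabilities via 5-fold cross-validation.
    probs = cross_val_predict(
        LogisticRegression(max_iter=1000), X, y,
        cv=5, method="predict_proba")
    self_conf = probs[np.arange(len(y)), y]
    # Per-class threshold: mean self-confidence of samples with that label.
    thresholds = np.array(
        [self_conf[y == c].mean() for c in range(num_classes)])
    return self_conf < thresholds[y]  # boolean mask of suspect samples

# A worker would then drop the flagged samples before local training,
# which is the exclusion step the abstract describes:
# suspect = find_label_errors(X, y, num_classes=10)
# X_clean, y_clean = X[~suspect], y[~suspect]
```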
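For the detection strategy, one plausible reading of the class-wise cluster representation is that the server summarizes each worker by its per-class mean activation vectors (e.g., from the penultimate layer of the local model), clusters those summaries, and filters the outlying cluster before aggregation. The two-cluster setup and the minority-equals-attackers assumption below are simplifications for illustration only; the abstract states the actual method needs no prior knowledge of the attacker count.

```python
import numpy as np
from sklearn.cluster import KMeans

def detect_malicious_workers(class_activations):
    """Given per-worker, class-wise mean activation vectors with shape
    (num_workers, num_classes, num_features), cluster the flattened
    representations and flag the minority cluster as suspect."""
    reps = class_activations.reshape(len(class_activations), -1)
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(reps)
    # Assume benign workers form the larger cluster (illustrative only).
    benign = np.argmax(np.bincount(labels))
    return np.where(labels != benign)[0]  # indices of suspect workers
```

The server would exclude the returned worker indices from model aggregation for that round, which matches the filtering step described in the abstract.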

You can write a PREreview of A Robust Federated Learning Against Data Poisoning Attacks: Prevention and Detection of Attacked Nodes. A PREreview is a review of a preprint and can range from a few sentences to a lengthy report, similar to a journal-organized peer review report.
