
The Influence of Training Parameters on Neural Networks' Vulnerability to Membership Inference Attacks

dc.contributor.author: Bouanani, Oussama
dc.contributor.author: Boenisch, Franziska
dc.contributor.editor: Demmler, Daniel
dc.contributor.editor: Krupka, Daniel
dc.contributor.editor: Federrath, Hannes
dc.date.accessioned: 2022-09-28T17:10:02Z
dc.date.available: 2022-09-28T17:10:02Z
dc.date.issued: 2022
dc.description.abstract: With Machine Learning (ML) models being increasingly applied in sensitive domains, the related privacy concerns are rising. Neural networks (NNs) are vulnerable to so-called membership inference attacks (MIAs), which aim to determine whether a particular data sample was used to train the model. The factors that render NNs prone to this privacy attack are not yet fully understood. However, previous work suggests that a model's setup and training process may affect its risk of MIAs. To investigate these factors in more detail, we set out to experimentally evaluate the influence of training choices in NNs on the models' vulnerability. Our analyses highlight that the batch size, the activation function, and the application and placement of batch normalization and dropout have the highest impact on the success of MIAs. Additionally, we applied statistical analyses to the experimental results and found a strong positive correlation between a model's ability to resist MIAs and its generalization capacity. We also defined a metric to measure the difference between the distributions of loss values of member and non-member data samples, and observed that models scoring higher on that metric were consistently more exposed to the attack. The latter observation was further confirmed by manually generating predictions for member and non-member samples producing loss values within specific distributions and launching MIAs on them.
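The abstract's observation — that a larger gap between member and non-member loss distributions makes a model more exposed — can be illustrated with a minimal, self-contained sketch of a loss-threshold membership inference attack, a common baseline in the MIA literature. The loss values and the midpoint threshold below are hypothetical illustrations, not taken from the paper:

```python
import random
import statistics

def loss_threshold_mia(member_losses, nonmember_losses):
    """Illustrative loss-threshold MIA: predict 'member' when a sample's
    loss falls below a threshold placed between the two distributions.
    Returns the attack's accuracy over all samples."""
    threshold = (statistics.mean(member_losses)
                 + statistics.mean(nonmember_losses)) / 2
    correct = sum(l < threshold for l in member_losses)       # members guessed right
    correct += sum(l >= threshold for l in nonmember_losses)  # non-members guessed right
    return correct / (len(member_losses) + len(nonmember_losses))

# Hypothetical, well-separated loss distributions: samples seen during
# training (members) tend to incur lower loss than unseen samples.
random.seed(0)
members = [abs(random.gauss(0.1, 0.05)) for _ in range(1000)]
nonmembers = [abs(random.gauss(0.8, 0.3)) for _ in range(1000)]
accuracy = loss_threshold_mia(members, nonmembers)
print(f"attack accuracy: {accuracy:.2f}")
```

The further apart the two loss distributions lie, the higher this attack's accuracy; for overlapping distributions it degrades toward the 0.5 of random guessing, mirroring the metric-versus-vulnerability trend the abstract reports.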
dc.identifier.doi: 10.18420/inf2022_106
dc.identifier.isbn: 978-3-88579-720-3
dc.identifier.pissn: 1617-5468
dc.identifier.uri: https://dl.gi.de/handle/20.500.12116/39479
dc.language.iso: en
dc.publisher: Gesellschaft für Informatik, Bonn
dc.relation.ispartof: INFORMATIK 2022
dc.relation.ispartofseries: Lecture Notes in Informatics (LNI) - Proceedings, Volume P-326
dc.subject: privacy
dc.subject: machine learning
dc.subject: neural networks
dc.subject: membership inference
dc.title: The Influence of Training Parameters on Neural Networks' Vulnerability to Membership Inference Attacks
gi.citation.endPage: 1246
gi.citation.startPage: 1227
gi.conference.date: 26.-30. September 2022
gi.conference.location: Hamburg
gi.conference.sessiontitle: Trustworthy AI in Science and Society

Files

Original bundle (1 - 1 of 1)
Name: trustai_01.pdf
Size: 204.98 KB
Format: Adobe Portable Document Format