The Influence of Training Parameters on Neural Networks' Vulnerability to Membership Inference Attacks
dc.contributor.author | Bouanani, Oussama | |
dc.contributor.author | Boenisch, Franziska | |
dc.contributor.editor | Demmler, Daniel | |
dc.contributor.editor | Krupka, Daniel | |
dc.contributor.editor | Federrath, Hannes | |
dc.date.accessioned | 2022-09-28T17:10:02Z | |
dc.date.available | 2022-09-28T17:10:02Z | |
dc.date.issued | 2022 | |
dc.description.abstract | With Machine Learning (ML) models being increasingly applied in sensitive domains, the related privacy concerns are rising. Neural networks (NNs) are vulnerable to so-called membership inference attacks (MIAs), which aim at determining whether a particular data sample was used for training the model. The factors that render NNs prone to this privacy attack are not yet fully understood. However, previous work suggests that the setup of the models and the training process might impact a model's risk of MIAs. To investigate these factors in more detail, we set out to experimentally evaluate the influence of training choices in NNs on the models' vulnerability. Our analyses highlight that the batch size, the activation function, and the application and placement of batch normalization and dropout have the highest impact on the success of MIAs. Additionally, we applied statistical analyses to the experimental results and found a strong positive correlation between a model's ability to resist MIAs and its generalization capacity. We also defined a metric that measures the difference between the distributions of loss values of member and non-member data samples, and observed that models scoring higher on this metric were consistently more exposed to the attack. The latter observation was further confirmed by manually generating predictions for member and non-member samples that produce loss values within specific distributions and launching MIAs on them. | en
dc.identifier.doi | 10.18420/inf2022_106 | |
dc.identifier.isbn | 978-3-88579-720-3 | |
dc.identifier.pissn | 1617-5468 | |
dc.identifier.uri | https://dl.gi.de/handle/20.500.12116/39479 | |
dc.language.iso | en | |
dc.publisher | Gesellschaft für Informatik, Bonn | |
dc.relation.ispartof | INFORMATIK 2022 | |
dc.relation.ispartofseries | Lecture Notes in Informatics (LNI) - Proceedings, Volume P-326 | |
dc.subject | privacy | |
dc.subject | machine learning | |
dc.subject | neural networks | |
dc.subject | membership inference | |
dc.title | The Influence of Training Parameters on Neural Networks' Vulnerability to Membership Inference Attacks | en |
gi.citation.endPage | 1246 | |
gi.citation.startPage | 1227 | |
gi.conference.date | 26-30 September 2022
gi.conference.location | Hamburg | |
gi.conference.sessiontitle | Trustworthy AI in Science and Society |
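A minimal sketch of the loss-based attack setup described in the abstract, assuming a scikit-learn-style classifier exposing predict_proba; the threshold rule and the use of the Wasserstein distance as the member/non-member loss-distribution metric are illustrative assumptions, not the paper's exact formulation.

import numpy as np
from scipy.stats import wasserstein_distance

def per_sample_loss(model, X, y):
    # Cross-entropy loss of each sample under the target model.
    probs = model.predict_proba(X)                       # shape (n_samples, n_classes)
    return -np.log(probs[np.arange(len(y)), y] + 1e-12)

def loss_threshold_attack(losses, threshold):
    # Predict "member" for low-loss samples: training members
    # typically incur lower loss than unseen samples.
    return losses < threshold

def loss_distribution_gap(member_losses, nonmember_losses):
    # Distance between the two loss distributions; per the abstract,
    # larger gaps should coincide with higher attack success.
    return wasserstein_distance(member_losses, nonmember_losses)

# Hypothetical usage, where (X_member, y_member) were in the training
# set of `model` and (X_nonmember, y_nonmember) were not:
#   m = per_sample_loss(model, X_member, y_member)
#   n = per_sample_loss(model, X_nonmember, y_nonmember)
#   t = np.median(np.concatenate([m, n]))
#   attack_acc = (loss_threshold_attack(m, t).mean()
#                 + (~loss_threshold_attack(n, t)).mean()) / 2
#   print(attack_acc, loss_distribution_gap(m, n))

Placing the threshold at the median of the pooled losses is one simple choice; shadow-model calibration, as is common in the MIA literature, would normally be used to set it instead.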