A Survey on Privacy Attacks and Defenses in Graph Neural Networks
DOI: https://doi.org/10.5755/j01.itc.53.4.37737

Keywords: graph neural networks, privacy preserving, deep learning, differential privacy

Abstract
Graph neural networks (GNNs) have emerged as a powerful tool in the field of graph machine learning, as demonstrated by a wide range of practical applications. However, the complex nature of graph structures and their expanding use across different scenarios pose challenges for GNNs in terms of privacy protection. Although several studies have addressed the privacy leakage problem of GNNs, many issues remain unresolved. This survey aims to provide a comprehensive understanding of the scientific challenges in the field of privacy-preserving GNNs. It begins with a succinct review of recent research on graph data privacy, followed by an analysis of current methods for privacy attacks on GNNs. The survey then categorizes privacy defense technologies for GNNs and examines their limitations and evaluation standards, focusing on data anonymization, differential privacy, graph-based federated learning, and methods based on adversarial learning. In addition, it summarizes datasets widely used in research on GNN privacy attacks and defenses. Finally, we identify several open challenges and possible directions for future research.
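To make the differential-privacy defenses named above concrete, the short Python sketch below illustrates the Laplace mechanism applied to a graph's degree sequence under edge-level differential privacy. This is our own minimal illustration, not a method from the surveyed works; the function name, the epsilon value, and the use of NetworkX are assumptions chosen for clarity.

# Illustrative sketch only: Laplace mechanism on a graph's degree sequence.
import numpy as np
import networkx as nx

def dp_degree_sequence(graph: nx.Graph, epsilon: float = 1.0) -> np.ndarray:
    """Release the node-degree sequence under edge-level differential privacy.

    Adding or removing one edge changes exactly two degrees by 1 each,
    so the L1 sensitivity of the degree sequence is 2.
    """
    degrees = np.array([d for _, d in graph.degree()], dtype=float)
    sensitivity = 2.0
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon, size=degrees.shape)
    return degrees + noise

if __name__ == "__main__":
    g = nx.erdos_renyi_graph(n=100, p=0.05, seed=0)
    # Smaller epsilon means stronger privacy and noisier output.
    print(dp_degree_sequence(g, epsilon=0.5)[:5])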
License
Copyright terms are indicated in the Republic of Lithuania Law on Copyright and Related Rights, Articles 4-37.