A Survey on Privacy Attacks and Defenses in Graph Neural Networks

Authors

  • Lanhua Luo School of Artificial Intelligence, Hezhou University, Hezhou, China; Faculty of Data Science, City University of Macau, Macau, China
  • Wang Ren Faculty of Data Science, City University of Macau, Macau, China
  • Huasheng Huang School of Artificial Intelligence, Hezhou University, Hezhou, China
  • Fengling Wang School of Artificial Intelligence, Hezhou University, Hezhou, China

DOI:

https://doi.org/10.5755/j01.itc.53.4.37737

Keywords:

graph neural networks, privacy preserving, deep learning, differential privacy

Abstract

Graph neural networks (GNNs) have emerged as a powerful tool in the field of graph machine learning, as demonstrated by a variety of practical applications. However, the complex nature of graph structures and their expanding use across different scenarios present challenges for GNNs in terms of privacy protection. While studies have been dedicated to addressing the privacy leakage problem of GNNs, many issues remain unresolved. This survey aims to provide a comprehensive understanding of the scientific challenges in the field of privacy-preserving GNNs. It begins with a succinct review of recent research on graph data privacy, followed by an analysis of current methods for privacy attacks on GNNs. The survey then categorizes and explores the limitations, evaluation standards, and privacy defense technologies for GNNs, with a focus on data anonymization, differential privacy, graph-based federated learning, and adversarial-learning-based methods. Additionally, it summarizes several widely used datasets in GNN privacy attacks and defenses. Finally, we identify several open challenges and possible directions for future research.

Published

2024-12-21

Issue

Section

Articles