Learn from Adversarial Examples: Learning-Based Attack on Time Series Forecasting

Authors

  • Youbang Xiao School of Information Science and Technology, North China University of Technology, Beijing, China
  • Zhongguo Yang Beijing Key Laboratory on Integration and Analysis of Large-scale Stream Data https://orcid.org/0000-0002-3720-0642
  • Qi Zou Brunel London School, North China University of Technology, Beijing, China
  • Peng Zhang School of Cyber Science and Engineering, Nanjing University of Science and Technology, Nanjing, China

DOI:

https://doi.org/10.5755/j01.itc.54.2.37758

Keywords:

Time Series Forecasting, Adversarial Attack, Deep Learning

Abstract

Adversarial attacks in Time Series Forecasting (TSF) have been a topic of growing interest in recent years. While some black-box attack methods have been proposed for TSF, they require continuous queries to the target model, and their computational cost increases as model and data complexity grow. In fact, the perturbations generated by these methods exhibit certain patterns, especially when constrained in the L0 norm, and those patterns can be captured and learned by a model. In this study, we propose Learning-Based Attack (LBA), a novel black-box adversarial attack method for TSF tasks that focuses on adversarial examples, i.e., the perturbed data. By using a model to learn from adversarial examples and generate similar ones, we achieve performance comparable to the original attack methods while significantly reducing the number of queries to the target model, ensuring high efficiency and stealthiness. We evaluate our method on several public datasets. In this paper, we learn from adversarial examples generated by the n-Values Time Series Attack (nVITA), a sparse black-box attack for TSF. The results show that we can effectively learn the attack information and generate similar adversarial examples with lower computational overhead, thus improving the stealthiness and efficiency of the attack. Furthermore, we verify the transferability of our method and find that it can be applied to attack other models. Our code is available on GitHub.
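As a rough illustration of the idea described in the abstract (not the authors' implementation), the sketch below trains a small generator on pairs of clean and nVITA-perturbed windows so that new perturbations can be produced without further queries to the target model. All class names, shapes, hyperparameters, and the L1 penalty used as a differentiable proxy for the sparse (L0) perturbation pattern are assumptions.

```python
# Minimal sketch (assumed design, not the authors' code): learn to imitate
# adversarial examples produced offline by a query-based attack such as nVITA.
import torch
import torch.nn as nn

class PerturbationGenerator(nn.Module):
    """Maps a clean window (batch, seq_len, n_features) to a perturbation."""
    def __init__(self, seq_len, n_features, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(seq_len * n_features, hidden),
            nn.ReLU(),
            nn.Linear(hidden, seq_len * n_features),
            nn.Tanh(),                      # bound the raw perturbation
        )
        self.seq_len, self.n_features = seq_len, n_features

    def forward(self, x, epsilon=0.1):
        delta = self.net(x).view(-1, self.seq_len, self.n_features)
        return epsilon * delta              # scale to the attack budget

def train_generator(gen, clean, adv, epochs=50, lr=1e-3, sparsity=1e-3):
    """clean/adv: tensors of precomputed (clean, attacked) pairs.
    The L1 term is a differentiable stand-in for the sparse (L0) pattern."""
    opt = torch.optim.Adam(gen.parameters(), lr=lr)
    target_delta = adv - clean
    for _ in range(epochs):
        opt.zero_grad()
        delta = gen(clean)
        loss = nn.functional.mse_loss(delta, target_delta) \
               + sparsity * delta.abs().mean()
        loss.backward()
        opt.step()
    return gen

# Toy usage: 256 windows of length 48 with one feature.
if __name__ == "__main__":
    clean = torch.randn(256, 48, 1)
    adv = clean.clone()
    adv[:, ::12, :] += 0.1                  # placeholder for stored nVITA outputs
    gen = PerturbationGenerator(seq_len=48, n_features=1)
    train_generator(gen, clean, adv)
    x_adv_new = clean + gen(clean)          # generated without querying the target model
```

Once trained on the stored attack outputs, the generator is the only component needed at attack time, which is where the query reduction described in the abstract would come from.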

Published

2025-07-14

Issue

Section

Articles