Generative Adversarial Networks for Video Summarization Based on Key-frame Selection

Authors

  • Xiayun Hu, Jinling Institute of Technology
  • Xiaobin Hu
  • Jingxian Li
  • Kun You, School of Software Engineering, Jinling Institute of Technology

DOI:

https://doi.org/10.5755/j01.itc.52.1.32278

Abstract

Video summarization based on generative adversarial networks (GANs) has been shown to produce more realistic results. However, most summary videos are composed of multiple key components, and if the selection of some video frames changes during training, the information these frames carry may not be reasonably reflected in the discriminator's judgment. In this paper, we propose a video summarization method based on keyframe selection over GANs. The novelty of the proposed method is that the discriminator not only judges the completeness of the video but also assesses the value of the candidate keyframes, so that the choice of keyframes directly influences the result. Because GANs are mainly designed to generate continuous real values, it is generally challenging to directly generate discrete symbol sequences during summarization; moreover, when the generated samples consist of discrete symbols, the small guidance signal from the discriminator network may be meaningless. To better exploit the advantages of GANs, the study therefore optimizes the GAN-based summarization under a collaborative reinforcement learning strategy. Experimental results show that the proposed method achieves strong summarization performance compared with existing state-of-the-art methods.
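The abstract describes the architecture only at a high level; as a rough illustration, the PyTorch-style sketch below shows one plausible realization of the idea: a generator that scores frames, a discriminator with both a completeness head and a per-frame value head, and a REINFORCE-style policy-gradient update that sidesteps the non-differentiable, discrete keyframe selection. All module names, dimensions, and the reward formulation are assumptions made for illustration, not the authors' implementation.

```python
# Hypothetical sketch of a keyframe-selection GAN trained with a REINFORCE-style
# update; module and variable names are illustrative, not the paper's code.
import torch
import torch.nn as nn

class FrameScorer(nn.Module):
    """Generator: scores each frame's probability of being a keyframe."""
    def __init__(self, feat_dim=1024, hidden=256):
        super().__init__()
        self.rnn = nn.GRU(feat_dim, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, 1)

    def forward(self, frames):                    # frames: (B, T, feat_dim)
        h, _ = self.rnn(frames)
        return torch.sigmoid(self.head(h)).squeeze(-1)   # (B, T) selection probs

class SummaryDiscriminator(nn.Module):
    """Discriminator: judges the completeness of the summary and the value of
    each candidate keyframe, so keyframe choice influences the adversarial signal."""
    def __init__(self, feat_dim=1024, hidden=256):
        super().__init__()
        self.rnn = nn.GRU(feat_dim, hidden, batch_first=True)
        self.real_head = nn.Linear(hidden, 1)      # whole-summary completeness
        self.value_head = nn.Linear(hidden, 1)     # per-frame keyframe value

    def forward(self, frames):
        h, last = self.rnn(frames)
        realism = torch.sigmoid(self.real_head(last[-1]))             # (B, 1)
        frame_value = torch.sigmoid(self.value_head(h)).squeeze(-1)   # (B, T)
        return realism, frame_value

def reinforce_step(scorer, disc, frames, opt_g):
    """Discrete keyframe selection is not differentiable, so the generator is
    updated with a policy-gradient reward derived from the discriminator."""
    probs = scorer(frames)                         # (B, T)
    picks = torch.bernoulli(probs)                 # sampled 0/1 keyframe mask
    summary = frames * picks.unsqueeze(-1)         # masked "summary" video
    realism, frame_value = disc(summary)
    # Reward combines summary completeness with the value of the chosen frames.
    reward = realism.squeeze(-1) + \
        (frame_value * picks).sum(1) / picks.sum(1).clamp(min=1)
    log_prob = (picks * torch.log(probs + 1e-8)
                + (1 - picks) * torch.log(1 - probs + 1e-8)).sum(1)
    loss = -(reward.detach() * log_prob).mean()
    opt_g.zero_grad(); loss.backward(); opt_g.step()
    return loss.item()
```

In this sketch the reward is detached and only the log-probability of the sampled keyframe mask is differentiated, so no gradient has to flow through the discrete Bernoulli sample. This is one common way to work around the discrete-symbol difficulty that the abstract raises for GAN training.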

Published

2023-03-28

Issue

Section

Articles