Particle Swarm Optimization Method Combined with Off-Policy Reinforcement Learning Algorithm for the Discovery of High Utility Itemsets

Authors

  • K. Logeswaran Department of Artificial Intelligence, Kongu Engineering College, Perundurai, 638060, India.
  • P. Suresh School of Computer Science and Engineering, Vellore Institute of Technology, Vellore, 632014, India.
  • S. Anandamurugan Department of Information Technology, Kongu Engineering College, Perundurai, 638060, India.

DOI:

https://doi.org/10.5755/j01.itc.52.1.31949

Keywords:

Reinforcement Learning, Evolutionary Computation, Particle Swarm Optimization, Execution Time

Abstract

Mining of High Utility Itemsets (HUIs) is an important area of data mining, and numerous methodologies have been proposed to address it effectively. When the number of distinct items and the size of the dataset are large, the search space that conventional exact approaches to High Utility Itemset Mining (HUIM) must explore grows exponentially. This has led researchers to adopt alternative, efficient approaches based on Evolutionary Computation (EC) to solve the HUIM problem. Particle Swarm Optimization (PSO) is an EC-based approach that has drawn the attention of many researchers for solving various NP-Hard problems in real time, and several PSO variants have been developed in recent years to improve the efficiency of HUI mining. In PSO, the minimization of execution time and the generation of reasonably good solutions are strongly influenced by the PSO control parameters, namely the acceleration coefficients and the inertia weight. The proposed approach, Adaptive Particle Swarm Optimization using Reinforcement Learning with Off Policy (APSO-RLOFF), employs Reinforcement Learning (RL) to achieve adaptive online calibration of the PSO control parameters and, in turn, to improve the performance of PSO. The well-established off-policy RL algorithm Q-Learning is used in APSO-RLOFF, with state-action utility values estimated during each episode. Extensive experiments are carried out on four benchmark datasets to evaluate the performance of the proposed technique. An exact approach, HUP-Miner, and three EC-based approaches, namely HUPEUMU-GRAM, HUIM-BPSO, and AGA_RLOFF, serve as baselines for comparison. The results show that APSO-RLOFF outperforms the previously considered EC-based approaches in both the number of discovered HUIs and execution time.
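The abstract describes Q-Learning being used to calibrate the PSO inertia weight and acceleration coefficients online. The sketch below illustrates that general idea on a toy continuous objective; the parameter grid, state definition, reward signal, and all identifiers are assumptions for illustration only and do not reproduce the authors' APSO-RLOFF implementation or the HUIM encoding.

```python
# Hypothetical sketch: a Q-Learning agent picks PSO control parameters
# (inertia weight w, acceleration coefficients c1/c2) at each iteration.
# Objective, parameter grid, states, and reward are illustrative only.
import numpy as np

rng = np.random.default_rng(0)

def sphere(x):                      # toy objective standing in for itemset utility
    return np.sum(x ** 2, axis=-1)

# Discrete "actions": candidate (w, c1, c2) settings the agent can choose from.
ACTIONS = [(0.4, 1.5, 1.5), (0.7, 1.5, 1.5), (0.9, 2.0, 2.0), (0.7, 2.5, 1.0)]
N_STATES = 2                        # state 0: last step improved gbest, 1: it did not
Q = np.zeros((N_STATES, len(ACTIONS)))
alpha, gamma, eps = 0.1, 0.9, 0.2   # assumed Q-Learning hyper-parameters

def run_adaptive_pso(dim=10, n_particles=30, iters=200):
    pos = rng.uniform(-5, 5, (n_particles, dim))
    vel = np.zeros_like(pos)
    pbest, pbest_val = pos.copy(), sphere(pos)
    g_idx = np.argmin(pbest_val)
    gbest, gbest_val = pbest[g_idx].copy(), pbest_val[g_idx]
    state = 1
    for _ in range(iters):
        # epsilon-greedy selection of a parameter setting
        a = rng.integers(len(ACTIONS)) if rng.random() < eps else int(np.argmax(Q[state]))
        w, c1, c2 = ACTIONS[a]
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = pos + vel
        val = sphere(pos)
        better = val < pbest_val
        pbest[better], pbest_val[better] = pos[better], val[better]
        g_idx = np.argmin(pbest_val)
        improved = pbest_val[g_idx] < gbest_val
        reward = gbest_val - pbest_val[g_idx]          # improvement-based reward
        gbest, gbest_val = pbest[g_idx].copy(), pbest_val[g_idx]
        next_state = 0 if improved else 1
        # off-policy Q-Learning update of the state-action utility value
        Q[state, a] += alpha * (reward + gamma * Q[next_state].max() - Q[state, a])
        state = next_state
    return gbest_val

print("best objective value:", run_adaptive_pso())
```

The key point the sketch captures is that the parameter choice is treated as an action whose utility is learned off-policy from an improvement-based reward, rather than being fixed or decayed by a hand-tuned schedule.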


Published

2023-03-28

Issue

Section

Articles