Parallel Implementation of Improved K-Means Based on a Cloud Platform

Authors

  • Shufen Zhang
  • Zhiyu Liu, North China University of Science and Technology
  • Xuebin Chen, Hebei Key Laboratory of Data Science and Application
  • Changyin Luo, Hebei Key Laboratory of Data Science and Application

DOI:

https://doi.org/10.5755/j01.itc.48.4.23881

Keywords:

K-Means, MapReduce, Sample Density, Max-Min Distance

Abstract

To address the difficulties the traditional K-Means clustering algorithm encounters when handling large-scale data sets, a Hadoop K-Means (HKM) clustering algorithm is proposed. First, the algorithm eliminates the influence of noise points in the data set according to sample density. Second, it optimizes the selection of the initial cluster centers using the max-min distance principle. Finally, it is parallelized with the MapReduce programming model. Experimental results show that the proposed algorithm not only achieves high accuracy and stability in its clustering results, but also resolves the scalability problems that traditional clustering algorithms encounter when processing large-scale data.
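The abstract only names the two initialization ideas (density-based noise removal and max-min distance center selection) without giving formulas, so the following is a minimal, hypothetical sketch of how those two steps might look in serial form. The parameter names (eps, min_neighbors, k), the neighborhood-count definition of density, and the choice of the first center are assumptions for illustration, not details taken from the paper.

```python
import numpy as np

def filter_noise(points, eps, min_neighbors):
    """Keep points whose eps-neighborhood contains at least min_neighbors
    other samples; sparser points are treated as noise and dropped.
    (Assumed density definition, not the paper's exact formula.)"""
    dists = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)
    density = (dists <= eps).sum(axis=1) - 1      # exclude the point itself
    return points[density >= min_neighbors]

def max_min_centers(points, k):
    """Max-min distance rule: each new center is the point whose minimum
    distance to the already-chosen centers is largest."""
    centers = [points[0]]                          # assumption: start from the first surviving point
    min_dist = np.linalg.norm(points - centers[0], axis=1)
    for _ in range(1, k):
        idx = int(np.argmax(min_dist))             # farthest point from all current centers
        centers.append(points[idx])
        min_dist = np.minimum(min_dist, np.linalg.norm(points - points[idx], axis=1))
    return np.vstack(centers)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    data = rng.normal(size=(500, 2))
    clean = filter_noise(data, eps=0.5, min_neighbors=3)
    print(max_min_centers(clean, k=4))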

Published

2019-12-18

Issue

Vol. 48 No. 4 (2019)
Section

Articles