
Clustering with Hadoop

Introduction

The quest for performance optimization has been pursued tirelessly since the invention of computers: we are perpetually seeking to improve machine performance by increasing speed, expanding memory, and improving accuracy. As researchers work with larger and larger data sets, such optimizations become necessities rather than conveniences.

Many programming models have been developed to address this need; currently, one of the most popular is [MapReduce]. The [MapReduce paradigm] is useful across many domains; one specific application for which it has been explored is knowledge discovery from nuclear reactor simulation data, since this data is expected to become large-scale very quickly. [Hadoop], a well-known open-source implementation of MapReduce, was used for this study.
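
As a quick illustration of the model (this is the standard word-count example, not the analysis code used in this study), a minimal Hadoop job in Java looks roughly like the following; paths and class names are generic placeholders:

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class WordCount {

      // Map phase: emit (word, 1) for every word in the input split.
      public static class TokenMapper extends Mapper<Object, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        public void map(Object key, Text value, Context context)
            throws IOException, InterruptedException {
          for (String token : value.toString().split("\\s+")) {
            if (!token.isEmpty()) {
              word.set(token);
              context.write(word, ONE);
            }
          }
        }
      }

      // Reduce phase: sum the counts emitted for each word.
      public static class SumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        @Override
        public void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
          int sum = 0;
          for (IntWritable v : values) {
            sum += v.get();
          }
          context.write(key, new IntWritable(sum));
        }
      }

      public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenMapper.class);
        job.setReducerClass(SumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));   // input directory on HDFS
        FileOutputFormat.setOutputPath(job, new Path(args[1])); // output directory (must not exist yet)
        System.exit(job.waitForCompletion(true) ? 0 : 1);
      }
    }

The framework splits the input across mapper tasks, groups the emitted key/value pairs by key, and feeds each group to a reducer; that division of work is what lets the model scale to very large data sets.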

For a preliminary investigation, [k-Means clustering], available from [Mahout], was employed. The results were similar to those we published in the paper ["Knowledge Discovery from Nuclear Reactor Simulation Data"].

Process

Here are the steps we used for working with Hadoop:

  • Install Hadoop from the [Releases] page. I used a [tutorial by Michael Noll] to install Hadoop on my laptop, running Fedora 17, as a single-node cluster. (Note that this tutorial only applies to certain Linux distributions.)
  • The data was stored in the [Hadoop Distributed File System] (HDFS). A helpful tutorial on HDFS can be found here.
  • Install Mahout, a scalable machine learning and data mining library, from its [Downloads] page. Examples of how to run various machine learning algorithms with Mahout can be found in /mahout/examples/bin.
  • Put the data to be analyzed on HDFS. My data files are in .csv format, while Mahout's k-Means clustering expects its input in a Mahout-specific SequenceFile format, so a Java utility was written to convert the data into a dense SequenceFile format, as needed by the k-Means clustering algorithm (a sketch of such a conversion is given after this list). Note that the k-Means examples in Mahout mostly work by [converting text data into sparse vector format]; our data, however, is dense numeric data rather than text, so those examples did not apply directly.
  • Run k-Means with the -cl option. For example:
    • mahout kmeans -i /data/diff_unc_trans_seq/part-m-00000 -o /data/diff_unc_trans_kmeans_op -x 20 -k 2 -c /data/analysis_clusters -cl
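
The conversion utility itself is not reproduced on this page, but a minimal sketch of the idea is given below. It assumes one observation per CSV row with all fields numeric and no header, takes the local CSV path as its first argument, and writes, for illustration, the same path that is used as the k-Means input in the example above; the class name and argument handling are hypothetical:

    import java.io.BufferedReader;
    import java.io.FileReader;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.SequenceFile;
    import org.apache.hadoop.io.Text;
    import org.apache.mahout.math.DenseVector;
    import org.apache.mahout.math.VectorWritable;

    public class CsvToDenseSequenceFile {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);  // HDFS if the Hadoop configuration is on the classpath
        Path output = new Path("/data/diff_unc_trans_seq/part-m-00000");
        SequenceFile.Writer writer =
            SequenceFile.createWriter(fs, conf, output, Text.class, VectorWritable.class);
        try (BufferedReader reader = new BufferedReader(new FileReader(args[0]))) {
          String line;
          int row = 0;
          while ((line = reader.readLine()) != null) {
            String[] fields = line.split(",");
            double[] values = new double[fields.length];
            for (int i = 0; i < fields.length; i++) {
              values[i] = Double.parseDouble(fields[i]);
            }
            // Each CSV row becomes one dense Mahout vector, keyed by its row index.
            writer.append(new Text(Integer.toString(row++)),
                          new VectorWritable(new DenseVector(values)));
          }
        } finally {
          writer.close();
        }
      }
    }

If the SequenceFile is written to the local file system instead, it can be copied into HDFS with hadoop fs -put before running k-Means.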

Information on the various command line options for k-Means can be found [here].
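
Roughly, the options used in the example above are: -i and -o give the input and output locations on HDFS, -c is the path holding (or, when -k is given, receiving) the initial cluster centers, -k asks Mahout to sample that many points as initial centers, -x caps the number of iterations, and -cl requests that the input points also be assigned to their final clusters, which produces the clusteredPoints directory used in the next steps.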

  • Get the clusteredPoints directory out of HDFS into your local directory (an example command is shown after this list).
  • Run mahout seqdumper to dump the clustered points as text. For example:
    • mahout seqdumper -i /data/diff_unc_trans_kmeans_op/clusteredPoints/part-m-00000 -o ~/Data_orig/diff_unc_trans_hadoop_op/.
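
For the first step, the standard HDFS shell command can be used (paths as in the examples above):

    • hadoop fs -get /data/diff_unc_trans_kmeans_op/clusteredPoints ~/Data_orig/diff_unc_trans_hadoop_op/clusteredPoints

seqdumper simply writes the key/value pairs of a SequenceFile out as plain text, so the dump shows, for each point, the cluster ID it was assigned to along with the point's vector.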

The resulting text output can then be further processed for analysis.

Resources

For more information on Hadoop, see its website, which contains many useful resources, including a tutorial on how to use it.

More details on Mahout and its uses can be found at the Apache Mahout website. The developers keep the website very well maintained, and you can find a great introduction to k-Means clustering, in addition to information on many other methods.

Some further helpful materials on MapReduce include:

   Chapter 2 of Mining of Massive Datasets by Anand Rajaraman and Jeffrey David Ullman.
   MapReduce: A Flexible Data Processing Tool, a seminal paper by Google Fellows Jeffrey Dean and Sanjay Ghemawat, published in 2010.

Related

[CSV Clustering via Mahout (on local machine)]
[Developer Documentation]
