Space: Apache Lucene Mahout (http://cwiki.apache.org/confluence/display/MAHOUT)
Page: k-Means (http://cwiki.apache.org/confluence/display/MAHOUT/k-Means)

Change Comment:
---------------------------------------------------------------------
Quickstart for kMeans

Edited by Sisir Koppaka:
---------------------------------------------------------------------
h1. kMeans

k-Means is a simple but well-known algorithm for grouping objects (clustering). 
All objects need to be represented as a set of numerical features. In addition, 
the user has to specify the number of groups (referred to as _k_) they wish to 
identify.
Each object can be thought of as being represented by a feature vector in an 
_n_-dimensional space, _n_ being the number of features used to describe the 
objects to cluster. The algorithm then randomly chooses _k_ points in that 
vector space; these points serve as the initial centers of the clusters. 
Afterwards, each object is assigned to the center it is closest to. Usually the 
distance measure is chosen by the user and determined by the learning task.
After that, for each cluster a new center is computed by averaging the feature 
vectors of all objects assigned to it. The process of assigning objects and 
recomputing centers is repeated until it converges. The algorithm can be proven 
to converge after a finite number of iterations.
Several tweaks concerning the distance measure, the initial center choice and 
the computation of new average centers have been explored, as well as the 
estimation of the number of clusters _k_. Yet the main principle always remains 
the same.
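The assign-then-recompute loop described above can be sketched in plain Java. This is an illustrative, minimal version (Euclidean distance, first _k_ points as initial centers) and not Mahout's implementation:

```java
// Minimal k-Means sketch (illustrative, not Mahout's code): points are double[]
// feature vectors, distance is Euclidean, and centers are re-averaged until the
// assignments stop changing.
public class SimpleKMeans {
    static double dist(double[] a, double[] b) {
        double s = 0;
        for (int i = 0; i < a.length; i++) s += (a[i] - b[i]) * (a[i] - b[i]);
        return Math.sqrt(s);
    }

    // Returns the cluster index assigned to each point.
    static int[] cluster(double[][] points, int k, int maxIter) {
        double[][] centers = new double[k][];
        // simple deterministic init for the sketch: use the first k points
        for (int i = 0; i < k; i++) centers[i] = points[i].clone();
        int[] assign = new int[points.length];
        for (int iter = 0; iter < maxIter; iter++) {
            boolean changed = false;
            // assignment step: each point goes to its nearest center
            for (int p = 0; p < points.length; p++) {
                int best = 0;
                for (int c = 1; c < k; c++)
                    if (dist(points[p], centers[c]) < dist(points[p], centers[best])) best = c;
                if (assign[p] != best) { assign[p] = best; changed = true; }
            }
            if (!changed) break; // converged: no point switched clusters
            // update step: each center becomes the mean of its assigned points
            for (int c = 0; c < k; c++) {
                double[] sum = new double[points[0].length];
                int n = 0;
                for (int p = 0; p < points.length; p++)
                    if (assign[p] == c) {
                        n++;
                        for (int d = 0; d < sum.length; d++) sum[d] += points[p][d];
                    }
                if (n > 0) {
                    for (int d = 0; d < sum.length; d++) sum[d] /= n;
                    centers[c] = sum;
                }
            }
        }
        return assign;
    }
}
```

Running this on two well-separated groups of points puts each group in its own cluster after a couple of iterations.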



h2. Quickstart

[Here|^quickstart-kmeans.sh] is a short shell script outline that will get you 
started quickly with k-Means. This does the following:

* Get the Reuters dataset
* Run org.apache.lucene.benchmark.utils.ExtractReuters to generate reuters-out 
from reuters-sgm (the downloaded archive)
* Run seqdirectory to convert reuters-out to SequenceFile format
* Run seq2sparse to convert SequenceFiles to sparse vector format
* Finally, run kMeans with 20 clusters.

After following the output that scrolls past, reading the script itself will 
offer you a better understanding.
{code}
mkdir -p work
if [ ! -e work/reuters-out ]; then
  if [ ! -e work/reuters-sgm ]; then
    if [ ! -f work/reuters21578.tar.gz ]; then
      echo "Downloading Reuters-21578"
      curl http://kdd.ics.uci.edu/databases/reuters21578/reuters21578.tar.gz -o work/reuters21578.tar.gz
    fi
    mkdir -p work/reuters-sgm
    echo "Extracting..."
    cd work/reuters-sgm && tar xzf ../reuters21578.tar.gz && cd .. && cd ..
  fi
fi
{code}
h2. Strategy for parallelization

Some ideas can be found in the [Cluster computing and 
MapReduce|http://code.google.com/edu/content/submissions/mapreduce-minilecture/listing.html]
 lecture video series \[by Google(r)\]; k-Means clustering is discussed in 
[lecture #4|http://www.youtube.com/watch?v=1ZDybXl212Q]. Slides can be found 
[here|http://code.google.com/edu/content/submissions/mapreduce-minilecture/lec4-clustering.ppt].

Interestingly, a Hadoop-based implementation using 
[Canopy clustering|http://en.wikipedia.org/wiki/Canopy_clustering_algorithm] 
can be found here: [http://code.google.com/p/canopy-clustering/] (GPL 3 license)

Here is another useful resource on cluster analysis: 
[http://www2.chass.ncsu.edu/garson/PA765/cluster.htm].

h2. Design of implementation

The initial implementation in MAHOUT-5 accepts two input directories: one for 
the data points and one for the initial clusters. The data directory contains 
multiple input files containing dense vectors of Java type Float\[\] encoded as 
"\[v1, v2, v3, ..., vn, \]", while the clusters directory contains a single 
file 'part-00000' which is in SequenceFile format and contains all of the 
initial cluster centers encoded as "Cn - \[c1, c2, ..., cn, \]". Neither input 
directory is modified by the implementation, allowing experimentation with 
initial clustering and convergence values.
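The text encodings above can be illustrated with a small formatting sketch (hypothetical helper names, not Mahout's classes): a dense vector renders as "\[v1, v2, ..., vn, \]" and a cluster center prefixes that with its identifier.

```java
// Sketch of the text encodings described above (illustrative helper, not
// Mahout's code): a dense vector renders as "[v1, v2, ..., vn, ]" and a
// cluster center as "Cn - [c1, c2, ..., cn, ]".
public class Encodings {
    static String formatVector(double[] v) {
        StringBuilder sb = new StringBuilder("[");
        for (double x : v) sb.append(x).append(", ");
        return sb.append("]").toString();
    }

    static String formatCluster(int id, double[] center) {
        return "C" + id + " - " + formatVector(center);
    }
}
```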

The program iterates over the input points and clusters, outputting a new 
directory "clusters-N" containing a cluster center file "part-00000" for each 
iteration N. This process uses a mapper/combiner/reducer/driver as follows:
* KMeansMapper - reads the input clusters during its configure() method, then 
assigns and outputs each input point to its nearest cluster as defined by the 
user-supplied distance measure. Output key is: encoded cluster. Output value 
is: input point.
* KMeansCombiner - receives all key:value pairs from the mapper and produces 
partial sums of the input vectors for each cluster. Output key is: encoded 
cluster. Output value is "<number of points in partial sum>, <partial sum 
vector summing all such points>".
* KMeansReducer - a single reducer receives all key:value pairs from all 
combiners and sums them to produce a new centroid for the cluster, which is 
output. Output key is: encoded cluster identifier (e.g. "C14"). Output value 
is: formatted cluster (e.g. "C14 - \[c1, c2, ..., cn, \]"). The reducer encodes 
unconverged clusters with a 'Cn' clusterId and converged clusters with a 'Vn' 
clusterId.
* KMeansDriver - iterates over the points and clusters until all output 
clusters have converged (Vn clusterIds) or until a maximum number of iterations 
has been reached. During iterations, a new clusters directory "clusters-N" is 
produced with the output clusters from the previous iteration used for input to 
the next. A final pass over the data using the KMeansMapper clusters all points 
to an output directory "points" and has no combiner or reducer steps.
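The combiner/reducer arithmetic described above can be sketched independently of Hadoop: each combiner emits a (count, partial-sum) pair per cluster, and the reducer merges the partials and divides to obtain the new centroid. The names below are hypothetical, not Mahout's actual classes:

```java
// Sketch of the partial-sum bookkeeping described above (hypothetical names,
// not the actual Mahout classes). A combiner folds points into (count, sum);
// the reducer merges partial results and divides to produce the new centroid.
public class PartialSum {
    int count;
    double[] sum;

    PartialSum(int dim) { sum = new double[dim]; }

    // combiner side: fold one input point into the partial sum
    void addPoint(double[] point) {
        count++;
        for (int i = 0; i < point.length; i++) sum[i] += point[i];
    }

    // reducer side: merge another combiner's partial result
    void merge(PartialSum other) {
        count += other.count;
        for (int i = 0; i < sum.length; i++) sum[i] += other.sum[i];
    }

    // reducer side: the new cluster center is the overall mean
    double[] centroid() {
        double[] c = new double[sum.length];
        for (int i = 0; i < sum.length; i++) c[i] = sum[i] / count;
        return c;
    }
}
```

Because summation is associative, the reducer obtains the same centroid whether points arrive directly or pre-summed by combiners, which is what makes the combiner step a safe optimization.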

With the latest diff (MAHOUT-5c and newer), Canopy clustering can be used to 
compute the initial clusters for KMeans:
{code}
// now run the CanopyDriver job
CanopyDriver.runJob("testdata/points", "testdata/canopies",
    ManhattanDistanceMeasure.class.getName(), (float) 3.1, (float) 2.1,
    "dist/apache-mahout-0.1-dev.jar");

// now run the KMeansDriver job
KMeansDriver.runJob("testdata/points", "testdata/canopies", "output",
    EuclideanDistanceMeasure.class.getName(), "0.001", "10");
{code}

In the above example, the input data points are stored in 'testdata/points' and 
the CanopyDriver is configured to output to the 'testdata/canopies' directory. 
Once the driver executes, that directory will contain the canopy definition 
file. Upon running the KMeansDriver, the output directory will have two or more 
new directories: 'clusters-N' containing the clusters for each iteration, and 
'points' containing the clustered data points.
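The "0.001" argument passed to KMeansDriver above is the convergence delta: a cluster is marked converged (a 'Vn' clusterId) when its recomputed center moves less than that distance from the previous center. A sketch of that check, under the assumption of a Euclidean distance measure (hypothetical helper, not Mahout's code):

```java
// Sketch of the convergence test implied by the "0.001" delta above
// (hypothetical helper, not Mahout's actual code): a cluster has converged
// when its recomputed center moved less than delta from the previous center.
public class Convergence {
    static boolean converged(double[] oldCenter, double[] newCenter, double delta) {
        double s = 0;
        for (int i = 0; i < oldCenter.length; i++) {
            double d = oldCenter[i] - newCenter[i];
            s += d * d;
        }
        return Math.sqrt(s) < delta; // center movement below the threshold
    }
}
```

Once every cluster passes this test (or the maximum iteration count, "10" above, is reached), the driver stops iterating and performs the final clustering pass over the points.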

This diagram shows an example dataflow of the k-Means implementation provided 
by Mahout:
{gliffy:name=Example implementation of k-Means provided with 
Mahout|space=MAHOUT|page=k-Means|pageid=75159|align=left|size=L}

This diagram doesn't consider CanopyClustering:
{gliffy:name=k-Means Example|space=MAHOUT|page=k-Means|align=left|size=L}
