http://git-wip-us.apache.org/repos/asf/mahout/blob/d9686c8b/users/clustering/spectral-clustering.html
----------------------------------------------------------------------
diff --git a/users/clustering/spectral-clustering.html 
b/users/clustering/spectral-clustering.html
index e8b6e9a..093cb3c 100644
--- a/users/clustering/spectral-clustering.html
+++ b/users/clustering/spectral-clustering.html
@@ -280,16 +280,16 @@
 
 <ol>
   <li>
-    <p>Computing a similarity (or <em>affinity</em>) matrix <code 
class="highlighter-rouge">\(\mathbf{A}\)</code> from the data. This involves 
determining a pairwise distance function <code 
class="highlighter-rouge">\(f\)</code> that takes a pair of data points and 
returns a scalar.</p>
+    <p>Computing a similarity (or <em>affinity</em>) matrix 
<code>\(\mathbf{A}\)</code> from the data. This involves determining a pairwise 
distance function <code>\(f\)</code> that takes a pair of data points and 
returns a scalar.</p>
   </li>
   <li>
-    <p>Computing a graph Laplacian <code 
class="highlighter-rouge">\(\mathbf{L}\)</code> from the affinity matrix. There 
are several types of graph Laplacians; which is used will often depends on the 
situation.</p>
+    <p>Computing a graph Laplacian <code>\(\mathbf{L}\)</code> from the affinity matrix. There are several types of graph Laplacians; which one is used often depends on the situation.</p>
   </li>
   <li>
-    <p>Computing the eigenvectors and eigenvalues of <code 
class="highlighter-rouge">\(\mathbf{L}\)</code>. The degree of this 
decomposition is often modulated by <code 
class="highlighter-rouge">\(k\)</code>, or the number of clusters. Put another 
way, <code class="highlighter-rouge">\(k\)</code> eigenvectors and eigenvalues 
are computed.</p>
+    <p>Computing the eigenvectors and eigenvalues of 
<code>\(\mathbf{L}\)</code>. The degree of this decomposition is often 
modulated by <code>\(k\)</code>, or the number of clusters. Put another way, 
<code>\(k\)</code> eigenvectors and eigenvalues are computed.</p>
   </li>
   <li>
-    <p>The <code class="highlighter-rouge">\(k\)</code> eigenvectors are used 
as “proxy” data for the original dataset, and fed into k-means clustering. 
The resulting cluster assignments are transparently passed back to the original 
data.</p>
+    <p>The <code>\(k\)</code> eigenvectors are used as “proxy” data for the original dataset, and fed into k-means clustering. The resulting cluster assignments are transparently passed back to the original data.</p>
   </li>
 </ol>
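<p>As a concrete (non-Mahout) illustration of these four steps, the short NumPy/SciPy sketch below computes an RBF affinity matrix, forms the unnormalized Laplacian <code>\(\mathbf{L} = \mathbf{D} - \mathbf{A}\)</code>, takes the <code>\(k\)</code> eigenvectors with smallest eigenvalues, and clusters them with k-means. The bandwidth <code>sigma</code>, the sample points, and the choice of the unnormalized Laplacian are assumptions made purely for the example; Mahout itself uses SSVD on a normalized Laplacian, as described below.</p>

<pre><code># Illustrative sketch only -- not Mahout code. Assumes an RBF affinity with
# bandwidth sigma and the unnormalized graph Laplacian L = D - A.
import numpy as np
from scipy.cluster.vq import kmeans2

def spectral_clusters(points, k, sigma=1.0):
    # 1. Affinity matrix A from the pairwise RBF (Gaussian) kernel.
    diffs = points[:, None, :] - points[None, :, :]
    A = np.exp(-np.sum(diffs ** 2, axis=-1) / (2.0 * sigma ** 2))
    np.fill_diagonal(A, 0.0)
    # 2. Graph Laplacian L = D - A.
    D = np.diag(A.sum(axis=1))
    L = D - A
    # 3. The k eigenvectors belonging to the smallest eigenvalues of L.
    eigvals, eigvecs = np.linalg.eigh(L)
    U = eigvecs[:, :k]
    # 4. k-means on the "proxy" rows of U; labels map back to the input rows.
    _, labels = kmeans2(U, k, minit="++")
    return labels

points = np.array([[0.0, 0.0], [0.1, 0.2], [5.0, 5.0], [5.1, 4.9]])
print(spectral_clusters(points, k=2))
</code></pre>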
 
@@ -303,13 +303,13 @@
 
 <h2 id="input">Input</h2>
 
-<p>The input format for the algorithm currently takes the form of a 
Hadoop-backed affinity matrix in the form of text files. Each line of the text 
file specifies a single element of the affinity matrix: the row index <code 
class="highlighter-rouge">\(i\)</code>, the column index <code 
class="highlighter-rouge">\(j\)</code>, and the value:</p>
+<p>The input format for the algorithm is currently a Hadoop-backed affinity matrix stored as text files. Each line of a text file specifies a single element of the affinity matrix: the row index <code>\(i\)</code>, the column index <code>\(j\)</code>, and the value:</p>
 
-<p><code class="highlighter-rouge">i, j, value</code></p>
+<p><code>i, j, value</code></p>
 
-<p>The affinity matrix is symmetric, and any unspecified <code 
class="highlighter-rouge">\(i, j\)</code> pairs are assumed to be 0 for 
sparsity. The row and column indices are 0-indexed. Thus, only the non-zero 
entries of either the upper or lower triangular need be specified.</p>
+<p>The affinity matrix is symmetric, and any unspecified <code>\(i, j\)</code> pairs are assumed to be 0 for sparsity. The row and column indices are 0-indexed. Thus, only the non-zero entries of either the upper or lower triangle need be specified.</p>
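<p>As a small illustration of this format, the sketch below (a hypothetical helper, not part of Mahout) writes the non-zero upper-triangular entries of a dense symmetric affinity matrix as <code>i, j, value</code> lines; the 3-by-3 matrix used here is the same one as in the example further down:</p>

<pre><code># Hypothetical helper: dump the non-zero upper triangle of a symmetric
# affinity matrix in the "i, j, value" text format described above.
import numpy as np

def write_affinity(A, path):
    n = A.shape[0]
    with open(path, "w") as f:
        for i in range(n):
            for j in range(i + 1, n):       # upper triangle only; zero diagonal omitted
                if A[i, j] != 0.0:
                    f.write("{}, {}, {}\n".format(i, j, A[i, j]))

A = np.array([[0.0, 0.8, 0.5],
              [0.8, 0.0, 0.9],
              [0.5, 0.9, 0.0]])
write_affinity(A, "affinity.txt")   # produces: 0, 1, 0.8 / 0, 2, 0.5 / 1, 2, 0.9
</code></pre>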
 
-<p>The matrix elements specified in the text files are collected into a Mahout 
<code class="highlighter-rouge">DistributedRowMatrix</code>.</p>
+<p>The matrix elements specified in the text files are collected into a Mahout 
<code>DistributedRowMatrix</code>.</p>
 
 <p><strong>(<a 
href="https://issues.apache.org/jira/browse/MAHOUT-1539";>MAHOUT-1539</a> will 
allow for the creation of the affinity matrix to occur as part of the core 
spectral clustering algorithm, as opposed to the current requirement that the 
user create this matrix themselves and provide it, rather than the original 
data, to the algorithm)</strong></p>
 
@@ -319,21 +319,21 @@
 
 <p>Spectral clustering can be invoked with the following arguments.</p>
 
-<div class="highlighter-rouge"><div class="highlight"><pre 
class="highlight"><code>bin/mahout spectralkmeans \
+<pre><code>bin/mahout spectralkmeans \
     -i &lt;affinity matrix directory&gt; \
     -o &lt;output working directory&gt; \
     -d &lt;number of data points&gt; \
     -k &lt;number of clusters AND number of top eigenvectors to use&gt; \
     -x &lt;maximum number of k-means iterations&gt;
-</code></pre></div></div>
+</code></pre>
 
-<p>The affinity matrix can be contained in a single text file (using the 
aforementioned one-line-per-entry format) or span many text files <a 
href="https://issues.apache.org/jira/browse/MAHOUT-978";>per (MAHOUT-978</a>, do 
not prefix text files with a leading underscore ‘_’ or period ‘.’). The 
<code class="highlighter-rouge">-d</code> flag is required for the algorithm to 
know the dimensions of the affinity matrix. <code 
class="highlighter-rouge">-k</code> is the number of top eigenvectors from the 
normalized graph Laplacian in the SSVD step, and also the number of clusters 
given to k-means after the SSVD step.</p>
+<p>The affinity matrix can be contained in a single text file (using the aforementioned one-line-per-entry format) or span many text files (per <a href="https://issues.apache.org/jira/browse/MAHOUT-978">MAHOUT-978</a>, do not prefix text files with a leading underscore ‘_’ or period ‘.’). The <code>-d</code> flag is required for the algorithm to know the dimensions of the affinity matrix. <code>-k</code> is the number of top eigenvectors from the normalized graph Laplacian in the SSVD step, and also the number of clusters given to k-means after the SSVD step.</p>
 
 <h2 id="example">Example</h2>
 
-<p>To provide a simple example, take the following affinity matrix, contained 
in a text file called <code class="highlighter-rouge">affinity.txt</code>:</p>
+<p>To provide a simple example, take the following affinity matrix, contained 
in a text file called <code>affinity.txt</code>:</p>
 
-<div class="highlighter-rouge"><div class="highlight"><pre 
class="highlight"><code>0, 0, 0
+<pre><code>0, 0, 0
 0, 1, 0.8
 0, 2, 0.5
 1, 0, 0.8
@@ -342,21 +342,21 @@
 2, 0, 0.5
 2, 1, 0.9
 2, 2, 0
-</code></pre></div></div>
+</code></pre>
 
-<p>With this 3-by-3 matrix, <code class="highlighter-rouge">-d</code> would be 
<code class="highlighter-rouge">3</code>. Furthermore, since all affinity 
matrices are assumed to be symmetric, the entries specifying both <code 
class="highlighter-rouge">1, 2, 0.9</code> and <code 
class="highlighter-rouge">2, 1, 0.9</code> are redundant; only one of these is 
needed. Additionally, any entries that are 0, such as those along the diagonal, 
also need not be specified at all. They are provided here for completeness.</p>
+<p>With this 3-by-3 matrix, <code>-d</code> would be <code>3</code>. 
Furthermore, since all affinity matrices are assumed to be symmetric, the 
entries specifying both <code>1, 2, 0.9</code> and <code>2, 1, 0.9</code> are 
redundant; only one of these is needed. Additionally, any entries that are 0, 
such as those along the diagonal, also need not be specified at all. They are 
provided here for completeness.</p>
 
 <p>In general, larger values indicate a stronger “connectedness”, whereas smaller values indicate a weaker connectedness. This will vary somewhat depending on the distance function used, though a common one is the <a href="http://en.wikipedia.org/wiki/RBF_kernel">RBF kernel</a> (used in the above example), which returns values in the range [0, 1], where 0 indicates completely disconnected (or completely dissimilar) and 1 is fully connected (or identical).</p>
 
 <p>The call signature with this matrix could be as follows:</p>
 
-<div class="highlighter-rouge"><div class="highlight"><pre 
class="highlight"><code>bin/mahout spectralkmeans \
+<pre><code>bin/mahout spectralkmeans \
     -i s3://mahout-example/input/ \
     -o s3://mahout-example/output/ \
     -d 3 \
     -k 2 \
     -x 10
-</code></pre></div></div>
+</code></pre>
 
 <p>There are many other optional arguments, in particular for tweaking the SSVD process (block size, number of power iterations, etc.) and the k-means clustering step (distance measure, convergence delta, etc.).</p>
 

http://git-wip-us.apache.org/repos/asf/mahout/blob/d9686c8b/users/clustering/streaming-k-means.html
----------------------------------------------------------------------
diff --git a/users/clustering/streaming-k-means.html 
b/users/clustering/streaming-k-means.html
index 1056c5d..bd51e2f 100644
--- a/users/clustering/streaming-k-means.html
+++ b/users/clustering/streaming-k-means.html
@@ -397,7 +397,7 @@ The algorithm can be instructed to take multiple 
independent runs (using the <em
 
 <p>##Usage of <em>StreamingKMeans</em></p>
 
-<div class="highlighter-rouge"><div class="highlight"><pre 
class="highlight"><code> bin/mahout streamingkmeans  
+<pre><code> bin/mahout streamingkmeans  
    -i &lt;input&gt;  
    -o &lt;output&gt; 
    -ow  
@@ -420,33 +420,33 @@ The algorithm can be instructed to take multiple 
independent runs (using the <em
    --tempDir &lt;tempDir&gt;   
    --startPhase &lt;startPhase&gt;   
    --endPhase &lt;endPhase&gt;                    
-</code></pre></div></div>
+</code></pre>
 
 <p>###Details on Job-Specific Options:</p>
 
 <ul>
-  <li><code class="highlighter-rouge">--input (-i) &lt;input&gt;</code>: Path 
to job input directory.</li>
-  <li><code class="highlighter-rouge">--output (-o) &lt;output&gt;</code>: The 
directory pathname for output.</li>
-  <li><code class="highlighter-rouge">--overwrite (-ow)</code>: If present, 
overwrite the output directory before running job.</li>
-  <li><code class="highlighter-rouge">--numClusters (-k) &lt;k&gt;</code>: The 
k in k-Means. Approximately this many clusters will be generated.</li>
-  <li><code class="highlighter-rouge">--estimatedNumMapClusters (-km) 
&lt;estimatedNumMapClusters&gt;</code>: The estimated number of clusters to use 
for the Map phase of the job when running StreamingKMeans. This should be 
around k * log(n), where k is the final number of clusters and n is the total 
number of data points to cluster.</li>
-  <li><code class="highlighter-rouge">--estimatedDistanceCutoff (-e) 
&lt;estimatedDistanceCutoff&gt;</code>: The initial estimated distance cutoff 
between two points for forming new clusters. If no value is given, it’s 
estimated from the data set</li>
-  <li><code class="highlighter-rouge">--maxNumIterations (-mi) 
&lt;maxNumIterations&gt;</code>: The maximum number of iterations to run for 
the BallKMeans algorithm used by the reducer. If no value is given, defaults to 
10.</li>
-  <li><code class="highlighter-rouge">--trimFraction (-tf) 
&lt;trimFraction&gt;</code>: The ‘ball’ aspect of ball k-means means that 
only the closest points to the centroid will actually be used for updating. The 
fraction of the points to be used is those points whose distance to the center 
is within trimFraction * distance to the closest other center. If no value is 
given, defaults to 0.9.</li>
-  <li><code class="highlighter-rouge">--randomInit</code> (<code 
class="highlighter-rouge">-ri</code>) Whether to use k-means++ initialization 
or random initialization of the seed centroids. Essentially, k-means++ provides 
better clusters, but takes longer, whereas random initialization takes less 
time, but produces worse clusters, and tends to fail more often and needs 
multiple runs to compare to k-means++. If set, uses the random 
initialization.</li>
-  <li><code class="highlighter-rouge">--ignoreWeights (-iw)</code>: Whether to 
correct the weights of the centroids after the clustering is done. The weights 
end up being wrong because of the trimFraction and possible train/test splits. 
In some cases, especially in a pipeline, having an accurate count of the 
weights is useful. If set, ignores the final weights.</li>
-  <li><code class="highlighter-rouge">--testProbability (-testp) 
&lt;testProbability&gt;</code>: A double value  between 0 and 1  that 
represents  the percentage of  points to be used  for ‘testing’  different  
clustering runs in  the final  BallKMeans step.  If no value is  given, 
defaults to  0.1</li>
-  <li><code class="highlighter-rouge">--numBallKMeansRuns (-nbkm) 
&lt;numBallKMeansRuns&gt;</code>: Number of  BallKMeans runs to  use at the end 
to  try to cluster the  points. If no  value is given,  defaults to 4</li>
-  <li><code class="highlighter-rouge">--distanceMeasure (-dm) 
&lt;distanceMeasure&gt;</code>: The classname of  the  DistanceMeasure.  
Default is  SquaredEuclidean.</li>
-  <li><code class="highlighter-rouge">--searcherClass (-sc) 
&lt;searcherClass&gt;</code>: The type of  searcher to be  used when  
performing nearest  neighbor searches.  Defaults to  ProjectionSearch.</li>
-  <li><code class="highlighter-rouge">--numProjections (-np) 
&lt;numProjections&gt;</code>: The number of  projections  considered in  
estimating the  distances between  vectors. Only used  when the distance  
measure requested is either ProjectionSearch or FastProjectionSearch. If no 
value is given, defaults to 3.</li>
-  <li><code class="highlighter-rouge">--searchSize (-s) 
&lt;searchSize&gt;</code>: In more efficient  searches (non  BruteSearch), not 
all distances are calculated for determining the nearest neighbors. The number 
of elements whose distances from the query vector is actually computer is 
proportional to searchSize. If no value is given, defaults to 1.</li>
-  <li><code class="highlighter-rouge">--reduceStreamingKMeans (-rskm)</code>: 
There might be too many intermediate clusters from the mapper to fit into 
memory, so the reducer can run  another pass of StreamingKMeans to collapse 
them down to a fewer clusters.</li>
-  <li><code class="highlighter-rouge">--method (-xm)</code> method The 
execution  method to use:  sequential or  mapreduce. Default  is mapreduce.</li>
-  <li><code class="highlighter-rouge">-- help (-h)</code>: Print out help</li>
-  <li><code class="highlighter-rouge">--tempDir &lt;tempDir&gt;</code>: 
Intermediate output directory.</li>
-  <li><code class="highlighter-rouge">--startPhase &lt;startPhase&gt;</code> 
First phase to run.</li>
-  <li><code class="highlighter-rouge">--endPhase &lt;endPhase&gt;</code> Last 
phase to run.</li>
+  <li><code>--input (-i) &lt;input&gt;</code>: Path to job input 
directory.</li>
+  <li><code>--output (-o) &lt;output&gt;</code>: The directory pathname for 
output.</li>
+  <li><code>--overwrite (-ow)</code>: If present, overwrite the output 
directory before running job.</li>
+  <li><code>--numClusters (-k) &lt;k&gt;</code>: The k in k-Means. 
Approximately this many clusters will be generated.</li>
+  <li><code>--estimatedNumMapClusters (-km) &lt;estimatedNumMapClusters&gt;</code>: The estimated number of clusters to use for the Map phase of the job when running StreamingKMeans. This should be around k * log(n), where k is the final number of clusters and n is the total number of data points to cluster (a short worked example of this guidance follows the list).</li>
+  <li><code>--estimatedDistanceCutoff (-e) &lt;estimatedDistanceCutoff&gt;</code>: The initial estimated distance cutoff between two points for forming new clusters. If no value is given, it’s estimated from the data set.</li>
+  <li><code>--maxNumIterations (-mi) &lt;maxNumIterations&gt;</code>: The 
maximum number of iterations to run for the BallKMeans algorithm used by the 
reducer. If no value is given, defaults to 10.</li>
+  <li><code>--trimFraction (-tf) &lt;trimFraction&gt;</code>: The ‘ball’ aspect of ball k-means means that only the points closest to the centroid are actually used for updating. The points used are those whose distance to the center is within trimFraction * the distance to the closest other center. If no value is given, defaults to 0.9.</li>
+  <li><code>--randomInit (-ri)</code>: Whether to use k-means++ initialization or random initialization of the seed centroids. Essentially, k-means++ provides better clusters but takes longer, whereas random initialization takes less time but produces worse clusters, tends to fail more often, and needs multiple runs to be comparable to k-means++. If set, uses random initialization.</li>
+  <li><code>--ignoreWeights (-iw)</code>: Whether to correct the weights of 
the centroids after the clustering is done. The weights end up being wrong 
because of the trimFraction and possible train/test splits. In some cases, 
especially in a pipeline, having an accurate count of the weights is useful. If 
set, ignores the final weights.</li>
+  <li><code>--testProbability (-testp) &lt;testProbability&gt;</code>: A double value between 0 and 1 that represents the fraction of points to be used for ‘testing’ different clustering runs in the final BallKMeans step. If no value is given, defaults to 0.1.</li>
+  <li><code>--numBallKMeansRuns (-nbkm) &lt;numBallKMeansRuns&gt;</code>: Number of BallKMeans runs to use at the end to try to cluster the points. If no value is given, defaults to 4.</li>
+  <li><code>--distanceMeasure (-dm) &lt;distanceMeasure&gt;</code>: The classname of the DistanceMeasure. Default is SquaredEuclidean.</li>
+  <li><code>--searcherClass (-sc) &lt;searcherClass&gt;</code>: The type of searcher to be used when performing nearest neighbor searches. Defaults to ProjectionSearch.</li>
+  <li><code>--numProjections (-np) &lt;numProjections&gt;</code>: The number of projections considered in estimating the distances between vectors. Only used when the searcher requested is either ProjectionSearch or FastProjectionSearch. If no value is given, defaults to 3.</li>
+  <li><code>--searchSize (-s) &lt;searchSize&gt;</code>: In more efficient searches (non-BruteSearch), not all distances are calculated for determining the nearest neighbors. The number of elements whose distances from the query vector are actually computed is proportional to searchSize. If no value is given, defaults to 1.</li>
+  <li><code>--reduceStreamingKMeans (-rskm)</code>: There might be too many intermediate clusters from the mapper to fit into memory, so the reducer can run another pass of StreamingKMeans to collapse them down to fewer clusters.</li>
+  <li><code>--method (-xm) &lt;method&gt;</code>: The execution method to use: sequential or mapreduce. Default is mapreduce.</li>
+  <li><code>--help (-h)</code>: Print out help.</li>
+  <li><code>--tempDir &lt;tempDir&gt;</code>: Intermediate output 
directory.</li>
+  <li><code>--startPhase &lt;startPhase&gt;</code>: First phase to run.</li>
+  <li><code>--endPhase &lt;endPhase&gt;</code>: Last phase to run.</li>
 </ul>
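<p>As a short worked example of the k * log(n) guidance for <code>--estimatedNumMapClusters</code> above (the figures here are hypothetical, not taken from any particular run):</p>

<pre><code># Hypothetical sizing example for --estimatedNumMapClusters (-km):
# with k final clusters and n input points, aim for roughly k * log(n).
import math

k = 200          # desired number of final clusters
n = 5_000_000    # total number of data points to cluster
print(int(k * math.log(n)))   # natural log; prints 3084
</code></pre>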
 
 <p>##References</p>

http://git-wip-us.apache.org/repos/asf/mahout/blob/d9686c8b/users/clustering/viewing-results.html
----------------------------------------------------------------------
diff --git a/users/clustering/viewing-results.html 
b/users/clustering/viewing-results.html
index a462caf..c36024a 100644
--- a/users/clustering/viewing-results.html
+++ b/users/clustering/viewing-results.html
@@ -294,16 +294,16 @@ demonstrate the various ways one might inspect the 
outcome of various jobs.
 
 <p>Run the following to print out all options:</p>
 
-<div class="highlighter-rouge"><div class="highlight"><pre 
class="highlight"><code>java  -cp "*" 
org.apache.mahout.utils.clustering.ClusterDumper --help
-</code></pre></div></div>
+<pre><code>java  -cp "*" org.apache.mahout.utils.clustering.ClusterDumper 
--help
+</code></pre>
 
 <p><a name="ViewingResults-Example"></a></p>
 <h3 id="example">Example</h3>
 
-<div class="highlighter-rouge"><div class="highlight"><pre 
class="highlight"><code>java  -cp "*" 
org.apache.mahout.utils.clustering.ClusterDumper --seqFileDir 
./solr-clust-n2/out/clusters-2
+<pre><code>java  -cp "*" org.apache.mahout.utils.clustering.ClusterDumper 
--seqFileDir ./solr-clust-n2/out/clusters-2
       --dictionary ./solr-clust-n2/dictionary.txt
       --substring 100 --pointsDir ./solr-clust-n2/out/points/
-</code></pre></div></div>
+</code></pre>
 
 <p><a name="ViewingResults-ClusterLabels(MAHOUT-163)"></a></p>
 <h2 id="cluster-labels-mahout-163">Cluster Labels (MAHOUT-163)</h2>

http://git-wip-us.apache.org/repos/asf/mahout/blob/d9686c8b/users/clustering/visualizing-sample-clusters.html
----------------------------------------------------------------------
diff --git a/users/clustering/visualizing-sample-clusters.html 
b/users/clustering/visualizing-sample-clusters.html
index eefae35..a10ee0b 100644
--- a/users/clustering/visualizing-sample-clusters.html
+++ b/users/clustering/visualizing-sample-clusters.html
@@ -306,9 +306,9 @@ programs.</li>
 
 <p>If you are using Eclipse, just right-click on each of the classes mentioned above and choose “Run As -&gt; Java Application”. To run these directly from the command line:</p>
 
-<div class="highlighter-rouge"><div class="highlight"><pre 
class="highlight"><code>cd $MAHOUT_HOME/examples
+<pre><code>cd $MAHOUT_HOME/examples
 mvn -q exec:java 
-Dexec.mainClass=org.apache.mahout.clustering.display.DisplayClustering
-</code></pre></div></div>
+</code></pre>
 
 <p>You can substitute other names above for <em>DisplayClustering</em>.</p>
 

http://git-wip-us.apache.org/repos/asf/mahout/blob/d9686c8b/users/dim-reduction/ssvd.html
----------------------------------------------------------------------
diff --git a/users/dim-reduction/ssvd.html b/users/dim-reduction/ssvd.html
index b06f0dc..ebe184d 100644
--- a/users/dim-reduction/ssvd.html
+++ b/users/dim-reduction/ssvd.html
@@ -325,7 +325,7 @@ approximations of matrices” contains comprehensive definition of parallelizati
 
 <p><strong>tests.R</strong></p>
 
-<div class="highlighter-rouge"><div class="highlight"><pre 
class="highlight"><code>n&lt;-1000
+<pre><code>n&lt;-1000
 m&lt;-2000
 k&lt;-10
  
@@ -339,7 +339,7 @@ vsim&lt;- qr.Q(qr( matrix(rnorm(n*k,mean=5), 
nrow=n,ncol=k)))
  
  
 x&lt;- usim %*% svalsim %*% t(vsim)
-</code></pre></div></div>
+</code></pre>
 
 <p>and try to compare ssvd.svd(x) and stock svd(x) performance for the same rank k, and notice the difference in the running time. Also play with power iterations (qIter) and compare the accuracies of standard svd and SSVD.</p>
 
@@ -347,51 +347,51 @@ x&lt;- usim %*% svalsim %*% t(vsim)
 
 <h4 id="modified-ssvd-algorithm">Modified SSVD Algorithm.</h4>
 
-<p>Given an <code class="highlighter-rouge">\(m\times n\)</code>
-matrix <code class="highlighter-rouge">\(\mathbf{A}\)</code>, a target rank 
<code class="highlighter-rouge">\(k\in\mathbb{N}_{1}\)</code>
-, an oversampling parameter <code 
class="highlighter-rouge">\(p\in\mathbb{N}_{1}\)</code>, 
-and the number of additional power iterations <code 
class="highlighter-rouge">\(q\in\mathbb{N}_{0}\)</code>, 
-this procedure computes an <code 
class="highlighter-rouge">\(m\times\left(k+p\right)\)</code>
-SVD <code class="highlighter-rouge">\(\mathbf{A\approx 
U}\boldsymbol{\Sigma}\mathbf{V}^{\top}\)</code>:</p>
+<p>Given an <code>\(m\times n\)</code>
+matrix <code>\(\mathbf{A}\)</code>, a target rank 
<code>\(k\in\mathbb{N}_{1}\)</code>
+, an oversampling parameter <code>\(p\in\mathbb{N}_{1}\)</code>, 
+and the number of additional power iterations 
<code>\(q\in\mathbb{N}_{0}\)</code>, 
+this procedure computes an <code>\(m\times\left(k+p\right)\)</code>
+SVD <code>\(\mathbf{A\approx 
U}\boldsymbol{\Sigma}\mathbf{V}^{\top}\)</code>:</p>
 
 <ol>
   <li>
-    <p>Create seed for random <code 
class="highlighter-rouge">\(n\times\left(k+p\right)\)</code>
-  matrix <code class="highlighter-rouge">\(\boldsymbol{\Omega}\)</code>. The 
seed defines matrix <code class="highlighter-rouge">\(\mathbf{\Omega}\)</code>
+    <p>Create seed for random <code>\(n\times\left(k+p\right)\)</code>
+  matrix <code>\(\boldsymbol{\Omega}\)</code>. The seed defines matrix 
<code>\(\mathbf{\Omega}\)</code>
   using Gaussian unit vectors per one of the suggestions in <a href="http://arxiv.org/abs/0909.4061">Halko, Martinsson, Tropp</a>.</p>
   </li>
   <li>
-    <p><code 
class="highlighter-rouge">\(\mathbf{Y=A\boldsymbol{\Omega}},\,\mathbf{Y}\in\mathbb{R}^{m\times\left(k+p\right)}\)</code></p>
+    
<p><code>\(\mathbf{Y=A\boldsymbol{\Omega}},\,\mathbf{Y}\in\mathbb{R}^{m\times\left(k+p\right)}\)</code></p>
   </li>
   <li>
-    <p>Column-orthonormalize <code 
class="highlighter-rouge">\(\mathbf{Y}\rightarrow\mathbf{Q}\)</code>
-  by computing thin decomposition <code 
class="highlighter-rouge">\(\mathbf{Y}=\mathbf{Q}\mathbf{R}\)</code>.
-  Also, <code 
class="highlighter-rouge">\(\mathbf{Q}\in\mathbb{R}^{m\times\left(k+p\right)},\,\mathbf{R}\in\mathbb{R}^{\left(k+p\right)\times\left(k+p\right)}\)</code>.
-  I denote this as <code 
class="highlighter-rouge">\(\mathbf{Q}=\mbox{qr}\left(\mathbf{Y}\right).\mathbf{Q}\)</code></p>
+    <p>Column-orthonormalize <code>\(\mathbf{Y}\rightarrow\mathbf{Q}\)</code>
+  by computing thin decomposition 
<code>\(\mathbf{Y}=\mathbf{Q}\mathbf{R}\)</code>.
+  Also, 
<code>\(\mathbf{Q}\in\mathbb{R}^{m\times\left(k+p\right)},\,\mathbf{R}\in\mathbb{R}^{\left(k+p\right)\times\left(k+p\right)}\)</code>.
+  I denote this as 
<code>\(\mathbf{Q}=\mbox{qr}\left(\mathbf{Y}\right).\mathbf{Q}\)</code></p>
   </li>
   <li>
-    <p><code 
class="highlighter-rouge">\(\mathbf{B}_{0}=\mathbf{Q}^{\top}\mathbf{A}:\,\,\mathbf{B}\in\mathbb{R}^{\left(k+p\right)\times
 n}\)</code>.</p>
+    
<p><code>\(\mathbf{B}_{0}=\mathbf{Q}^{\top}\mathbf{A}:\,\,\mathbf{B}\in\mathbb{R}^{\left(k+p\right)\times
 n}\)</code>.</p>
   </li>
   <li>
-    <p>If <code class="highlighter-rouge">\(q&gt;0\)</code>
-  repeat: for <code class="highlighter-rouge">\(i=1..q\)</code>: 
-  <code 
class="highlighter-rouge">\(\mathbf{B}_{i}^{\top}=\mathbf{A}^{\top}\mbox{qr}\left(\mathbf{A}\mathbf{B}_{i-1}^{\top}\right).\mathbf{Q}\)</code>
+    <p>If <code>\(q&gt;0\)</code>
+  repeat: for <code>\(i=1..q\)</code>: 
+  
<code>\(\mathbf{B}_{i}^{\top}=\mathbf{A}^{\top}\mbox{qr}\left(\mathbf{A}\mathbf{B}_{i-1}^{\top}\right).\mathbf{Q}\)</code>
   (power iterations step).</p>
   </li>
   <li>
-    <p>Compute Eigensolution of a small Hermitian <code 
class="highlighter-rouge">\(\mathbf{B}_{q}\mathbf{B}_{q}^{\top}=\mathbf{\hat{U}}\boldsymbol{\Lambda}\mathbf{\hat{U}}^{\top}\)</code>,
-  <code 
class="highlighter-rouge">\(\mathbf{B}_{q}\mathbf{B}_{q}^{\top}\in\mathbb{R}^{\left(k+p\right)\times\left(k+p\right)}\)</code>.</p>
+    <p>Compute Eigensolution of a small Hermitian 
<code>\(\mathbf{B}_{q}\mathbf{B}_{q}^{\top}=\mathbf{\hat{U}}\boldsymbol{\Lambda}\mathbf{\hat{U}}^{\top}\)</code>,
+  
<code>\(\mathbf{B}_{q}\mathbf{B}_{q}^{\top}\in\mathbb{R}^{\left(k+p\right)\times\left(k+p\right)}\)</code>.</p>
   </li>
   <li>
-    <p>Singular values <code 
class="highlighter-rouge">\(\mathbf{\boldsymbol{\Sigma}}=\boldsymbol{\Lambda}^{0.5}\)</code>,
-  or, in other words, <code 
class="highlighter-rouge">\(s_{i}=\sqrt{\sigma_{i}}\)</code>.</p>
+    <p>Singular values <code>\(\boldsymbol{\Sigma}=\boldsymbol{\Lambda}^{0.5}\)</code>,
+  or, in other words, <code>\(\sigma_{i}=\sqrt{\lambda_{i}}\)</code>.</p>
   </li>
   <li>
-    <p>If needed, compute <code 
class="highlighter-rouge">\(\mathbf{U}=\mathbf{Q}\hat{\mathbf{U}}\)</code>.</p>
+    <p>If needed, compute 
<code>\(\mathbf{U}=\mathbf{Q}\hat{\mathbf{U}}\)</code>.</p>
   </li>
   <li>
-    <p>If needed, compute <code 
class="highlighter-rouge">\(\mathbf{V}=\mathbf{B}_{q}^{\top}\hat{\mathbf{U}}\boldsymbol{\Sigma}^{-1}\)</code>.
-Another way is <code 
class="highlighter-rouge">\(\mathbf{V}=\mathbf{A}^{\top}\mathbf{U}\boldsymbol{\Sigma}^{-1}\)</code>.</p>
+    <p>If needed, compute 
<code>\(\mathbf{V}=\mathbf{B}_{q}^{\top}\hat{\mathbf{U}}\boldsymbol{\Sigma}^{-1}\)</code>.
+Another way is 
<code>\(\mathbf{V}=\mathbf{A}^{\top}\mathbf{U}\boldsymbol{\Sigma}^{-1}\)</code>.</p>
   </li>
 </ol>
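<p>A compact NumPy sketch of the steps above may help in following them (an illustration only, not the Mahout implementation; forming <code>\(\mathbf{B}\mathbf{B}^{\top}\)</code> explicitly is reasonable here only because <code>\(k+p\)</code> is small):</p>

<pre><code># Illustrative NumPy sketch of the modified SSVD steps listed above (not Mahout code).
import numpy as np

def ssvd(A, k, p=10, q=1, seed=0):
    n = A.shape[1]
    rng = np.random.default_rng(seed)
    Omega = rng.standard_normal((n, k + p))    # 1. random Gaussian Omega
    Y = A @ Omega                              # 2. Y = A * Omega
    Q, _ = np.linalg.qr(Y)                     # 3. thin QR of Y gives Q
    B = Q.T @ A                                # 4. B_0 = Q^T * A
    for _ in range(q):                         # 5. power iterations
        Q, _ = np.linalg.qr(A @ B.T)
        B = Q.T @ A
    evals, U_hat = np.linalg.eigh(B @ B.T)     # 6. eigensolution of B * B^T
    top = np.argsort(evals)[::-1][:k]          # keep the k largest eigenvalues
    sigma = np.sqrt(evals[top])                # 7. singular values
    U = Q @ U_hat[:, top]                      # 8. U = Q * U_hat
    V = B.T @ U_hat[:, top] / sigma            # 9. V = B^T * U_hat * Sigma^-1
    return U, sigma, V

rng = np.random.default_rng(1)
A = rng.standard_normal((200, 8)) @ rng.standard_normal((8, 50))  # a rank-8 test matrix
U, s, V = ssvd(A, k=8)
print(np.allclose(A, (U * s) @ V.T))   # reconstruction is essentially exact here
</code></pre>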
 

http://git-wip-us.apache.org/repos/asf/mahout/blob/d9686c8b/users/environment/classify-a-doc-from-the-shell.html
----------------------------------------------------------------------
diff --git a/users/environment/classify-a-doc-from-the-shell.html 
b/users/environment/classify-a-doc-from-the-shell.html
index 6427c25..85c9728 100644
--- a/users/environment/classify-a-doc-from-the-shell.html
+++ b/users/environment/classify-a-doc-from-the-shell.html
@@ -274,72 +274,72 @@
 
     <p>#Building a text classifier in Mahout’s Spark Shell</p>
 
-<p>This tutorial will take you through the steps used to train a Multinomial 
Naive Bayes model and create a text classifier based on that model using the 
<code class="highlighter-rouge">mahout spark-shell</code>.</p>
+<p>This tutorial will take you through the steps used to train a Multinomial 
Naive Bayes model and create a text classifier based on that model using the 
<code>mahout spark-shell</code>.</p>
 
 <h2 id="prerequisites">Prerequisites</h2>
-<p>This tutorial assumes that you have your Spark environment variables set 
for the <code class="highlighter-rouge">mahout spark-shell</code> see: <a 
href="http://mahout.apache.org/users/sparkbindings/play-with-shell.html";>Playing
 with Mahout’s Shell</a>.  As well we assume that Mahout is running in 
cluster mode (i.e. with the <code class="highlighter-rouge">MAHOUT_LOCAL</code> 
environment variable <strong>unset</strong>) as we’ll be reading and writing 
to HDFS.</p>
+<p>This tutorial assumes that you have your Spark environment variables set for the <code>mahout spark-shell</code>; see: <a href="http://mahout.apache.org/users/sparkbindings/play-with-shell.html">Playing with Mahout’s Shell</a>.  We also assume that Mahout is running in cluster mode (i.e., with the <code>MAHOUT_LOCAL</code> environment variable <strong>unset</strong>), as we’ll be reading and writing to HDFS.</p>
 
 <h2 id="downloading-and-vectorizing-the-wikipedia-dataset">Downloading and 
Vectorizing the Wikipedia dataset</h2>
-<p><em>As of Mahout v. 0.10.0, we are still reliant on the MapReduce versions 
of <code class="highlighter-rouge">mahout seqwiki</code> and <code 
class="highlighter-rouge">mahout seq2sparse</code> to extract and vectorize our 
text.  A</em> <a 
href="https://issues.apache.org/jira/browse/MAHOUT-1663";><em>Spark 
implementation of seq2sparse</em></a> <em>is in the works for Mahout v. 
0.11.</em> However, to download the Wikipedia dataset, extract the bodies of 
the documentation, label each document and vectorize the text into TF-IDF 
vectors, we can simpmly run the <a 
href="https://github.com/apache/mahout/blob/master/examples/bin/classify-wikipedia.sh";>wikipedia-classifier.sh</a>
 example.</p>
+<p><em>As of Mahout v. 0.10.0, we are still reliant on the MapReduce versions of <code>mahout seqwiki</code> and <code>mahout seq2sparse</code> to extract and vectorize our text.  A</em> <a href="https://issues.apache.org/jira/browse/MAHOUT-1663"><em>Spark implementation of seq2sparse</em></a> <em>is in the works for Mahout v. 0.11.</em> However, to download the Wikipedia dataset, extract the bodies of the documents, label each document, and vectorize the text into TF-IDF vectors, we can simply run the <a href="https://github.com/apache/mahout/blob/master/examples/bin/classify-wikipedia.sh">wikipedia-classifier.sh</a> example.</p>
 
-<div class="highlighter-rouge"><div class="highlight"><pre 
class="highlight"><code>Please select a number to choose the corresponding task 
to run
+<pre><code>Please select a number to choose the corresponding task to run
 1. CBayes (may require increased heap space on yarn)
 2. BinaryCBayes
 3. clean -- cleans up the work area in /tmp/mahout-work-wiki
 Enter your choice :
-</code></pre></div></div>
+</code></pre>
 
-<p>Enter (2). This will download a large recent XML dump of the Wikipedia 
database, into a <code class="highlighter-rouge">/tmp/mahout-work-wiki</code> 
directory, unzip it and  place it into HDFS.  It will run a <a 
href="http://mahout.apache.org/users/classification/wikipedia-classifier-example.html";>MapReduce
 job to parse the wikipedia set</a>, extracting and labeling only pages with 
category tags for [United States] and [United Kingdom] (~11600 documents). It 
will then run <code class="highlighter-rouge">mahout seq2sparse</code> to 
convert the documents into TF-IDF vectors.  The script will also a build and 
test a <a 
href="http://mahout.apache.org/users/classification/bayesian.html";>Naive Bayes 
model using MapReduce</a>.  When it is completed, you should see a confusion 
matrix on your screen.  For this tutorial, we will ignore the MapReduce model, 
and build a new model using Spark based on the vectorized text output by <code 
class="highlighter-rouge">seq2sparse</code>.</p>
+<p>Enter (2). This will download a large recent XML dump of the Wikipedia database into a <code>/tmp/mahout-work-wiki</code> directory, unzip it and place it into HDFS.  It will run a <a href="http://mahout.apache.org/users/classification/wikipedia-classifier-example.html">MapReduce job to parse the wikipedia set</a>, extracting and labeling only pages with category tags for [United States] and [United Kingdom] (~11600 documents). It will then run <code>mahout seq2sparse</code> to convert the documents into TF-IDF vectors.  The script will also build and test a <a href="http://mahout.apache.org/users/classification/bayesian.html">Naive Bayes model using MapReduce</a>.  When it is completed, you should see a confusion matrix on your screen.  For this tutorial, we will ignore the MapReduce model, and build a new model using Spark based on the vectorized text output by <code>seq2sparse</code>.</p>
 
 <h2 id="getting-started">Getting Started</h2>
 
-<p>Launch the <code class="highlighter-rouge">mahout spark-shell</code>.  
There is an example script: <code 
class="highlighter-rouge">spark-document-classifier.mscala</code> (.mscala 
denotes a Mahout-Scala script which can be run similarly to an R script).   We 
will be walking through this script for this tutorial but if you wanted to 
simply run the script, you could just issue the command:</p>
+<p>Launch the <code>mahout spark-shell</code>.  There is an example script: <code>spark-document-classifier.mscala</code> (.mscala denotes a Mahout-Scala script which can be run similarly to an R script).   We will be walking through this script for this tutorial, but if you wanted to simply run the script, you could just issue the command:</p>
 
-<div class="highlighter-rouge"><div class="highlight"><pre 
class="highlight"><code>mahout&gt; :load 
/path/to/mahout/examples/bin/spark-document-classifier.mscala
-</code></pre></div></div>
+<pre><code>mahout&gt; :load 
/path/to/mahout/examples/bin/spark-document-classifier.mscala
+</code></pre>
 
-<p>For now, lets take the script apart piece by piece.  You can cut and paste 
the following code blocks into the <code class="highlighter-rouge">mahout 
spark-shell</code>.</p>
+<p>For now, let’s take the script apart piece by piece.  You can cut and paste the following code blocks into the <code>mahout spark-shell</code>.</p>
 
 <h2 id="imports">Imports</h2>
 
 <p>Our Mahout Naive Bayes imports:</p>
 
-<div class="highlighter-rouge"><div class="highlight"><pre 
class="highlight"><code>import org.apache.mahout.classifier.naivebayes._
+<pre><code>import org.apache.mahout.classifier.naivebayes._
 import org.apache.mahout.classifier.stats._
 import org.apache.mahout.nlp.tfidf._
-</code></pre></div></div>
+</code></pre>
 
 <p>Hadoop imports needed to read our dictionary:</p>
 
-<div class="highlighter-rouge"><div class="highlight"><pre 
class="highlight"><code>import org.apache.hadoop.io.Text
+<pre><code>import org.apache.hadoop.io.Text
 import org.apache.hadoop.io.IntWritable
 import org.apache.hadoop.io.LongWritable
-</code></pre></div></div>
+</code></pre>
 
 <h2 
id="read-in-our-full-set-from-hdfs-as-vectorized-by-seq2sparse-in-classify-wikipediash">Read
 in our full set from HDFS as vectorized by seq2sparse in 
classify-wikipedia.sh</h2>
 
-<div class="highlighter-rouge"><div class="highlight"><pre 
class="highlight"><code>val pathToData = "/tmp/mahout-work-wiki/"
+<pre><code>val pathToData = "/tmp/mahout-work-wiki/"
 val fullData = drmDfsRead(pathToData + "wikipediaVecs/tfidf-vectors")
-</code></pre></div></div>
+</code></pre>
 
 <h2 
id="extract-the-category-of-each-observation-and-aggregate-those-observations-by-category">Extract
 the category of each observation and aggregate those observations by 
category</h2>
 
-<div class="highlighter-rouge"><div class="highlight"><pre 
class="highlight"><code>val (labelIndex, aggregatedObservations) = 
SparkNaiveBayes.extractLabelsAndAggregateObservations(
+<pre><code>val (labelIndex, aggregatedObservations) = 
SparkNaiveBayes.extractLabelsAndAggregateObservations(
                                                              fullData)
-</code></pre></div></div>
+</code></pre>
 
 <h2 id="build-a-muitinomial-naive-bayes-model-and-self-test-on-the-training-set">Build a Multinomial Naive Bayes model and self-test on the training set</h2>
 
-<div class="highlighter-rouge"><div class="highlight"><pre 
class="highlight"><code>val model = 
SparkNaiveBayes.train(aggregatedObservations, labelIndex, false)
+<pre><code>val model = SparkNaiveBayes.train(aggregatedObservations, 
labelIndex, false)
 val resAnalyzer = SparkNaiveBayes.test(model, fullData, false)
 println(resAnalyzer)
-</code></pre></div></div>
+</code></pre>
 
-<p>printing the <code class="highlighter-rouge">ResultAnalyzer</code> will 
display the confusion matrix.</p>
+<p>Printing the <code>ResultAnalyzer</code> will display the confusion matrix.</p>
 
 <h2 id="read-in-the-dictionary-and-document-frequency-count-from-hdfs">Read in 
the dictionary and document frequency count from HDFS</h2>
 
-<div class="highlighter-rouge"><div class="highlight"><pre 
class="highlight"><code>val dictionary = sdc.sequenceFile(pathToData + 
"wikipediaVecs/dictionary.file-0",
+<pre><code>val dictionary = sdc.sequenceFile(pathToData + 
"wikipediaVecs/dictionary.file-0",
                                   classOf[Text],
                                   classOf[IntWritable])
 val documentFrequencyCount = sdc.sequenceFile(pathToData + 
"wikipediaVecs/df-count",
@@ -359,13 +359,13 @@ val documentFrequencyCountRDD = 
documentFrequencyCount.map {
 
 val dictionaryMap = dictionaryRDD.collect.map(x =&gt; x._1.toString -&gt; 
x._2.toInt).toMap
 val dfCountMap = documentFrequencyCountRDD.collect.map(x =&gt; x._1.toInt 
-&gt; x._2.toLong).toMap
-</code></pre></div></div>
+</code></pre>
 
 <h2 
id="define-a-function-to-tokenize-and-vectorize-new-text-using-our-current-dictionary">Define
 a function to tokenize and vectorize new text using our current dictionary</h2>
 
-<p>For this simple example, our function <code 
class="highlighter-rouge">vectorizeDocument(...)</code> will tokenize a new 
document into unigrams using native Java String methods and vectorize using our 
dictionary and document frequencies. You could also use a <a 
href="https://lucene.apache.org/core/";>Lucene</a> analyzer for bigrams, 
trigrams, etc., and integrate Apache <a 
href="https://tika.apache.org/";>Tika</a> to extract text from different 
document types (PDF, PPT, XLS, etc.).  Here, however we will keep it simple, 
stripping and tokenizing our text using regexs and native String methods.</p>
+<p>For this simple example, our function <code>vectorizeDocument(...)</code> will tokenize a new document into unigrams using native Java String methods and vectorize using our dictionary and document frequencies. You could also use a <a href="https://lucene.apache.org/core/">Lucene</a> analyzer for bigrams, trigrams, etc., and integrate Apache <a href="https://tika.apache.org/">Tika</a> to extract text from different document types (PDF, PPT, XLS, etc.).  Here, however, we will keep it simple, stripping and tokenizing our text using regexes and native String methods.</p>
 
-<div class="highlighter-rouge"><div class="highlight"><pre 
class="highlight"><code>def vectorizeDocument(document: String,
+<pre><code>def vectorizeDocument(document: String,
                         dictionaryMap: Map[String,Int],
                         dfMap: Map[Int,Long]): Vector = {
     val wordCounts = document.replaceAll("[^\\p{L}\\p{Nd}]+", " ")
@@ -392,11 +392,11 @@ val dfCountMap = documentFrequencyCountRDD.collect.map(x 
=&gt; x._1.toInt -&gt;
     }
     vec
 }
-</code></pre></div></div>
+</code></pre>
 
 <h2 id="setup-our-classifier">Setup our classifier</h2>
 
-<div class="highlighter-rouge"><div class="highlight"><pre 
class="highlight"><code>val labelMap = model.labelIndex
+<pre><code>val labelMap = model.labelIndex
 val numLabels = model.numLabels
 val reverseLabelMap = labelMap.map(x =&gt; x._2 -&gt; x._1)
 
@@ -405,13 +405,13 @@ val classifier = model.isComplementary match {
     case true =&gt; new ComplementaryNBClassifier(model)
     case _ =&gt; new StandardNBClassifier(model)
 }
-</code></pre></div></div>
+</code></pre>
 
 <h2 id="define-an-argmax-function">Define an argmax function</h2>
 
 <p>The label with the highest score wins the classification for a given 
document.</p>
 
-<div class="highlighter-rouge"><div class="highlight"><pre 
class="highlight"><code>def argmax(v: Vector): (Int, Double) = {
+<pre><code>def argmax(v: Vector): (Int, Double) = {
     var bestIdx: Int = Integer.MIN_VALUE
     var bestScore: Double = Integer.MIN_VALUE.asInstanceOf[Int].toDouble
     for(i &lt;- 0 until v.size) {
@@ -422,20 +422,20 @@ val classifier = model.isComplementary match {
     }
     (bestIdx, bestScore)
 }
-</code></pre></div></div>
+</code></pre>
 
 <h2 id="define-our-tf-idf-vector-classifier">Define our TF(-IDF) vector 
classifier</h2>
 
-<div class="highlighter-rouge"><div class="highlight"><pre 
class="highlight"><code>def classifyDocument(clvec: Vector) : String = {
+<pre><code>def classifyDocument(clvec: Vector) : String = {
     val cvec = classifier.classifyFull(clvec)
     val (bestIdx, bestScore) = argmax(cvec)
     reverseLabelMap(bestIdx)
 }
-</code></pre></div></div>
+</code></pre>
 
 <h2 
id="two-sample-news-articles-united-states-football-and-united-kingdom-football">Two
 sample news articles: United States Football and United Kingdom Football</h2>
 
-<div class="highlighter-rouge"><div class="highlight"><pre 
class="highlight"><code>// A random United States football article
+<pre><code>// A random United States football article
 // 
http://www.reuters.com/article/2015/01/28/us-nfl-superbowl-security-idUSKBN0L12JR20150128
 val UStextToClassify = new String("(Reuters) - Super Bowl security officials 
acknowledge" +
     " the NFL championship game represents a high profile target on a world 
stage but are" +
@@ -500,11 +500,11 @@ val UKtextToClassify = new String("(Reuters) - Manchester 
United have signed a s
     " Premier League last season and missing out on a place in the lucrative 
Champions League." +
     " ($1 = 0.8910 Swiss francs) (Writing by Neil Maidment, additional 
reporting by Jemima" + 
     " Kelly; editing by Keith Weir)")
-</code></pre></div></div>
+</code></pre>
 
 <h2 id="vectorize-and-classify-our-documents">Vectorize and classify our 
documents</h2>
 
-<div class="highlighter-rouge"><div class="highlight"><pre 
class="highlight"><code>val usVec = vectorizeDocument(UStextToClassify, 
dictionaryMap, dfCountMap)
+<pre><code>val usVec = vectorizeDocument(UStextToClassify, dictionaryMap, 
dfCountMap)
 val ukVec = vectorizeDocument(UKtextToClassify, dictionaryMap, dfCountMap)
 
 println("Classifying the news article about superbowl security (united 
states)")
@@ -512,33 +512,33 @@ classifyDocument(usVec)
 
 println("Classifying the news article about Manchester United (united 
kingdom)")
 classifyDocument(ukVec)
-</code></pre></div></div>
+</code></pre>
 
 <h2 id="tie-everything-together-in-a-new-method-to-classify-text">Tie 
everything together in a new method to classify text</h2>
 
-<div class="highlighter-rouge"><div class="highlight"><pre 
class="highlight"><code>def classifyText(txt: String): String = {
+<pre><code>def classifyText(txt: String): String = {
     val v = vectorizeDocument(txt, dictionaryMap, dfCountMap)
     classifyDocument(v)
 }
-</code></pre></div></div>
+</code></pre>
 
 <h2 id="now-we-can-simply-call-our-classifytext-method-on-any-string">Now we 
can simply call our classifyText(…) method on any String</h2>
 
-<div class="highlighter-rouge"><div class="highlight"><pre 
class="highlight"><code>classifyText("Hello world from Queens")
+<pre><code>classifyText("Hello world from Queens")
 classifyText("Hello world from London")
-</code></pre></div></div>
+</code></pre>
 
 <h2 id="model-persistance">Model persistance</h2>
 
 <p>You can save the model to HDFS:</p>
 
-<div class="highlighter-rouge"><div class="highlight"><pre 
class="highlight"><code>model.dfsWrite("/path/to/model")
-</code></pre></div></div>
+<pre><code>model.dfsWrite("/path/to/model")
+</code></pre>
 
 <p>And retrieve it with:</p>
 
-<div class="highlighter-rouge"><div class="highlight"><pre 
class="highlight"><code>val model =  NBModel.dfsRead("/path/to/model")
-</code></pre></div></div>
+<pre><code>val model =  NBModel.dfsRead("/path/to/model")
+</code></pre>
 
 <p>The trained model can now be embedded in an external application.</p>
 

http://git-wip-us.apache.org/repos/asf/mahout/blob/d9686c8b/users/environment/h2o-internals.html
----------------------------------------------------------------------
diff --git a/users/environment/h2o-internals.html 
b/users/environment/h2o-internals.html
index 9abe6c2..a0e6ed0 100644
--- a/users/environment/h2o-internals.html
+++ b/users/environment/h2o-internals.html
@@ -274,7 +274,7 @@
 
     <h1 id="introduction">Introduction</h1>
 
-<p>This document provides an overview of how the Mahout Samsara environment is 
implemented over the H2O backend engine. The document is aimed at Mahout 
developers, to give a high level description of the design so that one can 
explore the code inside <code class="highlighter-rouge">h2o/</code> with some 
context.</p>
+<p>This document provides an overview of how the Mahout Samsara environment is 
implemented over the H2O backend engine. The document is aimed at Mahout 
developers, to give a high level description of the design so that one can 
explore the code inside <code>h2o/</code> with some context.</p>
 
 <h2 id="h2o-overview">H2O Overview</h2>
 
@@ -290,13 +290,13 @@
 
 <h2 id="h2o-environment-engine">H2O Environment Engine</h2>
 
-<p>The H2O backend implements the abstract DRM as an H2O Frame. Each logical 
column in the DRM is an H2O Vector. All elements of a logical DRM row are 
guaranteed to be homed on the same server. A set of rows stored on a server are 
presented as a read-only virtual in-core Matrix (i.e BlockMatrix) for the 
closure method in the <code class="highlighter-rouge">mapBlock(...)</code> 
API.</p>
+<p>The H2O backend implements the abstract DRM as an H2O Frame. Each logical column in the DRM is an H2O Vector. All elements of a logical DRM row are guaranteed to be homed on the same server. A set of rows stored on a server is presented as a read-only virtual in-core Matrix (i.e., BlockMatrix) for the closure method in the <code>mapBlock(...)</code> API.</p>
 
-<p>H2O provides a flexible execution framework called <code 
class="highlighter-rouge">MRTask</code>. The <code 
class="highlighter-rouge">MRTask</code> framework typically executes over a 
Frame (or even a Vector), supports various types of map() methods, can 
optionally modify the Frame or Vector (though this never happens in the Mahout 
integration), and optionally create a new Vector or set of Vectors (to combine 
them into a new Frame, and consequently a new DRM).</p>
+<p>H2O provides a flexible execution framework called <code>MRTask</code>. The 
<code>MRTask</code> framework typically executes over a Frame (or even a 
Vector), supports various types of map() methods, can optionally modify the 
Frame or Vector (though this never happens in the Mahout integration), and 
optionally create a new Vector or set of Vectors (to combine them into a new 
Frame, and consequently a new DRM).</p>
 
 <h2 id="source-layout">Source Layout</h2>
 
-<p>Within mahout.git, the top level directory, <code 
class="highlighter-rouge">h2o/</code> holds all the source code related to the 
H2O backend engine. Part of the code (that interfaces with the rest of the 
Mahout componenets) is in Scala, and part of the code (that interfaces with 
h2o-core and implements algebraic operators) is in Java. Here is a brief 
overview of what functionality can be found where within <code 
class="highlighter-rouge">h2o/</code>.</p>
+<p>Within mahout.git, the top-level directory <code>h2o/</code> holds all the source code related to the H2O backend engine. Part of the code (that interfaces with the rest of the Mahout components) is in Scala, and part of the code (that interfaces with h2o-core and implements algebraic operators) is in Java. Here is a brief overview of what functionality can be found where within <code>h2o/</code>.</p>
 
 <p>h2o/ - top level directory containing all H2O related code</p>
 

http://git-wip-us.apache.org/repos/asf/mahout/blob/d9686c8b/users/environment/how-to-build-an-app.html
----------------------------------------------------------------------
diff --git a/users/environment/how-to-build-an-app.html 
b/users/environment/how-to-build-an-app.html
index 76d3b91..8195509 100644
--- a/users/environment/how-to-build-an-app.html
+++ b/users/environment/how-to-build-an-app.html
@@ -293,38 +293,38 @@ In order to build and run the CooccurrenceDriver you 
need to install the follow
 <p>Spark requires a set of jars on the classpath for the client-side part of an app, and another set of jars must be passed to the Spark Context for running distributed code. The example should discover all the necessary classes automatically.</p>
 
 <p>##Application
-Using Mahout as a library in an application will require a little Scala code. 
Scala has an App trait so we’ll create an object, which inherits from <code 
class="highlighter-rouge">App</code></p>
+Using Mahout as a library in an application will require a little Scala code. Scala has an App trait, so we’ll create an object that inherits from <code>App</code>:</p>
 
-<div class="highlighter-rouge"><div class="highlight"><pre 
class="highlight"><code>object CooccurrenceDriver extends App {
+<pre><code>object CooccurrenceDriver extends App {
 }
-</code></pre></div></div>
+</code></pre>
 
-<p>This will look a little different than Java since <code 
class="highlighter-rouge">App</code> does delayed initialization, which causes 
the body to be executed when the App is launched, just as in Java you would 
create a main method.</p>
+<p>This will look a little different from Java: since <code>App</code> does delayed initialization, the body is executed when the App is launched, much as a main method would be in Java.</p>
 
 <p>Before we can execute something on Spark we’ll need to create a context. We could use raw Spark calls here, but default values are set up for a Mahout context by using the Mahout helper function.</p>
 
-<div class="highlighter-rouge"><div class="highlight"><pre 
class="highlight"><code>implicit val mc = mahoutSparkContext(masterUrl = 
"local", 
+<pre><code>implicit val mc = mahoutSparkContext(masterUrl = "local", 
   appName = "CooccurrenceDriver")
-</code></pre></div></div>
+</code></pre>
 
 <p>We need to read in three files containing different interaction types. The 
files will each be read into a Mahout IndexedDataset. This allows us to 
preserve application-specific user and item IDs throughout the calculations.</p>
 
 <p>For example, here is data/purchase.csv:</p>
 
-<div class="highlighter-rouge"><div class="highlight"><pre 
class="highlight"><code>u1,iphone
+<pre><code>u1,iphone
 u1,ipad
 u2,nexus
 u2,galaxy
 u3,surface
 u4,iphone
 u4,galaxy
-</code></pre></div></div>
+</code></pre>
 
 <p>Mahout has a helper function that reads the text-delimited files, SparkEngine.indexedDatasetDFSReadElements. The function reads single-element tuples (user-id,item-id) in a distributed way to create the IndexedDataset. Distributed Row Matrices (DRM) and Vectors are important data types supplied by Mahout, and IndexedDataset is like a very lightweight Dataframe in R: it wraps a DRM with HashBiMaps for row and column IDs.</p>
 
 <p>One important thing to note about this example is that we read in all datasets before we adjust the number of rows in them to match the total number of users in the data. This is so the math works out <a href="http://mahout.apache.org/users/algorithms/intro-cooccurrence-spark.html">(A’A, A’B, A’C)</a> even if some users took one action but not another; there must be the same number of rows in all matrices.</p>
 
-<div class="highlighter-rouge"><div class="highlight"><pre 
class="highlight"><code>/**
+<pre><code>/**
  * Read files of element tuples and create IndexedDatasets one per action. 
These 
  * share a userID BiMap but have their own itemID BiMaps
  */
@@ -358,25 +358,25 @@ def readActions(actionInput: Array[(String, String)]): 
Array[(String, IndexedDat
   }
   resizedNameActionPairs // return the array of Tuples
 }
-</code></pre></div></div>
+</code></pre>
 
 <p>Now that we have the data read in we can perform the cooccurrence 
calculation.</p>
 
-<div class="highlighter-rouge"><div class="highlight"><pre 
class="highlight"><code>// actions.map creates an array of just the 
IndeedDatasets
+<pre><code>// actions.map creates an array of just the IndexedDatasets
 val indicatorMatrices = SimilarityAnalysis.cooccurrencesIDSs(
   actions.map(a =&gt; a._2)) 
-</code></pre></div></div>
+</code></pre>
 
 <p>All we need to do now is write the indicators.</p>
 
-<div class="highlighter-rouge"><div class="highlight"><pre 
class="highlight"><code>// zip a pair of arrays into an array of pairs, 
reattaching the action names
+<pre><code>// zip a pair of arrays into an array of pairs, reattaching the 
action names
 val indicatorDescriptions = actions.map(a =&gt; a._1).zip(indicatorMatrices)
 writeIndicators(indicatorDescriptions)
-</code></pre></div></div>
+</code></pre>
 
-<p>The <code class="highlighter-rouge">writeIndicators</code> method uses the 
default write function <code class="highlighter-rouge">dfsWrite</code>.</p>
+<p>The <code>writeIndicators</code> method uses the default write function 
<code>dfsWrite</code>.</p>
 
-<div class="highlighter-rouge"><div class="highlight"><pre 
class="highlight"><code>/**
+<pre><code>/**
  * Write indicatorMatrices to the output dir in the default format
  * for indexing by a search engine.
  */
@@ -391,11 +391,11 @@ def writeIndicators( indicators: Array[(String, 
IndexedDataset)]) = {
       IndexedDatasetWriteBooleanSchema) 
   }
 }
-</code></pre></div></div>
+</code></pre>
 
 <p>See the Github project for the full source. Now we create a build.sbt to 
build the example.</p>
 
-<div class="highlighter-rouge"><div class="highlight"><pre 
class="highlight"><code>name := "cooccurrence-driver"
+<pre><code>name := "cooccurrence-driver"
 
 organization := "com.finderbots"
 
@@ -424,25 +424,25 @@ packSettings
 
 packMain := Map(
   "cooc" -&gt; "CooccurrenceDriver")
-</code></pre></div></div>
+</code></pre>
 
 <p>##Build
 Building the examples from the project’s root folder:</p>
 
-<div class="highlighter-rouge"><div class="highlight"><pre 
class="highlight"><code>$ sbt pack
-</code></pre></div></div>
+<pre><code>$ sbt pack
+</code></pre>
 
 <p>This will automatically set up some launcher scripts for the driver. To run, execute:</p>
 
-<div class="highlighter-rouge"><div class="highlight"><pre 
class="highlight"><code>$ target/pack/bin/cooc
-</code></pre></div></div>
+<pre><code>$ target/pack/bin/cooc
+</code></pre>
 
 <p>The driver will execute in Spark standalone mode and put the data in 
/path/to/3-input-cooc/data/indicators/<em>indicator-type</em></p>
 
 <p>##Using a Debugger
 To build and run this example in a debugger like IntelliJ IDEA, install IDEA from the IntelliJ site and add the Scala plugin.</p>
 
-<p>Open IDEA and go to the menu File-&gt;New-&gt;Project from existing 
sources-&gt;SBT-&gt;/path/to/3-input-cooc. This will create an IDEA project 
from <code class="highlighter-rouge">build.sbt</code> in the root directory.</p>
+<p>Open IDEA and go to the menu File-&gt;New-&gt;Project from existing 
sources-&gt;SBT-&gt;/path/to/3-input-cooc. This will create an IDEA project 
from <code>build.sbt</code> in the root directory.</p>
 
 <p>At this point you may create a “Debug Configuration” to run. In the
menu choose Run-&gt;Edit Configurations. Under “Default” choose
“Application”. In the dialog hit the ellipsis button “…” to the right
of “Environment Variables” and fill in your versions of JAVA_HOME,
SPARK_HOME, and MAHOUT_HOME. In the configuration editor, under “Use classpath
from”, choose the root-3-input-cooc module.</p>
 
@@ -463,24 +463,24 @@ To build and run this example in a debugger like IntelliJ 
IDEA. Install from the
 <ul>
   <li>You won’t need the context, since it is created when the shell is
launched; comment that line out.</li>
   <li>Replace the <code>logger.info</code> lines with <code>println</code>.</li>
-  <li>Remove the package info since it’s not needed, this will produce the 
file in <code 
class="highlighter-rouge">path/to/3-input-cooc/bin/CooccurrenceDriver.mscala</code>.</li>
+  <li>Remove the package info since it’s not needed; this will produce the
file in <code>path/to/3-input-cooc/bin/CooccurrenceDriver.mscala</code>.</li>
 </ul>
 
-<p>Note the extension <code class="highlighter-rouge">.mscala</code> to 
indicate we are using Mahout’s scala extensions for math, otherwise known as 
<a 
href="http://mahout.apache.org/users/environment/out-of-core-reference.html";>Mahout-Samsara</a></p>
+<p>Note the extension <code>.mscala</code>, which indicates we are using Mahout’s
Scala extensions for math, otherwise known as <a
href="http://mahout.apache.org/users/environment/out-of-core-reference.html";>Mahout-Samsara</a>.</p>
 
 <p>To run the code, make sure the output does not already exist:</p>
 
-<div class="highlighter-rouge"><div class="highlight"><pre 
class="highlight"><code>$ rm -r /path/to/3-input-cooc/data/indicators
-</code></pre></div></div>
+<pre><code>$ rm -r /path/to/3-input-cooc/data/indicators
+</code></pre>
 
 <p>Launch the Mahout + Spark shell:</p>
 
-<div class="highlighter-rouge"><div class="highlight"><pre 
class="highlight"><code>$ mahout spark-shell
-</code></pre></div></div>
+<pre><code>$ mahout spark-shell
+</code></pre>
 
 <p>You’ll see the Mahout splash:</p>
 
-<div class="highlighter-rouge"><div class="highlight"><pre 
class="highlight"><code>MAHOUT_LOCAL is set, so we don't add HADOOP_CONF_DIR to 
classpath.
+<pre><code>MAHOUT_LOCAL is set, so we don't add HADOOP_CONF_DIR to classpath.
 
                      _                 _
          _ __ ___   __ _| |__   ___  _   _| |_
@@ -496,11 +496,11 @@ Type :help for more information.
 Created spark context..
 Mahout distributed context is available as "implicit val sdc".
 mahout&gt; 
-</code></pre></div></div>
+</code></pre>
 
 <p>To load the driver, type:</p>
 
-<div class="highlighter-rouge"><div class="highlight"><pre 
class="highlight"><code>mahout&gt; :load 
/path/to/3-input-cooc/bin/CooccurrenceDriver.mscala
+<pre><code>mahout&gt; :load /path/to/3-input-cooc/bin/CooccurrenceDriver.mscala
 Loading ./bin/CooccurrenceDriver.mscala...
 import com.google.common.collect.{HashBiMap, BiMap}
 import org.apache.log4j.Logger
@@ -510,16 +510,16 @@ import org.apache.mahout.sparkbindings._
 import scala.collection.immutable.HashMap
 defined module CooccurrenceDriver
 mahout&gt; 
-</code></pre></div></div>
+</code></pre>
 
 <p>To run the driver, type:</p>
 
-<div class="highlighter-rouge"><div class="highlight"><pre 
class="highlight"><code>mahout&gt; CooccurrenceDriver.main(args = Array(""))
-</code></pre></div></div>
+<pre><code>mahout&gt; CooccurrenceDriver.main(args = Array(""))
+</code></pre>
 
 <p>You’ll get some stats printed:</p>
 
-<div class="highlighter-rouge"><div class="highlight"><pre 
class="highlight"><code>Total number of users for all actions = 5
+<pre><code>Total number of users for all actions = 5
 purchase indicator matrix:
   Number of rows for matrix = 4
   Number of columns for matrix = 5
@@ -532,9 +532,9 @@ category indicator matrix:
   Number of rows for matrix = 5
   Number of columns for matrix = 7
   Number of rows after resize = 5
-</code></pre></div></div>
+</code></pre>
 
-<p>If you look in <code 
class="highlighter-rouge">path/to/3-input-cooc/data/indicators</code> you 
should find folders containing the indicator matrices.</p>
+<p>If you look in <code>path/to/3-input-cooc/data/indicators</code>, you should
find folders containing the indicator matrices.</p>
 
    </div>
   </div>     

http://git-wip-us.apache.org/repos/asf/mahout/blob/d9686c8b/users/environment/in-core-reference.html
----------------------------------------------------------------------
diff --git a/users/environment/in-core-reference.html 
b/users/environment/in-core-reference.html
index 313fc63..0c7a62c 100644
--- a/users/environment/in-core-reference.html
+++ b/users/environment/in-core-reference.html
@@ -278,218 +278,218 @@
 
 <p>The following imports are used to enable Mahout-Samsara’s Scala DSL 
bindings for in-core Linear Algebra:</p>
 
-<div class="highlighter-rouge"><div class="highlight"><pre 
class="highlight"><code>import org.apache.mahout.math._
+<pre><code>import org.apache.mahout.math._
 import scalabindings._
 import RLikeOps._
-</code></pre></div></div>
+</code></pre>
 
 <h4 id="inline-initalization">Inline initalization</h4>
 
 <p>Dense vectors:</p>
 
-<div class="highlighter-rouge"><div class="highlight"><pre 
class="highlight"><code>val densVec1: Vector = (1.0, 1.1, 1.2)
+<pre><code>val densVec1: Vector = (1.0, 1.1, 1.2)
 val denseVec2 = dvec(1, 0, 1, 1, 1, 2)
-</code></pre></div></div>
+</code></pre>
 
 <p>Sparse vectors:</p>
 
-<div class="highlighter-rouge"><div class="highlight"><pre 
class="highlight"><code>val sparseVec1: Vector = (5 -&gt; 1.0) :: (10 -&gt; 
2.0) :: Nil
+<pre><code>val sparseVec1: Vector = (5 -&gt; 1.0) :: (10 -&gt; 2.0) :: Nil
 val sparseVec1 = svec((5 -&gt; 1.0) :: (10 -&gt; 2.0) :: Nil)
 
 // to create a vector with specific cardinality
 val sparseVec1 = svec((5 -&gt; 1.0) :: (10 -&gt; 2.0) :: Nil, cardinality = 20)
-</code></pre></div></div>
+</code></pre>
 
 <p>Inline matrix initialization, either sparse or dense, is always done
row-wise.</p>
 
 <p>Dense matrices:</p>
 
-<div class="highlighter-rouge"><div class="highlight"><pre 
class="highlight"><code>val A = dense((1, 2, 3), (3, 4, 5))
-</code></pre></div></div>
+<pre><code>val A = dense((1, 2, 3), (3, 4, 5))
+</code></pre>
 
 <p>Sparse matrices:</p>
 
-<div class="highlighter-rouge"><div class="highlight"><pre 
class="highlight"><code>val A = sparse(
+<pre><code>val A = sparse(
           (1, 3) :: Nil,
           (0, 2) :: (1, 2.5) :: Nil
               )
-</code></pre></div></div>
+</code></pre>
 
 <p>Diagonal matrix with constant diagonal elements:</p>
 
-<div class="highlighter-rouge"><div class="highlight"><pre 
class="highlight"><code>diag(3.5, 10)
-</code></pre></div></div>
+<pre><code>diag(3.5, 10)
+</code></pre>
 
 <p>Diagonal matrix with main diagonal backed by a vector:</p>
 
-<div class="highlighter-rouge"><div class="highlight"><pre 
class="highlight"><code>diagv((1, 2, 3, 4, 5))
-</code></pre></div></div>
+<pre><code>diagv((1, 2, 3, 4, 5))
+</code></pre>
 
 <p>Identity matrix:</p>
 
-<div class="highlighter-rouge"><div class="highlight"><pre 
class="highlight"><code>eye(10)
-</code></pre></div></div>
+<pre><code>eye(10)
+</code></pre>
 
 <h4 id="slicing-and-assigning">Slicing and Assigning</h4>
 
 <p>Getting a vector element:</p>
 
-<div class="highlighter-rouge"><div class="highlight"><pre 
class="highlight"><code>val d = vec(5)
-</code></pre></div></div>
+<pre><code>val d = vec(5)
+</code></pre>
 
 <p>Setting a vector element:</p>
 
-<div class="highlighter-rouge"><div class="highlight"><pre 
class="highlight"><code>vec(5) = 3.0
-</code></pre></div></div>
+<pre><code>vec(5) = 3.0
+</code></pre>
 
 <p>Getting a matrix element:</p>
 
-<div class="highlighter-rouge"><div class="highlight"><pre 
class="highlight"><code>val d = m(3,5)
-</code></pre></div></div>
+<pre><code>val d = m(3,5)
+</code></pre>
 
 <p>Setting a matrix element:</p>
 
-<div class="highlighter-rouge"><div class="highlight"><pre 
class="highlight"><code>M(3,5) = 3.0
-</code></pre></div></div>
+<pre><code>M(3,5) = 3.0
+</code></pre>
 
 <p>Getting a matrix row or column:</p>
 
-<div class="highlighter-rouge"><div class="highlight"><pre 
class="highlight"><code>val rowVec = M(3, ::)
+<pre><code>val rowVec = M(3, ::)
 val colVec = M(::, 3)
-</code></pre></div></div>
+</code></pre>
 
 <p>Setting a matrix row or column via vector assignment:</p>
 
-<div class="highlighter-rouge"><div class="highlight"><pre 
class="highlight"><code>M(3, ::) := (1, 2, 3)
+<pre><code>M(3, ::) := (1, 2, 3)
 M(::, 3) := (1, 2, 3)
-</code></pre></div></div>
+</code></pre>
 
 <p>Setting a subslice of a matrix row or column:</p>
 
-<div class="highlighter-rouge"><div class="highlight"><pre 
class="highlight"><code>a(0, 0 to 1) = (3, 5)
-</code></pre></div></div>
+<pre><code>a(0, 0 to 1) = (3, 5)
+</code></pre>
 
 <p>Setting a subslice of a matrix row or column via vector assignment:</p>
 
-<div class="highlighter-rouge"><div class="highlight"><pre 
class="highlight"><code>a(0, 0 to 1) := (3, 5)
-</code></pre></div></div>
+<pre><code>a(0, 0 to 1) := (3, 5)
+</code></pre>
 
 <p>Getting a matrix from a contiguous block of another matrix:</p>
 
-<div class="highlighter-rouge"><div class="highlight"><pre 
class="highlight"><code>val B = A(2 to 3, 3 to 4)
-</code></pre></div></div>
+<pre><code>val B = A(2 to 3, 3 to 4)
+</code></pre>
 
 <p>Assigning a contiguous block to a matrix:</p>
 
-<div class="highlighter-rouge"><div class="highlight"><pre 
class="highlight"><code>A(0 to 1, 1 to 2) = dense((3, 2), (3 ,3))
-</code></pre></div></div>
+<pre><code>A(0 to 1, 1 to 2) = dense((3, 2), (3 ,3))
+</code></pre>
 
 <p>Assigning a contiguous block to a matrix using the matrix assignment 
operator:</p>
 
-<div class="highlighter-rouge"><div class="highlight"><pre 
class="highlight"><code>A(o to 1, 1 to 2) := dense((3, 2), (3, 3))
-</code></pre></div></div>
+<pre><code>A(0 to 1, 1 to 2) := dense((3, 2), (3, 3))
+</code></pre>
 
 <p>Assignment operator used for copying between vectors or matrices:</p>
 
-<div class="highlighter-rouge"><div class="highlight"><pre 
class="highlight"><code>vec1 := vec2
+<pre><code>vec1 := vec2
 M1 := M2
-</code></pre></div></div>
+</code></pre>
 
 <p>Assignment operator with a function literal, applied to each element of a
matrix:</p>
 
-<div class="highlighter-rouge"><div class="highlight"><pre 
class="highlight"><code>M := ((row, col, x) =&gt; if (row == col) 1 else 0
-</code></pre></div></div>
+<pre><code>M := ((row, col, x) =&gt; if (row == col) 1 else 0)
+</code></pre>
 
 <p>Assignment operator with a function literal, applied to each element of a
vector:</p>
 
-<div class="highlighter-rouge"><div class="highlight"><pre 
class="highlight"><code>vec := ((index, x) =&gt; sqrt(x)
-</code></pre></div></div>
+<pre><code>vec := ((index, x) =&gt; sqrt(x))
+</code></pre>
 
 <h4 id="blas-like-operations">BLAS-like operations</h4>
 
 <p>Plus/minus either vector or numeric with assignment or not:</p>
 
-<div class="highlighter-rouge"><div class="highlight"><pre 
class="highlight"><code>a + b
+<pre><code>a + b
 a - b
 a + 5.0
 a - 5.0
-</code></pre></div></div>
+</code></pre>
 
 <p>Hadamard (elementwise) product, with vector, matrix, or numeric
operands:</p>
 
-<div class="highlighter-rouge"><div class="highlight"><pre 
class="highlight"><code>a * b
+<pre><code>a * b
 a * 0.5
-</code></pre></div></div>
+</code></pre>
 
 <p>Operations with assignment:</p>
 
-<div class="highlighter-rouge"><div class="highlight"><pre 
class="highlight"><code>a += b
+<pre><code>a += b
 a -= b
 a += 5.0
 a -= 5.0
 a *= b
 a *= 5
-</code></pre></div></div>
+</code></pre>
 
 <p><em>Some nuanced rules</em>:</p>
 
 <p>1/x in R (where x is a vector or a matrix) is the elementwise inverse.  In
Scala it is expressed as:</p>
 
-<div class="highlighter-rouge"><div class="highlight"><pre 
class="highlight"><code>val xInv = 1 /: x
-</code></pre></div></div>
+<pre><code>val xInv = 1 /: x
+</code></pre>
 
 <p>and R’s 5.0 - x would be:</p>
 
-<div class="highlighter-rouge"><div class="highlight"><pre 
class="highlight"><code>val x1 = 5.0 -: x
-</code></pre></div></div>
+<pre><code>val x1 = 5.0 -: x
+</code></pre>
 
 <p><em>note: All assignment operations, including :=, return the assignee just 
like in C++</em>:</p>
 
-<div class="highlighter-rouge"><div class="highlight"><pre 
class="highlight"><code>a -= b 
-</code></pre></div></div>
+<pre><code>a -=: b
+</code></pre>
 
 <p>assigns <strong>a - b</strong> to <strong>b</strong> (in-place) and returns 
<strong>b</strong>.  Similarly for <strong>a /=: b</strong> or <strong>1 /=: 
v</strong>.</p>
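+
+<p>For instance, a small sketch using <code>dvec</code> from above; the value
+returned by <code>1 /=: v</code> is <code>v</code> itself, so it can be captured
+or used directly in a larger expression:</p>
+
+<pre><code>val v = dvec(1, 2, 4)
+val vInv = 1 /=: v   // v now holds (1.0, 0.5, 0.25); vInv is the same (returned) vector
+</code></pre>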
 
 <p>Dot product:</p>
 
-<div class="highlighter-rouge"><div class="highlight"><pre 
class="highlight"><code>a dot b
-</code></pre></div></div>
+<pre><code>a dot b
+</code></pre>
 
 <p>Matrix and vector equivalence (or non-equivalence).  <strong>Use with care:
exact equivalence is rarely useful; it is usually better to compare norms with
an allowance for small errors.</strong></p>
 
-<div class="highlighter-rouge"><div class="highlight"><pre 
class="highlight"><code>a === b
+<pre><code>a === b
 a !== b
-</code></pre></div></div>
+</code></pre>
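+
+<p>Given the warning above, a sketch of the safer pattern is to compare the norm
+of the difference (the <code>norm</code> operation is covered a little further
+down) against a small tolerance:</p>
+
+<pre><code>// true when a and b agree up to a small numerical error
+val nearlyEqual = (a - b).norm &lt; 1e-6
+</code></pre>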
 
 <p>Matrix multiply:</p>
 
-<div class="highlighter-rouge"><div class="highlight"><pre 
class="highlight"><code>a %*% b
-</code></pre></div></div>
+<pre><code>a %*% b
+</code></pre>
 
 <p>Optimized Right Multiply with a diagonal matrix:</p>
 
-<div class="highlighter-rouge"><div class="highlight"><pre 
class="highlight"><code>diag(5, 5) :%*% b
-</code></pre></div></div>
+<pre><code>diag(5, 5) :%*% b
+</code></pre>
 
 <p>Optimized Left Multiply with a diagonal matrix:</p>
 
-<div class="highlighter-rouge"><div class="highlight"><pre 
class="highlight"><code>A %*%: diag(5, 5)
-</code></pre></div></div>
+<pre><code>A %*%: diag(5, 5)
+</code></pre>
 
 <p>Second norm of a vector or matrix:</p>
 
-<div class="highlighter-rouge"><div class="highlight"><pre 
class="highlight"><code>a.norm
-</code></pre></div></div>
+<pre><code>a.norm
+</code></pre>
 
 <p>Transpose:</p>
 
-<div class="highlighter-rouge"><div class="highlight"><pre 
class="highlight"><code>val Mt = M.t
-</code></pre></div></div>
+<pre><code>val Mt = M.t
+</code></pre>
 
-<p><em>note: Transposition is currently handled via view, i.e. updating a 
transposed matrix will be updating the original.</em>  Also computing something 
like <code class="highlighter-rouge">\(\mathbf{X^\top}\mathbf{X}\)</code>:</p>
+<p><em>note: Transposition is currently handled via a view, i.e. updating a
transposed matrix updates the original.</em>  Also, computing something
like <code>\(\mathbf{X^\top}\mathbf{X}\)</code>:</p>
 
-<div class="highlighter-rouge"><div class="highlight"><pre 
class="highlight"><code>val XtX = X.t %*% X
-</code></pre></div></div>
+<pre><code>val XtX = X.t %*% X
+</code></pre>
 
 <p>therefore incurs no additional data copying.</p>
 
@@ -497,115 +497,115 @@ a !== b
 
 <p>Matrix decompositions require an additional import:</p>
 
-<div class="highlighter-rouge"><div class="highlight"><pre 
class="highlight"><code>import org.apache.mahout.math.decompositions._
-</code></pre></div></div>
+<pre><code>import org.apache.mahout.math.decompositions._
+</code></pre>
 
 <p>All arguments in the following are matrices.</p>
 
 <p><strong>Cholesky decomposition</strong></p>
 
-<div class="highlighter-rouge"><div class="highlight"><pre 
class="highlight"><code>val ch = chol(M)
-</code></pre></div></div>
+<pre><code>val ch = chol(M)
+</code></pre>
 
 <p><strong>SVD</strong></p>
 
-<div class="highlighter-rouge"><div class="highlight"><pre 
class="highlight"><code>val (U, V, s) = svd(M)
-</code></pre></div></div>
+<pre><code>val (U, V, s) = svd(M)
+</code></pre>
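+
+<p>As a sketch of how the factors fit together (assuming the usual convention
+that <code>s</code> holds the singular values), the matrix can be reassembled
+with <code>diagv</code> from above:</p>
+
+<pre><code>// reassemble M from its SVD factors; should match M up to rounding error
+val Mreassembled = U %*% diagv(s) %*% V.t
+</code></pre>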
 
 <p><strong>EigenDecomposition</strong></p>
 
-<div class="highlighter-rouge"><div class="highlight"><pre 
class="highlight"><code>val (V, d) = eigen(M)
-</code></pre></div></div>
+<pre><code>val (V, d) = eigen(M)
+</code></pre>
 
 <p><strong>QR decomposition</strong></p>
 
-<div class="highlighter-rouge"><div class="highlight"><pre 
class="highlight"><code>val (Q, R) = qr(M)
-</code></pre></div></div>
+<pre><code>val (Q, R) = qr(M)
+</code></pre>
 
 <p><strong>Rank</strong>: Check for rank deficiency (runs rank-revealing 
QR)</p>
 
-<div class="highlighter-rouge"><div class="highlight"><pre 
class="highlight"><code>M.isFullRank
-</code></pre></div></div>
+<pre><code>M.isFullRank
+</code></pre>
 
 <p><strong>In-core SSVD</strong></p>
 
-<div class="highlighter-rouge"><div class="highlight"><pre 
class="highlight"><code>Val (U, V, s) = ssvd(A, k = 50, p = 15, q = 1)
-</code></pre></div></div>
+<pre><code>val (U, V, s) = ssvd(A, k = 50, p = 15, q = 1)
+</code></pre>
 
 <p><strong>Solving linear equation systems and matrix inversion:</strong> 
fully similar to R semantics; there are three forms of invocation:</p>
 
-<p>Solve <code class="highlighter-rouge">\(\mathbf{AX}=\mathbf{B}\)</code>:</p>
+<p>Solve <code>\(\mathbf{AX}=\mathbf{B}\)</code>:</p>
 
-<div class="highlighter-rouge"><div class="highlight"><pre 
class="highlight"><code>solve(A, B)
-</code></pre></div></div>
+<pre><code>solve(A, B)
+</code></pre>
 
-<p>Solve <code class="highlighter-rouge">\(\mathbf{Ax}=\mathbf{b}\)</code>:</p>
+<p>Solve <code>\(\mathbf{Ax}=\mathbf{b}\)</code>:</p>
 
-<div class="highlighter-rouge"><div class="highlight"><pre 
class="highlight"><code>solve(A, b)
-</code></pre></div></div>
+<pre><code>solve(A, b)
+</code></pre>
 
-<p>Compute <code class="highlighter-rouge">\(\mathbf{A^{-1}}\)</code>:</p>
+<p>Compute <code>\(\mathbf{A^{-1}}\)</code>:</p>
 
-<div class="highlighter-rouge"><div class="highlight"><pre 
class="highlight"><code>solve(A)
-</code></pre></div></div>
+<pre><code>solve(A)
+</code></pre>
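+
+<p>As a quick sanity-check sketch using operations shown earlier
+(<code>eye</code>, <code>%*%</code>, <code>norm</code>), the product of a square
+<code>A</code> and its computed inverse should be close to the identity:</p>
+
+<pre><code>// the residual norm should be near zero for a well-conditioned, square A
+val residual = (A %*% solve(A) - eye(A.nrow)).norm
+</code></pre>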
 
 <h4 id="misc">Misc</h4>
 
 <p>Vector cardinality:</p>
 
-<div class="highlighter-rouge"><div class="highlight"><pre 
class="highlight"><code>a.length
-</code></pre></div></div>
+<pre><code>a.length
+</code></pre>
 
 <p>Matrix cardinality:</p>
 
-<div class="highlighter-rouge"><div class="highlight"><pre 
class="highlight"><code>m.nrow
+<pre><code>m.nrow
 m.ncol
-</code></pre></div></div>
+</code></pre>
 
 <p>Means and sums:</p>
 
-<div class="highlighter-rouge"><div class="highlight"><pre 
class="highlight"><code>m.colSums
+<pre><code>m.colSums
 m.colMeans
 m.rowSums
 m.rowMeans
-</code></pre></div></div>
+</code></pre>
 
 <p>Copy-By-Value:</p>
 
-<div class="highlighter-rouge"><div class="highlight"><pre 
class="highlight"><code>val b = a cloned
-</code></pre></div></div>
+<pre><code>val b = a cloned
+</code></pre>
 
 <h4 id="random-matrices">Random Matrices</h4>
 
-<p><code class="highlighter-rouge">\(\mathcal{U}\)</code>(0,1) random matrix 
view:</p>
+<p><code>\(\mathcal{U}\)</code>(0,1) random matrix view:</p>
 
-<div class="highlighter-rouge"><div class="highlight"><pre 
class="highlight"><code>val incCoreA = Matrices.uniformView(m, n, seed)
-</code></pre></div></div>
+<pre><code>val incCoreA = Matrices.uniformView(m, n, seed)
+</code></pre>
 
-<p><code class="highlighter-rouge">\(\mathcal{U}\)</code>(-1,1) random matrix 
view:</p>
+<p><code>\(\mathcal{U}\)</code>(-1,1) random matrix view:</p>
 
-<div class="highlighter-rouge"><div class="highlight"><pre 
class="highlight"><code>val incCoreA = Matrices.symmetricUniformView(m, n, seed)
-</code></pre></div></div>
+<pre><code>val incCoreA = Matrices.symmetricUniformView(m, n, seed)
+</code></pre>
 
-<p><code class="highlighter-rouge">\(\mathcal{N}\)</code>(-1,1) random matrix 
view:</p>
+<p><code>\(\mathcal{N}\)</code>(0,1) random matrix view:</p>
 
-<div class="highlighter-rouge"><div class="highlight"><pre 
class="highlight"><code>val incCoreA = Matrices.gaussianView(m, n, seed)
-</code></pre></div></div>
+<pre><code>val incCoreA = Matrices.gaussianView(m, n, seed)
+</code></pre>
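+
+<p>These are views, so the values are presumably generated on access rather than
+stored; a small sketch of materializing one into an ordinary in-core matrix with
+<code>cloned</code> (covered under Misc above):</p>
+
+<pre><code>// materialize a 100 x 10 uniform random view into a regular in-core matrix
+val A = Matrices.uniformView(100, 10, 1234).cloned
+</code></pre>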
 
 <h4 id="iterators">Iterators</h4>
 
 <p>Mahout-Math already exposes a number of iterators.  Scala code just needs 
the following imports to enable implicit conversions to Scala iterators.</p>
 
-<div class="highlighter-rouge"><div class="highlight"><pre 
class="highlight"><code>import collection._
+<pre><code>import collection._
 import JavaConversions._
-</code></pre></div></div>
+</code></pre>
 
 <p>Iterating over rows in a Matrix:</p>
 
-<div class="highlighter-rouge"><div class="highlight"><pre 
class="highlight"><code>for (row &lt;- m) {
+<pre><code>for (row &lt;- m) {
   ... do something with row
 }
-</code></pre></div></div>
+</code></pre>
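+
+<p>As a concrete (if trivial) stand-in for the loop body, the rows can simply be
+counted; the total should agree with <code>m.nrow</code>:</p>
+
+<pre><code>var rowCount = 0
+for (row &lt;- m) rowCount += 1
+println(rowCount == m.nrow)   // expected: true
+</code></pre>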
 
 <!--Iterating over non-zero and all elements of a vector:
 *Note that Vector.Element also has some implicit syntatic sugar, e.g to add 
5.0 to every non-zero element of a matrix, the following code may be used:*
