Repository: incubator-systemml-website
Updated Branches:
  refs/heads/master cb489926b -> 05e05c651


Add jekyll syntax highlighting to code blocks

Closes #43.


Project: http://git-wip-us.apache.org/repos/asf/incubator-systemml-website/repo
Commit: http://git-wip-us.apache.org/repos/asf/incubator-systemml-website/commit/05e05c65
Tree: http://git-wip-us.apache.org/repos/asf/incubator-systemml-website/tree/05e05c65
Diff: http://git-wip-us.apache.org/repos/asf/incubator-systemml-website/diff/05e05c65

Branch: refs/heads/master
Commit: 05e05c651ff74b5c8783811e9c1cd26179ae99f8
Parents: cb48992
Author: Dexter Lesaca <[email protected]>
Authored: Thu Apr 6 12:58:04 2017 -0700
Committer: Deron Eriksson <[email protected]>
Committed: Thu Apr 6 12:58:04 2017 -0700

----------------------------------------------------------------------
 _src/_sass/_base.scss |   8 +++
 _src/_sass/main.scss  |   4 ++
 _src/get-started.html | 139 ++++++++++++++++++++++++++-------------------
 3 files changed, 92 insertions(+), 59 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/incubator-systemml-website/blob/05e05c65/_src/_sass/_base.scss
----------------------------------------------------------------------
diff --git a/_src/_sass/_base.scss b/_src/_sass/_base.scss
index 54e2328..f8a0f85 100644
--- a/_src/_sass/_base.scss
+++ b/_src/_sass/_base.scss
@@ -43,6 +43,10 @@ pre {
   white-space: pre\9; /* IE7+ */
 }
 
+code {
+  overflow: scroll;
+}
+
 .text-center {
   text-align: center;
 }
@@ -65,3 +69,7 @@ hr {
 ol {
   padding: 0;
 }
+
+figure {
+  margin: 0;
+}

http://git-wip-us.apache.org/repos/asf/incubator-systemml-website/blob/05e05c65/_src/_sass/main.scss
----------------------------------------------------------------------
diff --git a/_src/_sass/main.scss b/_src/_sass/main.scss
index b9013f5..3ec514d 100644
--- a/_src/_sass/main.scss
+++ b/_src/_sass/main.scss
@@ -6,6 +6,10 @@
 // Mixins
 @import "mixins.scss";
 
+// Syntax highlighting
+
+@import "syntax";
+
 @import "navigation.scss";
 
 // Base Styles

http://git-wip-us.apache.org/repos/asf/incubator-systemml-website/blob/05e05c65/_src/get-started.html
----------------------------------------------------------------------
diff --git a/_src/get-started.html b/_src/get-started.html
index 712351b..7f78a89 100644
--- a/_src/get-started.html
+++ b/_src/get-started.html
@@ -65,9 +65,13 @@ limitations under the License.
 
     <!-- Step 1 Code -->
     <div class="col col-12">
-      <pre><code>/usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"
+
+
+      {% highlight bash %}
+/usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"
 # Linux
-ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Linuxbrew/install/master/install)"</code></pre>
+ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Linuxbrew/install/master/install)"{% endhighlight %}
+
     </div>
 
     <!-- Step 2 Instructions -->
@@ -77,11 +81,12 @@ ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Linuxbrew/install/master
 
     <!-- Step 2 Code -->
     <div class="col col-12">
-      <pre><code>brew tap caskroom/cask
+      {% highlight bash %}
+brew tap caskroom/cask
 brew install Caskroom/cask/java
 
 brew install python
-pip install jupyter matplotlib numpy</code></pre>
+pip install jupyter matplotlib numpy{% endhighlight %}
     </div>
   </div>
 
@@ -100,8 +105,9 @@ pip install jupyter matplotlib numpy</code></pre>
 
     <!-- Step 3 Code -->
     <div class="col col-12">
-      <pre><code>brew tap homebrew/versions
-brew install apache-spark16</code></pre>
+      {% highlight bash %}
+brew tap homebrew/versions
+brew install apache-spark16{% endhighlight %}
 
    <p> Alternatively, you can <a href="http://spark.apache.org/downloads.html">download Spark</a> directly. </p>
     </div>
@@ -114,22 +120,27 @@ brew install apache-spark16</code></pre>
     <!-- Step 4 Code -->
     <div class="col col-12">
 
-       <p>
-       If you are a python user, we recommend that you download and install Apache SystemML via pip:
-       </p>
-       <pre><code>Python 2:
+       <p>
+       If you are a python user, we recommend that you download and install Apache SystemML via pip:
+       </p>
+      {% highlight bash %}
+# Python 2
 pip install systemml
-# Bleeding edge: pip install git+git://github.com/apache/incubator-systemml.git#subdirectory=src/main/python
+# Bleeding edge: pip install git+git://github.com/apache/incubator-systemml.git#subdirectory=src/main/python{% endhighlight %}
 
-Python 3:
+      {% highlight bash %}
+# Python 3:
 pip3 install systemml
-# Bleeding edge: pip3 install git+git://github.com/apache/incubator-systemml.git#subdirectory=src/main/python</code></pre>
+# Bleeding edge: pip3 install git+git://github.com/apache/incubator-systemml.git#subdirectory=src/main/python{% endhighlight %}
+
 
-       <p>
-       Alternatively, if you intend to use SystemML via spark-shell (or spark-submit), you only need systemml-0.12.0-incubating.jar, which is packaged into our official binary release (<a href="http://www.apache.org/dyn/closer.cgi/pub/apache/incubator/systemml/0.12.0-incubating/systemml-0.12.0-incubating.zip" target="_blank">systemml-0.12.0-incubating.zip</a>).
-       Note: If you have installed SystemML via pip, you can get the location of this jar by executing following command:
-       </p>
-       <pre><code>python -c 'import imp; import os; print os.path.join(imp.find_module("systemml")[1], "systemml-java")'</code></pre>
+
+       <p>
+       Alternatively, if you intend to use SystemML via spark-shell (or spark-submit), you only need systemml-0.12.0-incubating.jar, which is packaged into our official binary release (<a href="http://www.apache.org/dyn/closer.cgi/pub/apache/incubator/systemml/0.12.0-incubating/systemml-0.12.0-incubating.zip" target="_blank">systemml-0.12.0-incubating.zip</a>).
+       Note: If you have installed SystemML via pip, you can get the location of this jar by executing following command:
+       </p>
+      {% highlight bash %}
+python -c 'import imp; import os; print os.path.join(imp.find_module("systemml")[1], "systemml-java")'{% endhighlight %}
 
     </div>
 
@@ -138,31 +149,31 @@ pip3 install systemml
       <!-- Section Header -->
       <div class="col col-12 content-group--medium-bottom-margin">
         <h2>Ways to Use</h2>
-        <p>You can use SystemML in one of the following ways:
-       <ol>
-               <li>On Cluster (using our programmatic APIs):
-                       <ul>
-                               <li>Using pyspark: Please see our <a href="http://apache.github.io/incubator-systemml/beginners-guide-python">beginner's guide for python users</a>.</li>
-                               <li>Using Jupyter: Described below in step 5.</li>
-                               <li>Using spark-shell: Described below in step 6.</li>
-                       </ul>
-               </li>
-
-               <li>On Cluster (command-line batch mode):
-                       <ul>
-                               <li>Using spark-submit: Please see our <a href="http://apache.github.io/incubator-systemml/spark-batch-mode">spark batch mode tutorial</a>.</li>
-                               <li>Using hadoop: Please see our <a href="http://apache.github.io/incubator-systemml/hadoop-batch-mode">hadoop batch model tutorial</a>.</li>
-                       </ul>
-               </li>
-
-               <li>On laptop (command-line batch mode) without installing Spark or Hadoop: Please see our <a href="http://apache.github.io/incubator-systemml/standalone-guide">standalone mode tutorial</a>.</li>
-
-               <li>In-memory mode (as part of another Java application for scoring): Please see our <a href="http://apache.github.io/incubator-systemml/jmlc">JMLC tutorial</a>.</li>
-       </ol>
-
-       <p>
-       Note that you can also run pyspark, spark-shell, spark-submit on you laptop using "--master local[*]" parameter.
-       </p>
+        <p>You can use SystemML in one of the following ways:</p>
+       <ol>
+               <li>On Cluster (using our programmatic APIs):
+                       <ul>
+                               <li>Using pyspark: Please see our <a href="http://apache.github.io/incubator-systemml/beginners-guide-python">beginner's guide for python users</a>.</li>
+                               <li>Using Jupyter: Described below in step 5.</li>
+                               <li>Using spark-shell: Described below in step 6.</li>
+                       </ul>
+               </li>
+
+               <li>On Cluster (command-line batch mode):
+                       <ul>
+                               <li>Using spark-submit: Please see our <a href="http://apache.github.io/incubator-systemml/spark-batch-mode">spark batch mode tutorial</a>.</li>
+                               <li>Using hadoop: Please see our <a href="http://apache.github.io/incubator-systemml/hadoop-batch-mode">hadoop batch model tutorial</a>.</li>
+                       </ul>
+               </li>
+
+               <li>On laptop (command-line batch mode) without installing Spark or Hadoop: Please see our <a href="http://apache.github.io/incubator-systemml/standalone-guide">standalone mode tutorial</a>.</li>
+
+               <li>In-memory mode (as part of another Java application for scoring): Please see our <a href="http://apache.github.io/incubator-systemml/jmlc">JMLC tutorial</a>.</li>
+       </ol>
+
+       <p>
+       Note that you can also run pyspark, spark-shell, spark-submit on you laptop using "--master local[*]" parameter.
+       </p>
       </div>
 
       <!-- Step 5 Instructions -->
@@ -174,11 +185,12 @@ pip3 install systemml
       <div class="col col-12">
         <h4>Get Started</h4>
        <p>Start up your Jupyter notebook by moving to the folder where you saved the notebook. Then copy and paste the line below:</p>
-        <pre><code>Python 2:
-PYSPARK_DRIVER_PYTHON=jupyter PYSPARK_DRIVER_PYTHON_OPTS="notebook" pyspark --master local[*] --driver-class-path SystemML.jar --jars SystemML.jar--conf "spark.driver.memory=12g" --conf spark.driver.maxResultSize=0 --conf spark.akka.frameSize=128 --conf spark.default.parallelism=100
-
-Python 3:
-PYSPARK_PYTHON=python3 PYSPARK_DRIVER_PYTHON=jupyter PYSPARK_DRIVER_PYTHON_OPTS="notebook" pyspark --master local[*] --driver-class-path SystemML.jar --jars SystemML.jar --conf "spark.driver.memory=12g" --conf spark.driver.maxResultSize=0 --conf spark.akka.frameSize=128 --conf spark.default.parallelism=100</code></pre>
+        {% highlight python %}
+# Python 2:
+PYSPARK_DRIVER_PYTHON=jupyter PYSPARK_DRIVER_PYTHON_OPTS="notebook" pyspark --master local[*] --driver-class-path SystemML.jar --jars SystemML.jar--conf "spark.driver.memory=12g" --conf spark.driver.maxResultSize=0 --conf spark.akka.frameSize=128 --conf spark.default.parallelism=100{% endhighlight %}
+        {% highlight python %}
+# Python 3:
+PYSPARK_PYTHON=python3 PYSPARK_DRIVER_PYTHON=jupyter PYSPARK_DRIVER_PYTHON_OPTS="notebook" pyspark --master local[*] --driver-class-path SystemML.jar --jars SystemML.jar --conf "spark.driver.memory=12g" --conf spark.driver.maxResultSize=0 --conf spark.akka.frameSize=128 --conf spark.default.parallelism=100{% endhighlight %}
 
       </div>
 
@@ -191,30 +203,38 @@ PYSPARK_PYTHON=python3 PYSPARK_DRIVER_PYTHON=jupyter PYSPARK_DRIVER_PYTHON_OPTS=
       <div class="col col-12">
         <h4>Start Spark Shell with SystemML</h4>
        <p> To use SystemML with Spark Shell, the SystemML jar can be referenced using Spark Shell’s --jars option. Start the Spark Shell with SystemML with the following line of code in your terminal:</p>
-        <pre><code>spark-shell --executor-memory 4G --driver-memory 4G --jars SystemML.jar</code></pre>
+        {% highlight bash %}
+spark-shell --executor-memory 4G --driver-memory 4G --jars SystemML.jar{% endhighlight %}
+        <!-- <pre><code>spark-shell --executor-memory 4G --driver-memory 4G --jars SystemML.jar</code></pre> -->
         <h4>Create the MLContext</h4>
        <p>To begin, start an MLContext by typing the code below. Once successful, you should see a “Welcome to Apache SystemML!” message.</p>
-        <pre><code>import org.apache.sysml.api.mlcontext._
+        {% highlight bash %}
+import org.apache.sysml.api.mlcontext._
 import org.apache.sysml.api.mlcontext.ScriptFactory._
-val ml = new MLContext(sc)</code></pre>
+val ml = new MLContext(sc){% endhighlight %}
+
         <h4>Hello World</h4>
        <p>The ScriptFactory class allows DML and PYDML scripts to be created from Strings, Files, URLs, and InputStreams. Here, we’ll use the dmlmethod to create a DML “hello world” script based on a String.  We execute the script using MLContext’s execute method, which displays “hello world” to the console. The execute method returns an MLResults object, which contains no results since the script has no outputs.</p>
-        <pre><code>val helloScript = dml("print('hello world')")
-ml.execute(helloScript)</code></pre>
+        {% highlight python %}
+val helloScript = dml("print('hello world')")
+ml.execute(helloScript){% endhighlight %}
         <h4>DataFrame Example</h4>
        <p>As an example of how to use SystemML, we’ll first use Spark to create a DataFrame called df of random doubles from 0 to 1 consisting of 10,000 rows and 1,000 columns.</p>
-        <pre><code>import org.apache.spark.sql._
+        {% highlight python %}
+import org.apache.spark.sql._
 import org.apache.spark.sql.types.{StructType,StructField,DoubleType}
 import scala.util.Random
 val numRows = 10000
 val numCols = 1000
 val data = sc.parallelize(0 to numRows-1).map { _ => Row.fromSeq(Seq.fill(numCols)(Random.nextDouble)) }
 val schema = StructType((0 to numCols-1).map { i => StructField("C" + i, DoubleType, true) } )
-val df = sqlContext.createDataFrame(data, schema)</code></pre>
+val df = sqlContext.createDataFrame(data, schema){% endhighlight %}
+
        <p>We’ll create a DML script using the ScriptFactory dml method to find the minimum, maximum, and mean values in a matrix. This script has one input variable, matrix Xin, and three output variables, minOut, maxOut, and meanOut.
For performance, we’ll specify metadata indicating that the matrix has 10,000 rows and 1,000 columns.
We execute the script and obtain the results as a Tuple by calling getTuple on the results, specifying the types and names of the output variables.</p>
-        <pre><code>val minMaxMean =
+        {% highlight python %}
+val minMaxMean =
 """
 minOut = min(Xin)
 maxOut = max(Xin)
@@ -222,11 +242,12 @@ meanOut = mean(Xin)
 """
 val mm = new MatrixMetadata(numRows, numCols)
 val minMaxMeanScript = dml(minMaxMean).in("Xin", df, mm).out("minOut", "maxOut", "meanOut")
-val (min, max, mean) = ml.execute(minMaxMeanScript).getTuple[Double, Double, Double]("minOut", "maxOut", "meanOut")</code></pre>
+val (min, max, mean) = ml.execute(minMaxMeanScript).getTuple[Double, Double, Double]("minOut", "maxOut", "meanOut"){% endhighlight %}
        <p>Many different types of input and output variables are automatically allowed. These types include Boolean, Long, Double, String, Array[Array[Double]], RDD<String> and JavaRDD<String> in CSV (dense) and IJV (sparse) formats, DataFrame, BinaryBlockMatrix,Matrix, and Frame. RDDs and JavaRDDs are assumed to be CSV format unless MatrixMetadata is supplied indicating IJV format.</p>
         <h4>RDD Example:</h4>
        <p>Let’s take a look at an example of input matrices as RDDs in CSV format. We’ll create two 2x2 matrices and input these into a DML script. This script will sum each matrix and create a message based on which sum is greater. We will output the sums and the message.</p>
-<pre><code>val rdd1 = sc.parallelize(Array("1.0,2.0", "3.0,4.0"))
+        {% highlight python %}
+val rdd1 = sc.parallelize(Array("1.0,2.0", "3.0,4.0"))
 val rdd2 = sc.parallelize(Array("5.0,6.0", "7.0,8.0"))
 val sums = """
 s1 = sum(m1);
@@ -248,7 +269,7 @@ val message = sumResults.getString("message")
 val rdd1Metadata = new MatrixMetadata(2, 2)
 val rdd2Metadata = new MatrixMetadata(2, 2)
 val sumScript = dmlFromFile("sums.dml").in(Seq(("m1", rdd1, rdd1Metadata), ("m2", rdd2, rdd2Metadata))).out("s1", "s2", "message")
-val (firstSum, secondSum, sumMessage) = ml.execute(sumScript).getTuple[Double, Double, String]("s1", "s2", "message")</code></pre>
+val (firstSum, secondSum, sumMessage) = ml.execute(sumScript).getTuple[Double, Double, String]("s1", "s2", "message"){% endhighlight %}
       <p>Congratulations! You’ve now run examples in Apache SystemML!</p>
     </div>
   </div>
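Editor's note: the DML min/max/mean script in the get-started diff above (minOut = min(Xin); maxOut = max(Xin); meanOut = mean(Xin)) can be sanity-checked without Spark or SystemML with a plain-Python analogue. This is an illustrative sketch only; the function name is hypothetical and nothing here comes from the commit itself.

```python
# Illustrative plain-Python analogue of the DML script in the diff above:
#   minOut = min(Xin); maxOut = max(Xin); meanOut = mean(Xin)
# No Spark or SystemML involved; min_max_mean is a hypothetical name.

def min_max_mean(matrix):
    """Return (min, max, mean) over all cells of a 2-D list of floats."""
    cells = [v for row in matrix for v in row]
    return min(cells), max(cells), sum(cells) / len(cells)

print(min_max_mean([[1.0, 2.0], [3.0, 4.0]]))  # (1.0, 4.0, 2.5)
```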

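Similarly, the RDD example in the diff feeds two 2x2 matrices as CSV-format lines ("1.0,2.0", "3.0,4.0") into sums.dml, which computes s1 = sum(m1), s2 = sum(m2) and a message based on which sum is greater. A hedged plain-Python sketch of that logic (the helper names and the exact message text are hypothetical; sums.dml's message string is not shown in the diff):

```python
# Illustrative analogue of the sums.dml logic from the RDD example:
# s1 = sum(m1); s2 = sum(m2); message depends on which sum is larger.
# Plain Python, no Spark; parse_csv_rows / compare_sums are hypothetical names.

def parse_csv_rows(rows):
    """Parse lines like "1.0,2.0" (the CSV/dense format described in the text)."""
    return [[float(v) for v in line.split(",")] for line in rows]

def compare_sums(rows1, rows2):
    s1 = sum(v for row in parse_csv_rows(rows1) for v in row)
    s2 = sum(v for row in parse_csv_rows(rows2) for v in row)
    message = "s2 is greater" if s2 > s1 else "s1 is greater or equal"
    return s1, s2, message

print(compare_sums(["1.0,2.0", "3.0,4.0"], ["5.0,6.0", "7.0,8.0"]))
# (10.0, 26.0, 's2 is greater')
```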