Repository: incubator-systemml
Updated Branches:
  refs/heads/master 9b9d019b2 -> 20e05458b


[SYSTEMML-618] Updating the LeNet and softmax MNIST examples.

This commit adds `-train` and `-predict` scripts for running the examples in 
batch mode at the command line, and updates the functions in the main scripts.
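
The core refactor — splitting the old `eval(X, y, ...)` into a `predict` stage plus a probability-based `eval` — can be sketched in NumPy. This is an illustrative stand-in, not the DML code itself; the names `predict` and `evaluate` here mirror the softmax example's two stages under that assumption.

```python
import numpy as np

def predict(X, W, b):
    """Affine layer followed by a row-wise softmax (the softmax example's forward pass)."""
    scores = X @ W + b
    e = np.exp(scores - scores.max(axis=1, keepdims=True))  # stabilized softmax
    return e / e.sum(axis=1, keepdims=True)

def evaluate(probs, y):
    """Cross-entropy loss and accuracy from precomputed class probabilities."""
    n = probs.shape[0]
    loss = -np.sum(y * np.log(probs + 1e-12)) / n
    accuracy = float(np.mean(probs.argmax(axis=1) == y.argmax(axis=1)))
    return loss, accuracy

# The notebooks and -train scripts now call the two stages separately:
#   probs = predict(X_test, W, b)
#   loss, accuracy = evaluate(probs, y_test)
```

Separating prediction from evaluation lets the new `-predict` scripts reuse `predict` alone on unlabeled images.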


Project: http://git-wip-us.apache.org/repos/asf/incubator-systemml/repo
Commit: 
http://git-wip-us.apache.org/repos/asf/incubator-systemml/commit/20e05458
Tree: http://git-wip-us.apache.org/repos/asf/incubator-systemml/tree/20e05458
Diff: http://git-wip-us.apache.org/repos/asf/incubator-systemml/diff/20e05458

Branch: refs/heads/master
Commit: 20e05458b6dea5874dc93261252396a3e5cbc49f
Parents: 9b9d019
Author: Mike Dusenberry <[email protected]>
Authored: Fri Jul 8 16:49:28 2016 -0700
Committer: Mike Dusenberry <[email protected]>
Committed: Fri Jul 8 16:52:42 2016 -0700

----------------------------------------------------------------------
 .../examples/Example - MNIST LeNet.ipynb        |   7 +-
 .../Example - MNIST Softmax Classifier.ipynb    |   5 +-
 scripts/staging/SystemML-NN/examples/README.md  |  44 ++++---
 .../examples/mnist_lenet-predict.dml            |  87 +++++++++++++
 .../SystemML-NN/examples/mnist_lenet-train.dml  | 119 ++++++++++++++++++
 .../SystemML-NN/examples/mnist_lenet.dml        | 125 +++++--------------
 .../examples/mnist_softmax-predict.dml          |  74 +++++++++++
 .../examples/mnist_softmax-train.dml            | 108 ++++++++++++++++
 .../SystemML-NN/examples/mnist_softmax.dml      | 114 +++++------------
 9 files changed, 486 insertions(+), 197 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/incubator-systemml/blob/20e05458/scripts/staging/SystemML-NN/examples/Example
 - MNIST LeNet.ipynb
----------------------------------------------------------------------
diff --git a/scripts/staging/SystemML-NN/examples/Example - MNIST LeNet.ipynb 
b/scripts/staging/SystemML-NN/examples/Example - MNIST LeNet.ipynb
index fb1f2b3..4550cce 100644
--- a/scripts/staging/SystemML-NN/examples/Example - MNIST LeNet.ipynb  
+++ b/scripts/staging/SystemML-NN/examples/Example - MNIST LeNet.ipynb  
@@ -190,9 +190,10 @@
     "b4 = read($b4)\n",
     "\n",
     "# Eval on test set\n",
-    "[loss, accuracy] = mnist_lenet::eval(X_test, y_test, C, Hin, Win, W1, b1, 
W2, b2, W3, b3, W4, b4)\n",
+    "probs = mnist_lenet::predict(X_test, C, Hin, Win, W1, b1, W2, b2, W3, b3, 
W4, b4)\n",
+    "[loss, accuracy] = mnist_lenet::eval(probs, y_test)\n",
     "\n",
-    "print(\"Test ;Accuracy: \" + accuracy)\n",
+    "print(\"Test Accuracy: \" + accuracy)\n",
     "\n",
     "print(\"\")\n",
     "print(\"\")\n",
@@ -223,7 +224,7 @@
    "name": "python",
    "nbconvert_exporter": "python",
    "pygments_lexer": "ipython2",
-   "version": "2.7.11"
+   "version": "2.7.12"
   }
  },
  "nbformat": 4,

http://git-wip-us.apache.org/repos/asf/incubator-systemml/blob/20e05458/scripts/staging/SystemML-NN/examples/Example
 - MNIST Softmax Classifier.ipynb
----------------------------------------------------------------------
diff --git a/scripts/staging/SystemML-NN/examples/Example - MNIST Softmax 
Classifier.ipynb b/scripts/staging/SystemML-NN/examples/Example - MNIST Softmax 
Classifier.ipynb
index 454e31b..39559ec 100644
--- a/scripts/staging/SystemML-NN/examples/Example - MNIST Softmax 
Classifier.ipynb     
+++ b/scripts/staging/SystemML-NN/examples/Example - MNIST Softmax 
Classifier.ipynb     
@@ -164,7 +164,8 @@
     "b = read($b)\n",
     "\n",
     "# Eval on test set\n",
-    "[loss, accuracy] = mnist_softmax::eval(X_test, y_test, W, b)\n",
+    "probs = mnist_softmax::predict(X_test, W, b)\n",
+    "[loss, accuracy] = mnist_softmax::eval(probs, y_test)\n",
     "\n",
     "print(\"Test Accuracy: \" + accuracy)\n",
     "\n",
@@ -193,7 +194,7 @@
    "name": "python",
    "nbconvert_exporter": "python",
    "pygments_lexer": "ipython2",
-   "version": "2.7.11"
+   "version": "2.7.12"
   }
  },
  "nbformat": 4,

http://git-wip-us.apache.org/repos/asf/incubator-systemml/blob/20e05458/scripts/staging/SystemML-NN/examples/README.md
----------------------------------------------------------------------
diff --git a/scripts/staging/SystemML-NN/examples/README.md 
b/scripts/staging/SystemML-NN/examples/README.md
index eee5e9b..cb0454f 100644
--- a/scripts/staging/SystemML-NN/examples/README.md
+++ b/scripts/staging/SystemML-NN/examples/README.md
@@ -23,6 +23,26 @@ limitations under the License.
 
 ---
 
+# Examples
+### MNIST Softmax Classifier
+
+* This example trains a softmax classifier, which is essentially a multi-class 
logistic regression model, on the MNIST data.  The model will be trained on the 
*training* images, validated on the *validation* images, and tested for final 
performance metrics on the *test* images.
+* Notebook: `Example - MNIST Softmax Classifier.ipynb`.
+* DML Functions: `mnist_softmax.dml`
+* Training script: `mnist_softmax-train.dml`
+* Prediction script: `mnist_softmax-predict.dml`
+
+### MNIST "LeNet" Neural Net
+
+* This example trains a neural network on the MNIST data using a ["LeNet" 
architecture](http://yann.lecun.com/exdb/publis/pdf/lecun-98.pdf). The model 
will be trained on the *training* images, validated on the *validation* images, 
and tested for final performance metrics on the *test* images.
+* Notebook: `Example - MNIST LeNet.ipynb`.
+* DML Functions: `mnist_lenet.dml`
+* Training script: `mnist_lenet-train.dml`
+* Prediction script: `mnist_lenet-predict.dml`
+
+---
+
+# Setup
 ## Code
 * To run the examples, please first download and unzip the project via GitHub 
using the "Clone or download" button on the [homepage of the 
project](https://github.com/dusenberrymw/systemml-nn), *or* via the following 
commands:
 
@@ -37,12 +57,14 @@ limitations under the License.
   ```
 
 ## Data
-* The following examples use the classic 
[MNIST](http://yann.lecun.com/exdb/mnist/) dataset, which contains labeled 
28x28 pixel images of handwritten digits in the range of 0-9.  There are 60,000 
training images, and 10,000 testing images.  Of the 60,000 training images, 
5,000 will be used as validation images.
-* The data will be automatically downloaded as a step in either of the example 
notebooks.  (*If* you wish to download it separately, please run 
`get_mnist_data.sh`).
+* These examples use the classic [MNIST](http://yann.lecun.com/exdb/mnist/) 
dataset, which contains labeled 28x28 pixel images of handwritten digits in the 
range of 0-9.  There are 60,000 training images, and 10,000 testing images.  Of 
the 60,000 training images, 5,000 will be used as validation images.
+* **Download**:
+  * **Notebooks**: The data will be automatically downloaded as a step in 
either of the example notebooks.
+  * **Training scripts**: Please run `get_mnist_data.sh` to download the data 
separately.
 
 ## Execution
 * These examples contain scripts written in SystemML's R-like language 
(`*.dml`), as well as PySpark Jupyter notebooks (`*.ipynb`).  The scripts 
contain the math for the algorithms, enclosed in functions, and the notebooks 
serve as full, end-to-end examples of reading in data, training models using 
the functions within the scripts, and evaluating final performance.
-* To run the notebook examples, please startup Jupyter in the following manner 
from this directory (or for more information, please see [this great blog 
post](http://spark.tc/0-to-life-changing-application-with-apache-systemml/)):
+* **Notebooks**: To run the notebook examples, please start up Jupyter in the 
following manner from this directory (or for more information, please see [this 
great blog 
post](http://spark.tc/0-to-life-changing-application-with-apache-systemml/)):
 
   ```
   PYSPARK_DRIVER_PYTHON=jupyter PYSPARK_DRIVER_PYTHON_OPTS="notebook" 
$SPARK_HOME/bin/pyspark --master local[*] --driver-memory 3G 
--driver-class-path $SYSTEMML_HOME/SystemML.jar
@@ -50,18 +72,4 @@ limitations under the License.
 
   Note that all printed output, such as training statistics, from the SystemML 
scripts will be sent to the terminal in which Jupyter was started (for now...).
 
-* To run the scripts directly using `spark-submit`, please see the comments 
located at the bottom of the scripts.
-
-## Examples
-### MNIST Softmax Classifier
-
-* This example trains a softmax classifier, which is essentially a multi-class 
logistic regression model, on the MNIST data.  The model will be trained on the 
*training* images, validated on the *validation* images, and tested for final 
performance metrics on the *test* images.
-* Notebook: `Example - MNIST Softmax Classifier.ipynb`.
-* Script: `mnist_softmax.dml`
-
-### MNIST "LeNet" Neural Net
-
-* This example trains a neural network on the MNIST data using a ["LeNet" 
architecture](http://yann.lecun.com/exdb/publis/pdf/lecun-98.pdf). The model 
will be trained on the *training* images, validated on the *validation* images, 
and tested for final performance metrics on the *test* images.
-* Notebook: `Example - MNIST LeNet.ipynb`.
-* Script: `mnist_lenet.dml`
-
+* **Scripts**: To run the scripts from the command line using `spark-submit`, 
please see the comments located at the top of the `-train` and `-predict` 
scripts.

http://git-wip-us.apache.org/repos/asf/incubator-systemml/blob/20e05458/scripts/staging/SystemML-NN/examples/mnist_lenet-predict.dml
----------------------------------------------------------------------
diff --git a/scripts/staging/SystemML-NN/examples/mnist_lenet-predict.dml 
b/scripts/staging/SystemML-NN/examples/mnist_lenet-predict.dml
new file mode 100644
index 0000000..fc8e904
--- /dev/null
+++ b/scripts/staging/SystemML-NN/examples/mnist_lenet-predict.dml
@@ -0,0 +1,87 @@
+#-------------------------------------------------------------
+#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+# 
+#   http://www.apache.org/licenses/LICENSE-2.0
+# 
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+#
+#-------------------------------------------------------------
+
+# MNIST LeNet - Predict
+#
+# This script computes the class probability predictions of a
+# trained convolutional net using the "LeNet" architecture on
+# images of handwritten digits.
+#
+# Inputs:
+#  - X: File containing images.
+#     The format is "pixel_1, pixel_2, ..., pixel_n".
+#  - C: Number of color channels in the images.
+#  - Hin: Input image height.
+#  - Win: Input image width.
+#  - model_dir: Directory containing the trained weights and biases
+#     of the model.
+#  - out_dir: Directory to store class probability predictions for
+#     each image.
+#  - fmt: [DEFAULT: "csv"] File format of `X` and output predictions.
+#     Options include: "csv", "mm", "text", and "binary".
+#
+# Outputs:
+#  - probs: File containing class probability predictions for each
+#     image.
+# 
+# Data:
+# The X file should contain images of handwritten digits,
+# where each example is a 28x28 pixel image of grayscale values in
+# the range [0,255] stretched out as 784 pixels.
+#
+# Sample Invocation:
+# Execute using Spark
+#   ```
+#   $SPARK_HOME/bin/spark-submit --master local[*] --driver-memory 5G
+#   --conf spark.driver.maxResultSize=0 --conf spark.akka.frameSize=128
+#   $SYSTEMML_HOME/target/SystemML.jar -f mnist_lenet-predict.dml
+#   -nvargs X=data/mnist/images.csv C=1 Hin=28 Win=28
+#   model_dir=model/mnist_lenet out_dir=data/mnist
+#   ```
+#
+source("mnist_lenet.dml") as mnist_lenet
+
+# Read training data
+fmt = ifdef($fmt, "csv")
+X = read($X, format=fmt)
+C = $C
+Hin = $Hin
+Win = $Win
+
+# Scale images to [-1,1]
+X = (X / 255.0) * 2 - 1
+
+# Read model coefficients
+W1 = read($model_dir+"/W1")
+b1 = read($model_dir+"/b1")
+W2 = read($model_dir+"/W2")
+b2 = read($model_dir+"/b2")
+W3 = read($model_dir+"/W3")
+b3 = read($model_dir+"/b3")
+W4 = read($model_dir+"/W4")
+b4 = read($model_dir+"/b4")
+
+# Predict class probabilities
+probs = mnist_lenet::predict(X, C, Hin, Win, W1, b1, W2, b2, W3, b3, W4, b4)
+
+# Output results
+write(probs, $out_dir+"/probs."+fmt, format=fmt)
+

http://git-wip-us.apache.org/repos/asf/incubator-systemml/blob/20e05458/scripts/staging/SystemML-NN/examples/mnist_lenet-train.dml
----------------------------------------------------------------------
diff --git a/scripts/staging/SystemML-NN/examples/mnist_lenet-train.dml 
b/scripts/staging/SystemML-NN/examples/mnist_lenet-train.dml
new file mode 100644
index 0000000..ed2d12c
--- /dev/null
+++ b/scripts/staging/SystemML-NN/examples/mnist_lenet-train.dml
@@ -0,0 +1,119 @@
+#-------------------------------------------------------------
+#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+# 
+#   http://www.apache.org/licenses/LICENSE-2.0
+# 
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+#
+#-------------------------------------------------------------
+
+# MNIST LeNet - Train
+#
+# This script trains a convolutional net using the "LeNet" architecture
+# on images of handwritten digits.
+#
+# Inputs:
+#  - train: File containing labeled MNIST training images.
+#     The format is "label, pixel_1, pixel_2, ..., pixel_n".
+#  - test: File containing labeled MNIST test images.
+#     The format is "label, pixel_1, pixel_2, ..., pixel_n".
+#  - C: Number of color channels in the images.
+#  - Hin: Input image height.
+#  - Win: Input image width.
+#  - out_dir: Directory to store weights and bias matrices of
+#     trained model, as well as final test accuracy.
+#  - fmt: [DEFAULT: "csv"] File format of `train` and `test` data.
+#     Options include: "csv", "mm", "text", and "binary".
+#
+# Outputs:
+#  - W1, W2, W3, W4: Files containing the trained weights of the model.
+#  - b1, b2, b3, b4: Files containing the trained biases of the model.
+#  - accuracy: File containing the final accuracy on the test data.
+# 
+# Data:
+# The MNIST dataset contains labeled images of handwritten digits,
+# where each example is a 28x28 pixel image of grayscale values in
+# the range [0,255] stretched out as 784 pixels, and each label is
+# one of 10 possible digits in [0,9].
+#
+# Sample Invocation:
+# 1. Download data (60,000 training examples, and 10,000 test examples)
+#   ```
+#   examples/get_mnist_data.sh
+#   ```
+#
+# 2. Execute using Spark
+#   ```
+#   $SPARK_HOME/bin/spark-submit --master local[*] --driver-memory 10G
+#   --conf spark.driver.maxResultSize=0 --conf spark.akka.frameSize=128
+#   $SYSTEMML_HOME/target/SystemML.jar -f mnist_lenet-train.dml
+#   -nvargs train=data/mnist/mnist_train.csv test=data/mnist/mnist_test.csv
+#   C=1 Hin=28 Win=28 out_dir=model/mnist_lenet
+#   ```
+#
+source("mnist_lenet.dml") as mnist_lenet
+
+# Read training data
+fmt = ifdef($fmt, "csv")
+train = read($train, format=fmt)
+test = read($test, format=fmt)
+C = $C
+Hin = $Hin
+Win = $Win
+
+# Extract images and labels
+images = train[,2:ncol(train)]
+labels = train[,1]
+X_test = test[,2:ncol(test)]
+y_test = test[,1]
+
+# Scale images to [-1,1], and one-hot encode the labels
+n = nrow(train)
+n_test = nrow(test)
+images = (images / 255.0) * 2 - 1
+labels = table(seq(1, n), labels+1, n, 10)
+X_test = (X_test / 255.0) * 2 - 1
+y_test = table(seq(1, n_test), y_test+1, n_test, 10)
+
+# Split into training (55,000 examples) and validation (5,000 examples)
+X = images[5001:nrow(images),]
+X_val = images[1:5000,]
+y = labels[5001:nrow(images),]
+y_val = labels[1:5000,]
+
+# Train
+[W1, b1, W2, b2, W3, b3, W4, b4] = mnist_lenet::train(X, y, X_val, y_val, C, 
Hin, Win)
+
+# Write model out
+write(W1, $out_dir+"/W1")
+write(b1, $out_dir+"/b1")
+write(W2, $out_dir+"/W2")
+write(b2, $out_dir+"/b2")
+write(W3, $out_dir+"/W3")
+write(b3, $out_dir+"/b3")
+write(W4, $out_dir+"/W4")
+write(b4, $out_dir+"/b4")
+
+# Eval on test set
+probs = mnist_lenet::predict(X_test, C, Hin, Win, W1, b1, W2, b2, W3, b3, W4, 
b4)
+[loss, accuracy] = mnist_lenet::eval(probs, y_test)
+
+# Output results
+print("Test Accuracy: " + accuracy)
+write(accuracy, $out_dir+"/accuracy")
+
+print("")
+print("")
+
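
The preprocessing done above — scaling pixels and one-hot encoding labels via `table(seq(1, n), labels+1, n, 10)` — can be mirrored in NumPy as a sketch (not the DML API; `preprocess` is an illustrative name):

```python
import numpy as np

def preprocess(images, labels, num_classes=10):
    """Scale pixel values from [0,255] to [-1,1] and one-hot encode labels,
    mirroring the DML lines `(images / 255.0) * 2 - 1` and `table(...)`."""
    X = (images / 255.0) * 2 - 1
    n = labels.shape[0]
    Y = np.zeros((n, num_classes))
    Y[np.arange(n), labels] = 1  # one row per example, a 1 in the label column
    return X, Y
```

The `labels+1` in the DML call reflects 1-based indexing; the NumPy version indexes the 0-9 labels directly.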

http://git-wip-us.apache.org/repos/asf/incubator-systemml/blob/20e05458/scripts/staging/SystemML-NN/examples/mnist_lenet.dml
----------------------------------------------------------------------
diff --git a/scripts/staging/SystemML-NN/examples/mnist_lenet.dml 
b/scripts/staging/SystemML-NN/examples/mnist_lenet.dml
index 4474358..b9ece2f 100644
--- a/scripts/staging/SystemML-NN/examples/mnist_lenet.dml
+++ b/scripts/staging/SystemML-NN/examples/mnist_lenet.dml
@@ -139,7 +139,9 @@ train = function(matrix[double] X, matrix[double] y,
         accuracy = mean(rowIndexMax(probs) == rowIndexMax(y_batch))
 
         # Compute validation loss & accuracy
-        [loss_val, accuracy_val] = eval(X_val, y_val, C, Hin, Win, W1, b1, W2, 
b2, W3, b3, W4, b4)
+        probs_val = predict(X_val, C, Hin, Win, W1, b1, W2, b2, W3, b3, W4, b4)
+        loss_val = cross_entropy_loss::forward(probs_val, y_val)
+        accuracy_val = mean(rowIndexMax(probs_val) == rowIndexMax(y_val))
 
         # Output results
         print("Epoch: " + e + ", Iter: " + i + ", Train Loss: " + loss + ", 
Train Accuracy: " + accuracy + ", Val Loss: " + loss_val + ", Val Accuracy: " + 
accuracy_val)
@@ -190,22 +192,21 @@ train = function(matrix[double] X, matrix[double] y,
   }
 }
 
-eval = function(matrix[double] X, matrix[double] y, int C, int Hin, int Win,
-                matrix[double] W1, matrix[double] b1,
-                matrix[double] W2, matrix[double] b2,
-                matrix[double] W3, matrix[double] b3,
-                matrix[double] W4, matrix[double] b4)
-    return (double loss, double accuracy) {
+predict = function(matrix[double] X, int C, int Hin, int Win,
+                   matrix[double] W1, matrix[double] b1,
+                   matrix[double] W2, matrix[double] b2,
+                   matrix[double] W3, matrix[double] b3,
+                   matrix[double] W4, matrix[double] b4)
+    return (matrix[double] probs) {
   /*
-   * Evaluates a convolutional net using the "LeNet" architecture.
+   * Computes the class probability predictions of a convolutional
+   * net using the "LeNet" architecture.
    *
    * The input matrix, X, has N examples, each represented as a 3D
-   * volume unrolled into a single vector.  The targets, y, have K
-   * classes, and are one-hot encoded.
+   * volume unrolled into a single vector.
    *
    * Inputs:
    *  - X: Input data matrix, of shape (N, C*Hin*Win).
-   *  - y: Target matrix, of shape (N, K).
    *  - C: Number of input channels (dimensionality of input depth).
    *  - Hin: Input height.
    *  - Win: Input width.
@@ -219,10 +220,9 @@ eval = function(matrix[double] X, matrix[double] y, int C, 
int Hin, int Win,
    *  - b4: 4th layer biases vector, of shape (1, K).
    *
    * Outputs:
-   *  - loss: Scalar loss, of shape (1).
-   *  - accuracy: Scalar accuracy, of shape (1).
+   *  - probs: Class probabilities, of shape (N, K).
    */
-  # Eval network:
+  # Network:
   # conv1 -> relu1 -> pool1 -> conv2 -> relu2 -> pool2 -> affine3 -> relu3 -> 
affine4 -> softmax
   Hf = 5  # filter height
   Wf = 5  # filter width
@@ -249,7 +249,25 @@ eval = function(matrix[double] X, matrix[double] y, int C, 
int Hin, int Win,
   ## layer 4:  affine4 -> softmax
   outa4 = affine::forward(outr3, W4, b4)
   probs = softmax::forward(outa4)
+}
 
+eval = function(matrix[double] probs, matrix[double] y)
+    return (double loss, double accuracy) {
+  /*
+   * Evaluates a convolutional net using the "LeNet" architecture.
+   *
+   * The probs matrix contains the class probability predictions
+   * of K classes over N examples.  The targets, y, have K classes,
+   * and are one-hot encoded.
+   *
+   * Inputs:
+   *  - probs: Class probabilities, of shape (N, K).
+   *  - y: Target matrix, of shape (N, K).
+   *
+   * Outputs:
+   *  - loss: Scalar loss, of shape (1).
+   *  - accuracy: Scalar accuracy, of shape (1).
+   */
   # Compute loss & accuracy
   loss = cross_entropy_loss::forward(probs, y)
   correct_pred = rowIndexMax(probs) == rowIndexMax(y)
@@ -279,82 +297,3 @@ generate_dummy_data = function()
   y = table(seq(1, N), classes)  # one-hot encoding
 }
 
-
-#
-# Main
-#
-# This runs if called as a script.
-#
-# The MNIST dataset contains labeled images of handwritten digits,
-# where each example is a 28x28 pixel image of grayscale values in
-# the range [0,255] stretched out as 784 pixels, and each label is
-# one of 10 possible digits in [0,9].
-#
-# Here, we assume 60,000 training examples, and 10,000 test examples,
-# where the format is "label, pixel_1, pixel_2, ..., pixel_n".
-#
-# 1. Download data
-#   ```
-#   examples/get_mnist_data.sh
-#   ```
-#
-# 2. Execute using Spark
-#   ```
-#   $SPARK_HOME/bin/spark-submit --master local[*] --driver-memory 10G
-#   --conf spark.driver.maxResultSize=0 --conf spark.akka.frameSize=128
-#   $SYSTEMML_HOME/target/SystemML.jar -f mnist_lenet.dml
-#   -nvargs train=data/mnist/mnist_train.csv test=data/mnist/mnist_test.csv
-#   C=1 Hin=28 Win=28 out_dir=model/mnist_lenet
-#   ```
-#
-
-# Read training data
-train = read($train, format="csv")
-test = read($test, format="csv")
-C = $C
-Hin = $Hin
-Win = $Win
-
-# Extract images and labels
-images = train[,2:ncol(train)]
-labels = train[,1]
-X_test = test[,2:ncol(test)]
-y_test = test[,1]
-
-# Scale images to [-1,1], and one-hot encode the labels
-n = nrow(train)
-n_test = nrow(test)
-images = (images / 255.0) * 2 - 1
-labels = table(seq(1, n), labels+1, n, 10)
-X_test = X_test / 255.0
-y_test = table(seq(1, n_test), y_test+1, n_test, 10)
-
-# Split into training (55,000 examples) and validation (5,000 examples)
-X = images[5001:nrow(images),]
-X_val = images[1:5000,]
-y = labels[5001:nrow(images),]
-y_val = labels[1:5000,]
-
-# Train
-[W1, b1, W2, b2, W3, b3, W4, b4] = train(X, y, X_val, y_val, C, Hin, Win)
-
-# Write model out
-write(W1, $out_dir+"/W1")
-write(b1, $out_dir+"/b1")
-write(W2, $out_dir+"/W2")
-write(b2, $out_dir+"/b2")
-write(W3, $out_dir+"/W3")
-write(b3, $out_dir+"/b3")
-write(W4, $out_dir+"/W4")
-write(b4, $out_dir+"/b4")
-
-# Eval on test set
-[loss, accuracy] = eval(X_test, y_test, C, Hin, Win, W1, b1, W2, b2, W3, b3, 
W4, b4)
-
-# Output results
-print("Test Accuracy: " + accuracy)
-write(accuracy, $out_dir+"/accuracy")
-
-print("")
-print("")
-

http://git-wip-us.apache.org/repos/asf/incubator-systemml/blob/20e05458/scripts/staging/SystemML-NN/examples/mnist_softmax-predict.dml
----------------------------------------------------------------------
diff --git a/scripts/staging/SystemML-NN/examples/mnist_softmax-predict.dml 
b/scripts/staging/SystemML-NN/examples/mnist_softmax-predict.dml
new file mode 100644
index 0000000..bc7d158
--- /dev/null
+++ b/scripts/staging/SystemML-NN/examples/mnist_softmax-predict.dml
@@ -0,0 +1,74 @@
+#-------------------------------------------------------------
+#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+# 
+#   http://www.apache.org/licenses/LICENSE-2.0
+# 
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+#
+#-------------------------------------------------------------
+
+# MNIST Softmax - Predict
+#
+# This script computes the class probability predictions of a
+# trained softmax classifier on images of handwritten digits.
+#
+# Inputs:
+#  - X: File containing images.
+#     The format is "pixel_1, pixel_2, ..., pixel_n".
+#  - model_dir: Directory containing the trained weights and biases
+#     of the model.
+#  - out_dir: Directory to store class probability predictions for
+#     each image.
+#  - fmt: [DEFAULT: "csv"] File format of `X` and output predictions.
+#     Options include: "csv", "mm", "text", and "binary".
+#
+# Outputs:
+#  - probs: File containing class probability predictions for each
+#     image.
+# 
+# Data:
+# The X file should contain images of handwritten digits,
+# where each example is a 28x28 pixel image of grayscale values in
+# the range [0,255] stretched out as 784 pixels.
+#
+# Sample Invocation:
+# Execute using Spark
+#   ```
+#   $SPARK_HOME/bin/spark-submit --master local[*] --driver-memory 5G
+#   --conf spark.driver.maxResultSize=0 --conf spark.akka.frameSize=128
+#   $SYSTEMML_HOME/target/SystemML.jar -f mnist_softmax-predict.dml
+#   -nvargs X=data/mnist/images.csv model_dir=model/mnist_softmax
+#   out_dir=data/mnist
+#   ```
+#
+source("mnist_softmax.dml") as mnist_softmax
+
+# Read training data
+fmt = ifdef($fmt, "csv")
+X = read($X, format=fmt)
+
+# Scale images to [0,1]
+X = X / 255.0
+
+# Read model coefficients
+W = read($model_dir+"/W")
+b = read($model_dir+"/b")
+
+# Predict class probabilities
+probs = mnist_softmax::predict(X, W, b)
+
+# Output results
+write(probs, $out_dir+"/probs."+fmt, format=fmt)
+

http://git-wip-us.apache.org/repos/asf/incubator-systemml/blob/20e05458/scripts/staging/SystemML-NN/examples/mnist_softmax-train.dml
----------------------------------------------------------------------
diff --git a/scripts/staging/SystemML-NN/examples/mnist_softmax-train.dml 
b/scripts/staging/SystemML-NN/examples/mnist_softmax-train.dml
new file mode 100644
index 0000000..742110b
--- /dev/null
+++ b/scripts/staging/SystemML-NN/examples/mnist_softmax-train.dml
@@ -0,0 +1,108 @@
+#-------------------------------------------------------------
+#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+# 
+#   http://www.apache.org/licenses/LICENSE-2.0
+# 
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+#
+#-------------------------------------------------------------
+
+# MNIST Softmax - Train
+#
+# This script trains a softmax classifier on images of handwritten
+# digits.
+#
+# Inputs:
+#  - train: File containing labeled MNIST training images.
+#     The format is "label, pixel_1, pixel_2, ..., pixel_n".
+#  - test: File containing labeled MNIST test images.
+#     The format is "label, pixel_1, pixel_2, ..., pixel_n".
+#  - out_dir: Directory to store weights and bias matrices of
+#     trained model, as well as final test accuracy.
+#  - fmt: [DEFAULT: "csv"] File format of `train` and `test` data.
+#     Options include: "csv", "mm", "text", and "binary".
+#
+# Outputs:
+#  - W: File containing the trained weights of the model.
+#  - b: File containing the trained biases of the model.
+#  - accuracy: File containing the final accuracy on the test data.
+# 
+# Data:
+# The MNIST dataset contains labeled images of handwritten digits,
+# where each example is a 28x28 pixel image of grayscale values in
+# the range [0,255] stretched out as 784 pixels, and each label is
+# one of 10 possible digits in [0,9].
+#
+# Sample Invocation:
+# 1. Download data (60,000 training examples, and 10,000 test examples)
+#   ```
+#   examples/get_mnist_data.sh
+#   ```
+#
+# 2. Execute using Spark
+#   ```
+#   $SPARK_HOME/bin/spark-submit --master local[*] --driver-memory 5G
+#   --conf spark.driver.maxResultSize=0 --conf spark.akka.frameSize=128
+#   $SYSTEMML_HOME/target/SystemML.jar -f mnist_softmax-train.dml
+#   -nvargs train=data/mnist/mnist_train.csv test=data/mnist/mnist_test.csv
+#   out_dir=model/mnist_softmax
+#   ```
+#
+source("mnist_softmax.dml") as mnist_softmax
+
+# Read training data
+fmt = ifdef($fmt, "csv")
+train = read($train, format=fmt)
+test = read($test, format=fmt)
+
+# Extract images and labels
+images = train[,2:ncol(train)]
+labels = train[,1]
+X_test = test[,2:ncol(test)]
+y_test = test[,1]
+
+# Scale images to [0,1], and one-hot encode the labels
+n = nrow(train)
+n_test = nrow(test)
+classes = 10
+images = images / 255.0
+labels = table(seq(1, n), labels+1, n, classes)
+X_test = X_test / 255.0
+y_test = table(seq(1, n_test), y_test+1, n_test, classes)
+
+# Split into training (55,000 examples) and validation (5,000 examples)
+X = images[5001:nrow(images),]
+X_val = images[1:5000,]
+y = labels[5001:nrow(images),]
+y_val = labels[1:5000,]
+
+# Train
+[W, b] = mnist_softmax::train(X, y, X_val, y_val)
+
+# Write model out
+write(W, $out_dir+"/W")
+write(b, $out_dir+"/b")
+
+# Eval on test set
+probs = mnist_softmax::predict(X_test, W, b)
+[loss, accuracy] = mnist_softmax::eval(probs, y_test)
+
+# Output results
+print("Test Accuracy: " + accuracy)
+write(accuracy, $out_dir+"/accuracy")
+
+print("")
+print("")
+
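
Both `-train` scripts hold out the first 5,000 training rows as a validation set. A NumPy sketch of that split (illustrative; it mirrors the DML slices `images[1:5000,]` and `images[5001:nrow(images),]` with 0-based indexing):

```python
import numpy as np

def split_train_val(images, labels, n_val=5000):
    """First n_val examples become the validation set; the rest are
    used for training, as in the MNIST example scripts."""
    X_val, y_val = images[:n_val], labels[:n_val]
    X, y = images[n_val:], labels[n_val:]
    return X, y, X_val, y_val
```

With the full MNIST training file this yields 55,000 training and 5,000 validation examples.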

http://git-wip-us.apache.org/repos/asf/incubator-systemml/blob/20e05458/scripts/staging/SystemML-NN/examples/mnist_softmax.dml
----------------------------------------------------------------------
diff --git a/scripts/staging/SystemML-NN/examples/mnist_softmax.dml 
b/scripts/staging/SystemML-NN/examples/mnist_softmax.dml
index f3c00e8..ee0d3cb 100644
--- a/scripts/staging/SystemML-NN/examples/mnist_softmax.dml
+++ b/scripts/staging/SystemML-NN/examples/mnist_softmax.dml
@@ -32,8 +32,10 @@ train = function(matrix[double] X, matrix[double] y,
                  matrix[double] X_val, matrix[double] y_val)
     return (matrix[double] W, matrix[double] b) {
   /*
-   * Trains a softmax classifier.  The input matrix, X, has N examples,
-   * each with D features.  The targets, y, have K classes.
+   * Trains a softmax classifier.
+   *
+   * The input matrix, X, has N examples, each with D features.
+   * The targets, y, have K classes, and are one-hot encoded.
    *
    * Inputs:
    *  - X: Input data matrix, of shape (N, D).
@@ -82,8 +84,11 @@ train = function(matrix[double] X, matrix[double] y,
       # Compute loss & accuracy for training & validation data
       loss = cross_entropy_loss::forward(probs, y_batch)
       accuracy = mean(rowIndexMax(probs) == rowIndexMax(y_batch))
-      [loss_val, accuracy_val] = eval(X_val, y_val, W, b)
-      print("Epoch: " + e + ", Iter: " + i + ", Train Loss: " + loss + ", Train Accuracy: " + accuracy + ", Val Loss: " + loss_val + ", Val Accuracy: " + accuracy_val)
+      probs_val = predict(X_val, W, b)
+      loss_val = cross_entropy_loss::forward(probs_val, y_val)
+      accuracy_val = mean(rowIndexMax(probs_val) == rowIndexMax(y_val))
+      print("Epoch: " + e + ", Iter: " + i + ", Train Loss: " + loss + ", Train Accuracy: " +
+            accuracy + ", Val Loss: " + loss_val + ", Val Accuracy: " + accuracy_val)
 
       # Compute backward pass
       ## loss:
@@ -103,27 +108,44 @@ train = function(matrix[double] X, matrix[double] y,
   }
 }
 
-eval = function(matrix[double] X, matrix[double] y, matrix[double] W, matrix[double] b)
-    return (double loss, double accuracy) {
+predict = function(matrix[double] X, matrix[double] W, matrix[double] b)
+    return (matrix[double] probs) {
   /*
-   * Evaluates a softmax classifier.  The input matrix, X, has N
-   * examples, each with D features.  The targets, y, have K classes.
+   * Computes the class probability predictions of a softmax classifier.
+   *
+   * The input matrix, X, has N examples, each with D features.
    *
    * Inputs:
    *  - X: Input data matrix, of shape (N, D).
-   *  - y: Target matrix, of shape (N, K).
    *  - W: Weights (parameters) matrix, of shape (D, M).
    *  - b: Biases vector, of shape (1, M).
    *
    * Outputs:
-   *  - loss: Scalar loss, of shape (1).
-   *  - accuracy: Scalar accuracy, of shape (1).
+   *  - probs: Class probabilities, of shape (N, K).
    */
   # Compute forward pass
   ## affine & softmax:
   out = affine::forward(X, W, b)
   probs = softmax::forward(out)
+}
 
+eval = function(matrix[double] probs, matrix[double] y)
+    return (double loss, double accuracy) {
+  /*
+   * Evaluates a softmax classifier.
+   *
+   * The probs matrix contains the class probability predictions
+   * of K classes over N examples.  The targets, y, have K classes,
+   * and are one-hot encoded.
+   *
+   * Inputs:
+   *  - probs: Class probabilities, of shape (N, K).
+   *  - y: Target matrix, of shape (N, K).
+   *
+   * Outputs:
+   *  - loss: Scalar loss, of shape (1).
+   *  - accuracy: Scalar accuracy, of shape (1).
+   */
   # Compute loss & accuracy
   loss = cross_entropy_loss::forward(probs, y)
   correct_pred = rowIndexMax(probs) == rowIndexMax(y)
@@ -153,73 +175,3 @@ generate_dummy_data = function()
   y = table(seq(1, N), classes)  # one-hot encoding
 }
 
-
-#
-# Main
-#
-# This runs if called as a script.
-#
-# The MNIST dataset contains labeled images of handwritten digits,
-# where each example is a 28x28 pixel image of grayscale values in
-# the range [0,255] stretched out as 784 pixels, and each label is
-# one of 10 possible digits in [0,9].
-#
-# Here, we assume 60,000 training examples, and 10,000 test examples,
-# where the format is "label, pixel_1, pixel_2, ..., pixel_n".
-#
-# 1. Download data
-#   ```
-#   examples/get_mnist_data.sh
-#   ```
-#
-# 2. Execute using Spark
-#   ```
-#   $SPARK_HOME/bin/spark-submit --master local[*] --driver-memory 5G
-#   --conf spark.driver.maxResultSize=0 --conf spark.akka.frameSize=128
-#   $SYSTEMML_HOME/target/SystemML.jar -f mnist_softmax.dml
-#   -nvargs train=data/mnist/mnist_train.csv test=data/mnist/mnist_test.csv
-#   out_dir=model/mnist_softmax
-#   ```
-#
-
-# Read training data
-train = read($train, format="csv")
-test = read($test, format="csv")
-
-# Extract images and labels
-images = train[,2:ncol(train)]
-labels = train[,1]
-X_test = test[,2:ncol(test)]
-y_test = test[,1]
-
-# Scale images to [0,1], and one-hot encode the labels
-n = nrow(train)
-n_test = nrow(test)
-images = images / 255.0
-labels = table(seq(1, n), labels+1, n, 10)
-X_test = X_test / 255.0
-y_test = table(seq(1, n_test), y_test+1, n_test, 10)
-
-# Split into training (55,000 examples) and validation (5,000 examples)
-X = images[5001:nrow(images),]
-X_val = images[1:5000,]
-y = labels[5001:nrow(images),]
-y_val = labels[1:5000,]
-
-# Train
-[W, b] = train(X, y, X_val, y_val)
-
-# Write model out
-write(W, $out_dir+"/W")
-write(b, $out_dir+"/b")
-
-# Eval on test set
-[loss, accuracy] = eval(X_test, y_test, W, b)
-
-# Output results
-print("Test Accuracy: " + accuracy)
-write(accuracy, $out_dir+"/accuracy")
-
-print("")
-print("")
-
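The refactor in this diff splits the old `eval(X, y, W, b)` into a `predict(X, W, b)` that returns class probabilities and an `eval(probs, y)` that scores them, so the forward pass is computed once and reused. A minimal NumPy sketch of that split, assuming the same shapes as the DML (`X` of shape (N, D), one-hot `y` of shape (N, K)); the function bodies are illustrative stand-ins for the `affine`, `softmax`, and `cross_entropy_loss` layer modules, not SystemML code:

```python
import numpy as np

def predict(X, W, b):
    """Affine transform followed by softmax: class probabilities."""
    scores = X @ W + b                           # shape (N, K)
    scores -= scores.max(axis=1, keepdims=True)  # shift for numerical stability
    exp_scores = np.exp(scores)
    return exp_scores / exp_scores.sum(axis=1, keepdims=True)

def evaluate(probs, y):
    """Cross-entropy loss and accuracy against one-hot targets."""
    eps = 1e-12  # avoid log(0)
    loss = -np.mean(np.sum(y * np.log(probs + eps), axis=1))
    accuracy = np.mean(probs.argmax(axis=1) == y.argmax(axis=1))
    return loss, accuracy

# Tiny example: 2 examples, 3 features, 2 classes.
X = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
W = np.array([[2.0, -2.0], [-2.0, 2.0], [0.0, 0.0]])
b = np.zeros((1, 2))
y = np.array([[1.0, 0.0], [0.0, 1.0]])

probs = predict(X, W, b)
loss, accuracy = evaluate(probs, y)
```

With `predict` separated out, the training loop can compute validation metrics from `probs_val` directly (as the updated `train` does), and the `-predict` script can emit probabilities without needing labels.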
