Repository: incubator-systemml
Updated Branches:
  refs/heads/gh-pages cbc960226 -> 003cc3e29


Update source references based on new package structure

The refactor from com.ibm.bi.dml to org.apache.sysml
is being done in two phases to avoid losing the file
history. In this phase, all references to the old package
are updated to reference the new package structure.
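A bulk reference update like this is typically scripted rather than done by hand. The snippet below is a minimal, hypothetical sketch of such a rewrite using GNU sed; the file path and the exact commands are illustrative assumptions, not the tooling actually used for this commit.

```shell
# Hypothetical sketch: rewrite the old package prefix to the new one
# in a sample file. Dots are escaped in the sed pattern so they match
# literally instead of acting as regex wildcards.
printf 'import com.ibm.bi.dml.api.MLContext\n' > /tmp/sample.scala
sed -i 's/com\.ibm\.bi\.dml/org.apache.sysml/g' /tmp/sample.scala
cat /tmp/sample.scala
```

In a real repository the same substitution would be applied across all tracked files (for example by feeding `git grep -l` results into sed), which is why doing the rename in a separate phase from the file moves helps keep the history readable.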


Project: http://git-wip-us.apache.org/repos/asf/incubator-systemml/repo
Commit: http://git-wip-us.apache.org/repos/asf/incubator-systemml/commit/350f7feb
Tree: http://git-wip-us.apache.org/repos/asf/incubator-systemml/tree/350f7feb
Diff: http://git-wip-us.apache.org/repos/asf/incubator-systemml/diff/350f7feb

Branch: refs/heads/gh-pages
Commit: 350f7feb9d90bfa4292a4ca5bd54a15ca28e8276
Parents: cbc9602
Author: Luciano Resende <[email protected]>
Authored: Tue Dec 1 13:58:16 2015 -0800
Committer: Luciano Resende <[email protected]>
Committed: Thu Dec 3 10:44:00 2015 -0800

----------------------------------------------------------------------
 .../SystemML_Language_Reference.html            |  4 +-
 dml-language-reference.md                       |  4 +-
 mlcontext-programming-guide.md                  | 96 ++++++++++----------
 quick-start-guide.md                            |  2 +-
 4 files changed, 53 insertions(+), 53 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/incubator-systemml/blob/350f7feb/Language
 Reference/SystemML_Language_Reference.html
----------------------------------------------------------------------
diff --git a/Language Reference/SystemML_Language_Reference.html b/Language 
Reference/SystemML_Language_Reference.html
index ce3635d..b442da3 100644
--- a/Language Reference/SystemML_Language_Reference.html       
+++ b/Language Reference/SystemML_Language_Reference.html       
@@ -3673,7 +3673,7 @@ class=SpellE>eval</span>) <o:p></o:p></span></p>
 margin-left:.5in;margin-bottom:.0001pt'><span class=GramE><span
 style='font-size:11.0pt;font-family:"Courier 
New";mso-bidi-font-style:italic'>implemented</span></span><span
 style='font-size:11.0pt;font-family:"Courier New";mso-bidi-font-style:italic'>
-in (<span class=SpellE>classname</span>=&quot;<span 
class=SpellE>com.ibm.bi.dml.packagesupport.JLapackEigenWrapper</span>&quot;)<o:p></o:p></span></p>
+in (<span class=SpellE>classname</span>=&quot;<span 
class=SpellE>org.apache.sysml.packagesupport.JLapackEigenWrapper</span>&quot;)<o:p></o:p></span></p>
 
 <p class=MsoNormal>A UDF invocation specifies the function identifier, variable
 identifiers for calling parameters, and the variables to be populated by the
@@ -9479,7 +9479,7 @@ shell is as follows:</p>
 <p class=MsoNormal style='margin:0in;margin-bottom:.0001pt'><span 
class=SpellE><span
 class=GramE><span style='font-size:11.0pt;font-family:"Courier 
New";mso-bidi-font-style:
 italic'>scala</span></span></span><span style='font-size:11.0pt;font-family:
-"Courier New";mso-bidi-font-style:italic'>&gt; import <span 
class=SpellE>com.ibm.bi.dml.api.MLContext</span><o:p></o:p></span></p>
+"Courier New";mso-bidi-font-style:italic'>&gt; import <span 
class=SpellE>org.apache.sysml.api.MLContext</span><o:p></o:p></span></p>
 
 <p class=MsoNormal 
style='margin:0in;margin-bottom:.0001pt'><o:p>&nbsp;</o:p></p>
 

http://git-wip-us.apache.org/repos/asf/incubator-systemml/blob/350f7feb/dml-language-reference.md
----------------------------------------------------------------------
diff --git a/dml-language-reference.md b/dml-language-reference.md
index dff0886..abebe1f 100644
--- a/dml-language-reference.md
+++ b/dml-language-reference.md
@@ -407,7 +407,7 @@ userParam=value | User-defined parameter to invoke the 
package. | Yes | Any non-
     # example of an external UDF
     eigen = externalFunction(matrix[double] A) 
     return (matrix[double] evec, matrix[double] eval) 
-    implemented in 
(classname="com.ibm.bi.dml.packagesupport.JLapackEigenWrapper")
+    implemented in 
(classname="org.apache.sysml.packagesupport.JLapackEigenWrapper")
 
 A UDF invocation specifies the function identifier, variable identifiers for 
calling parameters, and the variables to be populated by the returned values 
from the function. The syntax for function calls is as follows.
 
@@ -1186,7 +1186,7 @@ The MLContext API allows users to pass RDDs as 
input/output to SystemML through
 
 Typical usage for MLContext using Spark's Scala Shell is as follows:
 
-    scala> import com.ibm.bi.dml.api.MLContext
+    scala> import org.apache.sysml.api.MLContext
 
 Create input DataFrame from CSV file and potentially perform some feature 
transformation
 

http://git-wip-us.apache.org/repos/asf/incubator-systemml/blob/350f7feb/mlcontext-programming-guide.md
----------------------------------------------------------------------
diff --git a/mlcontext-programming-guide.md b/mlcontext-programming-guide.md
index c7d415d..66caa3d 100644
--- a/mlcontext-programming-guide.md
+++ b/mlcontext-programming-guide.md
@@ -44,17 +44,17 @@ An `MLContext` object can be created by passing its 
constructor a reference to t
 
 <div data-lang="Spark Shell" markdown="1">
 {% highlight scala %}
-scala>import com.ibm.bi.dml.api.MLContext
-import com.ibm.bi.dml.api.MLContext
+scala>import org.apache.sysml.api.MLContext
+import org.apache.sysml.api.MLContext
 
 scala> val ml = new MLContext(sc)
-ml: com.ibm.bi.dml.api.MLContext = com.ibm.bi.dml.api.MLContext@33e38c6b
+ml: org.apache.sysml.api.MLContext = org.apache.sysml.api.MLContext@33e38c6b
 {% endhighlight %}
 </div>
 
 <div data-lang="Statements" markdown="1">
 {% highlight scala %}
-import com.ibm.bi.dml.api.MLContext
+import org.apache.sysml.api.MLContext
 val ml = new MLContext(sc)
 {% endhighlight %}
 </div>
@@ -125,27 +125,27 @@ an `MLOutput` object. The `getScalar()` method extracts a 
scalar value from a `D
 
 <div data-lang="Spark Shell" markdown="1">
 {% highlight scala %}
-scala> import com.ibm.bi.dml.api.MLOutput
-import com.ibm.bi.dml.api.MLOutput
+scala> import org.apache.sysml.api.MLOutput
+import org.apache.sysml.api.MLOutput
 
 scala> def getScalar(outputs: MLOutput, symbol: String): Any =
      | outputs.getDF(sqlContext, symbol).first()(1)
-getScalar: (outputs: com.ibm.bi.dml.api.MLOutput, symbol: String)Any
+getScalar: (outputs: org.apache.sysml.api.MLOutput, symbol: String)Any
 
 scala> def getScalarDouble(outputs: MLOutput, symbol: String): Double =
      | getScalar(outputs, symbol).asInstanceOf[Double]
-getScalarDouble: (outputs: com.ibm.bi.dml.api.MLOutput, symbol: String)Double
+getScalarDouble: (outputs: org.apache.sysml.api.MLOutput, symbol: String)Double
 
 scala> def getScalarInt(outputs: MLOutput, symbol: String): Int =
      | getScalarDouble(outputs, symbol).toInt
-getScalarInt: (outputs: com.ibm.bi.dml.api.MLOutput, symbol: String)Int
+getScalarInt: (outputs: org.apache.sysml.api.MLOutput, symbol: String)Int
 
 {% endhighlight %}
 </div>
 
 <div data-lang="Statements" markdown="1">
 {% highlight scala %}
-import com.ibm.bi.dml.api.MLOutput
+import org.apache.sysml.api.MLOutput
 def getScalar(outputs: MLOutput, symbol: String): Any =
 outputs.getDF(sqlContext, symbol).first()(1)
 def getScalarDouble(outputs: MLOutput, symbol: String): Double =
@@ -176,11 +176,11 @@ to convert the `DataFrame df` to a SystemML binary-block 
matrix, which is repres
 
 <div data-lang="Spark Shell" markdown="1">
 {% highlight scala %}
-scala> import 
com.ibm.bi.dml.runtime.instructions.spark.utils.{RDDConverterUtilsExt => 
RDDConverterUtils}
-import 
com.ibm.bi.dml.runtime.instructions.spark.utils.{RDDConverterUtilsExt=>RDDConverterUtils}
+scala> import 
org.apache.sysml.runtime.instructions.spark.utils.{RDDConverterUtilsExt => 
RDDConverterUtils}
+import 
org.apache.sysml.runtime.instructions.spark.utils.{RDDConverterUtilsExt=>RDDConverterUtils}
 
-scala> import com.ibm.bi.dml.runtime.matrix.MatrixCharacteristics;
-import com.ibm.bi.dml.runtime.matrix.MatrixCharacteristics
+scala> import org.apache.sysml.runtime.matrix.MatrixCharacteristics;
+import org.apache.sysml.runtime.matrix.MatrixCharacteristics
 
 scala> val numRowsPerBlock = 1000
 numRowsPerBlock: Int = 1000
@@ -189,18 +189,18 @@ scala> val numColsPerBlock = 1000
 numColsPerBlock: Int = 1000
 
 scala> val mc = new MatrixCharacteristics(numRows, numCols, numRowsPerBlock, 
numColsPerBlock)
-mc: com.ibm.bi.dml.runtime.matrix.MatrixCharacteristics = [100000 x 1000, 
nnz=-1, blocks (1000 x 1000)]
+mc: org.apache.sysml.runtime.matrix.MatrixCharacteristics = [100000 x 1000, 
nnz=-1, blocks (1000 x 1000)]
 
 scala> val sysMlMatrix = RDDConverterUtils.dataFrameToBinaryBlock(sc, df, mc, 
false)
-sysMlMatrix: 
org.apache.spark.api.java.JavaPairRDD[com.ibm.bi.dml.runtime.matrix.data.MatrixIndexes,com.ibm.bi.dml.runtime.matrix.data.MatrixBlock]
 = org.apache.spark.api.java.JavaPairRDD@2bce3248
+sysMlMatrix: 
org.apache.spark.api.java.JavaPairRDD[org.apache.sysml.runtime.matrix.data.MatrixIndexes,org.apache.sysml.runtime.matrix.data.MatrixBlock]
 = org.apache.spark.api.java.JavaPairRDD@2bce3248
 
 {% endhighlight %}
 </div>
 
 <div data-lang="Statements" markdown="1">
 {% highlight scala %}
-import com.ibm.bi.dml.runtime.instructions.spark.utils.{RDDConverterUtilsExt 
=> RDDConverterUtils}
-import com.ibm.bi.dml.runtime.matrix.MatrixCharacteristics;
+import org.apache.sysml.runtime.instructions.spark.utils.{RDDConverterUtilsExt 
=> RDDConverterUtils}
+import org.apache.sysml.runtime.matrix.MatrixCharacteristics;
 val numRowsPerBlock = 1000
 val numColsPerBlock = 1000
 val mc = new MatrixCharacteristics(numRows, numCols, numRowsPerBlock, 
numColsPerBlock)
@@ -268,7 +268,7 @@ nargs: scala.collection.immutable.Map[String,String] = 
Map(Xin -> " ", Mout -> "
 scala> val outputs = ml.execute("shape.dml", nargs)
 15/10/12 16:29:15 WARN : Your hostname, derons-mbp.usca.ibm.com resolves to a 
loopback/non-reachable address: 127.0.0.1, but we couldn't find any external IP 
address!
 15/10/12 16:29:15 WARN OptimizerUtils: Auto-disable multi-threaded text read 
for 'text' and 'csv' due to thread contention on JRE < 1.8 
(java.version=1.7.0_80).
-outputs: com.ibm.bi.dml.api.MLOutput = com.ibm.bi.dml.api.MLOutput@4d424743
+outputs: org.apache.sysml.api.MLOutput = org.apache.sysml.api.MLOutput@4d424743
 
 scala> val m = getScalarInt(outputs, "m")
 m: Int = 100000
@@ -362,11 +362,11 @@ mean value of the matrix.
 
 <div data-lang="Spark Shell" markdown="1">
 {% highlight scala %}
-scala> import com.ibm.bi.dml.runtime.matrix.data.MatrixIndexes
-import com.ibm.bi.dml.runtime.matrix.data.MatrixIndexes
+scala> import org.apache.sysml.runtime.matrix.data.MatrixIndexes
+import org.apache.sysml.runtime.matrix.data.MatrixIndexes
 
-scala> import com.ibm.bi.dml.runtime.matrix.data.MatrixBlock
-import com.ibm.bi.dml.runtime.matrix.data.MatrixBlock
+scala> import org.apache.sysml.runtime.matrix.data.MatrixBlock
+import org.apache.sysml.runtime.matrix.data.MatrixBlock
 
 scala> import org.apache.spark.api.java.JavaPairRDD
 import org.apache.spark.api.java.JavaPairRDD
@@ -383,15 +383,15 @@ scala> def minMaxMean(mat: JavaPairRDD[MatrixIndexes, 
MatrixBlock], rows: Int, c
      | val meanOut = getScalarDouble(outputs, "meanOut")
      | (minOut, maxOut, meanOut)
      | }
-minMaxMean: (mat: 
org.apache.spark.api.java.JavaPairRDD[com.ibm.bi.dml.runtime.matrix.data.MatrixIndexes,com.ibm.bi.dml.runtime.matrix.data.MatrixBlock],
 rows: Int, cols: Int, ml: com.ibm.bi.dml.api.MLContext)(Double, Double, Double)
+minMaxMean: (mat: 
org.apache.spark.api.java.JavaPairRDD[org.apache.sysml.runtime.matrix.data.MatrixIndexes,org.apache.sysml.runtime.matrix.data.MatrixBlock],
 rows: Int, cols: Int, ml: org.apache.sysml.api.MLContext)(Double, Double, 
Double)
 
 {% endhighlight %}
 </div>
 
 <div data-lang="Statements" markdown="1">
 {% highlight scala %}
-import com.ibm.bi.dml.runtime.matrix.data.MatrixIndexes
-import com.ibm.bi.dml.runtime.matrix.data.MatrixBlock
+import org.apache.sysml.runtime.matrix.data.MatrixIndexes
+import org.apache.sysml.runtime.matrix.data.MatrixBlock
 import org.apache.spark.api.java.JavaPairRDD
 def minMaxMean(mat: JavaPairRDD[MatrixIndexes, MatrixBlock], rows: Int, cols: 
Int, ml: MLContext): (Double, Double, Double) = {
 ml.reset()
@@ -452,7 +452,7 @@ to standard output.
 
 
 {% highlight java %}
-package com.ibm.bi.dml;
+package org.apache.sysml;
 
 import java.util.HashMap;
 
@@ -462,8 +462,8 @@ import org.apache.spark.api.java.JavaSparkContext;
 import org.apache.spark.sql.DataFrame;
 import org.apache.spark.sql.SQLContext;
 
-import com.ibm.bi.dml.api.MLContext;
-import com.ibm.bi.dml.api.MLOutput;
+import org.apache.sysml.api.MLContext;
+import org.apache.sysml.api.MLOutput;
 
 public class MLContextExample {
 
@@ -835,7 +835,7 @@ This cell contains helper methods to return `Double` and 
`Int` values from outpu
 **Cell:**
 {% highlight scala %}
 // Helper functions
-import com.ibm.bi.dml.api.MLOutput
+import org.apache.sysml.api.MLOutput
 
 def getScalar(outputs: MLOutput, symbol: String): Any =
     outputs.getDF(sqlContext, symbol).first()(1)
@@ -849,10 +849,10 @@ def getScalarInt(outputs: MLOutput, symbol: String): Int =
 
 **Output:**
 {% highlight scala %}
-import com.ibm.bi.dml.api.MLOutput
-getScalar: (outputs: com.ibm.bi.dml.api.MLOutput, symbol: String)Any
-getScalarDouble: (outputs: com.ibm.bi.dml.api.MLOutput, symbol: String)Double
-getScalarInt: (outputs: com.ibm.bi.dml.api.MLOutput, symbol: String)Int
+import org.apache.sysml.api.MLOutput
+getScalar: (outputs: org.apache.sysml.api.MLOutput, symbol: String)Any
+getScalarDouble: (outputs: org.apache.sysml.api.MLOutput, symbol: String)Double
+getScalarInt: (outputs: org.apache.sysml.api.MLOutput, symbol: String)Int
 {% endhighlight %}
 
 
@@ -867,9 +867,9 @@ and single-column `label` matrix, both represented by the
 **Cell:**
 {% highlight scala %}
 // Imports
-import com.ibm.bi.dml.api.MLContext
-import com.ibm.bi.dml.runtime.instructions.spark.utils.{RDDConverterUtilsExt 
=> RDDConverterUtils}
-import com.ibm.bi.dml.runtime.matrix.MatrixCharacteristics;
+import org.apache.sysml.api.MLContext
+import org.apache.sysml.runtime.instructions.spark.utils.{RDDConverterUtilsExt 
=> RDDConverterUtils}
+import org.apache.sysml.runtime.matrix.MatrixCharacteristics;
 
 // Create SystemML context
 val ml = new MLContext(sc)
@@ -890,16 +890,16 @@ val cnt2 = y2.count()
 
 **Output:**
 {% highlight scala %}
-import com.ibm.bi.dml.api.MLContext
-import 
com.ibm.bi.dml.runtime.instructions.spark.utils.{RDDConverterUtilsExt=>RDDConverterUtils}
-import com.ibm.bi.dml.runtime.matrix.MatrixCharacteristics
-ml: com.ibm.bi.dml.api.MLContext = com.ibm.bi.dml.api.MLContext@38d59245
-mcX: com.ibm.bi.dml.runtime.matrix.MatrixCharacteristics = [10000 x 1000, 
nnz=-1, blocks (1000 x 1000)]
-mcY: com.ibm.bi.dml.runtime.matrix.MatrixCharacteristics = [10000 x 1, nnz=-1, 
blocks (1000 x 1000)]
-X: 
org.apache.spark.api.java.JavaPairRDD[com.ibm.bi.dml.runtime.matrix.data.MatrixIndexes,com.ibm.bi.dml.runtime.matrix.data.MatrixBlock]
 = org.apache.spark.api.java.JavaPairRDD@b5a86e3
-y: 
org.apache.spark.api.java.JavaPairRDD[com.ibm.bi.dml.runtime.matrix.data.MatrixIndexes,com.ibm.bi.dml.runtime.matrix.data.MatrixBlock]
 = org.apache.spark.api.java.JavaPairRDD@56377665
-X2: 
org.apache.spark.api.java.JavaPairRDD[com.ibm.bi.dml.runtime.matrix.data.MatrixIndexes,com.ibm.bi.dml.runtime.matrix.data.MatrixBlock]
 = org.apache.spark.api.java.JavaPairRDD@650f29d2
-y2: 
org.apache.spark.api.java.JavaPairRDD[com.ibm.bi.dml.runtime.matrix.data.MatrixIndexes,com.ibm.bi.dml.runtime.matrix.data.MatrixBlock]
 = org.apache.spark.api.java.JavaPairRDD@334857a8
+import org.apache.sysml.api.MLContext
+import 
org.apache.sysml.runtime.instructions.spark.utils.{RDDConverterUtilsExt=>RDDConverterUtils}
+import org.apache.sysml.runtime.matrix.MatrixCharacteristics
+ml: org.apache.sysml.api.MLContext = org.apache.sysml.api.MLContext@38d59245
+mcX: org.apache.sysml.runtime.matrix.MatrixCharacteristics = [10000 x 1000, 
nnz=-1, blocks (1000 x 1000)]
+mcY: org.apache.sysml.runtime.matrix.MatrixCharacteristics = [10000 x 1, 
nnz=-1, blocks (1000 x 1000)]
+X: 
org.apache.spark.api.java.JavaPairRDD[org.apache.sysml.runtime.matrix.data.MatrixIndexes,org.apache.sysml.runtime.matrix.data.MatrixBlock]
 = org.apache.spark.api.java.JavaPairRDD@b5a86e3
+y: 
org.apache.spark.api.java.JavaPairRDD[org.apache.sysml.runtime.matrix.data.MatrixIndexes,org.apache.sysml.runtime.matrix.data.MatrixBlock]
 = org.apache.spark.api.java.JavaPairRDD@56377665
+X2: 
org.apache.spark.api.java.JavaPairRDD[org.apache.sysml.runtime.matrix.data.MatrixIndexes,org.apache.sysml.runtime.matrix.data.MatrixBlock]
 = org.apache.spark.api.java.JavaPairRDD@650f29d2
+y2: 
org.apache.spark.api.java.JavaPairRDD[org.apache.sysml.runtime.matrix.data.MatrixIndexes,org.apache.sysml.runtime.matrix.data.MatrixBlock]
 = org.apache.spark.api.java.JavaPairRDD@334857a8
 cnt1: Long = 10
 cnt2: Long = 10
 {% endhighlight %}
@@ -936,7 +936,7 @@ val trainingTimePerIter = trainingTime / iters
 **Output:**
 {% highlight scala %}
 start: Long = 1444672090620
-outputs: com.ibm.bi.dml.api.MLOutput = com.ibm.bi.dml.api.MLOutput@5d2c22d0
+outputs: org.apache.sysml.api.MLOutput = org.apache.sysml.api.MLOutput@5d2c22d0
 trainingTime: Double = 1.176
 B: org.apache.spark.sql.DataFrame = [C1: double]
 r2: Double = 0.9677079547216473

http://git-wip-us.apache.org/repos/asf/incubator-systemml/blob/350f7feb/quick-start-guide.md
----------------------------------------------------------------------
diff --git a/quick-start-guide.md b/quick-start-guide.md
index b01aea1..1a7935b 100644
--- a/quick-start-guide.md
+++ b/quick-start-guide.md
@@ -357,7 +357,7 @@ If you encounter a `"java.lang.OutOfMemoryError"` you can 
edit the invocation
 script (`runStandaloneSystemML.sh` or `runStandaloneSystemML.bat`) to increase
 the memory available to the JVM, i.e: 
 
-    java -Xmx16g -Xms4g -Xmn1g -cp ${CLASSPATH} com.ibm.bi.dml.api.DMLScript \
+    java -Xmx16g -Xms4g -Xmn1g -cp ${CLASSPATH} org.apache.sysml.api.DMLScript 
\
          -f ${SCRIPT_FILE} -exec singlenode -config=SystemML-config.xml \
          $@
 
