[jira] [Resolved] (SYSTEMML-1483) Add Deconvolution layer in nn library and Caffe2DML

2017-05-12 Thread Niketan Pansare (JIRA)

 [ 
https://issues.apache.org/jira/browse/SYSTEMML-1483?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Niketan Pansare resolved SYSTEMML-1483.
---
   Resolution: Fixed
Fix Version/s: SystemML 1.0

Resolved by commit 
https://github.com/apache/incubator-systemml/commit/d04d2381f369bc29c4c33e98381bcdc8a4d0aebb

> Add Deconvolution layer in nn library and Caffe2DML
> ---
>
> Key: SYSTEMML-1483
> URL: https://issues.apache.org/jira/browse/SYSTEMML-1483
> Project: SystemML
>  Issue Type: Sub-task
>Reporter: Niketan Pansare
>Assignee: Prithviraj Sen
> Fix For: SystemML 1.0
>
>
> http://caffe.berkeleyvision.org/tutorial/layers/deconvolution.html
> [~mwdus...@us.ibm.com] [~prithvi_r_s] 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Assigned] (SYSTEMML-1483) Add Deconvolution layer in nn library and Caffe2DML

2017-05-12 Thread Niketan Pansare (JIRA)

 [ 
https://issues.apache.org/jira/browse/SYSTEMML-1483?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Niketan Pansare reassigned SYSTEMML-1483:
-

Assignee: Prithviraj Sen

> Add Deconvolution layer in nn library and Caffe2DML
> ---
>
> Key: SYSTEMML-1483
> URL: https://issues.apache.org/jira/browse/SYSTEMML-1483
> Project: SystemML
>  Issue Type: Sub-task
>Reporter: Niketan Pansare
>Assignee: Prithviraj Sen
>
> http://caffe.berkeleyvision.org/tutorial/layers/deconvolution.html
> [~mwdus...@us.ibm.com] [~prithvi_r_s] 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (SYSTEMML-1599) Extend nn layers to support different initialization type

2017-05-10 Thread Niketan Pansare (JIRA)
Niketan Pansare created SYSTEMML-1599:
-

 Summary: Extend nn layers to support different initialization type
 Key: SYSTEMML-1599
 URL: https://issues.apache.org/jira/browse/SYSTEMML-1599
 Project: SystemML
  Issue Type: Sub-task
Reporter: Niketan Pansare


Caffe currently allows users to configure different initialization types:
1. constant
2. uniform
3. gaussian
4. positive_unitball
5. xavier
6. msra
7. bilinear

The init() function of each layer should accept a `type` parameter, which can be 
passed in by Caffe2DML.
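
A minimal numpy sketch of what a type-driven init could look like (the 
`init_weights` helper and the fan-in/fan-out conventions below are illustrative 
assumptions, not the nn library's API):
{code}
import numpy as np

def init_weights(shape, init_type="xavier"):
    # hypothetical helper; shape = (fan_out, fan_in) for a fully-connected layer
    fan_out, fan_in = shape
    if init_type == "constant":
        return np.zeros(shape)
    elif init_type == "uniform":
        return np.random.uniform(-0.05, 0.05, shape)
    elif init_type == "gaussian":
        return np.random.normal(0.0, 0.01, shape)
    elif init_type == "xavier":
        scale = np.sqrt(3.0 / fan_in)   # Caffe's xavier filler defaults to fan_in scaling
        return np.random.uniform(-scale, scale, shape)
    elif init_type == "msra":
        return np.random.normal(0.0, np.sqrt(2.0 / fan_in), shape)  # He et al.
    else:
        raise ValueError("unsupported init type: " + init_type)
{code}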

[~mwdus...@us.ibm.com] [~prithvi_r_s]



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Closed] (SYSTEMML-1589) conv2d_bias_add fails w/ NPE on lenet with random data

2017-05-07 Thread Niketan Pansare (JIRA)

 [ 
https://issues.apache.org/jira/browse/SYSTEMML-1589?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Niketan Pansare closed SYSTEMML-1589.
-
   Resolution: Fixed
Fix Version/s: SystemML 1.0

Closed by the commit 
https://github.com/apache/incubator-systemml/commit/6863632088c8d0b548a17413692b399d512a991d

> conv2d_bias_add fails w/ NPE on lenet with random data
> --
>
> Key: SYSTEMML-1589
> URL: https://issues.apache.org/jira/browse/SYSTEMML-1589
> Project: SystemML
>  Issue Type: Bug
>Reporter: Matthias Boehm
>Assignee: Niketan Pansare
> Fix For: SystemML 1.0
>
>
> The lenet dml script fails with a null pointer exception for random multi-class 
> data, generated with
> {code}
> X_full = rand(rows=6,cols=784);
> y_full = round(rand(rows=nrow(X_full), cols=1, min=1, max=10));
> {code}
> The detailed stacktrace is as follows:
> {code}
> Caused by: java.lang.NullPointerException
> at 
> org.apache.sysml.runtime.matrix.data.LibMatrixDNN.getRowInDenseFormat(LibMatrixDNN.java:1355)
> at 
> org.apache.sysml.runtime.matrix.data.LibMatrixDNN.doIm2colSparse(LibMatrixDNN.java:1382)
> at 
> org.apache.sysml.runtime.matrix.data.LibMatrixDNN.doIm2col(LibMatrixDNN.java:1421)
> at 
> org.apache.sysml.runtime.matrix.data.LibMatrixDNN.doLoopedIm2ColConv2d(LibMatrixDNN.java:406)
> at 
> org.apache.sysml.runtime.matrix.data.LibMatrixDNN.access$400(LibMatrixDNN.java:51)
> at 
> org.apache.sysml.runtime.matrix.data.LibMatrixDNN$ConvTask.call(LibMatrixDNN.java:1143)
> at 
> org.apache.sysml.runtime.matrix.data.LibMatrixDNN$ConvTask.call(LibMatrixDNN.java:1076)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:748)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Closed] (SYSTEMML-1573) Incorporate ALLOW_OPERATOR_FUSION in ConvolutionOp for developer testing

2017-05-07 Thread Niketan Pansare (JIRA)

 [ 
https://issues.apache.org/jira/browse/SYSTEMML-1573?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Niketan Pansare closed SYSTEMML-1573.
-
   Resolution: Fixed
 Assignee: Niketan Pansare
Fix Version/s: SystemML 1.0

Closed by the commit 
https://github.com/apache/incubator-systemml/commit/6c215e700c1855074228972f952663663f6eabaa.

> Incorporate ALLOW_OPERATOR_FUSION in ConvolutionOp for developer testing
> 
>
> Key: SYSTEMML-1573
> URL: https://issues.apache.org/jira/browse/SYSTEMML-1573
> Project: SystemML
>  Issue Type: Improvement
>Reporter: Niketan Pansare
>Assignee: Niketan Pansare
> Fix For: SystemML 1.0
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Comment Edited] (SYSTEMML-1585) Include JCuda jars into SystemML's extra.jar

2017-05-07 Thread Niketan Pansare (JIRA)

[ 
https://issues.apache.org/jira/browse/SYSTEMML-1585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15998701#comment-15998701
 ] 

Niketan Pansare edited comment on SYSTEMML-1585 at 5/7/17 9:16 PM:
---

[~nakul02] [~deron] 


was (Author: niketanpansare):
[~nakul02]

> Include JCuda jars into SystemML's extra.jar
> 
>
> Key: SYSTEMML-1585
> URL: https://issues.apache.org/jira/browse/SYSTEMML-1585
> Project: SystemML
>  Issue Type: Improvement
>Reporter: Niketan Pansare
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Assigned] (SYSTEMML-1585) Include JCuda jars into SystemML's extra.jar

2017-05-07 Thread Niketan Pansare (JIRA)

 [ 
https://issues.apache.org/jira/browse/SYSTEMML-1585?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Niketan Pansare reassigned SYSTEMML-1585:
-

Assignee: (was: Niketan Pansare)

> Include JCuda jars into SystemML's extra.jar
> 
>
> Key: SYSTEMML-1585
> URL: https://issues.apache.org/jira/browse/SYSTEMML-1585
> Project: SystemML
>  Issue Type: Improvement
>Reporter: Niketan Pansare
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (SYSTEMML-1481) Add Python helper function to convert LMDB to binaryblocks

2017-05-07 Thread Niketan Pansare (JIRA)

[ 
https://issues.apache.org/jira/browse/SYSTEMML-1481?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=1680#comment-1680
 ] 

Niketan Pansare commented on SYSTEMML-1481:
---

[~acs_s] We can possibly extend 
https://github.com/niketanpansare/model_zoo/tree/master/caffe/#lmdb-to-jpeg-conversion
 by converting caffe.proto to `caffe_pb.py` and including a `save_lmdb` function in our 
Python converter utils.
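
A rough sketch of that conversion path, assuming `caffe_pb2.py` has been generated 
from caffe.proto with protoc (the helper name and decoding details below are 
illustrative, not an existing API):
{code}
import lmdb
import numpy as np
from caffe_pb2 import Datum  # generated via: protoc --python_out=. caffe.proto

def lmdb_to_numpy(lmdb_path):
    # Iterate over all records and decode each Caffe Datum into one flat row.
    env = lmdb.open(lmdb_path, readonly=True)
    rows, labels = [], []
    with env.begin() as txn:
        for _, value in txn.cursor():
            datum = Datum()
            datum.ParseFromString(value)
            img = np.frombuffer(datum.data, dtype=np.uint8).astype(np.float64)
            rows.append(img)
            labels.append(datum.label)
    return np.vstack(rows), np.array(labels)
{code}
The resulting (X, y) pair could then be handed to the existing NumPy converters on 
the SystemML Python side.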

> Add Python helper function to convert LMDB to binaryblocks
> --
>
> Key: SYSTEMML-1481
> URL: https://issues.apache.org/jira/browse/SYSTEMML-1481
> Project: SystemML
>  Issue Type: Sub-task
>Reporter: Niketan Pansare
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (SYSTEMML-1583) Implement converter in Python to convert caffemodel in SystemML format

2017-05-07 Thread Niketan Pansare (JIRA)

[ 
https://issues.apache.org/jira/browse/SYSTEMML-1583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=1677#comment-1677
 ] 

Niketan Pansare commented on SYSTEMML-1583:
---

To allow for transfer learning as well as prediction, Caffe2DML provides an 
optional 'weights' parameter in the constructor. This is consistent with 
Caffe's usage.

{code}
# Prediction
predict_lenet = Caffe2DML(sqlCtx, solver='lenet_solver.proto', 
weights='lenet_model', input_shape=(1, 28, 28))
predict_lenet.predict(X_test)
{code}

The key question that remains is: what should be the format of 'lenet_model'?
- The current version requires that 'lenet_model' is a directory containing the 
weights/biases of the relevant layers in a format accepted by SystemML's read 
(see the sketch below).
- In addition, we can extend Caffe2DML to accept a .caffemodel file, which would 
invoke the converter function developed as part of this JIRA. 

Until we stabilize the converter functions:
- Using a .caffemodel will be a 2-step process: first, convert the .caffemodel to 
csv/binaryblock and store it in HDFS/local FS; then invoke Caffe2DML with the 
path to that HDFS/local FS directory. 
- We should maintain a Model Zoo (https://github.com/niketanpansare/model_zoo/) 
with weights in our format, along with guided notebooks. A good starting point 
is https://github.com/caffe2/caffe2/wiki/Model-Zoo
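
For reference, a minimal sketch of what "a format accepted by SystemML's read" 
means for one layer, assuming CSV: each matrix file is accompanied by a JSON .mtd 
metadata file (file names below are illustrative):
{code}
import json
import numpy as np

# hypothetical weights for one layer; in practice these come from the converter
W = np.random.rand(500, 800)
np.savetxt("lenet_model/conv1_weight.csv", W, delimiter=",")

# companion metadata file that SystemML's read() expects next to the CSV file
mtd = {"data_type": "matrix", "value_type": "double",
       "rows": W.shape[0], "cols": W.shape[1], "format": "csv", "header": False}
with open("lenet_model/conv1_weight.csv.mtd", "w") as f:
    json.dump(mtd, f)
{code}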

[~acs_s] [~reinwald] [~mwdus...@us.ibm.com] [~freiss] 

> Implement converter in Python to convert caffemodel in SystemML format
> --
>
> Key: SYSTEMML-1583
> URL: https://issues.apache.org/jira/browse/SYSTEMML-1583
> Project: SystemML
>  Issue Type: Sub-task
>Reporter: Niketan Pansare
>Assignee: Arvind Surve
>
> Ideally, this converter should not require Caffe to be installed. Please 
> see 
> http://stackoverflow.com/questions/37572948/extracting-weights-from-caffemodel-without-caffe-installed-in-python
> Example code to convert a Caffe model to CSV if Caffe is installed:
> {code}
> import caffe
> import numpy as np
> #net = 
> caffe.Net('/home/biuser/nike/barista/VGG_ILSVRC_19_layers_train_val.prototxt',
>  caffe.TEST)
> net = 
> caffe.Net('/home/biuser/VGG_trained_models/VGG_ILSVRC_19_layers_deploy.prototxt',
>  '/home/biuser/VGG_trained_models/VGG_ILSVRC_19_layers.caffemodel', 
> caffe.TEST)
> #surgery.transplant(net, base_net)
> for l in [ "conv1_1", "conv1_2", "conv2_1", "conv2_2", "conv3_1", "conv3_2", 
> "conv3_3", "conv3_4", "conv4_1", "conv4_2", "conv4_3", "conv4_4", "conv5_1", 
> "conv5_2", "conv5_3", "conv5_4", "fc6", "fc7", "fc8" ]:
>     w = net.params[l][0].data
>     w = w.reshape(w.shape[0], -1)
>     b = net.params[l][1].data
>     b = b.reshape(b.shape[0], -1)
>     # You may have to reshape it for fc layers
>     np.savetxt("VGG_trained_models/" + l + "_weight.csv", w, delimiter=",")
>     np.savetxt("VGG_trained_models/" + l + "_bias.csv", b, delimiter=",")
> {code}
> Here is an example pyspark script to test this JIRA:
> {code}
> from systemml.mllearn import Caffe2DML
> from pyspark.sql import SQLContext
> import numpy as np
> import urllib, os, scipy.ndimage
> from PIL import Image
> import systemml as sml
> # ImageNet specific parameters
> img_shape = (3, 224, 224)
> # Downloads a jpg image, resizes it to 224 and return as numpy array in N X 
> CHW format
> url = 
> 'https://upload.wikimedia.org/wikipedia/commons/thumb/5/58/MountainLion.jpg/312px-MountainLion.jpg'
> outFile = 'test.jpg'
> urllib.urlretrieve(url, outFile)
> input_image = sml.convertImageToNumPyArr(Image.open(outFile), 
> img_shape=img_shape)
> # Download the ResNet network
> import urllib
> urllib.urlretrieve('https://raw.githubusercontent.com/niketanpansare/model_zoo/master/caffe/vision/resnet/ilsvrc12/ResNet_50_network.proto',
>  'ResNet_50_network.proto')
> urllib.urlretrieve('https://raw.githubusercontent.com/niketanpansare/model_zoo/master/caffe/vision/resnet/ilsvrc12/ResNet_50_solver.proto',
>  'ResNet_50_solver.proto')
> home_dir = os.path.expanduser('~')
> # let's assume that this function is implemented as 
> saveAsBinaryBlock(inputCaffeModel, outputDir)
> resnet_pretrained_weight_dir = os.path.join(home_dir, 'model_zoo', 'caffe', 
> 'vision', 'resnet', 'ilsvrc12', 'ResNet_50_pretrained_weights')
> urllib.urlretrieve('https://deepdetect.com/models/resnet/ResNet-50-model.caffemodel',
>  'ResNet-50-model.caffemodel')
> ###
> # To be implemented as part of this JIRA
> sml.saveAsBinaryBlock('ResNet-50-model.caffemodel', 
> resnet_pretrained_weight_dir)
> ###
> resnet = Caffe2DML(sqlCtx, solver='ResNet_50_solver.proto', 
> weights=resnet_pretrained_weight_dir, input_shape=img_shape)
> resnet.predict(input_image)
> # This should return array(['cougar, puma, catamount, mountain lion, painter, 
> panther, Felis '], dtype='|S64')
> {code}

[jira] [Commented] (SYSTEMML-1585) Include JCuda jars into extra

2017-05-05 Thread Niketan Pansare (JIRA)

[ 
https://issues.apache.org/jira/browse/SYSTEMML-1585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15998701#comment-15998701
 ] 

Niketan Pansare commented on SYSTEMML-1585:
---

[~nakul02]

> Include JCuda jars into extra
> -
>
> Key: SYSTEMML-1585
> URL: https://issues.apache.org/jira/browse/SYSTEMML-1585
> Project: SystemML
>  Issue Type: Improvement
>Reporter: Niketan Pansare
>Assignee: Niketan Pansare
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (SYSTEMML-1585) Include JCuda jars into extra.jar

2017-05-05 Thread Niketan Pansare (JIRA)

 [ 
https://issues.apache.org/jira/browse/SYSTEMML-1585?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Niketan Pansare updated SYSTEMML-1585:
--
Summary: Include JCuda jars into extra.jar  (was: Include JCuda jars into 
extra)

> Include JCuda jars into extra.jar
> -
>
> Key: SYSTEMML-1585
> URL: https://issues.apache.org/jira/browse/SYSTEMML-1585
> Project: SystemML
>  Issue Type: Improvement
>Reporter: Niketan Pansare
>Assignee: Niketan Pansare
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (SYSTEMML-1585) Include JCuda jars into SystemML's extra.jar

2017-05-05 Thread Niketan Pansare (JIRA)

 [ 
https://issues.apache.org/jira/browse/SYSTEMML-1585?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Niketan Pansare updated SYSTEMML-1585:
--
Summary: Include JCuda jars into SystemML's extra.jar  (was: Include JCuda 
jars into extra.jar)

> Include JCuda jars into SystemML's extra.jar
> 
>
> Key: SYSTEMML-1585
> URL: https://issues.apache.org/jira/browse/SYSTEMML-1585
> Project: SystemML
>  Issue Type: Improvement
>Reporter: Niketan Pansare
>Assignee: Niketan Pansare
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (SYSTEMML-1585) Include JCuda jars into extra

2017-05-05 Thread Niketan Pansare (JIRA)
Niketan Pansare created SYSTEMML-1585:
-

 Summary: Include JCuda jars into extra
 Key: SYSTEMML-1585
 URL: https://issues.apache.org/jira/browse/SYSTEMML-1585
 Project: SystemML
  Issue Type: Improvement
Reporter: Niketan Pansare
Assignee: Niketan Pansare






--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Comment Edited] (SYSTEMML-1583) Implement converter in Python to convert caffemodel in SystemML format

2017-05-04 Thread Niketan Pansare (JIRA)

[ 
https://issues.apache.org/jira/browse/SYSTEMML-1583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15997718#comment-15997718
 ] 

Niketan Pansare edited comment on SYSTEMML-1583 at 5/5/17 2:07 AM:
---

[~acs_s] Please follow the instructions described in 
https://github.com/niketanpansare/model_zoo/tree/master/caffe#caffemodel-to-csv-conversion
 and use 
https://github.com/niketanpansare/model_zoo/blob/master/caffe/conversion_utils.py
 as a starting point :)


was (Author: niketanpansare):
[~acs_s] Please follow the instructions described in 
https://github.com/niketanpansare/model_zoo/tree/master/caffe#converting-pretrained-caffe-model-caffemodel-to-the-format-supported-by-systemml-for-example-csv
 and use 
https://github.com/niketanpansare/model_zoo/blob/master/caffe/conversion_utils.py
 as a starting point :)

> Implement converter in Python to convert caffemodel in SystemML format
> --
>
> Key: SYSTEMML-1583
> URL: https://issues.apache.org/jira/browse/SYSTEMML-1583
> Project: SystemML
>  Issue Type: Sub-task
>Reporter: Niketan Pansare
>Assignee: Arvind Surve
>
> Ideally, this converter should not require Caffe to be installed. Please 
> see 
> http://stackoverflow.com/questions/37572948/extracting-weights-from-caffemodel-without-caffe-installed-in-python
> Example code to convert a Caffe model to CSV if Caffe is installed:
> {code}
> import caffe
> import numpy as np
> #net = 
> caffe.Net('/home/biuser/nike/barista/VGG_ILSVRC_19_layers_train_val.prototxt',
>  caffe.TEST)
> net = 
> caffe.Net('/home/biuser/VGG_trained_models/VGG_ILSVRC_19_layers_deploy.prototxt',
>  '/home/biuser/VGG_trained_models/VGG_ILSVRC_19_layers.caffemodel', 
> caffe.TEST)
> #surgery.transplant(net, base_net)
> for l in [ "conv1_1", "conv1_2", "conv2_1", "conv2_2", "conv3_1", "conv3_2", 
> "conv3_3", "conv3_4", "conv4_1", "conv4_2", "conv4_3", "conv4_4", "conv5_1", 
> "conv5_2", "conv5_3", "conv5_4", "fc6", "fc7", "fc8" ]:
>     w = net.params[l][0].data
>     w = w.reshape(w.shape[0], -1)
>     b = net.params[l][1].data
>     b = b.reshape(b.shape[0], -1)
>     # You may have to reshape it for fc layers
>     np.savetxt("VGG_trained_models/" + l + "_weight.csv", w, delimiter=",")
>     np.savetxt("VGG_trained_models/" + l + "_bias.csv", b, delimiter=",")
> {code}
> Here is an example pyspark script to test this JIRA:
> {code}
> from systemml.mllearn import Caffe2DML
> from pyspark.sql import SQLContext
> import numpy as np
> import urllib, os, scipy.ndimage
> from PIL import Image
> import systemml as sml
> # ImageNet specific parameters
> img_shape = (3, 224, 224)
> # Downloads a jpg image, resizes it to 224 and return as numpy array in N X 
> CHW format
> url = 
> 'https://upload.wikimedia.org/wikipedia/commons/thumb/5/58/MountainLion.jpg/312px-MountainLion.jpg'
> outFile = 'test.jpg'
> urllib.urlretrieve(url, outFile)
> input_image = sml.convertImageToNumPyArr(Image.open(outFile), 
> img_shape=img_shape)
> # Download the ResNet network
> import urllib
> urllib.urlretrieve('https://raw.githubusercontent.com/niketanpansare/model_zoo/master/caffe/vision/resnet/ilsvrc12/ResNet_50_network.proto',
>  'ResNet_50_network.proto')
> urllib.urlretrieve('https://raw.githubusercontent.com/niketanpansare/model_zoo/master/caffe/vision/resnet/ilsvrc12/ResNet_50_solver.proto',
>  'ResNet_50_solver.proto')
> home_dir = os.path.expanduser('~')
> # let's assume that this function is implemented as 
> saveAsBinaryBlock(inputCaffeModel, outputDir)
> resnet_pretrained_weight_dir = os.path.join(home_dir, 'model_zoo', 'caffe', 
> 'vision', 'resnet', 'ilsvrc12', 'ResNet_50_pretrained_weights')
> urllib.urlretrieve('https://deepdetect.com/models/resnet/ResNet-50-model.caffemodel',
>  'ResNet-50-model.caffemodel')
> ###
> # To be implemented as part of this JIRA
> sml.saveAsBinaryBlock('ResNet-50-model.caffemodel', 
> resnet_pretrained_weight_dir)
> ###
> resnet = Caffe2DML(sqlCtx, solver='ResNet_50_solver.proto', 
> weights=resnet_pretrained_weight_dir, input_shape=img_shape)
> resnet.predict(input_image)
> # This should return array(['cougar, puma, catamount, mountain lion, painter, 
> panther, Felis '], dtype='|S64')
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (SYSTEMML-1583) Implement converter in Python to convert caffemodel in SystemML format

2017-05-04 Thread Niketan Pansare (JIRA)

[ 
https://issues.apache.org/jira/browse/SYSTEMML-1583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15997718#comment-15997718
 ] 

Niketan Pansare commented on SYSTEMML-1583:
---

[~acs_s] Please follow the instructions described in 
https://github.com/niketanpansare/model_zoo/tree/master/caffe#converting-pretrained-caffe-model-caffemodel-to-the-format-supported-by-systemml-for-example-csv
 and use 
https://github.com/niketanpansare/model_zoo/blob/master/caffe/conversion_utils.py
 as a starting point :)

> Implement converter in Python to convert caffemodel in SystemML format
> --
>
> Key: SYSTEMML-1583
> URL: https://issues.apache.org/jira/browse/SYSTEMML-1583
> Project: SystemML
>  Issue Type: Sub-task
>Reporter: Niketan Pansare
>Assignee: Arvind Surve
>
> Ideally, this converter should not require Caffe to be installed. Please 
> see 
> http://stackoverflow.com/questions/37572948/extracting-weights-from-caffemodel-without-caffe-installed-in-python
> Example code to convert a Caffe model to CSV if Caffe is installed:
> {code}
> import caffe
> import numpy as np
> #net = 
> caffe.Net('/home/biuser/nike/barista/VGG_ILSVRC_19_layers_train_val.prototxt',
>  caffe.TEST)
> net = 
> caffe.Net('/home/biuser/VGG_trained_models/VGG_ILSVRC_19_layers_deploy.prototxt',
>  '/home/biuser/VGG_trained_models/VGG_ILSVRC_19_layers.caffemodel', 
> caffe.TEST)
> #surgery.transplant(net, base_net)
> for l in [ "conv1_1", "conv1_2", "conv2_1", "conv2_2", "conv3_1", "conv3_2", 
> "conv3_3", "conv3_4", "conv4_1", "conv4_2", "conv4_3", "conv4_4", "conv5_1", 
> "conv5_2", "conv5_3", "conv5_4", "fc6", "fc7", "fc8" ]:
>     w = net.params[l][0].data
>     w = w.reshape(w.shape[0], -1)
>     b = net.params[l][1].data
>     b = b.reshape(b.shape[0], -1)
>     # You may have to reshape it for fc layers
>     np.savetxt("VGG_trained_models/" + l + "_weight.csv", w, delimiter=",")
>     np.savetxt("VGG_trained_models/" + l + "_bias.csv", b, delimiter=",")
> {code}
> Here is an example pyspark script to test this JIRA:
> {code}
> from systemml.mllearn import Caffe2DML
> from pyspark.sql import SQLContext
> import numpy as np
> import urllib, os, scipy.ndimage
> from PIL import Image
> import systemml as sml
> # ImageNet specific parameters
> img_shape = (3, 224, 224)
> # Downloads a jpg image, resizes it to 224 and return as numpy array in N X 
> CHW format
> url = 
> 'https://upload.wikimedia.org/wikipedia/commons/thumb/5/58/MountainLion.jpg/312px-MountainLion.jpg'
> outFile = 'test.jpg'
> urllib.urlretrieve(url, outFile)
> input_image = sml.convertImageToNumPyArr(Image.open(outFile), 
> img_shape=img_shape)
> # Download the ResNet network
> import urllib
> urllib.urlretrieve('https://raw.githubusercontent.com/niketanpansare/model_zoo/master/caffe/vision/resnet/ilsvrc12/ResNet_50_network.proto',
>  'ResNet_50_network.proto')
> urllib.urlretrieve('https://raw.githubusercontent.com/niketanpansare/model_zoo/master/caffe/vision/resnet/ilsvrc12/ResNet_50_solver.proto',
>  'ResNet_50_solver.proto')
> home_dir = os.path.expanduser('~')
> # let's assume that this function is implemented as 
> saveAsBinaryBlock(inputCaffeModel, outputDir)
> resnet_pretrained_weight_dir = os.path.join(home_dir, 'model_zoo', 'caffe', 
> 'vision', 'resnet', 'ilsvrc12', 'ResNet_50_pretrained_weights')
> urllib.urlretrieve('https://deepdetect.com/models/resnet/ResNet-50-model.caffemodel',
>  'ResNet-50-model.caffemodel')
> ###
> # To be implemented as part of this JIRA
> sml.saveAsBinaryBlock('ResNet-50-model.caffemodel', 
> resnet_pretrained_weight_dir)
> ###
> resnet = Caffe2DML(sqlCtx, solver='ResNet_50_solver.proto', 
> weights=resnet_pretrained_weight_dir, input_shape=img_shape)
> resnet.predict(input_image)
> # This should return array(['cougar, puma, catamount, mountain lion, painter, 
> panther, Felis '], dtype='|S64')
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (SYSTEMML-1583) Implement converter in Python to convert caffemodel in SystemML format

2017-05-04 Thread Niketan Pansare (JIRA)

 [ 
https://issues.apache.org/jira/browse/SYSTEMML-1583?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Niketan Pansare updated SYSTEMML-1583:
--
Description: 
Ideally, this converter should not require Caffe to be installed. Please see 
http://stackoverflow.com/questions/37572948/extracting-weights-from-caffemodel-without-caffe-installed-in-python
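
For the no-Caffe path, a rough sketch using only protobuf (assumes caffe_pb2.py has 
been generated from caffe.proto with protoc; this is a sketch, not an existing 
SystemML API):
{code}
import numpy as np
from caffe_pb2 import NetParameter  # generated via: protoc --python_out=. caffe.proto

net_param = NetParameter()
with open('ResNet-50-model.caffemodel', 'rb') as f:
    net_param.ParseFromString(f.read())

# new-style models populate 'layer'; very old models use 'layers' instead
for layer in net_param.layer:
    if layer.blobs:  # only layers with learnable parameters (weights, bias)
        blob = layer.blobs[0]
        w = np.array(blob.data).reshape(tuple(blob.shape.dim))
        print(layer.name, w.shape)
{code}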

Example code to convert a Caffe model to CSV if Caffe is installed:
{code}
import caffe
import numpy as np
#net = 
caffe.Net('/home/biuser/nike/barista/VGG_ILSVRC_19_layers_train_val.prototxt', 
caffe.TEST)
net = 
caffe.Net('/home/biuser/VGG_trained_models/VGG_ILSVRC_19_layers_deploy.prototxt',
 '/home/biuser/VGG_trained_models/VGG_ILSVRC_19_layers.caffemodel', caffe.TEST)
#surgery.transplant(net, base_net)
for l in [ "conv1_1", "conv1_2", "conv2_1", "conv2_2", "conv3_1", "conv3_2", 
"conv3_3", "conv3_4", "conv4_1", "conv4_2", "conv4_3", "conv4_4", "conv5_1", 
"conv5_2", "conv5_3", "conv5_4", "fc6", "fc7", "fc8" ]:
    w = net.params[l][0].data
    w = w.reshape(w.shape[0], -1)
    b = net.params[l][1].data
    b = b.reshape(b.shape[0], -1)
    # You may have to reshape it for fc layers
    np.savetxt("VGG_trained_models/" + l + "_weight.csv", w, delimiter=",")
    np.savetxt("VGG_trained_models/" + l + "_bias.csv", b, delimiter=",")
{code}

Here is an example pyspark script to test this JIRA:
{code}
from systemml.mllearn import Caffe2DML
from pyspark.sql import SQLContext
import numpy as np
import urllib, os, scipy.ndimage
from PIL import Image
import systemml as sml

# ImageNet specific parameters
img_shape = (3, 224, 224)

# Downloads a jpg image, resizes it to 224 and return as numpy array in N X CHW 
format
url = 
'https://upload.wikimedia.org/wikipedia/commons/thumb/5/58/MountainLion.jpg/312px-MountainLion.jpg'
outFile = 'test.jpg'
urllib.urlretrieve(url, outFile)
input_image = sml.convertImageToNumPyArr(Image.open(outFile), 
img_shape=img_shape)

# Download the ResNet network
import urllib
urllib.urlretrieve('https://raw.githubusercontent.com/niketanpansare/model_zoo/master/caffe/vision/resnet/ilsvrc12/ResNet_50_network.proto',
 'ResNet_50_network.proto')
urllib.urlretrieve('https://raw.githubusercontent.com/niketanpansare/model_zoo/master/caffe/vision/resnet/ilsvrc12/ResNet_50_solver.proto',
 'ResNet_50_solver.proto')

home_dir = os.path.expanduser('~')

# let's assume that this function is implemented as 
saveAsBinaryBlock(inputCaffeModel, outputDir)
resnet_pretrained_weight_dir = os.path.join(home_dir, 'model_zoo', 'caffe', 
'vision', 'resnet', 'ilsvrc12', 'ResNet_50_pretrained_weights')
urllib.urlretrieve('https://deepdetect.com/models/resnet/ResNet-50-model.caffemodel',
 'ResNet-50-model.caffemodel')
###
# To be implemented as part of this JIRA
sml.saveAsBinaryBlock('ResNet-50-model.caffemodel', 
resnet_pretrained_weight_dir)
###
resnet = Caffe2DML(sqlCtx, solver='ResNet_50_solver.proto', 
weights=resnet_pretrained_weight_dir, input_shape=img_shape)
resnet.predict(input_image)
# This should return array(['cougar, puma, catamount, mountain lion, painter, 
panther, Felis '], dtype='|S64')
{code}

  was:
Ideally, this converter should not require Caffe to be installed. Please see 
http://stackoverflow.com/questions/37572948/extracting-weights-from-caffemodel-without-caffe-installed-in-python

Example code to convert a Caffe model to CSV if Caffe is installed:
{code}
import caffe
import numpy as np
#net = 
caffe.Net('/home/biuser/nike/barista/VGG_ILSVRC_19_layers_train_val.prototxt', 
caffe.TEST)
net = 
caffe.Net('/home/biuser/VGG_trained_models/VGG_ILSVRC_19_layers_deploy.prototxt',
 '/home/biuser/VGG_trained_models/VGG_ILSVRC_19_layers.caffemodel', caffe.TEST)
#surgery.transplant(net, base_net)
for l in [ "conv1_1", "conv1_2", "conv2_1", "conv2_2", "conv3_1", "conv3_2", 
"conv3_3", "conv3_4", "conv4_1", "conv4_2", "conv4_3", "conv4_4", "conv5_1", 
"conv5_2", "conv5_3", "conv5_4", "fc6", "fc7", "fc8" ]:
    w = net.params[l][0].data
    w = w.reshape(w.shape[0], -1)
    b = net.params[l][1].data
    b = b.reshape(b.shape[0], -1)
    # You may have to reshape it for fc layers
    np.savetxt("VGG_trained_models/" + l + "_weight.csv", w, delimiter=",")
    np.savetxt("VGG_trained_models/" + l + "_bias.csv", b, delimiter=",")
{code}

Here is an example pyspark script to test this JIRA:
{code}
from systemml.mllearn import Caffe2DML
from pyspark.sql import SQLContext
import numpy as np
import urllib, os, scipy.ndimage
from PIL import Image
import systemml as sml

# ImageNet specific parameters
img_shape = (3, 224, 224)

# Downloads a jpg image, resizes it to 224 and return as numpy array in N X CHW 
format
url = 
'https://upload.wikimedia.org/wikipedia/commons/thumb/5/58/MountainLion.jpg/312p

[jira] [Updated] (SYSTEMML-1583) Implement converter in Python to convert caffemodel in SystemML format

2017-05-04 Thread Niketan Pansare (JIRA)

 [ 
https://issues.apache.org/jira/browse/SYSTEMML-1583?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Niketan Pansare updated SYSTEMML-1583:
--
Description: 
Ideally, this converter should not require Caffe to be installed. Please see 
http://stackoverflow.com/questions/37572948/extracting-weights-from-caffemodel-without-caffe-installed-in-python

Example code to convert a Caffe model to CSV if Caffe is installed:
{code}
import caffe
import numpy as np
#net = 
caffe.Net('/home/biuser/nike/barista/VGG_ILSVRC_19_layers_train_val.prototxt', 
caffe.TEST)
net = 
caffe.Net('/home/biuser/VGG_trained_models/VGG_ILSVRC_19_layers_deploy.prototxt',
 '/home/biuser/VGG_trained_models/VGG_ILSVRC_19_layers.caffemodel', caffe.TEST)
#surgery.transplant(net, base_net)
for l in [ "conv1_1", "conv1_2", "conv2_1", "conv2_2", "conv3_1", "conv3_2", 
"conv3_3", "conv3_4", "conv4_1", "conv4_2", "conv4_3", "conv4_4", "conv5_1", 
"conv5_2", "conv5_3", "conv5_4", "fc6", "fc7", "fc8" ]:
    w = net.params[l][0].data
    w = w.reshape(w.shape[0], -1)
    b = net.params[l][1].data
    b = b.reshape(b.shape[0], -1)
    # You may have to reshape it for fc layers
    np.savetxt("VGG_trained_models/" + l + "_weight.csv", w, delimiter=",")
    np.savetxt("VGG_trained_models/" + l + "_bias.csv", b, delimiter=",")
{code}

Here is an example pyspark script to test this JIRA:
{code}
from systemml.mllearn import Caffe2DML
from pyspark.sql import SQLContext
import numpy as np
import urllib, os, scipy.ndimage
from PIL import Image
import systemml as sml

# ImageNet specific parameters
img_shape = (3, 224, 224)

# Downloads a jpg image, resizes it to 224 and return as numpy array in N X CHW 
format
url = 
'https://upload.wikimedia.org/wikipedia/commons/thumb/5/58/MountainLion.jpg/312px-MountainLion.jpg'
outFile = 'test.jpg'
urllib.urlretrieve(url, outFile)
input_image = sml.convertImageToNumPyArr(Image.open(outFile), 
img_shape=img_shape)

# Download the ResNet network
import urllib
urllib.urlretrieve('https://raw.githubusercontent.com/niketanpansare/model_zoo/master/caffe/vision/resnet/ilsvrc12/ResNet_50_network.proto',
 'ResNet_50_network.proto')
urllib.urlretrieve('https://raw.githubusercontent.com/niketanpansare/model_zoo/master/caffe/vision/resnet/ilsvrc12/ResNet_50_solver.proto',
 'ResNet_50_solver.proto')

home_dir = os.path.expanduser('~')

# let's assume that this function is implemented as 
saveAsBinaryBlock(inputCaffeModel, outputDir)
resnet_pretrained_weight_dir = os.path.join(home_dir, 'model_zoo', 'caffe', 
'vision', 'resnet', 'ilsvrc12', 'ResNet_50_pretrained_weights')
urllib.urlretrieve('https://deepdetect.com/models/resnet/ResNet-50-model.caffemodel',
 'ResNet-50-model.caffemodel')
###
# To be implemented as part of this JIRA
sml.saveAsBinaryBlock('ResNet-50-model.caffemodel', 
resnet_pretrained_weight_dir)
###
resnet = Caffe2DML(sqlCtx, solver='ResNet_50_solver.proto', 
weights=resnet_pretrained_weight_dir, input_shape=img_shape)
resnet.predict(input_image)
# This should return array(['cougar, puma, catamount, mountain lion, painter, 
panther, Felis '], dtype='|S64')
{code}

  was:
Ideally, this converter should not require Caffe to be installed. Please see 
http://stackoverflow.com/questions/37572948/extracting-weights-from-caffemodel-without-caffe-installed-in-python

Example code to convert a Caffe model to CSV if Caffe is installed:
{code}
import caffe
import numpy as np
#net = 
caffe.Net('/home/biuser/nike/barista/VGG_ILSVRC_19_layers_train_val.prototxt', 
caffe.TEST)
net = 
caffe.Net('/home/biuser/VGG_trained_models/VGG_ILSVRC_19_layers_deploy.prototxt',
 '/home/biuser/VGG_trained_models/VGG_ILSVRC_19_layers.caffemodel', caffe.TEST)
#surgery.transplant(net, base_net)
for l in [ "conv1_1", "conv1_2", "conv2_1", "conv2_2", "conv3_1", "conv3_2", 
"conv3_3", "conv3_4", "conv4_1", "conv4_2", "conv4_3", "conv4_4", "conv5_1", 
"conv5_2", "conv5_3", "conv5_4", "fc6", "fc7", "fc8" ]:
    w = net.params[l][0].data
    w = w.reshape(w.shape[0], -1)
    b = net.params[l][1].data
    b = b.reshape(b.shape[0], -1)
    # You may have to reshape it for fc layers
    np.savetxt("VGG_trained_models/" + l + "_weight.csv", w, delimiter=",")
    np.savetxt("VGG_trained_models/" + l + "_bias.csv", b, delimiter=",")
{code}

Here is an example pyspark script to test this JIRA:
{code}
from systemml.mllearn import Caffe2DML
from pyspark.sql import SQLContext
import numpy as np
import urllib, os, scipy.ndimage
from PIL import Image
import systemml as sml

# ImageNet specific parameters
img_shape = (3, 224, 224)

# Downloads a jpg image, resizes it to 224 and return as numpy array in N X CHW 
format
url = 
'https://upload.wikimedia.org/wikipedia/commons/thumb/5/58/MountainLion.jpg/312

[jira] [Updated] (SYSTEMML-1583) Implement converter in Python to convert caffemodel in SystemML format

2017-05-04 Thread Niketan Pansare (JIRA)

 [ 
https://issues.apache.org/jira/browse/SYSTEMML-1583?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Niketan Pansare updated SYSTEMML-1583:
--
Description: 
Ideally, this converter should not require Caffe to be installed. Please see 
http://stackoverflow.com/questions/37572948/extracting-weights-from-caffemodel-without-caffe-installed-in-python

Example code to convert a Caffe model to CSV if Caffe is installed:
{code}
import caffe
import numpy as np
#net = 
caffe.Net('/home/biuser/nike/barista/VGG_ILSVRC_19_layers_train_val.prototxt', 
caffe.TEST)
net = 
caffe.Net('/home/biuser/VGG_trained_models/VGG_ILSVRC_19_layers_deploy.prototxt',
 '/home/biuser/VGG_trained_models/VGG_ILSVRC_19_layers.caffemodel', caffe.TEST)
#surgery.transplant(net, base_net)
for l in [ "conv1_1", "conv1_2", "conv2_1", "conv2_2", "conv3_1", "conv3_2", 
"conv3_3", "conv3_4", "conv4_1", "conv4_2", "conv4_3", "conv4_4", "conv5_1", 
"conv5_2", "conv5_3", "conv5_4", "fc6", "fc7", "fc8" ]:
    w = net.params[l][0].data
    w = w.reshape(w.shape[0], -1)
    b = net.params[l][1].data
    b = b.reshape(b.shape[0], -1)
    # You may have to reshape it for fc layers
    np.savetxt("VGG_trained_models/" + l + "_weight.csv", w, delimiter=",")
    np.savetxt("VGG_trained_models/" + l + "_bias.csv", b, delimiter=",")
{code}

Here is an example pyspark script to test this JIRA:
{code}
from systemml.mllearn import Caffe2DML
from pyspark.sql import SQLContext
import numpy as np
import urllib, os, scipy.ndimage
from PIL import Image
import systemml as sml

# ImageNet specific parameters
img_shape = (3, 224, 224)

# Downloads a jpg image, resizes it to 224 and return as numpy array in N X CHW 
format
url = 
'https://upload.wikimedia.org/wikipedia/commons/thumb/5/58/MountainLion.jpg/312px-MountainLion.jpg'
outFile = 'test.jpg'
urllib.urlretrieve(url, outFile)
input_image = sml.convertImageToNumPyArr(Image.open(outFile), 
img_shape=img_shape)

# Download the ResNet network
import urllib
urllib.urlretrieve('https://raw.githubusercontent.com/niketanpansare/model_zoo/master/caffe/vision/resnet/ilsvrc12/ResNet_50_network.proto',
 'ResNet_50_network.proto')
urllib.urlretrieve('https://raw.githubusercontent.com/niketanpansare/model_zoo/master/caffe/vision/resnet/ilsvrc12/ResNet_50_solver.proto',
 'ResNet_50_solver.proto')

home_dir = os.path.expanduser('~')

# let's assume that this function is implemented as 
saveAsBinaryBlock(inputCaffeModel, outputDir)
resnet_pretrained_weight_dir = os.path.join(home_dir, 'model_zoo', 'caffe', 
'vision', 'resnet', 'ilsvrc12', 'ResNet_50_pretrained_weights')
urllib.urlretrieve('https://deepdetect.com/models/resnet/ResNet-50-model.caffemodel',
 'ResNet-50-model.caffemodel')
sml.saveAsBinaryBlock('ResNet-50-model.caffemodel', 
resnet_pretrained_weight_dir)
resnet = Caffe2DML(sqlCtx, solver='ResNet_50_solver.proto', 
weights=resnet_pretrained_weight_dir, input_shape=img_shape)
resnet.predict(input_image)
# This should return array(['cougar, puma, catamount, mountain lion, painter, 
panther, Felis '], dtype='|S64')
{code}

> Implement converter in Python to convert caffemodel in SystemML format
> --
>
> Key: SYSTEMML-1583
> URL: https://issues.apache.org/jira/browse/SYSTEMML-1583
> Project: SystemML
>  Issue Type: Sub-task
>Reporter: Niketan Pansare
>Assignee: Arvind Surve
>
> Ideally, this converter should not require Caffe to be installed. Please 
> see 
> http://stackoverflow.com/questions/37572948/extracting-weights-from-caffemodel-without-caffe-installed-in-python
> Example code to convert a Caffe model to CSV if Caffe is installed:
> {code}
> import caffe
> import numpy as np
> #net = 
> caffe.Net('/home/biuser/nike/barista/VGG_ILSVRC_19_layers_train_val.prototxt',
>  caffe.TEST)
> net = 
> caffe.Net('/home/biuser/VGG_trained_models/VGG_ILSVRC_19_layers_deploy.prototxt',
>  '/home/biuser/VGG_trained_models/VGG_ILSVRC_19_layers.caffemodel', 
> caffe.TEST)
> #surgery.transplant(net, base_net)
> for l in [ "conv1_1", "conv1_2", "conv2_1", "conv2_2", "conv3_1", "conv3_2", 
> "conv3_3", "conv3_4", "conv4_1", "conv4_2", "conv4_3", "conv4_4", "conv5_1", 
> "conv5_2", "conv5_3", "conv5_4", "fc6", "fc7", "fc8" ]:
>     w = net.params[l][0].data
>     w = w.reshape(w.shape[0], -1)
>     b = net.params[l][1].data
>     b = b.reshape(b.shape[0], -1)
>     # You may have to reshape it for fc layers
>     np.savetxt("VGG_trained_models/" + l + "_weight.csv", w, delimiter=",")
>     np.savetxt("VGG_trained_models/" + l + "_bias.csv", b, delimiter=",")
> {code}
> Here is an example pyspark script to test this JIRA:
> {code}
> from systemml.mllearn import Caffe2DML
> from pyspark.sql import SQLContext
> import numpy as np
> import urllib, os, scipy.ndimage
> 

[jira] [Commented] (SYSTEMML-1483) Add Deconvolution layer in nn library and Caffe2DML

2017-05-03 Thread Niketan Pansare (JIRA)

[ 
https://issues.apache.org/jira/browse/SYSTEMML-1483?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15996178#comment-15996178
 ] 

Niketan Pansare commented on SYSTEMML-1483:
---

[~prithvi_r_s] The commit 
https://github.com/apache/incubator-systemml/commit/76f3ca5d39e492fc3075c4bd8240ec5339647001
 should fix the incorrect metadata issue :)

> Add Deconvolution layer in nn library and Caffe2DML
> ---
>
> Key: SYSTEMML-1483
> URL: https://issues.apache.org/jira/browse/SYSTEMML-1483
> Project: SystemML
>  Issue Type: Sub-task
>Reporter: Niketan Pansare
>
> http://caffe.berkeleyvision.org/tutorial/layers/deconvolution.html
> [~mwdus...@us.ibm.com] [~prithvi_r_s] 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Resolved] (SYSTEMML-1001) Allow users to pass non-0 padding in max_pool_builtin.dml

2017-05-03 Thread Niketan Pansare (JIRA)

 [ 
https://issues.apache.org/jira/browse/SYSTEMML-1001?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Niketan Pansare resolved SYSTEMML-1001.
---
   Resolution: Fixed
Fix Version/s: SystemML 1.0

Resolved as part of the commit 
https://github.com/apache/incubator-systemml/commit/16e990928fa0201132688a8f7476856a02253030

> Allow users to pass non-0 padding in max_pool_builtin.dml
> -
>
> Key: SYSTEMML-1001
> URL: https://issues.apache.org/jira/browse/SYSTEMML-1001
> Project: SystemML
>  Issue Type: Wish
>Reporter: Niketan Pansare
>Assignee: Mike Dusenberry
> Fix For: SystemML 1.0
>
>
> Useful pre-step to incorporate nn functions with caffe
> https://github.com/apache/incubator-systemml/blob/master/scripts/staging/SystemML-NN/nn/layers/max_pool_builtin.dml#L59
> https://github.com/apache/incubator-systemml/pull/158
> [~mwdus...@us.ibm.com]



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Resolved] (SYSTEMML-687) Optimize CP convolution/pooling instructions for sparse inputs

2017-05-03 Thread Niketan Pansare (JIRA)

 [ 
https://issues.apache.org/jira/browse/SYSTEMML-687?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Niketan Pansare resolved SYSTEMML-687.
--
   Resolution: Fixed
Fix Version/s: SystemML 1.0

Resolved in the commit 
https://github.com/apache/incubator-systemml/commit/2d2196d84750df8801f1218df2c7160ca8b438cb

> Optimize CP convolution/pooling instructions for sparse inputs
> --
>
> Key: SYSTEMML-687
> URL: https://issues.apache.org/jira/browse/SYSTEMML-687
> Project: SystemML
>  Issue Type: Task
>Reporter: Niketan Pansare
> Fix For: SystemML 1.0
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (SYSTEMML-1479) Make Caffe2DML feature-complete

2017-05-03 Thread Niketan Pansare (JIRA)

[ 
https://issues.apache.org/jira/browse/SYSTEMML-1479?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15995738#comment-15995738
 ] 

Niketan Pansare commented on SYSTEMML-1479:
---

[~freiss] [~reinwald] [~mwdus...@us.ibm.com] [~acs_s]

> Make Caffe2DML feature-complete
> ---
>
> Key: SYSTEMML-1479
> URL: https://issues.apache.org/jira/browse/SYSTEMML-1479
> Project: SystemML
>  Issue Type: Task
>Reporter: Niketan Pansare
>Assignee: Niketan Pansare
>
> This task will list all the remaining subtasks to get Caffe2DML into a 
> production-ready state.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (SYSTEMML-1583) Implement converter in Python to convert caffemodel in SystemML format

2017-05-03 Thread Niketan Pansare (JIRA)
Niketan Pansare created SYSTEMML-1583:
-

 Summary: Implement converter in Python to convert caffemodel in 
SystemML format
 Key: SYSTEMML-1583
 URL: https://issues.apache.org/jira/browse/SYSTEMML-1583
 Project: SystemML
  Issue Type: Sub-task
Reporter: Niketan Pansare
Assignee: Arvind Surve






--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Resolved] (SYSTEMML-1553) Add Caffe and TensorFlow license

2017-05-03 Thread Niketan Pansare (JIRA)

 [ 
https://issues.apache.org/jira/browse/SYSTEMML-1553?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Niketan Pansare resolved SYSTEMML-1553.
---
   Resolution: Fixed
 Assignee: Niketan Pansare
Fix Version/s: SystemML 1.0

Addressed in the commit 
https://github.com/apache/incubator-systemml/commit/d0ab2af196013d31ae82675e7eb28416886a96b5

> Add Caffe and TensorFlow license
> 
>
> Key: SYSTEMML-1553
> URL: https://issues.apache.org/jira/browse/SYSTEMML-1553
> Project: SystemML
>  Issue Type: Bug
>Reporter: Niketan Pansare
>Assignee: Niketan Pansare
> Fix For: SystemML 1.0
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (SYSTEMML-1582) Consider exploiting other MKL functionality such as sparse matrix multiplication, solve, etc

2017-05-03 Thread Niketan Pansare (JIRA)
Niketan Pansare created SYSTEMML-1582:
-

 Summary: Consider exploiting other MKL functionality such as 
sparse matrix multiplication, solve, etc
 Key: SYSTEMML-1582
 URL: https://issues.apache.org/jira/browse/SYSTEMML-1582
 Project: SystemML
  Issue Type: Task
Reporter: Niketan Pansare






--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (SYSTEMML-1576) Support Native BLAS on SystemML

2017-05-03 Thread Niketan Pansare (JIRA)

[ 
https://issues.apache.org/jira/browse/SYSTEMML-1576?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15995731#comment-15995731
 ] 

Niketan Pansare commented on SYSTEMML-1576:
---

[~reinwald] [~freiss] [~nakul02] [~gweidner] [~mboehm7] [~deron] 
[~mwdus...@us.ibm.com] [~acs_s]

> Support Native BLAS on SystemML 
> 
>
> Key: SYSTEMML-1576
> URL: https://issues.apache.org/jira/browse/SYSTEMML-1576
> Project: SystemML
>  Issue Type: Epic
>  Components: Runtime
>Reporter: Niketan Pansare
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (SYSTEMML-1581) Ensure that the performance of the shared library generated by cmake is comparable to the shared library generated by g++

2017-05-03 Thread Niketan Pansare (JIRA)
Niketan Pansare created SYSTEMML-1581:
-

 Summary: Ensure that the performance of the shared library generated 
by cmake is comparable to the shared library generated by g++
 Key: SYSTEMML-1581
 URL: https://issues.apache.org/jira/browse/SYSTEMML-1581
 Project: SystemML
  Issue Type: Task
Reporter: Niketan Pansare
Assignee: Nakul Jindal
Priority: Minor


cmake setup: 
http://apache.github.io/incubator-systemml/native-backend#developer-guide
g++ command: 
g++ -o lib/libsystemml_mkl-Linux-x86_64.so *.cpp  -I$JAVA_HOME/include 
-I$MKLROOT/include -I$JAVA_HOME/include/linux -lmkl_rt -lpthread  -lm -ldl 
-DUSE_INTEL_MKL -DUSE_GNU_THREADING -L$MKLROOT/lib/intel64 -m64 -fopenmp -O3 
-shared -fPIC
g++ -o lib/libsystemml_openblas-Linux-x86_64.so *.cpp  -I$JAVA_HOME/include  
-I$JAVA_HOME/include/linux -lopenblas -lpthread -lm -ldl -DUSE_OPEN_BLAS 
-I/opt/OpenBLAS/include/ -L/opt/OpenBLAS/lib/ -fopenmp -O3 -shared -fPIC
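
One simple way to compare the two builds is to time the same matrix multiply through 
MLContext after pointing LD_LIBRARY_PATH at the cmake-built and the g++-built library 
in turn (a minimal pyspark sketch; the matrix sizes are illustrative):
{code}
import time
from systemml import MLContext, dml

ml = MLContext(sc)          # sc: the existing SparkContext from the pyspark shell
ml.setStatistics(True)      # statistics report the native MKL/OpenBLAS call counts
script = dml("X = rand(rows=4096, cols=4096); R = X %*% X; print(sum(R))")

t0 = time.time()
ml.execute(script)
print("matmult time: %.3f sec" % (time.time() - t0))
{code}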



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Assigned] (SYSTEMML-1580) Remove the usage of SYSTEMML_BLAS environment variable from NativeHelper

2017-05-03 Thread Niketan Pansare (JIRA)

 [ 
https://issues.apache.org/jira/browse/SYSTEMML-1580?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Niketan Pansare reassigned SYSTEMML-1580:
-

Assignee: Niketan Pansare

> Remove the usage of SYSTEMML_BLAS environment variable from NativeHelper
> 
>
> Key: SYSTEMML-1580
> URL: https://issues.apache.org/jira/browse/SYSTEMML-1580
> Project: SystemML
>  Issue Type: Task
>Reporter: Niketan Pansare
>Assignee: Niketan Pansare
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (SYSTEMML-1580) Remove the usage of SYSTEMML_BLAS environment variable from NativeHelper

2017-05-03 Thread Niketan Pansare (JIRA)
Niketan Pansare created SYSTEMML-1580:
-

 Summary: Remove the usage of SYSTEMML_BLAS environment variable 
from NativeHelper
 Key: SYSTEMML-1580
 URL: https://issues.apache.org/jira/browse/SYSTEMML-1580
 Project: SystemML
  Issue Type: Task
Reporter: Niketan Pansare






--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (SYSTEMML-1576) Support Native BLAS on SystemML

2017-05-03 Thread Niketan Pansare (JIRA)

 [ 
https://issues.apache.org/jira/browse/SYSTEMML-1576?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Niketan Pansare updated SYSTEMML-1576:
--
Summary: Support Native BLAS on SystemML   (was: This epic contains the 
tasks related to native blas support.)

> Support Native BLAS on SystemML 
> 
>
> Key: SYSTEMML-1576
> URL: https://issues.apache.org/jira/browse/SYSTEMML-1576
> Project: SystemML
>  Issue Type: Epic
>  Components: Runtime
>Reporter: Niketan Pansare
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (SYSTEMML-1579) Investigate issues regarding supporting native BLAS on Windows

2017-05-03 Thread Niketan Pansare (JIRA)
Niketan Pansare created SYSTEMML-1579:
-

 Summary: Investigate issues regarding supporting native BLAS on 
Windows
 Key: SYSTEMML-1579
 URL: https://issues.apache.org/jira/browse/SYSTEMML-1579
 Project: SystemML
  Issue Type: Task
Reporter: Niketan Pansare






--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (SYSTEMML-1578) Investigate issues regarding supporting native BLAS on Mac

2017-05-03 Thread Niketan Pansare (JIRA)
Niketan Pansare created SYSTEMML-1578:
-

 Summary: Investigate issues regarding supporting native BLAS on Mac
 Key: SYSTEMML-1578
 URL: https://issues.apache.org/jira/browse/SYSTEMML-1578
 Project: SystemML
  Issue Type: Task
Reporter: Niketan Pansare






--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (SYSTEMML-1577) Compare the performance of MKL and OpenBLAS with Java matmult using our performance test suite

2017-05-03 Thread Niketan Pansare (JIRA)
Niketan Pansare created SYSTEMML-1577:
-

 Summary: Compare the performance of MKL and OpenBLAS with Java 
matmult using our performance test suite
 Key: SYSTEMML-1577
 URL: https://issues.apache.org/jira/browse/SYSTEMML-1577
 Project: SystemML
  Issue Type: Task
Reporter: Niketan Pansare






--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (SYSTEMML-1576) This epic contains the tasks related to native blas support.

2017-05-03 Thread Niketan Pansare (JIRA)
Niketan Pansare created SYSTEMML-1576:
-

 Summary: This epic contains the tasks related to native blas 
support.
 Key: SYSTEMML-1576
 URL: https://issues.apache.org/jira/browse/SYSTEMML-1576
 Project: SystemML
  Issue Type: Epic
  Components: Runtime
Reporter: Niketan Pansare






--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (SYSTEMML-1573) Incorporate ALLOW_OPERATOR_FUSION in ConvolutionOp for developer testing

2017-05-02 Thread Niketan Pansare (JIRA)
Niketan Pansare created SYSTEMML-1573:
-

 Summary: Incorporate ALLOW_OPERATOR_FUSION in ConvolutionOp for 
developer testing
 Key: SYSTEMML-1573
 URL: https://issues.apache.org/jira/browse/SYSTEMML-1573
 Project: SystemML
  Issue Type: Improvement
Reporter: Niketan Pansare






--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (SYSTEMML-1572) Enable Native BLAS on remote executors

2017-05-02 Thread Niketan Pansare (JIRA)
Niketan Pansare created SYSTEMML-1572:
-

 Summary: Enable Native BLAS on remote executors
 Key: SYSTEMML-1572
 URL: https://issues.apache.org/jira/browse/SYSTEMML-1572
 Project: SystemML
  Issue Type: New Feature
Reporter: Niketan Pansare






--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Closed] (SYSTEMML-1568) NULL condition not checked for Spark version in MLContext

2017-05-01 Thread Niketan Pansare (JIRA)

 [ 
https://issues.apache.org/jira/browse/SYSTEMML-1568?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Niketan Pansare closed SYSTEMML-1568.
-
   Resolution: Fixed
Fix Version/s: SystemML 1.0

[~deron] Yes, it was fixed in the current master.

Also, the commit 
https://github.com/apache/incubator-systemml/commit/8e5599dd9fa94a1b4467ea2866c7203aeac90d12
 addresses the Wink and ANTLR issue :)

> NULL condition not checked for Spark version in MLContext
> ---
>
> Key: SYSTEMML-1568
> URL: https://issues.apache.org/jira/browse/SYSTEMML-1568
> Project: SystemML
>  Issue Type: Bug
>Reporter: Niketan Pansare
>Assignee: Niketan Pansare
>Priority: Minor
> Fix For: SystemML 1.0
>
>
> I see the following warning after starting the pyspark shell:
> {code}
> 17/04/30 14:05:25 WARN MLContext: Apache Spark null or above is recommended 
> for SystemML null
> Welcome to Apache SystemML!
> {code}
> To reproduce the warning, please use Spark 2.1:
> {code}
> # checkout current master
> mvn package -P distribution
> pip install target/systemml-1.0.0-incubating-SNAPSHOT-python.tgz
> pyspark
> >> run simple script with python mlcontext
> {code} 
> [~deron] Can you please take a look at it ?



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (SYSTEMML-1570) Remove fused sel+ operator

2017-05-01 Thread Niketan Pansare (JIRA)

[ 
https://issues.apache.org/jira/browse/SYSTEMML-1570?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15991142#comment-15991142
 ] 

Niketan Pansare commented on SYSTEMML-1570:
---

Before closing this PR, please ensure that the performance benefits for CNNs 
(e.g., LeNet) due to fused sel+ operators (such as relu_maxpooling and 
relu_maxpooling_backward) are preserved. 
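
A small numpy illustration of the patterns being discussed (illustrative only):
{code}
import numpy as np

X = np.array([[-1.5, 2.0], [0.0, -3.0]])
# the sel+ pattern (X > 0) * X is equivalent to max(X, 0)
assert np.array_equal((X > 0) * X, np.maximum(X, 0))
# the proposed generalization: (X < 0) * X is equivalent to min(X, 0)
assert np.array_equal((X < 0) * X, np.minimum(X, 0))
{code}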

> Remove fused sel+ operator
> --
>
> Key: SYSTEMML-1570
> URL: https://issues.apache.org/jira/browse/SYSTEMML-1570
> Project: SystemML
>  Issue Type: Task
>Reporter: Matthias Boehm
> Fix For: SystemML 1.0
>
>
> The fused operator sel+ (select positive values) is applied for patterns like 
> (X>0)*X and max(X,0) in order to eliminate unnecessary intermediates. It 
> stems from a time when max was sparse-unsafe and hence inefficient over 
> sparse data. However, meanwhile we mark scalar operators as conditionally 
> sparse-safe depending on the given scalar constant c, which applies for max 
> if c<=0. Hence, this sel+ operator is meanwhile completely useless and should 
> be removed.
> Furthermore, we should also generalize the rewrites to rewrite the selection 
> of negative values (X<0)*X to min(X,0)



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Closed] (SYSTEMML-1567) Remove conditionals from nn layers

2017-05-01 Thread Niketan Pansare (JIRA)

 [ 
https://issues.apache.org/jira/browse/SYSTEMML-1567?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Niketan Pansare closed SYSTEMML-1567.
-
   Resolution: Duplicate
Fix Version/s: SystemML 1.0

This issue should be fixed by 
https://issues.apache.org/jira/browse/SYSTEMML-1554 and 
https://issues.apache.org/jira/browse/SYSTEMML-1561

> Remove conditionals from nn layers
> --
>
> Key: SYSTEMML-1567
> URL: https://issues.apache.org/jira/browse/SYSTEMML-1567
> Project: SystemML
>  Issue Type: Improvement
>  Components: APIs
>Affects Versions: SystemML 1.0
>Reporter: Niketan Pansare
> Fix For: SystemML 1.0
>
>
> Conditionals in nn layers introduce transient read/write variables that 
> disable fused operators such as CP relu_maxpooling_backward and hence 
> redundantly execute the sparsity-introducing sel+ operator. This operator causes 
> unnecessary dense-to-sparse-to-dense conversion and becomes the heavy hitter 
> after the native BLAS change. Note: some fused operators such as CP 
> relu_maxpooling are still applied because there is no conditional in between 
> those layers.
> Without conditionals in dropout layer: 
> https://github.com/apache/incubator-systemml/blob/master/scripts/nn/layers/dropout.dml#L49-L53
>  
> {code}
> Iter:2000.0, training loss:0.003149394810197065, training accuracy:100.0
> Iter:2000.0, validation loss:191.9888157354513, validation accuracy:96.875
> SystemML Statistics:
> Total elapsed time: 416.609 sec.
> Total compilation time: 0.000 sec.
> Total execution time:   416.609 sec.
> Number of compiled Spark inst:  69.
> Number of executed Spark inst:  2.
> Native mkl calls (LibMatrixMult/LibMatrixDNN):  4270/10553.
> Cache hits (Mem, WB, FS, HDFS): 277973/0/0/0.
> Cache writes (WB, FS, HDFS):143616/0/0.
> Cache times (ACQr/m, RLS, EXP): 0.101/0.080/1.988/0.000 sec.
> HOP DAGs recompiled (PRED, SB): 0/2277.
> HOP DAGs recompile time:6.146 sec.
> Spark ctx create time (lazy):   0.027 sec.
> Spark trans counts (par,bc,col):0/0/0.
> Spark trans times (par,bc,col): 0.000/0.000/0.000 secs.
> Total JIT compile time: 37.746 sec.
> Total JVM GC count: 3949.
> Total JVM GC time:  56.609 sec.
> Heavy hitter instructions (name, time, count):
> -- 1)   conv2d_bias_add 48.984 sec  4514
> -- 2)   conv2d_backward_filter  47.780 sec  4026
> -- 3)   -*  38.246 sec  16104
> -- 4)   +*  35.902 sec  8052
> -- 5)   +   34.227 sec  30566
> -- 6)   ba+*30.643 sec  12566
> -- 7)   relu_maxpooling_backward29.678 sec  4026
> -- 8)   conv2d_backward_data28.520 sec  2013
> -- 9)   *   26.825 sec  35275
> -- 10)  relu_backward   24.842 sec  6039
> {code}
> With conditional, we add sel+ to the heavy hitter:
> {code}
> -- 1)   sel+55.054 sec  6283
> {code}
> [~mwdus...@us.ibm.com] Since you created the layers, I think you should 
> decide how best to restructure the DML. My recommendation would be to create 
> two layers in case of conditionals.
> [~mboehm7] [~reinwald]
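
For reference, the conditional in the dropout layer can usually be avoided with a 
branch-free formulation; a minimal numpy sketch of inverted dropout (illustrative, 
not the nn library's DML):
{code}
import numpy as np

def dropout_forward(X, p=0.5, seed=None):
    # Branch-free inverted dropout: keep each activation with probability p
    # and rescale by 1/p, so no 'if' on p is needed in the hot path.
    rng = np.random.RandomState(seed)
    mask = (rng.rand(*X.shape) < p) / p
    return X * mask, mask
{code}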



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Assigned] (SYSTEMML-1568) NULL condition not checked for Spark version in MLContext

2017-05-01 Thread Niketan Pansare (JIRA)

 [ 
https://issues.apache.org/jira/browse/SYSTEMML-1568?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Niketan Pansare reassigned SYSTEMML-1568:
-

Assignee: Deron Eriksson

> NULL condition not checked for Spark version in MLContext
> ---
>
> Key: SYSTEMML-1568
> URL: https://issues.apache.org/jira/browse/SYSTEMML-1568
> Project: SystemML
>  Issue Type: Bug
>Reporter: Niketan Pansare
>Assignee: Deron Eriksson
>Priority: Minor
>
> I see the following warning after starting the pyspark shell:
> {code}
> 17/04/30 14:05:25 WARN MLContext: Apache Spark null or above is recommended 
> for SystemML null
> Welcome to Apache SystemML!
> {code}
> To reproduce the warning, please use Spark 2.1:
> {code}
> # checkout current master
> mvn package -P distribution
> pip install target/systemml-1.0.0-incubating-SNAPSHOT-python.tgz
> pyspark
> >> run simple script with python mlcontext
> {code} 
> [~deron] Can you please take a look at it ?



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (SYSTEMML-1568) NULL condition not checked for Spark version in MLContext

2017-05-01 Thread Niketan Pansare (JIRA)

[ 
https://issues.apache.org/jira/browse/SYSTEMML-1568?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15991101#comment-15991101
 ] 

Niketan Pansare commented on SYSTEMML-1568:
---

Thanks Deron. Appreciate it :)

> NULL condition not check for Spark version in MLContext
> ---
>
> Key: SYSTEMML-1568
> URL: https://issues.apache.org/jira/browse/SYSTEMML-1568
> Project: SystemML
>  Issue Type: Bug
>Reporter: Niketan Pansare
>Priority: Minor
>
> I see following warning after starting pyspark shell:
> {code}
> 17/04/30 14:05:25 WARN MLContext: Apache Spark null or above is recommended 
> for SystemML null
> Welcome to Apache SystemML!
> {code}
> To reproduce the warning, please use Spark 2.1:
> {code}
> # checkout current master
> mvn package -P distribution
> pip install target/systemml-1.0.0-incubating-SNAPSHOT-python.tgz
> pyspark
> >> run simple script with python mlcontext
> {code} 
> [~deron] Can you please take a look at it ?



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (SYSTEMML-1569) Test MLContext for robustness and scalability

2017-04-30 Thread Niketan Pansare (JIRA)
Niketan Pansare created SYSTEMML-1569:
-

 Summary: Test MLContext for robustness and scalability
 Key: SYSTEMML-1569
 URL: https://issues.apache.org/jira/browse/SYSTEMML-1569
 Project: SystemML
  Issue Type: Test
Affects Versions: SystemML 1.0
Reporter: Niketan Pansare


As more APIs are built on top of MLContext, and with large-scale demos using 
MLContext and notebooks, we should test MLContext for robustness and 
scalability. The goal is that using MLContext should add only constant overhead 
compared to command-line execution (with both using similar input formats).

As an example, we should check for a potential OOM in the Script History logic: 
https://github.com/apache/incubator-systemml/blob/master/src/main/java/org/apache/sysml/api/mlcontext/MLContextUtil.java#L902

If we uncomment 
https://github.com/apache/incubator-systemml/blob/master/src/main/java/org/apache/sysml/api/mlcontext/MLContextUtil.java#L897-L901,
 then you should get an OOM when passing a large NumPy array with the Python 
MLContext. This is because the toString() method on MatrixBlock converts the 
double[] into a String.
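
For illustration, a minimal robustness check could look like the following Python 
sketch (the matrix shape, the variable name X, and the loop count are made up for 
this example; it just pushes a large NumPy array through MLContext repeatedly to 
watch for driver-side memory growth):
{code}
# Hypothetical smoke test; names and sizes are illustrative only.
import numpy as np
from systemml import MLContext, dml

ml = MLContext(sc)                      # assumes an existing SparkContext `sc`
X = np.random.rand(100000, 1000)        # roughly 800 MB of doubles
script = dml("s = sum(X)").input(X=X).output("s")
for i in range(10):
    print(ml.execute(script).get("s"))  # repeated calls should not grow driver memory
{code}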

[~deron] [~mwdus...@us.ibm.com] [~reinwald] [~mboehm7]



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (SYSTEMML-1568) NULL condition not check for Spark version in MLContext

2017-04-30 Thread Niketan Pansare (JIRA)
Niketan Pansare created SYSTEMML-1568:
-

 Summary: NULL condition not check for Spark version in MLContext
 Key: SYSTEMML-1568
 URL: https://issues.apache.org/jira/browse/SYSTEMML-1568
 Project: SystemML
  Issue Type: Bug
Reporter: Niketan Pansare
Priority: Minor


I see the following warning after starting the pyspark shell:
{code}
17/04/30 14:05:25 WARN MLContext: Apache Spark null or above is recommended for 
SystemML null

Welcome to Apache SystemML!
{code}

To reproduce the warning, please use Spark 2.1:
{code}
# checkout current master
mvn package -P distribution
pip install target/systemml-1.0.0-incubating-SNAPSHOT-python.tgz
pyspark
>> run simple script with python mlcontext
{code} 

[~deron] Can you please take a look at it ?



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (SYSTEMML-1567) Remove conditionals from nn layers

2017-04-30 Thread Niketan Pansare (JIRA)
Niketan Pansare created SYSTEMML-1567:
-

 Summary: Remove conditionals from nn layers
 Key: SYSTEMML-1567
 URL: https://issues.apache.org/jira/browse/SYSTEMML-1567
 Project: SystemML
  Issue Type: Improvement
  Components: APIs
Affects Versions: SystemML 1.0
Reporter: Niketan Pansare


Conditionals in nn layers introduce transient read/write variables that 
disable fused operators such as CP relu_maxpooling_backward and hence redundantly 
execute the sparsity-introducing sel+ operator. This operator causes unnecessary 
dense-to-sparse-to-dense conversions and becomes the heavy hitter after the 
native BLAS change. Note: some fused operators such as CP relu_maxpooling are 
still applied because there is no conditional between those layers.

Without conditionals in dropout layer: 
https://github.com/apache/incubator-systemml/blob/master/scripts/nn/layers/dropout.dml#L49-L53
 
{code}
Iter:2000.0, training loss:0.003149394810197065, training accuracy:100.0
Iter:2000.0, validation loss:191.9888157354513, validation accuracy:96.875
SystemML Statistics:
Total elapsed time: 416.609 sec.
Total compilation time: 0.000 sec.
Total execution time:   416.609 sec.
Number of compiled Spark inst:  69.
Number of executed Spark inst:  2.
Native mkl calls (LibMatrixMult/LibMatrixDNN):  4270/10553.
Cache hits (Mem, WB, FS, HDFS): 277973/0/0/0.
Cache writes (WB, FS, HDFS):143616/0/0.
Cache times (ACQr/m, RLS, EXP): 0.101/0.080/1.988/0.000 sec.
HOP DAGs recompiled (PRED, SB): 0/2277.
HOP DAGs recompile time:6.146 sec.
Spark ctx create time (lazy):   0.027 sec.
Spark trans counts (par,bc,col):0/0/0.
Spark trans times (par,bc,col): 0.000/0.000/0.000 secs.
Total JIT compile time: 37.746 sec.
Total JVM GC count: 3949.
Total JVM GC time:  56.609 sec.
Heavy hitter instructions (name, time, count):
-- 1)   conv2d_bias_add 48.984 sec  4514
-- 2)   conv2d_backward_filter  47.780 sec  4026
-- 3)   -*  38.246 sec  16104
-- 4)   +*  35.902 sec  8052
-- 5)   +   34.227 sec  30566
-- 6)   ba+*30.643 sec  12566
-- 7)   relu_maxpooling_backward29.678 sec  4026
-- 8)   conv2d_backward_data28.520 sec  2013
-- 9)   *   26.825 sec  35275
-- 10)  relu_backward   24.842 sec  6039
{code}

With conditional, we add sel+ to the heavy hitter:
{code}
-- 1)   sel+55.054 sec  6283
{code}

[~mwdus...@us.ibm.com] Since you created the layers, I think you should decide 
how best to restructure the DML. My recommendation would be to create two 
separate layers wherever a conditional would otherwise be needed.
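
For illustration, here is a minimal sketch of a branch-free (inverted) dropout 
forward pass, written as a DML string and run through the Python MLContext. The 
variable names, shapes, and keep probability are made up for this example; this 
is not the nn library's actual dropout.dml code:
{code}
# Illustrative only -- shows the mask-based form that avoids an if/else around
# the training path, so fused operators are not disabled.
import numpy as np
from systemml import MLContext, dml

ml = MLContext(sc)   # assumes an existing SparkContext `sc`
prog = """
p = 0.5                                           # keep probability
mask = (rand(rows=nrow(X), cols=ncol(X)) <= p) / p
out = X * mask                                    # no if/else in the data path
s = sum(out)
"""
res = ml.execute(dml(prog).input(X=np.random.rand(64, 784)).output("s"))
print(res.get("s"))
{code}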

[~mboehm7] [~reinwald]



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (SYSTEMML-1561) Improve constant folding during compilation

2017-04-26 Thread Niketan Pansare (JIRA)

[ 
https://issues.apache.org/jira/browse/SYSTEMML-1561?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15985362#comment-15985362
 ] 

Niketan Pansare commented on SYSTEMML-1561:
---

[~mwdus...@us.ibm.com] I am pretty swamped until mid-June with conferences and 
other features. 

> Improve constant folding during compilation
> ---
>
> Key: SYSTEMML-1561
> URL: https://issues.apache.org/jira/browse/SYSTEMML-1561
> Project: SystemML
>  Issue Type: Improvement
>Reporter: Mike Dusenberry
> Attachments: scenario1_plan.txt, scenario1.py, scenario2_plan.txt, 
> scenario2.py
>
>
> In our `nn` library, our convolution and pooling layers have to pass around 
> the spatial dimensions (height and width) of the images that are stretched 
> out into rows of the input/output matrices.  These output dimensions are 
> computed within the forward functions of the above layers as small scalar 
> equations.  From a mathematical standpoint, these sizes can be determined at 
> compile time, and it is nice to have these size equations in DML (vs. hiding 
> them inside the engine within built-in functions).  However, we do not 
> currently evaluate these expressions during compilation, and thus we are left 
> with unknown sizes even during recompilation.  This naturally leads to max 
> memory estimates and thus often leads to unnecessary distributed runtime ops 
> rather than simple CP ones.
> I have two related scenarios for which this is a problem.  They both involve 
> the {{Houtc1}} & {{Woutc1}} values that are returned from a 
> `conv2d::forward(...)` function.  These represent the spatial dimensions of 
> the volume with each of the rows of the output {{outc1}} of the function, and 
> the third dimension is {{F1}}.  Thus, {{outc1}} has a number of columns equal 
> to {{F1*Houtc1*Woutc1}}.
> In the first scenario ({{scenario1.py}}), a random matrix {{doutc1}} is 
> created that should have the same dimensions as {{outc1}}.  For the columns, 
> if I use {{cols=ncol(outc1)}} in this rand statement, the size will be 
> propagated and CP ops will be compiled and run.  If I instead use 
> {{cols=F1*Houtc1*Woutc1}}, the size will forever be unknown, even during 
> recompilation, and thus Spark ops will be compiled and run.  I have included 
> the recompile hops plan ({{scenario1_plan.txt}}).
> In the second scenario ({{scenario2.py}}), a {{max_pool2d::forward(...)}} 
> function is inserted after the {{conv2d::forward(...)}} function that 
> requires the {{Houtc1}} and {{Woutc1}} variables to be supplied as arguments. 
>  Since those latter variables are not executed during compilation time, the 
> max pooling sizes remain unknown, even during recompilation, and thus Spark 
> ops will be compiled and run.  I have included the recompile hops plan 
> ({{scenario2_plan.txt}}).
> We should either improve or fix our constant folding rewrites so that these 
> scenarios are fixed, as they are necessary for performant deep learning 
> applications.  Note too that this issue will be present in other non-deep 
> learning scenarios as well.
> Mailing list thread: 
> https://www.mail-archive.com/dev@systemml.incubator.apache.org/msg01657.html



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Comment Edited] (SYSTEMML-1561) Improve constant folding during compilation

2017-04-26 Thread Niketan Pansare (JIRA)

[ 
https://issues.apache.org/jira/browse/SYSTEMML-1561?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15985257#comment-15985257
 ] 

Niketan Pansare edited comment on SYSTEMML-1561 at 4/26/17 5:57 PM:


[~mwdus...@us.ibm.com] We definitely need to address this issue. As an FYI, 
Caffe2DML works around this issue by computing the shapes during DML generation 
rather than at runtime.


was (Author: niketanpansare):
[~mwdus...@us.ibm.com] We definitely need to address this issue. As an FYI, 
Caffe2DML works around this issue by computing the shapes at compile time 
rather than runtime.

> Improve constant folding during compilation
> ---
>
> Key: SYSTEMML-1561
> URL: https://issues.apache.org/jira/browse/SYSTEMML-1561
> Project: SystemML
>  Issue Type: Improvement
>Reporter: Mike Dusenberry
> Attachments: scenario1_plan.txt, scenario1.py, scenario2_plan.txt, 
> scenario2.py
>
>
> In our `nn` library, our convolution and pooling layers have to pass around 
> the spatial dimensions (height and width) of the images that are stretched 
> out into rows of the input/output matrices.  These output dimensions are 
> computed within the forward functions of the above layers as small scalar 
> equations.  From a mathematical standpoint, these sizes can be determined at 
> compile time, and it is nice to have these size equations in DML (vs. hiding 
> them inside the engine within built-in functions).  However, we do not 
> currently evaluate these expressions during compilation, and thus we are left 
> with unknown sizes even during recompilation.  This naturally leads to max 
> memory estimates and thus often leads to unnecessary distributed runtime ops 
> rather than simple CP ones.
> I have two related scenarios for which this is a problem.  They both involve 
> the {{Houtc1}} & {{Woutc1}} values that are returned from a 
> `conv2d::forward(...)` function.  These represent the spatial dimensions of 
> the volume with each of the rows of the output {{outc1}} of the function, and 
> the third dimension is {{F1}}.  Thus, {{outc1}} has a number of columns equal 
> to {{F1*Houtc1*Woutc1}}.
> In the first scenario ({{scenario1.py}}), a random matrix {{doutc1}} is 
> created that should have the same dimensions as {{outc1}}.  For the columns, 
> if I use {{cols=ncol(outc1)}} in this rand statement, the size will be 
> propagated and CP ops will be compiled and run.  If I instead use 
> {{cols=F1*Houtc1*Woutc1}}, the size will forever be unknown, even during 
> recompilation, and thus Spark ops will be compiled and run.  I have included 
> the recompile hops plan ({{scenario1_plan.txt}}).
> In the second scenario ({{scenario2.py}}), a {{max_pool2d::forward(...)}} 
> function is inserted after the {{conv2d::forward(...)}} function that 
> requires the {{Houtc1}} and {{Woutc1}} variables to be supplied as arguments. 
>  Since those latter variables are not executed during compilation time, the 
> max pooling sizes remain unknown, even during recompilation, and thus Spark 
> ops will be compiled and run.  I have included the recompile hops plan 
> ({{scenario2_plan.txt}}).
> We should either improve or fix our constant folding rewrites so that these 
> scenarios are fixed, as they are necessary for performant deep learning 
> applications.  Note too that this issue will be present in other non-deep 
> learning scenarios as well.
> Mailing list thread: 
> https://www.mail-archive.com/dev@systemml.incubator.apache.org/msg01657.html



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Comment Edited] (SYSTEMML-1561) Improve constant folding during compilation

2017-04-26 Thread Niketan Pansare (JIRA)

[ 
https://issues.apache.org/jira/browse/SYSTEMML-1561?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15985257#comment-15985257
 ] 

Niketan Pansare edited comment on SYSTEMML-1561 at 4/26/17 5:57 PM:


[~mwdus...@us.ibm.com] We definitely need to address this issue. As an FYI, 
Caffe2DML works around this issue by computing the shapes at compile time 
rather than runtime.


was (Author: niketanpansare):
[~mwdus...@us.ibm.com] We definitely need to address this issue. As an FYI, 
Caffe2DML addresses this issue by computing the shapes at compile time rather 
than runtime.

> Improve constant folding during compilation
> ---
>
> Key: SYSTEMML-1561
> URL: https://issues.apache.org/jira/browse/SYSTEMML-1561
> Project: SystemML
>  Issue Type: Improvement
>Reporter: Mike Dusenberry
> Attachments: scenario1_plan.txt, scenario1.py, scenario2_plan.txt, 
> scenario2.py
>
>
> In our `nn` library, our convolution and pooling layers have to pass around 
> the spatial dimensions (height and width) of the images that are stretched 
> out into rows of the input/output matrices.  These output dimensions are 
> computed within the forward functions of the above layers as small scalar 
> equations.  From a mathematical standpoint, these sizes can be determined at 
> compile time, and it is nice to have these size equations in DML (vs. hiding 
> them inside the engine within built-in functions).  However, we do not 
> currently evaluate these expressions during compilation, and thus we are left 
> with unknown sizes even during recompilation.  This naturally leads to max 
> memory estimates and thus often leads to unnecessary distributed runtime ops 
> rather than simple CP ones.
> I have two related scenarios for which this is a problem.  They both involve 
> the {{Houtc1}} & {{Woutc1}} values that are returned from a 
> `conv2d::forward(...)` function.  These represent the spatial dimensions of 
> the volume with each of the rows of the output {{outc1}} of the function, and 
> the third dimension is {{F1}}.  Thus, {{outc1}} has a number of columns equal 
> to {{F1*Houtc1*Woutc1}}.
> In the first scenario ({{scenario1.py}}), a random matrix {{doutc1}} is 
> created that should have the same dimensions as {{outc1}}.  For the columns, 
> if I use {{cols=ncol(outc1)}} in this rand statement, the size will be 
> propagated and CP ops will be compiled and run.  If I instead use 
> {{cols=F1*Houtc1*Woutc1}}, the size will forever be unknown, even during 
> recompilation, and thus Spark ops will be compiled and run.  I have included 
> the recompile hops plan ({{scenario1_plan.txt}}).
> In the second scenario ({{scenario2.py}}), a {{max_pool2d::forward(...)}} 
> function is inserted after the {{conv2d::forward(...)}} function that 
> requires the {{Houtc1}} and {{Woutc1}} variables to be supplied as arguments. 
>  Since those latter variables are not executed during compilation time, the 
> max pooling sizes remain unknown, even during recompilation, and thus Spark 
> ops will be compiled and run.  I have included the recompile hops plan 
> ({{scenario2_plan.txt}}).
> We should either improve or fix our constant folding rewrites so that these 
> scenarios are fixed, as they are necessary for performant deep learning 
> applications.  Note too that this issue will be present in other non-deep 
> learning scenarios as well.
> Mailing list thread: 
> https://www.mail-archive.com/dev@systemml.incubator.apache.org/msg01657.html



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (SYSTEMML-1561) Improve constant folding during compilation

2017-04-26 Thread Niketan Pansare (JIRA)

[ 
https://issues.apache.org/jira/browse/SYSTEMML-1561?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15985257#comment-15985257
 ] 

Niketan Pansare commented on SYSTEMML-1561:
---

[~mwdus...@us.ibm.com] We definitely need to address this issue. As an FYI, 
Caffe2DML addresses this issue by computing the shapes at compile time rather 
than runtime.
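
As a rough sketch of that shape computation (standard convolution output-size 
arithmetic; the function name and the numbers below are illustrative, not 
Caffe2DML's actual code):
{code}
# Compile-time (DML-generation-time) output-shape computation for conv/pool layers.
def conv2d_out_dim(in_dim, pad, kernel, stride):
    # standard convolution output-size formula
    return (in_dim + 2 * pad - kernel) // stride + 1

Hout = conv2d_out_dim(28, pad=2, kernel=5, stride=1)   # -> 28
Wout = conv2d_out_dim(28, pad=2, kernel=5, stride=1)   # -> 28
# The resulting literals can be written directly into the generated DML,
# so the compiler sees known sizes instead of runtime scalars.
{code}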

> Improve constant folding during compilation
> ---
>
> Key: SYSTEMML-1561
> URL: https://issues.apache.org/jira/browse/SYSTEMML-1561
> Project: SystemML
>  Issue Type: Improvement
>Reporter: Mike Dusenberry
> Attachments: scenario1_plan.txt, scenario1.py, scenario2_plan.txt, 
> scenario2.py
>
>
> In our `nn` library, our convolution and pooling layers have to pass around 
> the spatial dimensions (height and width) of the images that are stretched 
> out into rows of the input/output matrices.  These output dimensions are 
> computed within the forward functions of the above layers as small scalar 
> equations.  From a mathematical standpoint, these sizes can be determined at 
> compile time, and it is nice to have these size equations in DML (vs. hiding 
> them inside the engine within built-in functions).  However, we do not 
> currently evaluate these expressions during compilation, and thus we are left 
> with unknown sizes even during recompilation.  This naturally leads to max 
> memory estimates and thus often leads to unnecessary distributed runtime ops 
> rather than simple CP ones.
> I have two related scenarios for which this is a problem.  They both involve 
> the {{Houtc1}} & {{Woutc1}} values that are returned from a 
> `conv2d::forward(...)` function.  These represent the spatial dimensions of 
> the volume with each of the rows of the output {{outc1}} of the function, and 
> the third dimension is {{F1}}.  Thus, {{outc1}} has a number of columns equal 
> to {{F1*Houtc1*Woutc1}}.
> In the first scenario ({{scenario1.py}}), a random matrix {{doutc1}} is 
> created that should have the same dimensions as {{outc1}}.  For the columns, 
> if I use {{cols=ncol(outc1)}} in this rand statement, the size will be 
> propagated and CP ops will be compiled and run.  If I instead use 
> {{cols=F1*Houtc1*Woutc1}}, the size will forever be unknown, even during 
> recompilation, and thus Spark ops will be compiled and run.  I have included 
> the recompile hops plan ({{scenario1_plan.txt}}).
> In the second scenario ({{scenario2.py}}), a {{max_pool2d::forward(...)}} 
> function is inserted after the {{conv2d::forward(...)}} function that 
> requires the {{Houtc1}} and {{Woutc1}} variables to be supplied as arguments. 
>  Since those latter variables are not executed during compilation time, the 
> max pooling sizes remain unknown, even during recompilation, and thus Spark 
> ops will be compiled and run.  I have included the recompile hops plan 
> ({{scenario2_plan.txt}}).
> We should either improve or fix our constant folding rewrites so that these 
> scenarios are fixed, as they are necessary for performant deep learning 
> applications.  Note too that this issue will be present in other non-deep 
> learning scenarios as well.
> Mailing list thread: 
> https://www.mail-archive.com/dev@systemml.incubator.apache.org/msg01657.html



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (SYSTEMML-1553) Add Caffe and TensorFlow license

2017-04-21 Thread Niketan Pansare (JIRA)
Niketan Pansare created SYSTEMML-1553:
-

 Summary: Add Caffe and TensorFlow license
 Key: SYSTEMML-1553
 URL: https://issues.apache.org/jira/browse/SYSTEMML-1553
 Project: SystemML
  Issue Type: Bug
Reporter: Niketan Pansare






--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Resolved] (SYSTEMML-1552) Support GPU via Python APIs

2017-04-21 Thread Niketan Pansare (JIRA)

 [ 
https://issues.apache.org/jira/browse/SYSTEMML-1552?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Niketan Pansare resolved SYSTEMML-1552.
---
Resolution: Fixed
  Assignee: Niketan Pansare

Closed by commit 
https://github.com/apache/incubator-systemml/commit/9ed27ad6066a143a0e5ac5ccb800c7ca20e81ceb

> Support GPU via Python APIs
> ---
>
> Key: SYSTEMML-1552
> URL: https://issues.apache.org/jira/browse/SYSTEMML-1552
> Project: SystemML
>  Issue Type: Sub-task
>  Components: Compiler, Runtime
>Reporter: Niketan Pansare
>Assignee: Niketan Pansare
> Fix For: SystemML 0.13
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (SYSTEMML-1552) Support GPU via Python APIs

2017-04-21 Thread Niketan Pansare (JIRA)
Niketan Pansare created SYSTEMML-1552:
-

 Summary: Support GPU via Python APIs
 Key: SYSTEMML-1552
 URL: https://issues.apache.org/jira/browse/SYSTEMML-1552
 Project: SystemML
  Issue Type: Sub-task
Reporter: Niketan Pansare






--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (SYSTEMML-1474) Index out of bounds error in test_naive_bayes1 of test_mllearn_numpy.py

2017-04-10 Thread Niketan Pansare (JIRA)

[ 
https://issues.apache.org/jira/browse/SYSTEMML-1474?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15963163#comment-15963163
 ] 

Niketan Pansare commented on SYSTEMML-1474:
---

Thanks Glenn :)

> Index out of bounds error in test_naive_bayes1 of test_mllearn_numpy.py
> ---
>
> Key: SYSTEMML-1474
> URL: https://issues.apache.org/jira/browse/SYSTEMML-1474
> Project: SystemML
>  Issue Type: Bug
>Reporter: Glenn Weidner
>Assignee: Niketan Pansare
>Priority: Minor
> Fix For: SystemML 0.14
>
>
> The following error was observed running the python tests from command line 
> with spark-submit:
> {code}
> ==
> ERROR: test_naive_bayes1 (__main__.TestMLLearn)
> --
> Traceback (most recent call last):
>   File "/home/spark/test_mllearn_numpy.py", line 184, in test_naive_bayes1
> mllearn_predicted = nb.fit(vectors, 
> newsgroups_train.target).predict(vectors_test)
>   File "/usr/lib/python2.7/site-packages/systemml/mllearn/estimators.py", 
> line 142, in fit
> self.fit_numpy(X, y)
>   File "/usr/lib/python2.7/site-packages/systemml/mllearn/estimators.py", 
> line 95, in fit_numpy
> self._fit_numpy()
>   File "/usr/lib/python2.7/site-packages/systemml/mllearn/estimators.py", 
> line 88, in _fit_numpy
> self.model = self.estimator.fit(convertToMatrixBlock(self.sc, self.X), 
> y_mb)
>   File "/usr/lib/python2.7/site-packages/systemml/converters.py", line 106, 
> in convertToMatrixBlock
> [ _copyRowBlock(i, sc, ret, src, numRowsPerBlock,  rlen, clen) for i in 
> range(0, src.shape[0], numRowsPerBlock) ]
>   File "/usr/lib/python2.7/site-packages/systemml/converters.py", line 83, in 
> _copyRowBlock
> mb = _convertSPMatrixToMB(sc, src[i:i+numRowsPerBlock,]) if 
> isinstance(src, spmatrix) else _convertDenseMatrixToMB(sc, 
> src[i:i+numRowsPerBlock,])
>   File "/usr/lib64/python2.7/site-packages/scipy/sparse/csr.py", line 304, in 
> __getitem__
> return self._get_submatrix(row, col)
>   File "/usr/lib64/python2.7/site-packages/scipy/sparse/csr.py", line 447, in 
> _get_submatrix
> check_bounds(i0, i1, M)
>   File "/usr/lib64/python2.7/site-packages/scipy/sparse/csr.py", line 443, in 
> check_bounds
> " %d <= %d" % (i0, num, i1, num, i0, i1))
> IndexError: index out of bounds: 0 <= 2030 <= 2034, 0 <= 2059 <= 2034, 2030 
> <= 2059
> {code}
> The IndexError was first observed when running the test under a Notebook 
> cloud environment with Spark 2.0.2, then reproduced at command line on local 
> system with Spark 2.1.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (SYSTEMML-1474) Index out of bounds error in test_naive_bayes1 of test_mllearn_numpy.py

2017-04-09 Thread Niketan Pansare (JIRA)

[ 
https://issues.apache.org/jira/browse/SYSTEMML-1474?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15962197#comment-15962197
 ] 

Niketan Pansare commented on SYSTEMML-1474:
---

Thanks Glenn, we definitely need to address the accuracy issue. 

{code}
# Default arguments: uses icpt=1 and regularization set to 1. This is 
consistent with scikit-learn but may not be optimal.
logistic = LogisticRegression(sparkSession)
# Here is an example with different hyperparameters: icpt = 2 and regularization 
disabled
logistic = LogisticRegression(sparkSession, normalize=True, C=float("inf"))
{code}

The fix would likely involve tuning the hyperparameters to get the desired 
results. Do you want to take this over ?

> Index out of bounds error in test_naive_bayes1 of test_mllearn_numpy.py
> ---
>
> Key: SYSTEMML-1474
> URL: https://issues.apache.org/jira/browse/SYSTEMML-1474
> Project: SystemML
>  Issue Type: Bug
>Reporter: Glenn Weidner
>Assignee: Niketan Pansare
>Priority: Minor
> Fix For: SystemML 0.14
>
>
> The following error was observed running the python tests from command line 
> with spark-submit:
> {code}
> ==
> ERROR: test_naive_bayes1 (__main__.TestMLLearn)
> --
> Traceback (most recent call last):
>   File "/home/spark/test_mllearn_numpy.py", line 184, in test_naive_bayes1
> mllearn_predicted = nb.fit(vectors, 
> newsgroups_train.target).predict(vectors_test)
>   File "/usr/lib/python2.7/site-packages/systemml/mllearn/estimators.py", 
> line 142, in fit
> self.fit_numpy(X, y)
>   File "/usr/lib/python2.7/site-packages/systemml/mllearn/estimators.py", 
> line 95, in fit_numpy
> self._fit_numpy()
>   File "/usr/lib/python2.7/site-packages/systemml/mllearn/estimators.py", 
> line 88, in _fit_numpy
> self.model = self.estimator.fit(convertToMatrixBlock(self.sc, self.X), 
> y_mb)
>   File "/usr/lib/python2.7/site-packages/systemml/converters.py", line 106, 
> in convertToMatrixBlock
> [ _copyRowBlock(i, sc, ret, src, numRowsPerBlock,  rlen, clen) for i in 
> range(0, src.shape[0], numRowsPerBlock) ]
>   File "/usr/lib/python2.7/site-packages/systemml/converters.py", line 83, in 
> _copyRowBlock
> mb = _convertSPMatrixToMB(sc, src[i:i+numRowsPerBlock,]) if 
> isinstance(src, spmatrix) else _convertDenseMatrixToMB(sc, 
> src[i:i+numRowsPerBlock,])
>   File "/usr/lib64/python2.7/site-packages/scipy/sparse/csr.py", line 304, in 
> __getitem__
> return self._get_submatrix(row, col)
>   File "/usr/lib64/python2.7/site-packages/scipy/sparse/csr.py", line 447, in 
> _get_submatrix
> check_bounds(i0, i1, M)
>   File "/usr/lib64/python2.7/site-packages/scipy/sparse/csr.py", line 443, in 
> check_bounds
> " %d <= %d" % (i0, num, i1, num, i0, i1))
> IndexError: index out of bounds: 0 <= 2030 <= 2034, 0 <= 2059 <= 2034, 2030 
> <= 2059
> {code}
> The IndexError was first observed when running the test under a Notebook 
> cloud environment with Spark 2.0.2, then reproduced at command line on local 
> system with Spark 2.1.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Resolved] (SYSTEMML-1474) Index out of bounds error in test_naive_bayes1 of test_mllearn_numpy.py

2017-04-08 Thread Niketan Pansare (JIRA)

 [ 
https://issues.apache.org/jira/browse/SYSTEMML-1474?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Niketan Pansare resolved SYSTEMML-1474.
---
   Resolution: Fixed
Fix Version/s: SystemML 0.14

Fixed by commit 
https://github.com/apache/incubator-systemml/commit/8a5450dde5b55c6fb67d9fb034b69e5eafa15bf7
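
For context, the traceback above shows scipy's CSR slicing rejecting an end index 
past the matrix bounds. A bounds-safe row-block loop could look like the sketch 
below; this is only an illustration of the failure mode, not necessarily what the 
linked commit changes, and the helper name is made up:
{code}
# Illustrative sketch only -- not necessarily the fix in the commit above.
def iter_row_blocks(src, num_rows_per_block):
    # Works for both NumPy arrays and scipy sparse matrices: the end index of
    # the last block is clamped so scipy's slice bounds check cannot fail.
    for i in range(0, src.shape[0], num_rows_per_block):
        end = min(i + num_rows_per_block, src.shape[0])
        yield src[i:end, :]
{code}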

> Index out of bounds error in test_naive_bayes1 of test_mllearn_numpy.py
> ---
>
> Key: SYSTEMML-1474
> URL: https://issues.apache.org/jira/browse/SYSTEMML-1474
> Project: SystemML
>  Issue Type: Bug
>Reporter: Glenn Weidner
>Assignee: Niketan Pansare
>Priority: Minor
> Fix For: SystemML 0.14
>
>
> The following error was observed running the python tests from command line 
> with spark-submit:
> {code}
> ==
> ERROR: test_naive_bayes1 (__main__.TestMLLearn)
> --
> Traceback (most recent call last):
>   File "/home/spark/test_mllearn_numpy.py", line 184, in test_naive_bayes1
> mllearn_predicted = nb.fit(vectors, 
> newsgroups_train.target).predict(vectors_test)
>   File "/usr/lib/python2.7/site-packages/systemml/mllearn/estimators.py", 
> line 142, in fit
> self.fit_numpy(X, y)
>   File "/usr/lib/python2.7/site-packages/systemml/mllearn/estimators.py", 
> line 95, in fit_numpy
> self._fit_numpy()
>   File "/usr/lib/python2.7/site-packages/systemml/mllearn/estimators.py", 
> line 88, in _fit_numpy
> self.model = self.estimator.fit(convertToMatrixBlock(self.sc, self.X), 
> y_mb)
>   File "/usr/lib/python2.7/site-packages/systemml/converters.py", line 106, 
> in convertToMatrixBlock
> [ _copyRowBlock(i, sc, ret, src, numRowsPerBlock,  rlen, clen) for i in 
> range(0, src.shape[0], numRowsPerBlock) ]
>   File "/usr/lib/python2.7/site-packages/systemml/converters.py", line 83, in 
> _copyRowBlock
> mb = _convertSPMatrixToMB(sc, src[i:i+numRowsPerBlock,]) if 
> isinstance(src, spmatrix) else _convertDenseMatrixToMB(sc, 
> src[i:i+numRowsPerBlock,])
>   File "/usr/lib64/python2.7/site-packages/scipy/sparse/csr.py", line 304, in 
> __getitem__
> return self._get_submatrix(row, col)
>   File "/usr/lib64/python2.7/site-packages/scipy/sparse/csr.py", line 447, in 
> _get_submatrix
> check_bounds(i0, i1, M)
>   File "/usr/lib64/python2.7/site-packages/scipy/sparse/csr.py", line 443, in 
> check_bounds
> " %d <= %d" % (i0, num, i1, num, i0, i1))
> IndexError: index out of bounds: 0 <= 2030 <= 2034, 0 <= 2059 <= 2034, 2030 
> <= 2059
> {code}
> The IndexError was first observed when running the test under a Notebook 
> cloud environment with Spark 2.0.2, then reproduced at command line on local 
> system with Spark 2.1.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (SYSTEMML-1474) Index out of bounds error in test_naive_bayes1 of test_mllearn_numpy.py

2017-04-08 Thread Niketan Pansare (JIRA)

[ 
https://issues.apache.org/jira/browse/SYSTEMML-1474?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15962025#comment-15962025
 ] 

Niketan Pansare commented on SYSTEMML-1474:
---

Sure. I have created PR https://github.com/apache/incubator-systemml/pull/455. 
Once Glenn verifies the fix, I can merge :)

> Index out of bounds error in test_naive_bayes1 of test_mllearn_numpy.py
> ---
>
> Key: SYSTEMML-1474
> URL: https://issues.apache.org/jira/browse/SYSTEMML-1474
> Project: SystemML
>  Issue Type: Bug
>Reporter: Glenn Weidner
>Assignee: Niketan Pansare
>Priority: Minor
>
> The following error was observed running the python tests from command line 
> with spark-submit:
> {code}
> ==
> ERROR: test_naive_bayes1 (__main__.TestMLLearn)
> --
> Traceback (most recent call last):
>   File "/home/spark/test_mllearn_numpy.py", line 184, in test_naive_bayes1
> mllearn_predicted = nb.fit(vectors, 
> newsgroups_train.target).predict(vectors_test)
>   File "/usr/lib/python2.7/site-packages/systemml/mllearn/estimators.py", 
> line 142, in fit
> self.fit_numpy(X, y)
>   File "/usr/lib/python2.7/site-packages/systemml/mllearn/estimators.py", 
> line 95, in fit_numpy
> self._fit_numpy()
>   File "/usr/lib/python2.7/site-packages/systemml/mllearn/estimators.py", 
> line 88, in _fit_numpy
> self.model = self.estimator.fit(convertToMatrixBlock(self.sc, self.X), 
> y_mb)
>   File "/usr/lib/python2.7/site-packages/systemml/converters.py", line 106, 
> in convertToMatrixBlock
> [ _copyRowBlock(i, sc, ret, src, numRowsPerBlock,  rlen, clen) for i in 
> range(0, src.shape[0], numRowsPerBlock) ]
>   File "/usr/lib/python2.7/site-packages/systemml/converters.py", line 83, in 
> _copyRowBlock
> mb = _convertSPMatrixToMB(sc, src[i:i+numRowsPerBlock,]) if 
> isinstance(src, spmatrix) else _convertDenseMatrixToMB(sc, 
> src[i:i+numRowsPerBlock,])
>   File "/usr/lib64/python2.7/site-packages/scipy/sparse/csr.py", line 304, in 
> __getitem__
> return self._get_submatrix(row, col)
>   File "/usr/lib64/python2.7/site-packages/scipy/sparse/csr.py", line 447, in 
> _get_submatrix
> check_bounds(i0, i1, M)
>   File "/usr/lib64/python2.7/site-packages/scipy/sparse/csr.py", line 443, in 
> check_bounds
> " %d <= %d" % (i0, num, i1, num, i0, i1))
> IndexError: index out of bounds: 0 <= 2030 <= 2034, 0 <= 2059 <= 2034, 2030 
> <= 2059
> {code}
> The IndexError was first observed when running the test under a Notebook 
> cloud environment with Spark 2.0.2, then reproduced at command line on local 
> system with Spark 2.1.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Assigned] (SYSTEMML-1474) Index out of bounds error in test_naive_bayes1 of test_mllearn_numpy.py

2017-04-08 Thread Niketan Pansare (JIRA)

 [ 
https://issues.apache.org/jira/browse/SYSTEMML-1474?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Niketan Pansare reassigned SYSTEMML-1474:
-

Assignee: Niketan Pansare

> Index out of bounds error in test_naive_bayes1 of test_mllearn_numpy.py
> ---
>
> Key: SYSTEMML-1474
> URL: https://issues.apache.org/jira/browse/SYSTEMML-1474
> Project: SystemML
>  Issue Type: Bug
>Reporter: Glenn Weidner
>Assignee: Niketan Pansare
>Priority: Minor
>
> The following error was observed running the python tests from command line 
> with spark-submit:
> {code}
> ==
> ERROR: test_naive_bayes1 (__main__.TestMLLearn)
> --
> Traceback (most recent call last):
>   File "/home/spark/test_mllearn_numpy.py", line 184, in test_naive_bayes1
> mllearn_predicted = nb.fit(vectors, 
> newsgroups_train.target).predict(vectors_test)
>   File "/usr/lib/python2.7/site-packages/systemml/mllearn/estimators.py", 
> line 142, in fit
> self.fit_numpy(X, y)
>   File "/usr/lib/python2.7/site-packages/systemml/mllearn/estimators.py", 
> line 95, in fit_numpy
> self._fit_numpy()
>   File "/usr/lib/python2.7/site-packages/systemml/mllearn/estimators.py", 
> line 88, in _fit_numpy
> self.model = self.estimator.fit(convertToMatrixBlock(self.sc, self.X), 
> y_mb)
>   File "/usr/lib/python2.7/site-packages/systemml/converters.py", line 106, 
> in convertToMatrixBlock
> [ _copyRowBlock(i, sc, ret, src, numRowsPerBlock,  rlen, clen) for i in 
> range(0, src.shape[0], numRowsPerBlock) ]
>   File "/usr/lib/python2.7/site-packages/systemml/converters.py", line 83, in 
> _copyRowBlock
> mb = _convertSPMatrixToMB(sc, src[i:i+numRowsPerBlock,]) if 
> isinstance(src, spmatrix) else _convertDenseMatrixToMB(sc, 
> src[i:i+numRowsPerBlock,])
>   File "/usr/lib64/python2.7/site-packages/scipy/sparse/csr.py", line 304, in 
> __getitem__
> return self._get_submatrix(row, col)
>   File "/usr/lib64/python2.7/site-packages/scipy/sparse/csr.py", line 447, in 
> _get_submatrix
> check_bounds(i0, i1, M)
>   File "/usr/lib64/python2.7/site-packages/scipy/sparse/csr.py", line 443, in 
> check_bounds
> " %d <= %d" % (i0, num, i1, num, i0, i1))
> IndexError: index out of bounds: 0 <= 2030 <= 2034, 0 <= 2059 <= 2034, 2030 
> <= 2059
> {code}
> The IndexError was first observed when running the test under a Notebook 
> cloud environment with Spark 2.0.2, then reproduced at command line on local 
> system with Spark 2.1.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (SYSTEMML-1505) Add support in Caffe2DML to display filters in tensorboard

2017-04-08 Thread Niketan Pansare (JIRA)
Niketan Pansare created SYSTEMML-1505:
-

 Summary: Add support in Caffe2DML to display filters in tensorboard
 Key: SYSTEMML-1505
 URL: https://issues.apache.org/jira/browse/SYSTEMML-1505
 Project: SystemML
  Issue Type: Sub-task
Reporter: Niketan Pansare


http://nbviewer.jupyter.org/github/BVLC/caffe/blob/master/examples/00-classification.ipynb
http://nbviewer.jupyter.org/github/BVLC/caffe/blob/master/examples/01-learning-lenet.ipynb



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (SYSTEMML-1504) Add perturbed gradient descent feature in Caffe2DML to escape saddle points

2017-04-08 Thread Niketan Pansare (JIRA)
Niketan Pansare created SYSTEMML-1504:
-

 Summary: Add perturbed gradient descent feature in Caffe2DML to 
escape saddle points
 Key: SYSTEMML-1504
 URL: https://issues.apache.org/jira/browse/SYSTEMML-1504
 Project: SystemML
  Issue Type: Sub-task
Reporter: Niketan Pansare






--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (SYSTEMML-1502) Add support in Caffe2DML to display Network/Hop DAG in tensorboard

2017-04-08 Thread Niketan Pansare (JIRA)
Niketan Pansare created SYSTEMML-1502:
-

 Summary: Add support in Caffe2DML to display Network/Hop DAG in 
tensorboard
 Key: SYSTEMML-1502
 URL: https://issues.apache.org/jira/browse/SYSTEMML-1502
 Project: SystemML
  Issue Type: Sub-task
Reporter: Niketan Pansare






--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (SYSTEMML-1503) Add parameter server support in Caffe2DML

2017-04-08 Thread Niketan Pansare (JIRA)
Niketan Pansare created SYSTEMML-1503:
-

 Summary: Add parameter server support in Caffe2DML
 Key: SYSTEMML-1503
 URL: https://issues.apache.org/jira/browse/SYSTEMML-1503
 Project: SystemML
  Issue Type: Sub-task
Reporter: Niketan Pansare






--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (SYSTEMML-1501) Add support in Caffe2DML to display histogram in tensorboard

2017-04-08 Thread Niketan Pansare (JIRA)
Niketan Pansare created SYSTEMML-1501:
-

 Summary: Add support in Caffe2DML to display histogram in 
tensorboard
 Key: SYSTEMML-1501
 URL: https://issues.apache.org/jira/browse/SYSTEMML-1501
 Project: SystemML
  Issue Type: Sub-task
Reporter: Niketan Pansare






--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (SYSTEMML-1500) Add missing loss layers to Caffe2DML

2017-04-08 Thread Niketan Pansare (JIRA)
Niketan Pansare created SYSTEMML-1500:
-

 Summary: Add missing loss layers to Caffe2DML
 Key: SYSTEMML-1500
 URL: https://issues.apache.org/jira/browse/SYSTEMML-1500
 Project: SystemML
  Issue Type: Sub-task
Reporter: Niketan Pansare


Multinomial Logistic Loss
Infogain Loss - a generalization of MultinomialLogisticLossLayer.
Softmax with Loss - computes the multinomial logistic loss of the softmax of 
its inputs. It’s conceptually identical to a softmax layer followed by a 
multinomial logistic loss layer, but provides a more numerically stable 
gradient.
Sum-of-Squares / Euclidean - computes the sum of squares of differences of its 
two inputs, 1/(2N) * sum_{i=1}^{N} ||x^1_i - x^2_i||_2^2 (see the sketch below).
Hinge / Margin - The hinge loss layer computes a one-vs-all hinge (L1) or 
squared hinge loss (L2).
Sigmoid Cross-Entropy Loss - computes the cross-entropy (logistic) loss, often 
used for predicting targets interpreted as probabilities.
Accuracy / Top-k layer - scores the output as an accuracy with respect to 
target – it is not actually a loss and has no backward step.
Contrastive Loss
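
As a quick reference for the Sum-of-Squares / Euclidean formula above, a minimal 
NumPy sketch (assuming x1 and x2 are N x D matrices with one example per row):
{code}
import numpy as np

def euclidean_loss(x1, x2):
    # 1/(2N) * sum_i ||x1_i - x2_i||_2^2
    N = x1.shape[0]
    return np.sum((x1 - x2) ** 2) / (2.0 * N)
{code}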



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (SYSTEMML-1499) Add Utility layers in Caffe2DML

2017-04-08 Thread Niketan Pansare (JIRA)
Niketan Pansare created SYSTEMML-1499:
-

 Summary: Add Utility layers in Caffe2DML
 Key: SYSTEMML-1499
 URL: https://issues.apache.org/jira/browse/SYSTEMML-1499
 Project: SystemML
  Issue Type: Sub-task
Reporter: Niketan Pansare


http://caffe.berkeleyvision.org/tutorial/layers/flatten.html
http://caffe.berkeleyvision.org/tutorial/layers/reshape.html
http://caffe.berkeleyvision.org/tutorial/layers/batchreindex.html
http://caffe.berkeleyvision.org/tutorial/layers/split.html
http://caffe.berkeleyvision.org/tutorial/layers/concat.html
http://caffe.berkeleyvision.org/tutorial/layers/slice.html
http://caffe.berkeleyvision.org/tutorial/layers/eltwise.html
http://caffe.berkeleyvision.org/tutorial/layers/filter.html
http://caffe.berkeleyvision.org/tutorial/layers/reduction.html
http://caffe.berkeleyvision.org/tutorial/layers/argmax.html
http://caffe.berkeleyvision.org/tutorial/layers/softmax.html



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (SYSTEMML-1498) Add Threshold layer in nn library and Caffe2DML

2017-04-08 Thread Niketan Pansare (JIRA)
Niketan Pansare created SYSTEMML-1498:
-

 Summary: Add Threshold layer in nn library and Caffe2DML
 Key: SYSTEMML-1498
 URL: https://issues.apache.org/jira/browse/SYSTEMML-1498
 Project: SystemML
  Issue Type: Sub-task
Reporter: Niketan Pansare


http://caffe.berkeleyvision.org/tutorial/layers/threshold.html



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (SYSTEMML-1497) Add BNLL layer in nn library and Caffe2DML

2017-04-08 Thread Niketan Pansare (JIRA)
Niketan Pansare created SYSTEMML-1497:
-

 Summary: Add BNLL layer in nn library and Caffe2DML
 Key: SYSTEMML-1497
 URL: https://issues.apache.org/jira/browse/SYSTEMML-1497
 Project: SystemML
  Issue Type: Sub-task
Reporter: Niketan Pansare


http://caffe.berkeleyvision.org/tutorial/layers/bnll.html



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (SYSTEMML-1496) Add log layer in nn library and Caffe2DML

2017-04-08 Thread Niketan Pansare (JIRA)
Niketan Pansare created SYSTEMML-1496:
-

 Summary: Add log layer in nn library and Caffe2DML
 Key: SYSTEMML-1496
 URL: https://issues.apache.org/jira/browse/SYSTEMML-1496
 Project: SystemML
  Issue Type: Sub-task
Reporter: Niketan Pansare


http://caffe.berkeleyvision.org/tutorial/layers/log.html



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (SYSTEMML-1495) Add Exp layer in nn library and Caffe2DML

2017-04-08 Thread Niketan Pansare (JIRA)
Niketan Pansare created SYSTEMML-1495:
-

 Summary: Add Exp layer in nn library and Caffe2DML
 Key: SYSTEMML-1495
 URL: https://issues.apache.org/jira/browse/SYSTEMML-1495
 Project: SystemML
  Issue Type: Sub-task
Reporter: Niketan Pansare


http://caffe.berkeleyvision.org/tutorial/layers/exp.html



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (SYSTEMML-1494) Add Power layer to nn library and Caffe2DML

2017-04-08 Thread Niketan Pansare (JIRA)
Niketan Pansare created SYSTEMML-1494:
-

 Summary: Add Power layer to nn library and Caffe2DML
 Key: SYSTEMML-1494
 URL: https://issues.apache.org/jira/browse/SYSTEMML-1494
 Project: SystemML
  Issue Type: Sub-task
Reporter: Niketan Pansare


http://caffe.berkeleyvision.org/tutorial/layers/power.html



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (SYSTEMML-1493) Add Tanh layer in nn library and Caffe2DML

2017-04-08 Thread Niketan Pansare (JIRA)
Niketan Pansare created SYSTEMML-1493:
-

 Summary: Add Tanh layer in nn library and Caffe2DML
 Key: SYSTEMML-1493
 URL: https://issues.apache.org/jira/browse/SYSTEMML-1493
 Project: SystemML
  Issue Type: Sub-task
Reporter: Niketan Pansare


http://caffe.berkeleyvision.org/tutorial/layers/tanh.html



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (SYSTEMML-1492) Add Sigmoid layer in nn library and Caffe2DML

2017-04-08 Thread Niketan Pansare (JIRA)
Niketan Pansare created SYSTEMML-1492:
-

 Summary: Add Sigmoid layer in nn library and Caffe2DML
 Key: SYSTEMML-1492
 URL: https://issues.apache.org/jira/browse/SYSTEMML-1492
 Project: SystemML
  Issue Type: Sub-task
Reporter: Niketan Pansare


http://caffe.berkeleyvision.org/tutorial/layers/sigmoid.html



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (SYSTEMML-1491) Add different ReLU variants in nn library and Caffe2DML

2017-04-08 Thread Niketan Pansare (JIRA)
Niketan Pansare created SYSTEMML-1491:
-

 Summary: Add different ReLU variants in nn library and Caffe2DML
 Key: SYSTEMML-1491
 URL: https://issues.apache.org/jira/browse/SYSTEMML-1491
 Project: SystemML
  Issue Type: Sub-task
Reporter: Niketan Pansare


http://caffe.berkeleyvision.org/tutorial/layers/relu.html
http://caffe.berkeleyvision.org/tutorial/layers/prelu.html
http://caffe.berkeleyvision.org/tutorial/layers/elu.html



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (SYSTEMML-1490) Add Scale layer to nn library and Caffe2DML

2017-04-08 Thread Niketan Pansare (JIRA)
Niketan Pansare created SYSTEMML-1490:
-

 Summary: Add Scale layer to nn library and Caffe2DML
 Key: SYSTEMML-1490
 URL: https://issues.apache.org/jira/browse/SYSTEMML-1490
 Project: SystemML
  Issue Type: Sub-task
Reporter: Niketan Pansare


http://caffe.berkeleyvision.org/tutorial/layers/scale.html



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (SYSTEMML-1489) Add Bias layer in nn library and Caffe2DML

2017-04-08 Thread Niketan Pansare (JIRA)
Niketan Pansare created SYSTEMML-1489:
-

 Summary: Add Bias layer in nn library and Caffe2DML
 Key: SYSTEMML-1489
 URL: https://issues.apache.org/jira/browse/SYSTEMML-1489
 Project: SystemML
  Issue Type: Sub-task
Reporter: Niketan Pansare


http://caffe.berkeleyvision.org/tutorial/layers/bias.html



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (SYSTEMML-1487) Add Local Response Normalization layer in nn library and Caffe2DML

2017-04-08 Thread Niketan Pansare (JIRA)
Niketan Pansare created SYSTEMML-1487:
-

 Summary: Add Local Response Normalization layer in nn library and 
Caffe2DML
 Key: SYSTEMML-1487
 URL: https://issues.apache.org/jira/browse/SYSTEMML-1487
 Project: SystemML
  Issue Type: Sub-task
Reporter: Niketan Pansare


http://caffe.berkeleyvision.org/tutorial/layers/lrn.html

AlexNet needs this layer



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (SYSTEMML-1488) Add Mean-Variance Normalization layer in nn library and Caffe2DML

2017-04-08 Thread Niketan Pansare (JIRA)
Niketan Pansare created SYSTEMML-1488:
-

 Summary: Add Mean-Variance Normalization layer in nn library and 
Caffe2DML
 Key: SYSTEMML-1488
 URL: https://issues.apache.org/jira/browse/SYSTEMML-1488
 Project: SystemML
  Issue Type: Sub-task
Reporter: Niketan Pansare


http://caffe.berkeleyvision.org/tutorial/layers/mvn.html



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (SYSTEMML-1486) Add Embed layer in nn library and Caffe2DML

2017-04-08 Thread Niketan Pansare (JIRA)
Niketan Pansare created SYSTEMML-1486:
-

 Summary: Add Embed layer in nn library and Caffe2DML
 Key: SYSTEMML-1486
 URL: https://issues.apache.org/jira/browse/SYSTEMML-1486
 Project: SystemML
  Issue Type: Sub-task
Reporter: Niketan Pansare


http://caffe.berkeleyvision.org/tutorial/layers/embed.html



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (SYSTEMML-1485) Add LSTM layer in Caffe2DML

2017-04-08 Thread Niketan Pansare (JIRA)
Niketan Pansare created SYSTEMML-1485:
-

 Summary: Add LSTM layer in Caffe2DML
 Key: SYSTEMML-1485
 URL: https://issues.apache.org/jira/browse/SYSTEMML-1485
 Project: SystemML
  Issue Type: Sub-task
Reporter: Niketan Pansare


http://caffe.berkeleyvision.org/tutorial/layers/lstm.html



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (SYSTEMML-1483) Add Deconvolution layer in nn library and Caffe2DML

2017-04-08 Thread Niketan Pansare (JIRA)
Niketan Pansare created SYSTEMML-1483:
-

 Summary: Add Deconvolution layer in nn library and Caffe2DML
 Key: SYSTEMML-1483
 URL: https://issues.apache.org/jira/browse/SYSTEMML-1483
 Project: SystemML
  Issue Type: Sub-task
Reporter: Niketan Pansare


http://caffe.berkeleyvision.org/tutorial/layers/deconvolution.html

[~mwdus...@us.ibm.com] [~prithvi_r_s] 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (SYSTEMML-1484) Add RNN layer in Caffe2DML

2017-04-08 Thread Niketan Pansare (JIRA)
Niketan Pansare created SYSTEMML-1484:
-

 Summary: Add RNN layer in Caffe2DML
 Key: SYSTEMML-1484
 URL: https://issues.apache.org/jira/browse/SYSTEMML-1484
 Project: SystemML
  Issue Type: Sub-task
Reporter: Niketan Pansare


http://caffe.berkeleyvision.org/tutorial/layers/rnn.html
http://caffe.berkeleyvision.org/tutorial/layers/recurrent.html



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (SYSTEMML-1482) Add Crop layer in nn library and Caffe2DML

2017-04-08 Thread Niketan Pansare (JIRA)
Niketan Pansare created SYSTEMML-1482:
-

 Summary: Add Crop layer in nn library and Caffe2DML
 Key: SYSTEMML-1482
 URL: https://issues.apache.org/jira/browse/SYSTEMML-1482
 Project: SystemML
  Issue Type: Sub-task
Reporter: Niketan Pansare


http://caffe.berkeleyvision.org/tutorial/layers/crop.html



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (SYSTEMML-1481) Add Python helper function to convert LMDB to binaryblocks

2017-04-08 Thread Niketan Pansare (JIRA)
Niketan Pansare created SYSTEMML-1481:
-

 Summary: Add Python helper function to convert LMDB to binaryblocks
 Key: SYSTEMML-1481
 URL: https://issues.apache.org/jira/browse/SYSTEMML-1481
 Project: SystemML
  Issue Type: Sub-task
Reporter: Niketan Pansare






--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (SYSTEMML-1480) Extend Caffe2DMLModel to support TEST phase while prediction

2017-04-08 Thread Niketan Pansare (JIRA)

 [ 
https://issues.apache.org/jira/browse/SYSTEMML-1480?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Niketan Pansare updated SYSTEMML-1480:
--
Summary: Extend Caffe2DMLModel to support TEST phase while prediction  
(was: Support TEST phase while prediction)

> Extend Caffe2DMLModel to support TEST phase while prediction
> 
>
> Key: SYSTEMML-1480
> URL: https://issues.apache.org/jira/browse/SYSTEMML-1480
> Project: SystemML
>  Issue Type: Sub-task
>Reporter: Niketan Pansare
>Assignee: Niketan Pansare
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (SYSTEMML-1480) Support TEST phase while prediction

2017-04-08 Thread Niketan Pansare (JIRA)
Niketan Pansare created SYSTEMML-1480:
-

 Summary: Support TEST phase while prediction
 Key: SYSTEMML-1480
 URL: https://issues.apache.org/jira/browse/SYSTEMML-1480
 Project: SystemML
  Issue Type: Sub-task
Reporter: Niketan Pansare






--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Assigned] (SYSTEMML-1480) Support TEST phase while prediction

2017-04-08 Thread Niketan Pansare (JIRA)

 [ 
https://issues.apache.org/jira/browse/SYSTEMML-1480?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Niketan Pansare reassigned SYSTEMML-1480:
-

Assignee: Niketan Pansare

> Support TEST phase while prediction
> ---
>
> Key: SYSTEMML-1480
> URL: https://issues.apache.org/jira/browse/SYSTEMML-1480
> Project: SystemML
>  Issue Type: Sub-task
>Reporter: Niketan Pansare
>Assignee: Niketan Pansare
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (SYSTEMML-1479) Make Caffe2DML feature-complete

2017-04-08 Thread Niketan Pansare (JIRA)
Niketan Pansare created SYSTEMML-1479:
-

 Summary: Make Caffe2DML feature-complete
 Key: SYSTEMML-1479
 URL: https://issues.apache.org/jira/browse/SYSTEMML-1479
 Project: SystemML
  Issue Type: Task
Reporter: Niketan Pansare
Assignee: Niketan Pansare


This task will list all the remaining subtasks to get Caffe2DML into a 
production-ready state.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (SYSTEMML-1471) Support PreparedScript for MLContext

2017-04-07 Thread Niketan Pansare (JIRA)

[ 
https://issues.apache.org/jira/browse/SYSTEMML-1471?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15961277#comment-15961277
 ] 

Niketan Pansare commented on SYSTEMML-1471:
---

I think if certain settings are common for popular ML tasks, it is OK to keep a 
separate API for that. For example: JMLC for in-memory scoring and MLContext 
for the Spark and Python settings. But I would really prefer that all APIs have 
the same user feel. For example: JMLC uses setMatrix, whereas the Scala MLContext 
has in and the Python MLContext has input, which is a headache.

We need to separate user-facing API classes and internal classes for the sake of 
the discussion. Here is an initial proposal for the user-facing classes:
- One context for each API (MLContext or JMLC) --> used for initialization, 
optional settings (setStatistics, setExplain, ...), and execute(script).
- One script representation (Script, PreparedScript or JMLCPreparedScript) --> 
used for setting input and output variables as well as command-line parameters.
- One result representation (MLResults or JMLCResults) --> returns the output 
variable in a user-specified format (e.g., DataFrame, RDD, double[][], ...)

If absolutely required, we can add ScriptExecutor to user-facing classes.

{code}
val ctx = new MLContext(sc); // or new JMLC(will not have sc) or MLContext(sc)
val script = new Script(...) // or PreparedScript(..)  or 
JMLCPreparedScript(...) ... this way PreparedScript is subclass of Script
while () {
val results = ctx.execute(script); // execute the dml program
}
{code}

+1 for removing replicated code. 

> Support PreparedScript for MLContext
> 
>
> Key: SYSTEMML-1471
> URL: https://issues.apache.org/jira/browse/SYSTEMML-1471
> Project: SystemML
>  Issue Type: Improvement
>Reporter: Niketan Pansare
>
> The intent of this JIRA is three-fold:
> 1. Allow MLContext to be used in prediction scenario.
> 2. Consolidate the code of JMLC and MLContext.
> 3. Explore what extensions are needed in SystemML to support Spark streaming.
> For prediction scenario, it is important to reduce the parsing/validation 
> overhead as much as possible and reusing the JMLC infrastructure might be a 
> good step in that direction. It is also important that MLContext continues to 
> support dynamic recompilation and other optimization as the input size could 
> be small (similar to JMLC), but could also be large (if window size is large, 
> making MLContext ideal for this scenario). 
> {code}
> val streamingContext = new StreamingContext(sc, SLIDE_INTERVAL)
> val windowDStream  = .window(WINDOW_LENGTH, SLIDE_INTERVAL)
> val preparedScript = prepareScript()
> windowDStream.foreachRDD(currentWindow => {
> if (currentWindow.count() > 0) {
>   ml.execute(preparedScript.in("X", currentWindow.toDF()))
>   ...
> }
> })
> {code}
> [~deron] [~mboehm7] [~reinwald] [~freiss] [~mwdus...@us.ibm.com] [~nakul02] 
> Is this something that interests any of you?



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (SYSTEMML-1471) Support PreparedScript for MLContext

2017-04-06 Thread Niketan Pansare (JIRA)

[ 
https://issues.apache.org/jira/browse/SYSTEMML-1471?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15959993#comment-15959993
 ] 

Niketan Pansare commented on SYSTEMML-1471:
---

[~deron] Thanks for your reply. Is MLContext thread-safe? Do you think there 
are other issues, such as multiple (i.e., repeated invocations of the above 
script) and/or parallel (i.e., concurrent execute calls) invocations, that also 
need to be addressed?

Additionally, we should also consider removing code-duplication in our JMLC and 
MLContext APIs if possible.

> Support PreparedScript for MLContext
> 
>
> Key: SYSTEMML-1471
> URL: https://issues.apache.org/jira/browse/SYSTEMML-1471
> Project: SystemML
>  Issue Type: Improvement
>Reporter: Niketan Pansare
>
> The intent of this JIRA is three-fold:
> 1. Allow MLContext to be used in prediction scenario.
> 2. Consolidate the code of JMLC and MLContext.
> 3. Explore what extensions are needed in SystemML to support Spark streaming.
> For prediction scenario, it is important to reduce the parsing/validation 
> overhead as much as possible and reusing the JMLC infrastructure might be a 
> good step in that direction. It is also important that MLContext continues to 
> support dynamic recompilation and other optimization as the input size could 
> be small (similar to JMLC), but could also be large (if window size is large, 
> making MLContext ideal for this scenario). 
> {code}
> val streamingContext = new StreamingContext(sc, SLIDE_INTERVAL)
> val windowDStream  = .window(WINDOW_LENGTH, SLIDE_INTERVAL)
> val preparedScript = prepareScript()
> windowDStream.foreachRDD(currentWindow => {
> if (currentWindow.count() > 0) {
>   ml.execute(preparedScript.in("X", currentWindow.toDF()))
>   ...
> }
> })
> {code}
> [~deron] [~mboehm7] [~reinwald] [~freiss] [~mwdus...@us.ibm.com] [~nakul02] 
> Is this something that interests any of you?



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (SYSTEMML-1471) Support PreparedScript for MLContext

2017-04-06 Thread Niketan Pansare (JIRA)
Niketan Pansare created SYSTEMML-1471:
-

 Summary: Support PreparedScript for MLContext
 Key: SYSTEMML-1471
 URL: https://issues.apache.org/jira/browse/SYSTEMML-1471
 Project: SystemML
  Issue Type: Improvement
Reporter: Niketan Pansare


The intent of this JIRA is three-fold:
1. Allow MLContext to be used in prediction scenarios.
2. Consolidate the code of JMLC and MLContext.
3. Explore what extensions are needed in SystemML to support Spark streaming.

For the prediction scenario, it is important to reduce the parsing/validation 
overhead as much as possible, and reusing the JMLC infrastructure might be a 
good step in that direction. It is also important that MLContext continues to 
support dynamic recompilation and other optimizations, as the input size could 
be small (similar to JMLC) but could also be large (if the window size is 
large), making MLContext ideal for this scenario. 

{code}
val streamingContext = new StreamingContext(sc, SLIDE_INTERVAL)
val windowDStream  = .window(WINDOW_LENGTH, SLIDE_INTERVAL)
val preparedScript = prepareScript()
windowDStream.foreachRDD(currentWindow => {
if (currentWindow.count() > 0) {
  ml.execute(preparedScript.in("X", currentWindow.toDF()))
  ...
}
})
{code}

[~deron] [~mboehm7] [~reinwald] [~freiss] [~mwdus...@us.ibm.com] [~nakul02] Is 
this something that interests any of you?



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (SYSTEMML-1445) Add support for matrix-vector GPU axpy operation

2017-03-30 Thread Niketan Pansare (JIRA)

 [ 
https://issues.apache.org/jira/browse/SYSTEMML-1445?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Niketan Pansare updated SYSTEMML-1445:
--
Summary: Add support for matrix-vector GPU axpy operation  (was: Add 
support for GPU axpy operation)

> Add support for matrix-vector GPU axpy operation
> 
>
> Key: SYSTEMML-1445
> URL: https://issues.apache.org/jira/browse/SYSTEMML-1445
> Project: SystemML
>  Issue Type: Sub-task
>  Components: Compiler, Runtime
>Reporter: Niketan Pansare
>Assignee: Niketan Pansare
> Fix For: SystemML 0.13
>
>
> Here's a short snippet to invoke the axpy rewrite and reproduce the issue on 
> GPU:
> {code}
> n = 100
> m = 10
> #a = 2
> a = as.scalar(rand(rows=1, cols=1))
> x = rand(rows=n, cols=m)
> y = rand(rows=1, cols=m)
> z = x + a*y  # broadcasting
> if (1==1){}
> print(sum(z))
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (SYSTEMML-1445) Add support for GPU axpy operation

2017-03-30 Thread Niketan Pansare (JIRA)
Niketan Pansare created SYSTEMML-1445:
-

 Summary: Add support for GPU axpy operation
 Key: SYSTEMML-1445
 URL: https://issues.apache.org/jira/browse/SYSTEMML-1445
 Project: SystemML
  Issue Type: Sub-task
Reporter: Niketan Pansare
Assignee: Niketan Pansare


Here's a short snippet to invoke the axpy rewrite and reproduce the issue on 
GPU:
{code}
n = 100
m = 10
#a = 2
a = as.scalar(rand(rows=1, cols=1))
x = rand(rows=n, cols=m)
y = rand(rows=1, cols=m)
z = x + a*y  # broadcasting
if (1==1){}
print(sum(z))
{code}
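
As a hedged usage sketch, the snippet above could be driven through the Scala 
MLContext API with the GPU backend enabled; setGPU/setForceGPU are assumptions here 
(they are present in newer MLContext versions), and sc is the SparkContext provided 
by spark-shell:

{code}
// spark-shell style; `sc` is the SparkContext provided by the shell
import org.apache.sysml.api.mlcontext.MLContext
import org.apache.sysml.api.mlcontext.ScriptFactory.dml

val ml = new MLContext(sc)
ml.setGPU(true)       // assumed API: allows GPU instructions
ml.setForceGPU(true)  // assumed API: forces GPU instructions where a GPU kernel exists

val axpyScript = dml("""
n = 100
m = 10
a = as.scalar(rand(rows=1, cols=1))
x = rand(rows=n, cols=m)
y = rand(rows=1, cols=m)
z = x + a*y   # broadcasting; should trigger the axpy rewrite
if (1==1){}   # forces a statement-block cut (common SystemML trick)
print(sum(z))
""")
ml.execute(axpyScript)
{code}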



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Resolved] (SYSTEMML-1442) Reduce the JVM memory required for transferring Numpy array

2017-03-30 Thread Niketan Pansare (JIRA)

 [ 
https://issues.apache.org/jira/browse/SYSTEMML-1442?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Niketan Pansare resolved SYSTEMML-1442.
---
   Resolution: Fixed
Fix Version/s: SystemML 1.0

Fixed by commit 
https://github.com/apache/incubator-systemml/commit/0f45718106b8864ef5aee56a742644d11eabf3e8

> Reduce the JVM memory required for transferring Numpy array
> --
>
> Key: SYSTEMML-1442
> URL: https://issues.apache.org/jira/browse/SYSTEMML-1442
> Project: SystemML
>  Issue Type: Bug
>Reporter: Niketan Pansare
>Assignee: Niketan Pansare
> Fix For: SystemML 1.0
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (SYSTEMML-1389) Update API: Pass in all outputs from `forward` to `backward` for performance

2017-03-29 Thread Niketan Pansare (JIRA)

[ 
https://issues.apache.org/jira/browse/SYSTEMML-1389?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15948140#comment-15948140
 ] 

Niketan Pansare commented on SYSTEMML-1389:
---

Cool. I will make the relevant update to my PR once your code is merged in :)

> Update API: Pass in all outputs from `forward` to `backward` for performance
> 
>
> Key: SYSTEMML-1389
> URL: https://issues.apache.org/jira/browse/SYSTEMML-1389
> Project: SystemML
>  Issue Type: Sub-task
>Reporter: Mike Dusenberry
>Assignee: Mike Dusenberry
>
> Currently, we do not pass the outputs of the {{forward}} functions to the 
> {{backward}} functions in the {{nn}} library.  This aims to update the 
> {{backward}} API to include (1) all relevant gradients from upstream, (2) 
> *all* outputs from {{forward}}, and (3) *all* inputs given to {{forward}}.  
> Effectively, this would be equivalent to having a class object that maintains 
> all configuration and input + output tensors.  This provides two benefits: 
> first, many layers can benefit performance-wise from having 
> access to the outputs of the {{forward}} function within the {{backward}} 
> function, and second, this makes the API much simpler and less error prone by 
> allowing for simple copy-and-paste of forward inputs and outputs as arguments 
> to {{backward}} and by removing ambiguity related to the parameters.  A 
> downside is that, oftentimes, not every single parameter is needed by 
> {{backward}}.
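
As a language-neutral illustration (hypothetical Scala, not the actual DML nn API), 
the proposed change to the backward signature looks roughly like this:

{code}
// Hypothetical Scala illustration only: the real layers are DML functions in the nn library.
object BackwardApiSketch {
  type Matrix = Array[Array[Double]]

  // Current shape: backward sees the upstream gradient and forward's *inputs*.
  trait AffineBefore {
    def forward(x: Matrix, w: Matrix, b: Matrix): Matrix
    def backward(dout: Matrix, x: Matrix, w: Matrix, b: Matrix): (Matrix, Matrix, Matrix)
  }

  // Proposed shape: backward additionally receives *all* of forward's outputs, so layers
  // such as relu or softmax can reuse `out` instead of recomputing it.
  trait AffineAfter {
    def forward(x: Matrix, w: Matrix, b: Matrix): Matrix
    def backward(dout: Matrix, out: Matrix, x: Matrix, w: Matrix, b: Matrix): (Matrix, Matrix, Matrix)
  }
}
{code}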



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (SYSTEMML-1442) Reduce the JVM memory required for transferring Numpy array

2017-03-28 Thread Niketan Pansare (JIRA)
Niketan Pansare created SYSTEMML-1442:
-

 Summary: Reduce the JVM memory required for transferring Numpy array
 Key: SYSTEMML-1442
 URL: https://issues.apache.org/jira/browse/SYSTEMML-1442
 Project: SystemML
  Issue Type: Bug
Reporter: Niketan Pansare
Assignee: Niketan Pansare






--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Assigned] (SYSTEMML-1431) Throw controlled error when one-dimensional numpy array is passed to SystemML

2017-03-23 Thread Niketan Pansare (JIRA)

 [ 
https://issues.apache.org/jira/browse/SYSTEMML-1431?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Niketan Pansare reassigned SYSTEMML-1431:
-

Assignee: Niketan Pansare

> Throw controlled error when one-dimensional numpy array is passed to SystemML
> -
>
> Key: SYSTEMML-1431
> URL: https://issues.apache.org/jira/browse/SYSTEMML-1431
> Project: SystemML
>  Issue Type: Bug
>Reporter: Niketan Pansare
>Assignee: Niketan Pansare
> Fix For: SystemML 1.0
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Resolved] (SYSTEMML-1431) Throw controlled error when one-dimensional numpy array is passed to SystemML

2017-03-23 Thread Niketan Pansare (JIRA)

 [ 
https://issues.apache.org/jira/browse/SYSTEMML-1431?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Niketan Pansare resolved SYSTEMML-1431.
---
   Resolution: Fixed
Fix Version/s: SystemML 1.0

Resolved by the commit 
https://github.com/apache/incubator-systemml/commit/4c162afd93932b0fbf74d76113308ba3b5328878

> Throw controlled error when one-dimensional numpy array is passed to SystemML
> -
>
> Key: SYSTEMML-1431
> URL: https://issues.apache.org/jira/browse/SYSTEMML-1431
> Project: SystemML
>  Issue Type: Bug
>Reporter: Niketan Pansare
>Assignee: Niketan Pansare
> Fix For: SystemML 1.0
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Resolved] (SYSTEMML-1428) Built-in max pooling functions give incorrect output with padding>0

2017-03-22 Thread Niketan Pansare (JIRA)

 [ 
https://issues.apache.org/jira/browse/SYSTEMML-1428?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Niketan Pansare resolved SYSTEMML-1428.
---
   Resolution: Fixed
 Assignee: Niketan Pansare  (was: Mike Dusenberry)
Fix Version/s: SystemML 1.0

Resolved by the commit: 
https://github.com/apache/incubator-systemml/commit/16e990928fa0201132688a8f7476856a02253030

> Built-in max pooling functions give incorrect output with padding>0
> ---
>
> Key: SYSTEMML-1428
> URL: https://issues.apache.org/jira/browse/SYSTEMML-1428
> Project: SystemML
>  Issue Type: Bug
>Reporter: Mike Dusenberry
>Assignee: Niketan Pansare
> Fix For: SystemML 1.0
>
>
> While working on SYSTEMML-1408, it was discovered that the built-in max 
> pooling function gives incorrect results if padding > 0 is used.  
> Padded Matrix:
> {code}
>   # -- channel 1
>   #  0  0  0  0  0  0
>   #  0  1  2  3  4  0
>   #  0  5  6  7  8  0
>   #  0  9 10 11 12  0
>   #  0 13 14 15 16  0
>   #  0  0  0  0  0  0
>   # -- channel 2
>   #  0  0  0  0  0  0
>   #  0  1  5  9 13  0
>   #  0  2  6 10 14  0
>   #  0  3  7 11 15  0
>   #  0  4  8 12 16  0
>   #  0  0  0  0  0  0
> {code}
> Correct output:
> {code}
>   # -- channel 1
>   #  1  3  4
>   #  9 11 12
>   # 13 15 16
>   # -- channel 2
>   #  1  9 13
>   #  3 11 15
>   #  4 12 16
> {code}
> Builtin output:
> {code}
>   # -- channel 1
>   #  2  3  4
>   # 10 11 12
>   # 14 15 16
>   # -- channel 2
>   #  5  9 13
>   #  7 11 15
>   #  8 12 16
> {code}
> The current behavior is as follows: the builtin version (1) does *not* add 
> the 1st and 6th **columns** of padding, and (2) performs **stride-1** 
> max-pooling on the partially-padded matrix rather than stride-2.
> Partially-padded matrix:
> {code}
>   # -- channel 1
>   #   0  0  0  0
>   #   1  2  3  4
>   #   5  6  7  8
>   #   9 10 11 12
>   #  13 14 15 16
>   #   0  0  0  0
>   # -- channel 2
>   #   0  0  0  0
>   #   1  5  9 13
>   #   2  6 10 14
>   #   3  7 11 15
>   #   4  8 12 16
>   #   0  0  0  0
> {code}
> ([PR 434 | https://github.com/apache/incubator-systemml/pull/434] contains 
> more information.)
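
For reference, a small standalone sketch (plain Scala, not SystemML code) that 
reproduces the expected channel-1 output above for a 4x4 input with pad=1, 
pool=2x2, stride=2:

{code}
object MaxPoolCheck {
  def main(args: Array[String]): Unit = {
    val in = Array.tabulate(4, 4)((r, c) => (r * 4 + c + 1).toDouble)  // 1..16, row-major
    val (pad, pool, stride) = (1, 2, 2)

    // zero-pad the 4x4 input on all four sides -> 6x6
    val p = Array.tabulate(4 + 2 * pad, 4 + 2 * pad) { (r, c) =>
      if (r >= pad && r < 4 + pad && c >= pad && c < 4 + pad) in(r - pad)(c - pad) else 0.0
    }

    val oDim = (4 + 2 * pad - pool) / stride + 1   // = 3
    val out = Array.tabulate(oDim, oDim) { (i, j) =>
      val window = for (r <- 0 until pool; c <- 0 until pool)
        yield p(i * stride + r)(j * stride + c)
      window.max
    }
    out.foreach(row => println(row.map(_.toInt).mkString(" ")))
    // prints:
    // 1 3 4
    // 9 11 12
    // 13 15 16
  }
}
{code}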



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (SYSTEMML-1431) Throw controlled error when one-dimensional numpy array is passed to SystemML

2017-03-22 Thread Niketan Pansare (JIRA)
Niketan Pansare created SYSTEMML-1431:
-

 Summary: Throw controlled error when one-dimensional numpy array 
is passed to SystemML
 Key: SYSTEMML-1431
 URL: https://issues.apache.org/jira/browse/SYSTEMML-1431
 Project: SystemML
  Issue Type: Bug
Reporter: Niketan Pansare






--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Resolved] (SYSTEMML-1411) Add bias_multiply operator

2017-03-17 Thread Niketan Pansare (JIRA)

 [ 
https://issues.apache.org/jira/browse/SYSTEMML-1411?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Niketan Pansare resolved SYSTEMML-1411.
---
   Resolution: Fixed
 Assignee: Niketan Pansare
Fix Version/s: SystemML 1.0

Closed in the commit 
https://github.com/apache/incubator-systemml/commit/d127dfa2d3e8a8c58b742e1722f797a9f6968955

> Add bias_multiply operator
> --
>
> Key: SYSTEMML-1411
> URL: https://issues.apache.org/jira/browse/SYSTEMML-1411
> Project: SystemML
>  Issue Type: Sub-task
>  Components: Compiler, Runtime
>Reporter: Niketan Pansare
>Assignee: Niketan Pansare
> Fix For: SystemML 1.0
>
>
> bias_multiply performs a similar operation to bias_add, except that it does 
> element-wise multiplication instead of addition.
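
For intuition, a tiny standalone sketch (plain Scala, not SystemML code) of the 
intended per-channel semantics, with the image laid out as [C x H*W] and one bias 
value per channel; the values are illustrative:

{code}
object BiasOpsSketch {
  def main(args: Array[String]): Unit = {
    val C = 2; val HW = 4
    val img = Array(
      Array(1.0, 2.0, 3.0, 4.0),   // channel 0
      Array(5.0, 6.0, 7.0, 8.0))   // channel 1
    val bias = Array(10.0, 100.0)  // one bias value per channel

    val biasAdd      = Array.tabulate(C, HW)((c, i) => img(c)(i) + bias(c))
    val biasMultiply = Array.tabulate(C, HW)((c, i) => img(c)(i) * bias(c))

    def show(m: Array[Array[Double]]): String = m.map(_.map(_.toInt).mkString(" ")).mkString("\n")
    println(show(biasAdd))        // 11 12 13 14   /  105 106 107 108
    println(show(biasMultiply))   // 10 20 30 40   /  500 600 700 800
  }
}
{code}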



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Resolved] (SYSTEMML-1403) GPU relu_maxpooling produces incorrect results

2017-03-17 Thread Niketan Pansare (JIRA)

 [ 
https://issues.apache.org/jira/browse/SYSTEMML-1403?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Niketan Pansare resolved SYSTEMML-1403.
---
Resolution: Fixed

Fixed in the commit 
https://github.com/apache/incubator-systemml/commit/6e7e8873ad0472cb66b91430e324c89a6a0a0d33

> GPU relu_maxpooling produces incorrect results
> --
>
> Key: SYSTEMML-1403
> URL: https://issues.apache.org/jira/browse/SYSTEMML-1403
> Project: SystemML
>  Issue Type: Bug
>  Components: Runtime
>Reporter: Nakul Jindal
>Assignee: Niketan Pansare
> Fix For: SystemML 1.0
>
>
> The fused GPU relu_maxpooling operator produces incorrect results.
> The following PR disables it: 
> https://github.com/apache/incubator-systemml/pull/431



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Resolved] (SYSTEMML-1370) Py4JError: An error occurred while calling z:org.apache.sysml.runtime.instructions.spark.utils.RDDConverterUtilsExt.convertPy4JArrayToMB.

2017-03-17 Thread Niketan Pansare (JIRA)

 [ 
https://issues.apache.org/jira/browse/SYSTEMML-1370?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Niketan Pansare resolved SYSTEMML-1370.
---
   Resolution: Fixed
Fix Version/s: SystemML 1.0

Fixed in the commit 
https://github.com/apache/incubator-systemml/commit/81090134d2de04a3ae90c6f8d79b4c68cb14aab5

> Py4JError: An error occurred while calling 
> z:org.apache.sysml.runtime.instructions.spark.utils.RDDConverterUtilsExt.convertPy4JArrayToMB.
> -
>
> Key: SYSTEMML-1370
> URL: https://issues.apache.org/jira/browse/SYSTEMML-1370
> Project: SystemML
>  Issue Type: Bug
>  Components: APIs
>Affects Versions: Not Applicable
> Environment: pyspark with local Spark 2.1
>Reporter: Berthold Reinwald
> Fix For: SystemML 1.0
>
>
> Do we have undocumented limits for RDDConverterUtilsExt.convertPy4JArrayToMB?
> The simple script below works for 23100 rows, while 46900 rows fails. This is 
> an easy way to consistently reproduce the issue.
> START:
> $pyspark --master local --jars $SYSTEMML_HOME/SystemML.jar --driver-memory 8G 
> --executor-memory 2G
> PYTHON SCRIPT:
> from systemml import MLContext, dml
> import pandas as pd
> sc.version
> ml = MLContext(sc)
> print "Spark Version:", sc.version
> print "SystemML Version:", ml.version()
> print "SystemML Built-Time:", ml.buildTime()
> # !! number of rows 23100 works, while 46900 fails
> nr = 46900
> X_pd = pd.DataFrame(range(1, (nr*784)+1,1),dtype=float).values.reshape(nr,784)
> script ="""
> write(X, $Xfile, format="csv")
> """
> prog = dml(script).input(X=X_pd).input(**{"$Xfile":"/tmp/X_pd.csv"})
> ml.execute(prog)
> OUTPUT:
> Spark Version: 2.1.0
> SystemML Version: 0.14.0-incubating-SNAPSHOT
> SystemML Built-Time: 2017-03-03 07:33:40 UTC
> ---
> Py4JError Traceback (most recent call last)
> ...
> Py4JError: An error occurred while calling 
> z:org.apache.sysml.runtime.instructions.spark.utils.RDDConverterUtilsExt.convertPy4JArrayToMB.
>  Trace:
> java.lang.NegativeArraySizeException
>   at py4j.Base64.decode(Base64.java:321)
>   at py4j.Protocol.getBytes(Protocol.java:173)
>   at py4j.Protocol.getObject(Protocol.java:294)
>   at py4j.commands.AbstractCommand.getArguments(AbstractCommand.java:82)
>   at py4j.commands.CallCommand.execute(CallCommand.java:77)
>   at py4j.GatewayConnection.run(GatewayConnection.java:214)
>   at java.lang.Thread.run(Thread.java:745)
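
For scale, some rough back-of-the-envelope arithmetic on the two cases (assuming 
dense doubles and ignoring Py4J/base64 framing details; this is not a root-cause 
analysis, just an indication of how large a single convertPy4JArrayToMB call gets):

{code}
// Scala REPL style; pure arithmetic, no SystemML calls
val failing = 46900L * 784L * 8L   // ~294 MB of raw matrix data in one Py4J call
val working = 23100L * 784L * 8L   // ~145 MB
val base64  = failing * 4 / 3      // ~392 MB once base64-encoded for the Py4J command
{code}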



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

