[GitHub] KellenSunderland commented on a change in pull request #11325: Added TensorRT runtime integration

2018-06-25 Thread GitBox
KellenSunderland commented on a change in pull request #11325: Added TensorRT 
runtime integration
URL: https://github.com/apache/incubator-mxnet/pull/11325#discussion_r197796778
 
 

 ##
 File path: example/image-classification/tensorrt/test_tensorrt_resnet50.py
 ##
 @@ -0,0 +1,186 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+from __future__ import print_function
+
+import os.path
+import subprocess
 
 Review comment:
   Is subprocess used / required somewhere in this test?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] tvandergeer commented on issue #9507: Segmentation Fault

2018-06-25 Thread GitBox
tvandergeer commented on issue #9507: Segmentation Fault
URL: 
https://github.com/apache/incubator-mxnet/issues/9507#issuecomment-399963570
 
 
   @larroy I followed [this 
howto](http://mxnet.incubator.apache.org/install/index.html) (select "Devices" 
and "Raspberry Pi"). I tried with both export USE_OPENCV = 0 and export 
USE_OPENCV = 1




[GitHub] nihilityworld opened a new issue #11386: How to add out_grad in Regression_output-inl.h?

2018-06-25 Thread GitBox
nihilityworld opened a new issue #11386: How to add out_grad in 
Regression_output-inl.h?
URL: https://github.com/apache/incubator-mxnet/issues/11386
 
 
   I want to train a net using a LinearRegressionOutput layer and a Softmax layer, and I want to ignore some labels in LinearRegressionOutput. How do I add out_grad in Regression_output-inl.h?




[GitHub] KellenSunderland commented on a change in pull request #11325: Added TensorRT runtime integration

2018-06-25 Thread GitBox
KellenSunderland commented on a change in pull request #11325: Added TensorRT 
runtime integration
URL: https://github.com/apache/incubator-mxnet/pull/11325#discussion_r197798293
 
 

 ##
 File path: example/image-classification/tensorrt/test_tensorrt_resnet50.py
 ##
 @@ -0,0 +1,186 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+from __future__ import print_function
+
+import os.path
+import subprocess
+import mxnet as mx
+import numpy as np
+from time import time
+import sys
+import urllib
+
+def get_use_tensorrt():
+    return int(os.environ.get("MXNET_USE_TENSORRT", 0))
+
+def set_use_tensorrt(status = False):
+    os.environ["MXNET_USE_TENSORRT"] = str(int(status))
+
+def download_file(url, local_fname=None, force_write=False):
+    # requests is not installed by default
+    import requests
+    if local_fname is None:
+        local_fname = url.split('/')[-1]
+    if not force_write and os.path.exists(local_fname):
+        return local_fname
+
+    dir_name = os.path.dirname(local_fname)
+
+    if dir_name != "":
+        if not os.path.exists(dir_name):
+            try:  # try to create the directory if it doesn't exist
+                os.makedirs(dir_name)
+            except OSError as exc:
+                if exc.errno != errno.EEXIST:
 
 Review comment:
   Missing errno import?
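  The guard quoted above compares `exc.errno` against `errno.EEXIST`, so the test does need an `import errno` to avoid a `NameError` on the error path. A minimal self-contained sketch of the pattern (the `ensure_dir` helper name is illustrative, not from the PR):

```python
import errno
import os
import tempfile

def ensure_dir(dir_name):
    """Create dir_name if missing, tolerating a concurrent creator."""
    try:
        os.makedirs(dir_name)
    except OSError as exc:
        # EEXIST means another process created the directory between
        # our existence check and the makedirs call; anything else is real.
        if exc.errno != errno.EEXIST:
            raise

base = tempfile.mkdtemp()
target = os.path.join(base, "models", "resnet50")
ensure_dir(target)
ensure_dir(target)  # second call is a no-op instead of raising
print(os.path.isdir(target))  # -> True
```

  On Python 3.2+ the whole try/except collapses to `os.makedirs(dir_name, exist_ok=True)`.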




[GitHub] KellenSunderland commented on a change in pull request #11325: Added TensorRT runtime integration

2018-06-25 Thread GitBox
KellenSunderland commented on a change in pull request #11325: Added TensorRT 
runtime integration
URL: https://github.com/apache/incubator-mxnet/pull/11325#discussion_r197796187
 
 

 ##
 File path: example/image-classification/tensorrt/test_tensorrt_resnet50.py
 ##
 @@ -0,0 +1,186 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+from __future__ import print_function
+
+import os.path
+import subprocess
+import mxnet as mx
+import numpy as np
+from time import time
+import sys
+import urllib
+
+def get_use_tensorrt():
+    return int(os.environ.get("MXNET_USE_TENSORRT", 0))
+
+def set_use_tensorrt(status = False):
+    os.environ["MXNET_USE_TENSORRT"] = str(int(status))
+
+def download_file(url, local_fname=None, force_write=False):
+    # requests is not installed by default
+    import requests
+    if local_fname is None:
+        local_fname = url.split('/')[-1]
+    if not force_write and os.path.exists(local_fname):
+        return local_fname
+
+    dir_name = os.path.dirname(local_fname)
+
+    if dir_name != "":
+        if not os.path.exists(dir_name):
+            try:  # try to create the directory if it doesn't exist
+                os.makedirs(dir_name)
+            except OSError as exc:
+                if exc.errno != errno.EEXIST:
+                    raise
+
+    r = requests.get(url, stream=True)
+    assert r.status_code == 200, "failed to open %s" % url
+    with open(local_fname, 'wb') as f:
+        for chunk in r.iter_content(chunk_size=1024):
+            if chunk:  # filter out keep-alive new chunks
+                f.write(chunk)
+    return local_fname
+
+def download_cifar10(data_dir):
+    fnames = (os.path.join(data_dir, "cifar10_train.rec"),
+              os.path.join(data_dir, "cifar10_val.rec"))
+    download_file('http://data.mxnet.io/data/cifar10/cifar10_val.rec', fnames[1])
+    download_file('http://data.mxnet.io/data/cifar10/cifar10_train.rec', fnames[0])
+    return fnames
+
+def get_cifar10_iterator(args, kv):
+    data_shape = (3, 32, 32) #28, 28)
 
 Review comment:
   Is this comment needed?
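  For reference, the chunked-download pattern in the quoted `download_file` (requests with `stream=True` plus `iter_content`) can be reproduced with only the standard library; `download_in_chunks` below is a hypothetical sketch, not code from the PR:

```python
import os
import tempfile
try:
    from urllib.request import urlopen  # Python 3
except ImportError:
    from urllib2 import urlopen  # Python 2

def download_in_chunks(url, local_fname, chunk_size=1024):
    """Stream url into local_fname without holding the whole body in memory."""
    resp = urlopen(url)
    with open(local_fname, 'wb') as f:
        while True:
            chunk = resp.read(chunk_size)
            if not chunk:  # empty read means EOF
                break
            f.write(chunk)
    return local_fname

# Demo against a local file:// URL so no network access is needed.
base = tempfile.mkdtemp()
src = os.path.join(base, "src.rec")
with open(src, "wb") as f:
    f.write(b"x" * 4096)
dst = download_in_chunks("file://" + src, os.path.join(base, "dst.rec"))
print(os.path.getsize(dst))  # -> 4096
```

  The fixed chunk size keeps peak memory flat regardless of file size, which matters for the CIFAR-10 record files fetched by this test.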




[GitHub] KellenSunderland opened a new pull request #11387: WIP2

2018-06-25 Thread GitBox
KellenSunderland opened a new pull request #11387: WIP2
URL: https://github.com/apache/incubator-mxnet/pull/11387
 
 
   WIP, please do not merge.




[GitHub] KellenSunderland commented on a change in pull request #11325: Added TensorRT runtime integration

2018-06-25 Thread GitBox
KellenSunderland commented on a change in pull request #11325: Added TensorRT 
runtime integration
URL: https://github.com/apache/incubator-mxnet/pull/11325#discussion_r197772385
 
 

 ##
 File path: Makefile
 ##
 @@ -94,6 +94,14 @@ else
 endif
 CFLAGS += -I$(TPARTYDIR)/mshadow/ -I$(TPARTYDIR)/dmlc-core/include -fPIC 
-I$(NNVM_PATH)/include -I$(DLPACK_PATH)/include -I$(TPARTYDIR)/tvm/include 
-Iinclude $(MSHADOW_CFLAGS)
 LDFLAGS = -pthread $(MSHADOW_LDFLAGS) $(DMLC_LDFLAGS)
+
+
+ifeq ($(USE_TENSORRT), 1)
 
 Review comment:
   We spoke offline about this, but just a quick note that we should also add the ability to build the MXNet-TensorRT integration in our CMake builds.




[GitHub] yyl199655 opened a new issue #11385: About loss

2018-06-25 Thread GitBox
yyl199655 opened a new issue #11385: About loss
URL: https://github.com/apache/incubator-mxnet/issues/11385
 
 
   
   If I want to add another loss function or a penalty term, how do I implement it? Do I need to write the backpropagation myself?




[GitHub] vrakesh commented on issue #11384: BucketingModule crash with parameters conditioned on bucket key

2018-06-25 Thread GitBox
vrakesh commented on issue #11384: BucketingModule crash with parameters 
conditioned on bucket key
URL: 
https://github.com/apache/incubator-mxnet/issues/11384#issuecomment-44062
 
 
   @fhieber Thank you for reporting the issue; we will look into it. @sandeep-krishnamurthy, requesting that the issue be labeled as a bug.




[GitHub] sandeep-krishnamurthy closed issue #11386: How to add out_grad in Regression_output-inl.h?

2018-06-25 Thread GitBox
sandeep-krishnamurthy closed issue #11386: How to add out_grad in 
Regression_output-inl.h?
URL: https://github.com/apache/incubator-mxnet/issues/11386
 
 
   




[GitHub] sandeep-krishnamurthy commented on issue #11386: How to add out_grad in Regression_output-inl.h?

2018-06-25 Thread GitBox
sandeep-krishnamurthy commented on issue #11386: How to add out_grad in 
Regression_output-inl.h?
URL: 
https://github.com/apache/incubator-mxnet/issues/11386#issuecomment-46426
 
 
   Hello @nihilityworld - For usage-related questions, please use the https://discuss.mxnet.io/ forum for a response from the wider MXNet community; this also helps future users searching for similar questions.
   
   Closing the issue here, please reopen if you face any issues.




[incubator-mxnet] branch master updated: [MXNET-533] MXNet-ONNX export (#11213)

2018-06-25 Thread skm
This is an automated email from the ASF dual-hosted git repository.

skm pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/master by this push:
 new 7d91602  [MXNET-533] MXNet-ONNX export (#11213)
7d91602 is described below

commit 7d91602ba771d973360f8a0c66c976c67f700aa3
Author: Roshani Nagmote 
AuthorDate: Mon Jun 25 09:43:20 2018 -0700

[MXNET-533] MXNet-ONNX export (#11213)

* Resolve conflicts

* Export module Test Framework

* refactoring export to work with pretrained models

* comments added

* 1. Refactored export module.
2. Refactored test framework to support ONNX backend tests.
3. Added Operator support:
   - Convolution2D
   - BatchNorm
   - Add

* Added Arithmetic operators:
- Add, Sub, Mul, Div, Sum

* Added operator support:
- sigmoid, relu, pad( constant, edge, reflect), tanh
- enabled corresponding ONNX backend tests.

* Enabled ONNX tests: test_conv, test_basic_conv

Added Operators :
Ceil, Floor

* Added support for:
MaxPool, AvgPool, GlobalMaxPool, GlobalAvgPool, matmul

* adding more operators

* Added Operator support:
ArgMax, ArgMin, maximum, minimum

* Enabled more BASIC_MODEL tests

* Added power operator tests

* Added support for reshape. ONNX only supports 0, -1  special values. 
Added only for these.
Fixed logic error with convert_string_to_list()

* some tests enabled

* enabling squeezenet

* LRN Op support

* mul_scalar modified to take scalar input

* cleaning some code

* Resolving conflicts on rebase

* Resolving rebase conflicts

* id mapping updated for all operators

* save onnx models added, some code cleanup

* enabled more tests

* conv pad calc fixed

* reshape op fix

* Added support for elu, leakyRelu, prelu

* Cleanup
- Removed run_node, not needed anymore.
- Used correct get_metadata api

* valueinfoproto fix, googlenet test added

* Removed redundant code.
- run_node
- Using correct get_metadata_api

* dilation added

* Lint fixes

* lint fixes

* some fixes to make export work with onnx 1.2.1

* enabled more tests

* mxnet_export_test file added

* duplicate file deleted

* reduce ops added

* some small fixes

* some lint fixes

* Add tests for inception_v1 and inception_v2

* Add CI runs for export module

* docstring added

* lint fixes, pooling attr fix

* fix

* fix global_pool

* CI  run fix

* code cleanup

* lint fix

* some code cleanup

* pad in pooling added

* slicechannel notimplementederror raised

* Added required license comments

* Lint fixes

* lint fix

* lint fix

* lint fix

* lint fix

* Correct license statement

* Adding onnx as a runtime dependency

* Fix import module error for string_types

* Making ONNX runtime dependency

* fixing some comments

* addressing some comments

* params rename

* lint fixes

* fixes

* spatial disabled, path fixed

* fixing some comments

* Added support for remaining act_type(softsign, sigmoid, softrelu) in 
Activation operator

* changing import

* adding some comments

* Add squeeze op

* Refactored logic to handle extra node(output label node) for saved mxnet 
model
Added comments

* minor fix for squeeze operator.
Also, added error handling

* identity operator added

* scalar ops added

* Renamed onnx support folders to mark it public folders
Changed underline files public or private as per usage

Resolved conflicts with the latest

* Added support L2Normalization op
Added some error checking

* added comments and warning

* added comments and warning

* doc API ref added
---
 LICENSE|   52 +-
 ci/docker/runtime_functions.sh |2 +
 docs/api/python/contrib/onnx.md|2 +
 python/mxnet/contrib/onnx/__init__.py  |5 +-
 python/mxnet/contrib/onnx/mx2onnx/LICENSE  |   44 +
 .../contrib/onnx/{_import => mx2onnx}/__init__.py  |   10 +-
 .../mxnet/contrib/onnx/mx2onnx/_export_helper.py   |   65 +
 .../mxnet/contrib/onnx/mx2onnx/_op_translations.py | 1863 
 python/mxnet/contrib/onnx/mx2onnx/export_model.py  |   95 +
 python/mxnet/contrib/onnx/mx2onnx/export_onnx.py   |  347 
 .../contrib/onnx/{_import => onnx2mx}/__init__.py  |  

[GitHub] sandeep-krishnamurthy closed pull request #11213: [MXNET-533] MXNet-ONNX export

2018-06-25 Thread GitBox
sandeep-krishnamurthy closed pull request #11213: [MXNET-533] MXNet-ONNX export
URL: https://github.com/apache/incubator-mxnet/pull/11213
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git a/LICENSE b/LICENSE
index 158bd37f278..a8b57e58376 100644
--- a/LICENSE
+++ b/LICENSE
@@ -298,8 +298,6 @@
 (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF 
THIS
 SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
 
-
-
 
===
 Other Licenses
 
===
@@ -512,3 +510,53 @@
 For details, see, 3rdparty/dmlc-core/include/dmlc/concurrentqueue.h
 
 
===
+
+11. ONNX Export module
+For details, see, python/mxnet/contrib/onnx/_export/LICENSE
+
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+#
+# Based on
+# https://github.com/NVIDIA/mxnet_to_onnx/blob/master/mx2onnx_converter/#
+#  Copyright (c) 2017, NVIDIA CORPORATION. All rights reserved.
+#
+#  Redistribution and use in source and binary forms, with or without
+#  modification, are permitted provided that the following conditions
+#  are met:
+#  * Redistributions of source code must retain the above copyright
+#notice, this list of conditions and the following disclaimer.
+#  * Redistributions in binary form must reproduce the above copyright
+#notice, this list of conditions and the following disclaimer in the
+#documentation and/or other materials provided with the distribution.
+#  * Neither the name of NVIDIA CORPORATION nor the names of its
+#contributors may be used to endorse or promote products derived
+#from this software without specific prior written permission.
+#
+#  THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS ``AS IS'' AND ANY
+#  EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+#  IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+#  PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL THE COPYRIGHT OWNER OR
+#  CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
+#  EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+#  PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
+#  PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
+#  OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+#  (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+#  OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+
diff --git a/ci/docker/runtime_functions.sh b/ci/docker/runtime_functions.sh
index 293ac64fff8..fa84a7cb2fd 100755
--- a/ci/docker/runtime_functions.sh
+++ b/ci/docker/runtime_functions.sh
@@ -601,6 +601,8 @@ integrationtest_ubuntu_cpu_onnx() {
pytest tests/python-pytest/onnx/import/mxnet_backend_test.py
pytest tests/python-pytest/onnx/import/onnx_import_test.py
pytest tests/python-pytest/onnx/import/gluon_backend_test.py
+   pytest tests/python-pytest/onnx/export/onnx_backend_test.py
+   python tests/python-pytest/onnx/export/mxnet_export_test.py
 }
 
 integrationtest_ubuntu_gpu_python() {
diff --git a/docs/api/python/contrib/onnx.md b/docs/api/python/contrib/onnx.md
index 6fb546fc2b4..8cd619809c1 100644
--- a/docs/api/python/contrib/onnx.md
+++ b/docs/api/python/contrib/onnx.md
@@ -24,6 +24,7 @@ This document describes all the ONNX-MXNet APIs.
 
 mxnet.contrib.onnx.import_model
 mxnet.contrib.onnx.get_model_metadata
+mxnet.contrib.onnx.export_model
 ```
 
 ## ONNX Tutorials
@@ -46,6 +47,7 @@ This document describes all the ONNX-MXNet APIs.
 .. automodule:: mxnet.contrib.onnx
 :members: import_model
 

[GitHub] kurman commented on a change in pull request #11205: Clojure Contrib Package

2018-06-25 Thread GitBox
kurman commented on a change in pull request #11205: Clojure Contrib Package
URL: https://github.com/apache/incubator-mxnet/pull/11205#discussion_r194590775
 
 

 ##
 File path: contrib/clojure-package/.gitignore
 ##
 @@ -0,0 +1,40 @@
+/target
+/classes
+/checkouts
+pom.xml
+pom.xml.asc
+*.jar
+*.class
+/.lein-*
+/.nrepl-port
+.hgignore
+.hg/
+data/*
+model/*
+*~
+*.params
+*.states
+*.json
+examples/module/data/*
+examples/module/target/*
+examples/rnn/data/char_lstm.zip
+examples/rnn/data/obama.txt
+examples/pre-trained-models/caltech-256/caltech-256-60-train.rec
+examples/pre-trained-models/caltech-256/caltech-256-60-val.rec
+examples/pre-trained-models/model/synset.txt
+examples/pre-trained-models/test-image.jpg
+examples/imclassification/data/*
+examples/gan/data/*
+examples/gan/results/*
+examples/cnn-text-classification/data/glove/*
+examples/cnn-text-classification/data/mr-data/*
+examples/multi-label/data/mnist.zip
+examples/multi-label/data/t10k-images-idx3-ubyte
+examples/multi-label/data/t10k-labels-idx1-ubyte
+examples/multi-label/data/train-images-idx3-ubyte
+examples/multi-label/data/train-labels-idx1-ubyte
+examples/visualization/test-vis/*
+examples/visualization/test-vis.pdf
+.DS_Store
+src/.DS_Store
+src/org/.DS_Store
 
 Review comment:
   Newline?




[GitHub] kurman commented on a change in pull request #11205: Clojure Contrib Package

2018-06-25 Thread GitBox
kurman commented on a change in pull request #11205: Clojure Contrib Package
URL: https://github.com/apache/incubator-mxnet/pull/11205#discussion_r197868019
 
 

 ##
 File path: 
contrib/clojure-package/examples/cnn-text-classification/src/cnn_text_classification/classifier.clj
 ##
 @@ -0,0 +1,112 @@
+;;
+;; Licensed to the Apache Software Foundation (ASF) under one or more
+;; contributor license agreements.  See the NOTICE file distributed with
+;; this work for additional information regarding copyright ownership.
+;; The ASF licenses this file to You under the Apache License, Version 2.0
+;; (the "License"); you may not use this file except in compliance with
+;; the License.  You may obtain a copy of the License at
+;;
+;;http://www.apache.org/licenses/LICENSE-2.0
+;;
+;; Unless required by applicable law or agreed to in writing, software
+;; distributed under the License is distributed on an "AS IS" BASIS,
+;; WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+;; See the License for the specific language governing permissions and
+;; limitations under the License.
+;;
+
+(ns cnn-text-classification.classifier
+  (:require [cnn-text-classification.data-helper :as data-helper]
+            [org.apache.clojure-mxnet.eval-metric :as eval-metric]
+            [org.apache.clojure-mxnet.io :as mx-io]
+            [org.apache.clojure-mxnet.module :as m]
+            [org.apache.clojure-mxnet.ndarray :as ndarray]
+            [org.apache.clojure-mxnet.optimizer :as optimizer]
+            [org.apache.clojure-mxnet.symbol :as sym]
+            [org.apache.clojure-mxnet.context :as context])
+  (:gen-class))
+
+(def mr-dataset-path "data/mr-data") ;; the MR polarity dataset path
+(def glove-file-path "data/glove/glove.6B.50d.txt")
+
+(defn shuffle-data [test-num {:keys [data label sentence-count sentence-size embedding-size]}]
+  (println "Shuffling the data and splitting into training and test sets")
+  (println {:sentence-count sentence-count
+            :sentence-size sentence-size
+            :embedding-size embedding-size})
+  (let [shuffled (shuffle (map (fn [d l] [d l]) data label))
+        train-num (- (count shuffled) test-num)
+        training (into [] (take train-num shuffled))
+        test (into [] (drop train-num shuffled))]
+    {:training {:data  (ndarray/array (into [] (flatten (mapv (fn [v] (first v)) training)))
 
 Review comment:
   You don't need fn here, `(mapv first training)` would be fine. 
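  Beyond the `mapv` simplification, the shuffle-and-split logic under review is easy to mirror in Python for readers less familiar with Clojure; this is an illustrative sketch (names are not from the PR), not a drop-in replacement:

```python
import random

def shuffle_and_split(data, labels, test_num, seed=0):
    """Pair samples with labels, shuffle the pairs, and split off test_num for test."""
    paired = list(zip(data, labels))     # (map (fn [d l] [d l]) data label)
    random.Random(seed).shuffle(paired)  # (shuffle ...)
    train_num = len(paired) - test_num   # (- (count shuffled) test-num)
    return paired[:train_num], paired[train_num:]

train, test = shuffle_and_split(list(range(10)), list("abcdefghij"), test_num=3)
print(len(train), len(test))  # -> 7 3
```

  Shuffling the (sample, label) pairs together, rather than shuffling the two sequences independently, is what keeps each sample aligned with its label.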




[GitHub] kurman commented on a change in pull request #11205: Clojure Contrib Package

2018-06-25 Thread GitBox
kurman commented on a change in pull request #11205: Clojure Contrib Package
URL: https://github.com/apache/incubator-mxnet/pull/11205#discussion_r197854812
 
 

 ##
 File path: contrib/clojure-package/README.md
 ##
 @@ -0,0 +1,203 @@
+# Clojure MXNet
+
+A clojure package to the MXNet Deep Learning library
+
+## Introduction
+
+MXNet is a first class, modern deep learning library that AWS has officially picked as its deep learning library of choice. It supports multiple languages on a first class basis and is incubating as an Apache project.
+
+The motivation for creating a Clojure package is to be able to open the deep 
learning library to the Clojure ecosystem and build bridges for future 
development and innovation for the community. It provides all the needed tools 
including low level and high level apis, dynamic graphs, and things like GAN 
and natural language support.
+
+For high leverage, the Clojure package has been built on the existing Scala 
package using interop. This has allowed rapid development and close parity with 
the Scala functionality. This also leaves the door open to directly developing 
code against the jni-bindings with Clojure in the future in an incremental 
fashion, using the test suites as a refactoring guide.
+
+## Current State and Plans
+
+The Clojure package is nearing the end of its first development milestone 
which is to achieve a close parity with the Scala package and to potentially be 
included into the main project for official Clojure language support.
+
+What is needed now is alpha testing on both OSX and Linux to discover any 
bugs, rough edges, and generally harden it before an official PR is opened on 
the main project.
+
+Help with this effort is greatly appreciated and contributors will be 
recognized in the project README.
+
+Testing instructions can be found in the Testing.md
+
+## Getting Started
+
+The following systems are supported:
+
+- OSX cpu
+- Linux cpu
+- Linux gpu
+
+There are two ways of getting going. The first way is the easiest and that is 
to use the pre-built jars from Maven. The second way is to build from source. 
In both cases, you will need to load the prereqs and dependencies, (like 
opencv).
+
+It's been tested on AWS Deep Learning AMI and OSX High Sierra 10.13.4
+
+
+### Prerequisites
+
+**If you are using the AWS Deep Learning Ubuntu or Linux AMI you should be 
good to go without doing anything on this step.**
+
+
+Follow the instructions from 
https://mxnet.incubator.apache.org/install/osx_setup.html or 
https://mxnet.incubator.apache.org/install/ubuntu_setup.html
+about _Prepare Environment for GPU Installation_
+and _Install MXNet dependencies_
+
+
+#### Cloning the repo and running from source
+
+To use the prebuilt jars, you will need to replace the native version of the 
line in the project dependencies with your configuration.
+
+`[org.apache.mxnet/mxnet-full_2.11-linux-x86_64-gpu "1.2.0"]`
+or
+`[org.apache.mxnet/mxnet-full_2.11-linux-x86_64-cpu "1.2.0"]`
+or
+`[org.apache.mxnet/mxnet-full_2.11-osx-x86_64-cpu "1.2.0"]`
+
+
+```clojure
+
+(ns tutorial.ndarray
+  (:require [org.apache.clojure-mxnet.ndarray :as ndarray]
+[org.apache.clojure-mxnet.context :as context]))
+
+;;Create NDArray
+(def a (ndarray/zeros [100 50])) ;;all zero array of dimension 100 x 50
+(def b (ndarray/ones [256 32 128 1])) ;; all one array of dimension
+(def c (ndarray/array [1 2 3 4 5 6] [2 3])) ;; array with contents of a shape 2 x 3
+
+;;; There are also ways to convert to a vec or get the shape as an object or vec
+(ndarray/->vec c) ;=> [1.0 2.0 3.0 4.0 5.0 6.0]
+```
+
+See the examples/tutorial section for more.
+
+
+The jars from Maven ship with the needed MXNet native binaries in them. On startup, the native libraries are extracted from the jar and copied into a temporary location on your path. On termination, they are deleted.
+
+If you want details on the flags (opencv version and cuda version of the jars), they are documented here https://cwiki.apache.org/confluence/display/MXNET/MXNet-Scala+Release+Process
+
+
+### Build from MXNET Source
+
+Checkout the latest sha from the main package
+
+`git clone --recursive https://github.com/dmlc/mxnet ~/mxnet`
+`cd ~/mxnet`
+
+
+`git checkout tags/1.2.0 -b release-1.2.0`
+
+`git submodule update --init --recursive`
+
+Sometimes it is useful to use this script to do a hard clean:
+https://gist.github.com/nicktoumpelis/11214362
+
+
+Go here to do the base package installation 
https://mxnet.incubator.apache.org/install/index.html
+
+#### Run `make scalapkg` then `make scalainstall`
+
+then replace the correct jar for your architecture in the project.clj, example 
`[ml.dmlc.mxnet/mxnet-full_2.11-osx-x86_64-cpu "1.0.1-SNAPSHOT"]`
+
+#### Test your installation
+
+To test your installation, you should run `lein test`. This will run the test 
suite (CPU) for the clojure package.
+
+
+#### Generation of NDArray and Symbol apis
+
+The bulk of the ndarray and symbol apis are generated via java reflection 

[GitHub] kurman commented on a change in pull request #11205: Clojure Contrib Package

2018-06-25 Thread GitBox
kurman commented on a change in pull request #11205: Clojure Contrib Package
URL: https://github.com/apache/incubator-mxnet/pull/11205#discussion_r197852862
 
 

 ##
 File path: .gitignore
 ##
 @@ -170,3 +170,5 @@ tests/mxnet_unit_tests
 # generated wrappers for ccache
 cc
 cxx
+contrib/clojure-package/test/test-ndarray.clj
+contrib/clojure-package/test/test-symbol.clj
 
 Review comment:
   newline?




[GitHub] kurman commented on a change in pull request #11205: Clojure Contrib Package

2018-06-25 Thread GitBox
kurman commented on a change in pull request #11205: Clojure Contrib Package
URL: https://github.com/apache/incubator-mxnet/pull/11205#discussion_r194591348
 
 

 ##
 File path: contrib/clojure-package/scripts/get_cifar_data.sh
 ##
 @@ -0,0 +1,38 @@
+#!/bin/bash
+
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+
+set -e
 
 Review comment:
   I'd add -vx options to bash scripts in addition to -e




[GitHub] kurman commented on a change in pull request #11205: Clojure Contrib Package

2018-06-25 Thread GitBox
kurman commented on a change in pull request #11205: Clojure Contrib Package
URL: https://github.com/apache/incubator-mxnet/pull/11205#discussion_r197870886
 
 

 ##
 File path: 
contrib/clojure-package/examples/cnn-text-classification/src/cnn_text_classification/classifier.clj
 ##
 @@ -0,0 +1,112 @@
+;;
+;; Licensed to the Apache Software Foundation (ASF) under one or more
+;; contributor license agreements.  See the NOTICE file distributed with
+;; this work for additional information regarding copyright ownership.
+;; The ASF licenses this file to You under the Apache License, Version 2.0
+;; (the "License"); you may not use this file except in compliance with
+;; the License.  You may obtain a copy of the License at
+;;
+;;http://www.apache.org/licenses/LICENSE-2.0
+;;
+;; Unless required by applicable law or agreed to in writing, software
+;; distributed under the License is distributed on an "AS IS" BASIS,
+;; WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+;; See the License for the specific language governing permissions and
+;; limitations under the License.
+;;
+
+(ns cnn-text-classification.classifier
+  (:require [cnn-text-classification.data-helper :as data-helper]
+[org.apache.clojure-mxnet.eval-metric :as eval-metric]
+[org.apache.clojure-mxnet.io :as mx-io]
+[org.apache.clojure-mxnet.module :as m]
+[org.apache.clojure-mxnet.ndarray :as ndarray]
+[org.apache.clojure-mxnet.optimizer :as optimizer]
+[org.apache.clojure-mxnet.symbol :as sym]
+[org.apache.clojure-mxnet.context :as context])
+  (:gen-class))
+
+(def mr-dataset-path "data/mr-data") ;; the MR polarity dataset path
+(def glove-file-path "data/glove/glove.6B.50d.txt")
+
+(defn shuffle-data [test-num {:keys [data label sentence-count sentence-size 
embedding-size]}]
+  (println "Shuffling the data and splitting into training and test sets")
+  (println {:sentence-count sentence-count
+:sentence-size sentence-size
+:embedding-size embedding-size})
+  (let [shuffled (shuffle (map (fn [d l] [d l]) data label))
+train-num (- (count shuffled) test-num)
+training (into [] (take train-num shuffled))
+test (into [] (drop train-num shuffled))]
+{:training {:data  (ndarray/array (into [] (flatten (mapv (fn [v] (first 
v)) training)))
+  [train-num 1 sentence-size 
embedding-size]) ;; has to be channel x y
+:label (ndarray/array (into [] (flatten (mapv (fn [v] (last v) 
) training)))
+  [train-num])}
+ :test {:data  (ndarray/array (into [] (flatten (mapv (fn [v] (first v)) 
test)))
+  [test-num 1 sentence-size embedding-size]) 
;; has to be channel x y
+:label (ndarray/array (into [] (flatten (mapv (fn [v] (last v) ) 
test)))
+  [test-num])}}))
+
+;;; convnet with multiple filter sizes
+;; from Convolutional Neural Networks for Sentence Classification by Yoon Kim
+(defn get-multi-filter-convnet [num-embed sentence-size batch-size]
+  (let [filter-list [3 4 5]
+num-filter 100
+num-label 2
+dropout 0.5
+input-x (sym/variable "data")
+polled-outputs (mapv (fn [filter-size]
+   (as-> (sym/convolution {:data input-x
+   :kernel [filter-size 
num-embed]
+   :num-filter 
num-filter}) data
+ (sym/activation {:data data :act-type "relu"})
+ (sym/pooling {:data data
+   :pool-type "max"
+   :kernel [(inc (- sentence-size 
filter-size)) 1]
+   :stride [1 1]})))
+ filter-list)
+total-filters (* num-filter (count filter-list))
+concat (sym/concat "concat" nil polled-outputs {:dim 1})
+hpool (sym/reshape "hpool" {:data concat :target-shape [batch-size 
total-filters]})
+hdrop (if (> dropout 0) (sym/dropout "hdrop" {:data hpool :p dropout}) 
hpool)
 
 Review comment:
   `(> dropout 0)` ->  `(pos? dropout)`




[GitHub] kurman commented on a change in pull request #11205: Clojure Contrib Package

2018-06-25 Thread GitBox
kurman commented on a change in pull request #11205: Clojure Contrib Package
URL: https://github.com/apache/incubator-mxnet/pull/11205#discussion_r197853479
 
 

 ##
 File path: contrib/clojure-package/Testing.md
 ##
 @@ -0,0 +1,23 @@
+## Help with Testing
 
 Review comment:
   nitpick: can the filename be all lowercase or all uppercase?




[GitHub] kurman commented on a change in pull request #11205: Clojure Contrib Package

2018-06-25 Thread GitBox
kurman commented on a change in pull request #11205: Clojure Contrib Package
URL: https://github.com/apache/incubator-mxnet/pull/11205#discussion_r197869011
 
 

 ##
 File path: 
contrib/clojure-package/examples/cnn-text-classification/src/cnn_text_classification/classifier.clj
 ##
 @@ -0,0 +1,112 @@
+;;
+;; Licensed to the Apache Software Foundation (ASF) under one or more
+;; contributor license agreements.  See the NOTICE file distributed with
+;; this work for additional information regarding copyright ownership.
+;; The ASF licenses this file to You under the Apache License, Version 2.0
+;; (the "License"); you may not use this file except in compliance with
+;; the License.  You may obtain a copy of the License at
+;;
+;;http://www.apache.org/licenses/LICENSE-2.0
+;;
+;; Unless required by applicable law or agreed to in writing, software
+;; distributed under the License is distributed on an "AS IS" BASIS,
+;; WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+;; See the License for the specific language governing permissions and
+;; limitations under the License.
+;;
+
+(ns cnn-text-classification.classifier
+  (:require [cnn-text-classification.data-helper :as data-helper]
+[org.apache.clojure-mxnet.eval-metric :as eval-metric]
+[org.apache.clojure-mxnet.io :as mx-io]
+[org.apache.clojure-mxnet.module :as m]
+[org.apache.clojure-mxnet.ndarray :as ndarray]
+[org.apache.clojure-mxnet.optimizer :as optimizer]
+[org.apache.clojure-mxnet.symbol :as sym]
+[org.apache.clojure-mxnet.context :as context])
+  (:gen-class))
+
+(def mr-dataset-path "data/mr-data") ;; the MR polarity dataset path
+(def glove-file-path "data/glove/glove.6B.50d.txt")
+
+(defn shuffle-data [test-num {:keys [data label sentence-count sentence-size 
embedding-size]}]
+  (println "Shuffling the data and splitting into training and test sets")
+  (println {:sentence-count sentence-count
+:sentence-size sentence-size
+:embedding-size embedding-size})
+  (let [shuffled (shuffle (map (fn [d l] [d l]) data label))
+train-num (- (count shuffled) test-num)
+training (into [] (take train-num shuffled))
+test (into [] (drop train-num shuffled))]
+{:training {:data  (ndarray/array (into [] (flatten (mapv (fn [v] (first 
v)) training)))
+  [train-num 1 sentence-size 
embedding-size]) ;; has to be channel x y
+:label (ndarray/array (into [] (flatten (mapv (fn [v] (last v) 
) training)))
+  [train-num])}
+ :test {:data  (ndarray/array (into [] (flatten (mapv (fn [v] (first v)) 
test)))
+  [test-num 1 sentence-size embedding-size]) 
;; has to be channel x y
+:label (ndarray/array (into [] (flatten (mapv (fn [v] (last v) ) 
test)))
+  [test-num])}}))
+
+;;; convnet with multiple filter sizes
+;; from Convolutional Neural Networks for Sentence Classification by Yoon Kim
+(defn get-multi-filter-convnet [num-embed sentence-size batch-size]
+  (let [filter-list [3 4 5]
+num-filter 100
+num-label 2
+dropout 0.5
+input-x (sym/variable "data")
+polled-outputs (mapv (fn [filter-size]
 
 Review comment:
   Can we extract named fn that we use in mapv?




[GitHub] kurman commented on a change in pull request #11205: Clojure Contrib Package

2018-06-25 Thread GitBox
kurman commented on a change in pull request #11205: Clojure Contrib Package
URL: https://github.com/apache/incubator-mxnet/pull/11205#discussion_r197866693
 
 

 ##
 File path: 
contrib/clojure-package/examples/cnn-text-classification/src/cnn_text_classification/classifier.clj
 ##
 @@ -0,0 +1,112 @@
+;;
+;; Licensed to the Apache Software Foundation (ASF) under one or more
+;; contributor license agreements.  See the NOTICE file distributed with
+;; this work for additional information regarding copyright ownership.
+;; The ASF licenses this file to You under the Apache License, Version 2.0
+;; (the "License"); you may not use this file except in compliance with
+;; the License.  You may obtain a copy of the License at
+;;
+;;http://www.apache.org/licenses/LICENSE-2.0
+;;
+;; Unless required by applicable law or agreed to in writing, software
+;; distributed under the License is distributed on an "AS IS" BASIS,
+;; WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+;; See the License for the specific language governing permissions and
+;; limitations under the License.
+;;
+
+(ns cnn-text-classification.classifier
+  (:require [cnn-text-classification.data-helper :as data-helper]
+[org.apache.clojure-mxnet.eval-metric :as eval-metric]
+[org.apache.clojure-mxnet.io :as mx-io]
+[org.apache.clojure-mxnet.module :as m]
+[org.apache.clojure-mxnet.ndarray :as ndarray]
+[org.apache.clojure-mxnet.optimizer :as optimizer]
+[org.apache.clojure-mxnet.symbol :as sym]
+[org.apache.clojure-mxnet.context :as context])
+  (:gen-class))
+
+(def mr-dataset-path "data/mr-data") ;; the MR polarity dataset path
+(def glove-file-path "data/glove/glove.6B.50d.txt")
+
+(defn shuffle-data [test-num {:keys [data label sentence-count sentence-size 
embedding-size]}]
+  (println "Shuffling the data and splitting into training and test sets")
+  (println {:sentence-count sentence-count
+:sentence-size sentence-size
+:embedding-size embedding-size})
+  (let [shuffled (shuffle (map (fn [d l] [d l]) data label))
+train-num (- (count shuffled) test-num)
+training (into [] (take train-num shuffled))
 
 Review comment:
   Might be worth having specific coding convention. I would suggest 
https://github.com/bbatsov/clojure-style-guide




[GitHub] zhanghang1989 commented on issue #9989: Cannot train example gluon style transfer

2018-06-25 Thread GitBox
zhanghang1989 commented on issue #9989: Cannot train example gluon style 
transfer
URL: 
https://github.com/apache/incubator-mxnet/issues/9989#issuecomment-400034208
 
 
  The Gluon interface enables applying the model to images of any size. This 
model is not hybridizable due to the lack of some operations.




[GitHub] thirdwing commented on issue #11374: [MXNET-563] Refactor R optimizers to fix memory leak

2018-06-25 Thread GitBox
thirdwing commented on issue #11374: [MXNET-563] Refactor R optimizers to fix 
memory leak
URL: https://github.com/apache/incubator-mxnet/pull/11374#issuecomment-400023911
 
 
   @hetong007 Please take a look at this.




[GitHub] eric-haibin-lin commented on issue #11223: Allow specifying AdaGrad initial accumulator value

2018-06-25 Thread GitBox
eric-haibin-lin commented on issue #11223: Allow specifying AdaGrad initial 
accumulator value
URL: https://github.com/apache/incubator-mxnet/pull/11223#issuecomment-400028969
 
 
   I mean setting eps to `1+1e-7` seems equivalent to setting 
initial_accumulator_value to 1




[GitHub] gigasquid commented on issue #11205: Clojure Contrib Package

2018-06-25 Thread GitBox
gigasquid commented on issue #11205: Clojure Contrib Package
URL: https://github.com/apache/incubator-mxnet/pull/11205#issuecomment-400029200
 
 
   Thanks @kurman - I appreciate you going through it, I realize that it is a 
_big_ PR. I'll wait until you are done to address the feedback  




[GitHub] eric-haibin-lin edited a comment on issue #11223: Allow specifying AdaGrad initial accumulator value

2018-06-25 Thread GitBox
eric-haibin-lin edited a comment on issue #11223: Allow specifying AdaGrad 
initial accumulator value
URL: https://github.com/apache/incubator-mxnet/pull/11223#issuecomment-400028969
 
 
   I mean setting eps to `1+1e-7` (without changing initial state explicitly) 
is equivalent to setting initial_accumulator_value to 1
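The claimed equivalence can be checked numerically. Below is a minimal sketch; `adagrad_step` is a hypothetical helper, and it assumes the epsilon term is applied inside the square root of the AdaGrad denominator (if epsilon is added outside the square root instead, the two settings are no longer equivalent):

```python
import math

def adagrad_step(history, grad, eps, lr=0.1):
    """One AdaGrad update, assuming eps is added inside the sqrt
    (hypothetical helper; not the actual MXNet implementation)."""
    history = history + grad * grad
    return lr * grad / math.sqrt(history + eps), history

# eps folded into the epsilon term vs. an explicit initial accumulator of 1
step_a, _ = adagrad_step(0.0, 0.5, eps=1 + 1e-7)
step_b, _ = adagrad_step(1.0, 0.5, eps=1e-7)
print(abs(step_a - step_b))
```

Both calls divide by `sqrt(0.25 + 1 + 1e-7)`, so the updates coincide, which is the point of the comment above.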




[GitHub] haojin2 commented on issue #11326: [MXNET-381] Enhancement of take operator

2018-06-25 Thread GitBox
haojin2 commented on issue #11326: [MXNET-381] Enhancement of take operator
URL: https://github.com/apache/incubator-mxnet/pull/11326#issuecomment-400042643
 
 
   @piiswrong @reminisce @anirudh2290 @rahul003 ping for review




[GitHub] eric-haibin-lin opened a new issue #11388: flaky test: test_operator_gpu.test_conv

2018-06-25 Thread GitBox
eric-haibin-lin opened a new issue #11388: flaky test: 
test_operator_gpu.test_conv
URL: https://github.com/apache/incubator-mxnet/issues/11388
 
 
   ```
   test_operator_gpu.test_conv ... [00:29:53] 
c:\jenkins_slave\workspace\build-gpu\src\operator\nn\cudnn\./cudnn_algoreg-inl.h:107:
 Running performance tests to find the best convolution algorithm, this can 
take a while... (setting env variable MXNET_CUDNN_AUTOTUNE_DEFAULT to 0 to 
disable)
   [00:29:53] 
C:/jenkins_slave/workspace/build-gpu/src/operator/nn/convolution.cu:148: This 
convolution is not supported by cudnn, MXNET convolution is applied.
   [00:29:53] 
C:/jenkins_slave/workspace/build-gpu/src/operator/nn/convolution.cu:227: This 
convolution is not supported by cudnn, MXNET convolution is applied.
   [00:29:53] 
C:/jenkins_slave/workspace/build-gpu/src/operator/nn/convolution.cu:148: This 
convolution is not supported by cudnn, MXNET convolution is applied.
   [00:29:53] 
C:/jenkins_slave/workspace/build-gpu/src/operator/nn/convolution.cu:227: This 
convolution is not supported by cudnn, MXNET convolution is applied.
   [INFO] Setting test np/mx/python random seeds, use 
MXNET_TEST_SEED=1207962136 to reproduce.
   [INFO] Setting test np/mx/python random seeds, use 
MXNET_TEST_SEED=2083930564 to reproduce.
   ERROR
   ```
   
http://jenkins.mxnet-ci.amazon-ml.com/blue/organizations/jenkins/incubator-mxnet/detail/master/1043/pipeline
   
   
http://jenkins.mxnet-ci.amazon-ml.com/blue/organizations/jenkins/incubator-mxnet/detail/PR-11360/17/pipeline
   




[GitHub] vrakesh commented on issue #11385: About loss

2018-06-25 Thread GitBox
vrakesh commented on issue #11385: About loss
URL: 
https://github.com/apache/incubator-mxnet/issues/11385#issuecomment-43504
 
 
   Hi @yyl199655 thanks for the question, @sandeep-krishnamurthy requesting 
this be labeled under Question




[GitHub] marcoabreu commented on issue #11363: CI error: cannot stat 'nosetests_mkl.xml': No such file or directory

2018-06-25 Thread GitBox
marcoabreu commented on issue #11363: CI error: cannot stat 
'nosetests_mkl.xml': No such file or directory
URL: 
https://github.com/apache/incubator-mxnet/issues/11363#issuecomment-400041148
 
 
   Hello,
   this is caused by the tests failing. Sometimes, we run multiple test suites 
in sequence. If the first suite fails to run, the following suites are not 
run and thus no report is created. Everything is working as 
intended. But I agree that this is misleading - I will see whether I can make 
the message about storing the test result appear as "Skipped" instead of 
"error". 




[GitHub] marcoabreu closed issue #11363: CI error: cannot stat 'nosetests_mkl.xml': No such file or directory

2018-06-25 Thread GitBox
marcoabreu closed issue #11363: CI error: cannot stat 'nosetests_mkl.xml': No 
such file or directory
URL: https://github.com/apache/incubator-mxnet/issues/11363
 
 
   




[GitHub] xinyu-intel commented on issue #11049: Add linux and macos MKLDNN Building Instruction

2018-06-25 Thread GitBox
xinyu-intel commented on issue #11049: Add linux and macos MKLDNN Building 
Instruction
URL: https://github.com/apache/incubator-mxnet/pull/11049#issuecomment-46286
 
 
   @zheng-da @szha Please take a review again, if any questions please let me 
know. Thanks!




[GitHub] sandeep-krishnamurthy closed issue #6312: Running image-classification example failed with opencv issue

2018-06-25 Thread GitBox
sandeep-krishnamurthy closed issue #6312: Running image-classification example 
failed with opencv issue
URL: https://github.com/apache/incubator-mxnet/issues/6312
 
 
   




[GitHub] sandeep-krishnamurthy commented on issue #11213: [MXNET-533] MXNet-ONNX export

2018-06-25 Thread GitBox
sandeep-krishnamurthy commented on issue #11213: [MXNET-533] MXNet-ONNX export
URL: https://github.com/apache/incubator-mxnet/pull/11213#issuecomment-400017836
 
 
   LGTM. Merging the changes.




[GitHub] marcoabreu commented on a change in pull request #11383: [MXNET-565]change wget into java download

2018-06-25 Thread GitBox
marcoabreu commented on a change in pull request #11383: [MXNET-565]change wget 
into java download
URL: https://github.com/apache/incubator-mxnet/pull/11383#discussion_r197851003
 
 

 ##
 File path: 
scala-package/examples/src/test/scala/org/apache/mxnetexamples/infer/imageclassifier/ImageClassifierExampleSuite.scala
 ##
 @@ -38,18 +40,30 @@ class ImageClassifierExampleSuite extends FunSuite with 
BeforeAndAfterAll {
 val tempDirPath = System.getProperty("java.io.tmpdir")
 logger.info("tempDirPath: %s".format(tempDirPath))
 
-Process("wget https://s3.us-east-2.amazonaws.com/scala-infer-models" +
-  "/resnet-18/resnet-18-symbol.json " + "-P " + tempDirPath + "/resnet18/ 
-q") !
+val baseUrl = "https://s3.us-east-2.amazonaws.com/scala-infer-models"
 
-Process("wget https://s3.us-east-2.amazonaws.com/scala-infer-models"
-  + "/resnet-18/resnet-18-.params " + "-P " + tempDirPath + 
"/resnet18/ -q") !
-
-Process("wget https://s3.us-east-2.amazonaws.com/scala-infer-models" +
-  "/resnet-18/synset.txt -P " + tempDirPath + "/resnet18/ -q") !
-
-Process("wget " +
-  "https://s3.amazonaws.com/model-server/inputs/Pug-Cookie.jpg " +
-  "-P " + tempDirPath + "/inputImages/") !
+var tmpFile = new File(tempDirPath + "/resnet18/resnet-18-symbol.json")
+if (!tmpFile.exists()) {
 
 Review comment:
   In the past, we had problems with corrupt files being downloaded, so this 
check could cause problems. What do you think about moving this into a helper 
function which has this logic as well as hash-validation? 
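   One way to sketch such a helper (shown here in Python rather than Scala for brevity; `md5` and `download_if_needed` are hypothetical names): re-download whenever the file is missing or its checksum does not match, and fail loudly if the fresh download is still corrupt.

```python
import hashlib
import os
import urllib.request

def md5(path):
    """Hex-encoded MD5 of a file, read in chunks."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def download_if_needed(url, dest, expected_md5):
    """Skip the download when a file with the expected hash already exists;
    otherwise fetch it and verify the result."""
    if os.path.exists(dest) and md5(dest) == expected_md5:
        return dest
    d = os.path.dirname(dest)
    if d:
        os.makedirs(d, exist_ok=True)
    urllib.request.urlretrieve(url, dest)
    if md5(dest) != expected_md5:
        raise IOError("corrupt download: " + dest)
    return dest
```

   The hash check both guards against corrupt downloads and makes repeated test runs cheap, since a previously verified file is reused.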




[GitHub] szha commented on issue #11385: About loss

2018-06-25 Thread GitBox
szha commented on issue #11385: About loss
URL: 
https://github.com/apache/incubator-mxnet/issues/11385#issuecomment-46788
 
 
   @yyl199655 if you use Gluon, you can just write whatever penalty term you 
want using the NDArray interface inside the training loop. Only the forward 
computation is needed, as long as the function is differentiable




[GitHub] sandeep-krishnamurthy closed issue #8203: How to release the GPU?!

2018-06-25 Thread GitBox
sandeep-krishnamurthy closed issue #8203: How to release the GPU?!
URL: https://github.com/apache/incubator-mxnet/issues/8203
 
 
   




[GitHub] larroy commented on issue #9507: Segmentation Fault

2018-06-25 Thread GitBox
larroy commented on issue #9507: Segmentation Fault
URL: 
https://github.com/apache/incubator-mxnet/issues/9507#issuecomment-400013857
 
 
We also encountered this issue. Thanks for the backtrace! If you use the 
naive engine, I think it won't crash: 
https://mxnet.incubator.apache.org/faq/env_var.html
   
   Could you confirm?
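   For reference, the naive engine is selected with the `MXNET_ENGINE_TYPE` environment variable documented on that page. A minimal sketch (the variable must be set before `mxnet` is imported; the repro step is left as a comment):

```python
import os

# NaiveEngine executes operators synchronously on the calling thread,
# which often turns an asynchronous crash into a readable stack trace.
os.environ["MXNET_ENGINE_TYPE"] = "NaiveEngine"

# ... only now import mxnet and run the code that segfaults, e.g.:
# import mxnet as mx
print(os.environ["MXNET_ENGINE_TYPE"])
```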
   
   




[GitHub] vrakesh commented on issue #11386: How to add out_grad in Regression_output-inl.h?

2018-06-25 Thread GitBox
vrakesh commented on issue #11386: How to add out_grad in 
Regression_output-inl.h?
URL: 
https://github.com/apache/incubator-mxnet/issues/11386#issuecomment-44545
 
 
   @nihilityworld thank you for your question, requesting this be labeled under 
questions @sandeep-krishnamurthy 




[GitHub] mkolod commented on issue #11380: Add ability to query cuDNN BatchNorm min. epsilon. Allow ONNX importer to use cuDNN BN if chosen eps >= cuDNN min. eps.

2018-06-25 Thread GitBox
mkolod commented on issue #11380: Add ability to query cuDNN BatchNorm min. 
epsilon. Allow ONNX importer to use cuDNN BN if chosen eps >= cuDNN min. eps.
URL: https://github.com/apache/incubator-mxnet/pull/11380#issuecomment-400014395
 
 
   @Roshrini ^^ in case you're interested.




[GitHub] kurman commented on a change in pull request #11205: Clojure Contrib Package

2018-06-25 Thread GitBox
kurman commented on a change in pull request #11205: Clojure Contrib Package
URL: https://github.com/apache/incubator-mxnet/pull/11205#discussion_r197855350
 
 

 ##
 File path: contrib/clojure-package/README.md
 ##
 @@ -0,0 +1,203 @@
+# Clojure MXNet
+
+A Clojure package for the MXNet Deep Learning library
+
+## Introduction
+
+MXNet is a first-class, modern deep learning library that AWS has officially 
picked as its deep learning library of choice. It supports multiple languages 
on a first-class basis and is incubating as an Apache project.
+
+The motivation for creating a Clojure package is to be able to open the deep 
learning library to the Clojure ecosystem and build bridges for future 
development and innovation for the community. It provides all the needed tools 
including low level and high level apis, dynamic graphs, and things like GAN 
and natural language support.
+
+For high leverage, the Clojure package has been built on the existing Scala 
package using interop. This has allowed rapid development and close parity with 
the Scala functionality. This also leaves the door open to directly developing 
code against the jni-bindings with Clojure in the future in an incremental 
fashion, using the test suites as a refactoring guide.
+
+## Current State and Plans
+
+The Clojure package is nearing the end of its first development milestone 
which is to achieve a close parity with the Scala package and to potentially be 
included into the main project for official Clojure language support.
+
+What is needed now is alpha testing on both OSX and Linux to discover any 
bugs, rough edges, and generally harden it before an official PR is opened on 
the main project.
+
+Help with this effort is greatly appreciated and contributors will be 
recognized in the project README.
+
+Testing instructions can be found in the Testing.md
+
+## Getting Started
+
+The following systems are supported:
+
+- OSX cpu
+- Linux cpu
+- Linux gpu
+
+There are two ways of getting going. The first way is the easiest and that is 
to use the pre-built jars from Maven. The second way is to build from source. 
In both cases, you will need to install the prerequisites and dependencies 
(like OpenCV).
+
+It's been tested on AWS Deep Learning AMI and OSX High Sierra 10.13.4
+
+
+### Prerequisites
+
+**If you are using the AWS Deep Learning Ubuntu or Linux AMI you should be 
good to go without doing anything on this step.**
+
+
+Follow the instructions from 
https://mxnet.incubator.apache.org/install/osx_setup.html or 
https://mxnet.incubator.apache.org/install/ubuntu_setup.html
+about _Prepare Environment for GPU Installation_
+and _Install MXNet dependencies_
+
+
+#### Cloning the repo and running from source
+
+To use the prebuilt jars, you will need to replace the native version of the 
line in the project dependencies with your configuration.
+
+`[org.apache.mxnet/mxnet-full_2.11-linux-x86_64-gpu "1.2.0"]`
+or
+`[org.apache.mxnet/mxnet-full_2.11-linux-x86_64-cpu "1.2.0"]`
+or
+`[org.apache.mxnet/mxnet-full_2.11-osx-x86_64-cpu "1.2.0"]`
+
+
+```clojure
+
+(ns tutorial.ndarray
+  (:require [org.apache.clojure-mxnet.ndarray :as ndarray]
+[org.apache.clojure-mxnet.context :as context]))
+
+;;Create NDArray
+(def a (ndarray/zeros [100 50])) ;; all-zero array of dimension 100 x 50
+(def b (ndarray/ones [256 32 128 1])) ;; all-one array of the given dimension
+(def c (ndarray/array [1 2 3 4 5 6] [2 3])) ;; 2 x 3 array with the given 
contents
+
+;;; There are also ways to convert to a vec or get the shape as an object or 
vec
+(ndarray/->vec c) ;=> [1.0 2.0 3.0 4.0 5.0 6.0]
+```
+
+See the examples/tutorial section for more.
+
+
+The jars from Maven contain the needed MXNet native binaries. On startup, 
the native libraries are extracted from the jar and copied into a temporary 
location on your path. On termination, they are deleted.
+
+If you want details on the flags (the OpenCV version and CUDA version of the 
jars), they are documented here: 
https://cwiki.apache.org/confluence/display/MXNET/MXNet-Scala+Release+Process
+
+
+### Build from MXNet Source
+
+Check out the latest SHA from the main repo
+
+`git clone --recursive https://github.com/dmlc/mxnet ~/mxnet`
+`cd ~/mxnet`
+
+
+`git checkout tags/1.2.0 -b release-1.2.0`
+
+`git submodule update --init --recursive`
+
+Sometimes it is useful to use this script to do a hard clean:
+https://gist.github.com/nicktoumpelis/11214362
+
+
+Go here to do the base package installation 
https://mxnet.incubator.apache.org/install/index.html
+
+#### Run `make scalapkg` then `make scalainstall`
+
+Then replace the correct jar for your architecture in the project.clj, for example 
`[ml.dmlc.mxnet/mxnet-full_2.11-osx-x86_64-cpu "1.0.1-SNAPSHOT"]`
+
+#### Test your installation
+
+To test your installation, you should run `lein test`. This will run the test 
suite (CPU) for the Clojure package.
+
+
+#### Generation of the NDArray and Symbol APIs
+
+The bulk of the NDArray and Symbol APIs are generated via Java reflection 

[GitHub] kurman commented on a change in pull request #11205: Clojure Contrib Package

2018-06-25 Thread GitBox
kurman commented on a change in pull request #11205: Clojure Contrib Package
URL: https://github.com/apache/incubator-mxnet/pull/11205#discussion_r197854610
 
 

 ##
 File path: contrib/clojure-package/README.md
 ##
 @@ -0,0 +1,203 @@
+# Clojure MXNet
+
+A Clojure package for the MXNet deep learning library
+
+## Introduction
+
+MXNet is a first-class, modern deep learning library that AWS has officially 
picked as its deep learning library of choice. It supports multiple languages on 
a first-class basis and is incubating as an Apache project.
+
+The motivation for creating a Clojure package is to be able to open the deep 
learning library to the Clojure ecosystem and build bridges for future 
development and innovation for the community. It provides all the needed tools 
including low-level and high-level APIs, dynamic graphs, and things like GAN 
and natural language support.
+
+For high leverage, the Clojure package has been built on the existing Scala 
package using interop. This has allowed rapid development and close parity with 
the Scala functionality. This also leaves the door open to directly developing 
code against the JNI bindings with Clojure in the future, in an incremental 
fashion, using the test suites as a refactoring guide.
+
+## Current State and Plans
+
+The Clojure package is nearing the end of its first development milestone 
which is to achieve close parity with the Scala package and potentially to be 
included in the main project for official Clojure language support.
+
+What is needed now is alpha testing on both OSX and Linux to discover any 
bugs, rough edges, and generally harden it before an official PR is opened on 
the main project.
+
+Help with this effort is greatly appreciated and contributors will be 
recognized in the project README.
+
+Testing instructions can be found in Testing.md.
+
+## Getting Started
+
+The following systems are supported:
+

[GitHub] kurman commented on a change in pull request #11205: Clojure Contrib Package

2018-06-25 Thread GitBox
kurman commented on a change in pull request #11205: Clojure Contrib Package
URL: https://github.com/apache/incubator-mxnet/pull/11205#discussion_r197868359
 
 

 ##
 File path: 
contrib/clojure-package/examples/cnn-text-classification/src/cnn_text_classification/classifier.clj
 ##
 @@ -0,0 +1,112 @@
+;;
+;; Licensed to the Apache Software Foundation (ASF) under one or more
+;; contributor license agreements.  See the NOTICE file distributed with
+;; this work for additional information regarding copyright ownership.
+;; The ASF licenses this file to You under the Apache License, Version 2.0
+;; (the "License"); you may not use this file except in compliance with
+;; the License.  You may obtain a copy of the License at
+;;
+;;http://www.apache.org/licenses/LICENSE-2.0
+;;
+;; Unless required by applicable law or agreed to in writing, software
+;; distributed under the License is distributed on an "AS IS" BASIS,
+;; WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+;; See the License for the specific language governing permissions and
+;; limitations under the License.
+;;
+
+(ns cnn-text-classification.classifier
+  (:require [cnn-text-classification.data-helper :as data-helper]
+[org.apache.clojure-mxnet.eval-metric :as eval-metric]
+[org.apache.clojure-mxnet.io :as mx-io]
+[org.apache.clojure-mxnet.module :as m]
+[org.apache.clojure-mxnet.ndarray :as ndarray]
+[org.apache.clojure-mxnet.optimizer :as optimizer]
+[org.apache.clojure-mxnet.symbol :as sym]
+[org.apache.clojure-mxnet.context :as context])
+  (:gen-class))
+
+(def mr-dataset-path "data/mr-data") ;; the MR polarity dataset path
+(def glove-file-path "data/glove/glove.6B.50d.txt")
+
+(defn shuffle-data [test-num {:keys [data label sentence-count sentence-size 
embedding-size]}]
+  (println "Shuffling the data and splitting into training and test sets")
+  (println {:sentence-count sentence-count
+:sentence-size sentence-size
+:embedding-size embedding-size})
+  (let [shuffled (shuffle (map (fn [d l] [d l]) data label))
+train-num (- (count shuffled) test-num)
+training (into [] (take train-num shuffled))
+test (into [] (drop train-num shuffled))]
+{:training {:data  (ndarray/array (into [] (flatten (mapv (fn [v] (first 
v)) training)))
 
 Review comment:
   And I think we should be generally clear on coding preferences, e.g. whether the 
`#()` form is preferable.
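   For reference, a minimal runnable sketch of the three equivalent spellings under discussion (the data here is illustrative, not from the PR):

   ```clojure
   ;; Three equivalent ways to write the mapping function:
   (mapv (fn [v] (first v)) [[1 2] [3 4]]) ;; explicit fn form, as in the PR
   (mapv #(first %) [[1 2] [3 4]])         ;; anonymous #() reader form
   (mapv first [[1 2] [3 4]])              ;; bare function, tersest
   ;; each returns [1 3]
   ```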


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] kurman commented on a change in pull request #11205: Clojure Contrib Package

2018-06-25 Thread GitBox
kurman commented on a change in pull request #11205: Clojure Contrib Package
URL: https://github.com/apache/incubator-mxnet/pull/11205#discussion_r194593837
 
 

 ##
 File path: contrib/clojure-package/src/dev/generator.clj
 ##
 @@ -0,0 +1,329 @@
+;;
+;; Licensed to the Apache Software Foundation (ASF) under one or more
+;; contributor license agreements.  See the NOTICE file distributed with
+;; this work for additional information regarding copyright ownership.
+;; The ASF licenses this file to You under the Apache License, Version 2.0
+;; (the "License"); you may not use this file except in compliance with
+;; the License.  You may obtain a copy of the License at
+;;
+;;http://www.apache.org/licenses/LICENSE-2.0
+;;
+;; Unless required by applicable law or agreed to in writing, software
+;; distributed under the License is distributed on an "AS IS" BASIS,
+;; WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+;; See the License for the specific language governing permissions and
+;; limitations under the License.
+;;
+
+(ns dev.generator
+  (:require [t6.from-scala.core :as $]
 
 Review comment:
   Dollar sign is overloaded here; can we use `scala` instead? Alternatively, 
`:refer` `decode-scala-symbol` and drop the namespace in usages.
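   The mechanics of the suggested rename are just an alias change in the `:require`; a generic illustration with a stdlib namespace (the `example.aliases` namespace is hypothetical):

   ```clojure
   (ns example.aliases
     (:require [clojure.string :as str])) ;; alias chosen for readability

   ;; with an alias, call sites name the namespace explicitly:
   (str/upper-case "mxnet") ;; => "MXNET"

   ;; alternatively, :refer a single var and drop the prefix entirely:
   ;; (:require [clojure.string :refer [upper-case]])
   ;; (upper-case "mxnet")
   ```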




[GitHub] kurman commented on a change in pull request #11205: Clojure Contrib Package

2018-06-25 Thread GitBox
kurman commented on a change in pull request #11205: Clojure Contrib Package
URL: https://github.com/apache/incubator-mxnet/pull/11205#discussion_r197857545
 
 

 ##
 File path: contrib/clojure-package/examples/cnn-text-classification/README.md
 ##
 @@ -0,0 +1,38 @@
+# cnn-text-classification
+
+An example of text classification using CNN
+
+To use you must download the MR polarity dataset and put it in the path 
specified in the mr-dataset-path
+The dataset can be obtained here: 
[https://github.com/yoonkim/CNN_sentence](https://github.com/yoonkim/CNN_sentence).
 The two files `rt-polarity.neg`
+and `rt-polarity.pos` must be put in a directory. For example, 
`data/mr-data/rt-polarity.neg`.
+
+You also must download the glove word embeddings. The suggested one to use is 
the smaller 50 dimension one
 
 Review comment:
   Curious, why Glove over Word2vec?




[GitHub] marcoabreu commented on issue #11383: [MXNET-565]change wget into java download

2018-06-25 Thread GitBox
marcoabreu commented on issue #11383: [MXNET-565]change wget into java download
URL: https://github.com/apache/incubator-mxnet/pull/11383#issuecomment-44999
 
 
   Great effort towards getting Scala working on Windows!




[GitHub] kurman commented on a change in pull request #11205: Clojure Contrib Package

2018-06-25 Thread GitBox
kurman commented on a change in pull request #11205: Clojure Contrib Package
URL: https://github.com/apache/incubator-mxnet/pull/11205#discussion_r197869647
 
 

 ##
 File path: 
contrib/clojure-package/examples/cnn-text-classification/src/cnn_text_classification/classifier.clj
 ##
 @@ -0,0 +1,112 @@
+
+(ns cnn-text-classification.classifier
+  (:require [cnn-text-classification.data-helper :as data-helper]
+[org.apache.clojure-mxnet.eval-metric :as eval-metric]
+[org.apache.clojure-mxnet.io :as mx-io]
+[org.apache.clojure-mxnet.module :as m]
+[org.apache.clojure-mxnet.ndarray :as ndarray]
+[org.apache.clojure-mxnet.optimizer :as optimizer]
+[org.apache.clojure-mxnet.symbol :as sym]
+[org.apache.clojure-mxnet.context :as context])
+  (:gen-class))
+
+(def mr-dataset-path "data/mr-data") ;; the MR polarity dataset path
+(def glove-file-path "data/glove/glove.6B.50d.txt")
+
+(defn shuffle-data [test-num {:keys [data label sentence-count sentence-size 
embedding-size]}]
+  (println "Shuffling the data and splitting into training and test sets")
+  (println {:sentence-count sentence-count
+:sentence-size sentence-size
+:embedding-size embedding-size})
+  (let [shuffled (shuffle (map (fn [d l] [d l]) data label))
+train-num (- (count shuffled) test-num)
+training (into [] (take train-num shuffled))
+test (into [] (drop train-num shuffled))]
+{:training {:data  (ndarray/array (into [] (flatten (mapv (fn [v] (first 
v)) training)))
+  [train-num 1 sentence-size 
embedding-size]) ;; has to be channel x y
+:label (ndarray/array (into [] (flatten (mapv (fn [v] (last v) 
) training)))
+  [train-num])}
+ :test {:data  (ndarray/array (into [] (flatten (mapv (fn [v] (first v)) 
test)))
+  [test-num 1 sentence-size embedding-size]) 
;; has to be channel x y
+:label (ndarray/array (into [] (flatten (mapv (fn [v] (last v) ) 
test)))
+  [test-num])}}))
+
+;;; convnet with multiple filter sizes
+;; from Convolutional Neural Networks for Sentence Classification by Yoon Kim
+(defn get-multi-filter-convnet [num-embed sentence-size batch-size]
+  (let [filter-list [3 4 5]
+num-filter 100
+num-label 2
+dropout 0.5
+input-x (sym/variable "data")
+polled-outputs (mapv (fn [filter-size]
+   (as-> (sym/convolution {:data input-x
 
 Review comment:
   I think the alignment is messed up here; I expect:
   ```clojure
   (as-> exp name
     form1
     form2)
   ```
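   A small runnable illustration of that threading shape (data is illustrative):

   ```clojure
   ;; as-> binds the first expression to the name, then threads it
   ;; through each following form wherever the name appears:
   (as-> [1 2 3] data
     (mapv inc data)  ;; [2 3 4]
     (reduce + data)) ;; => 9
   ```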




[GitHub] kurman commented on a change in pull request #11205: Clojure Contrib Package

2018-06-25 Thread GitBox
kurman commented on a change in pull request #11205: Clojure Contrib Package
URL: https://github.com/apache/incubator-mxnet/pull/11205#discussion_r197870642
 
 

 ##
 File path: 
contrib/clojure-package/examples/cnn-text-classification/src/cnn_text_classification/classifier.clj
 ##
 @@ -0,0 +1,112 @@
+
+(ns cnn-text-classification.classifier
+  (:require [cnn-text-classification.data-helper :as data-helper]
+[org.apache.clojure-mxnet.eval-metric :as eval-metric]
+[org.apache.clojure-mxnet.io :as mx-io]
+[org.apache.clojure-mxnet.module :as m]
+[org.apache.clojure-mxnet.ndarray :as ndarray]
+[org.apache.clojure-mxnet.optimizer :as optimizer]
+[org.apache.clojure-mxnet.symbol :as sym]
+[org.apache.clojure-mxnet.context :as context])
+  (:gen-class))
+
+(def mr-dataset-path "data/mr-data") ;; the MR polarity dataset path
+(def glove-file-path "data/glove/glove.6B.50d.txt")
+
+(defn shuffle-data [test-num {:keys [data label sentence-count sentence-size 
embedding-size]}]
+  (println "Shuffling the data and splitting into training and test sets")
+  (println {:sentence-count sentence-count
+:sentence-size sentence-size
+:embedding-size embedding-size})
+  (let [shuffled (shuffle (map (fn [d l] [d l]) data label))
+train-num (- (count shuffled) test-num)
+training (into [] (take train-num shuffled))
+test (into [] (drop train-num shuffled))]
+{:training {:data  (ndarray/array (into [] (flatten (mapv (fn [v] (first 
v)) training)))
+  [train-num 1 sentence-size 
embedding-size]) ;; has to be channel x y
+:label (ndarray/array (into [] (flatten (mapv (fn [v] (last v) 
) training)))
+  [train-num])}
+ :test {:data  (ndarray/array (into [] (flatten (mapv (fn [v] (first v)) 
test)))
+  [test-num 1 sentence-size embedding-size]) 
;; has to be channel x y
+:label (ndarray/array (into [] (flatten (mapv (fn [v] (last v) ) 
test)))
+  [test-num])}}))
+
+;;; convnet with multiple filter sizes
+;; from Convolutional Neural Networks for Sentence Classification by Yoon Kim
+(defn get-multi-filter-convnet [num-embed sentence-size batch-size]
+  (let [filter-list [3 4 5]
+num-filter 100
+num-label 2
+dropout 0.5
+input-x (sym/variable "data")
+polled-outputs (mapv (fn [filter-size]
+   (as-> (sym/convolution {:data input-x
+   :kernel [filter-size 
num-embed]
+   :num-filter 
num-filter}) data
+ (sym/activation {:data data :act-type "relu"})
+ (sym/pooling {:data data
+   :pool-type "max"
 
 Review comment:
   Is it worth having a separate const namespace?
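   A minimal sketch of what such a namespace could look like (the namespace and var names are hypothetical, not part of the PR):

   ```clojure
   ;; hypothetical const namespace collecting the stringly-typed option values
   (ns org.apache.clojure-mxnet.const)

   (def relu "relu")    ;; for :act-type
   (def max-pool "max") ;; for :pool-type

   ;; call sites could then read, e.g.:
   ;; (sym/pooling {:data data :pool-type const/max-pool ...})
   ```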




[GitHub] kurman commented on a change in pull request #11205: Clojure Contrib Package

2018-06-25 Thread GitBox
kurman commented on a change in pull request #11205: Clojure Contrib Package
URL: https://github.com/apache/incubator-mxnet/pull/11205#discussion_r197871096
 
 

 ##
 File path: 
contrib/clojure-package/examples/cnn-text-classification/src/cnn_text_classification/classifier.clj
 ##
 @@ -0,0 +1,112 @@
+
+(ns cnn-text-classification.classifier
+  (:require [cnn-text-classification.data-helper :as data-helper]
+[org.apache.clojure-mxnet.eval-metric :as eval-metric]
+[org.apache.clojure-mxnet.io :as mx-io]
+[org.apache.clojure-mxnet.module :as m]
+[org.apache.clojure-mxnet.ndarray :as ndarray]
+[org.apache.clojure-mxnet.optimizer :as optimizer]
+[org.apache.clojure-mxnet.symbol :as sym]
+[org.apache.clojure-mxnet.context :as context])
+  (:gen-class))
+
+(def mr-dataset-path "data/mr-data") ;; the MR polarity dataset path
+(def glove-file-path "data/glove/glove.6B.50d.txt")
+
+(defn shuffle-data [test-num {:keys [data label sentence-count sentence-size 
embedding-size]}]
+  (println "Shuffling the data and splitting into training and test sets")
+  (println {:sentence-count sentence-count
+:sentence-size sentence-size
+:embedding-size embedding-size})
+  (let [shuffled (shuffle (map (fn [d l] [d l]) data label))
+train-num (- (count shuffled) test-num)
+training (into [] (take train-num shuffled))
+test (into [] (drop train-num shuffled))]
+{:training {:data  (ndarray/array (into [] (flatten (mapv (fn [v] (first 
v)) training)))
+  [train-num 1 sentence-size 
embedding-size]) ;; has to be channel x y
+:label (ndarray/array (into [] (flatten (mapv (fn [v] (last v) 
) training)))
+  [train-num])}
+ :test {:data  (ndarray/array (into [] (flatten (mapv (fn [v] (first v)) 
test)))
+  [test-num 1 sentence-size embedding-size]) 
;; has to be channel x y
+:label (ndarray/array (into [] (flatten (mapv (fn [v] (last v) ) 
test)))
+  [test-num])}}))
+
+;;; convnet with multiple filter sizes
+;; from Convolutional Neural Networks for Sentence Classification by Yoon Kim
+(defn get-multi-filter-convnet [num-embed sentence-size batch-size]
+  (let [filter-list [3 4 5]
+num-filter 100
+num-label 2
+dropout 0.5
+input-x (sym/variable "data")
+polled-outputs (mapv (fn [filter-size]
+   (as-> (sym/convolution {:data input-x
+   :kernel [filter-size 
num-embed]
+   :num-filter 
num-filter}) data
+ (sym/activation {:data data :act-type "relu"})
+ (sym/pooling {:data data
+   :pool-type "max"
+   :kernel [(inc (- sentence-size 
filter-size)) 1]
+   :stride [1 1]})))
+ filter-list)
+total-filters (* num-filter (count filter-list))
+concat (sym/concat "concat" nil polled-outputs {:dim 1})
+hpool (sym/reshape "hpool" {:data concat :target-shape [batch-size 
total-filters]})
+hdrop (if (> dropout 0) (sym/dropout "hdrop" {:data hpool :p dropout}) 
hpool)
+fc (sym/fully-connected  "fc1" {:data hdrop :num-hidden num-label})]
+(sym/softmax-output "softmax" {:data fc})))
+
+(defn train-convnet [{:keys [devs embedding-size batch-size test-size 
num-epoch max-examples]}]
+  (let [glove (data-helper/load-glove glove-file-path) ;; you can also use 
word2vec
+ms-dataset (data-helper/load-ms-with-embeddings mr-dataset-path 
embedding-size glove max-examples)
+sentence-size (:sentence-size ms-dataset)
+shuffled (shuffle-data test-size ms-dataset)
+train-data (mx-io/ndarray-iter [(get-in shuffled [:training :data])]
+   {:label[(get-in 

[GitHub] kurman commented on a change in pull request #11205: Clojure Contrib Package

2018-06-25 Thread GitBox
kurman commented on a change in pull request #11205: Clojure Contrib Package
URL: https://github.com/apache/incubator-mxnet/pull/11205#discussion_r197856261
 
 

 ##
 File path: contrib/clojure-package/README.md
 ##
 @@ -0,0 +1,203 @@
[GitHub] kurman commented on a change in pull request #11205: Clojure Contrib Package

2018-06-25 Thread GitBox
kurman commented on a change in pull request #11205: Clojure Contrib Package
URL: https://github.com/apache/incubator-mxnet/pull/11205#discussion_r197857831
 
 

 ##
 File path: contrib/clojure-package/examples/cnn-text-classification/README.md
 ##
 @@ -0,0 +1,38 @@
+# cnn-text-classification
+
+An example of text classification using CNN
+
+To use you must download the MR polarity dataset and put it in the path 
specified in the mr-dataset-path
+The dataset can be obtained here: 
[https://github.com/yoonkim/CNN_sentence](https://github.com/yoonkim/CNN_sentence).
 The two files `rt-polarity.neg`
+and `rt-polarity.pos` must be put in a directory. For example, 
`data/mr-data/rt-polarity.neg`.
+
+You also must download the glove word embeddings. The suggested one to use is 
the smaller 50 dimension one
+`glove.6B.50d.txt` which is contained in the download file here 
[https://nlp.stanford.edu/projects/glove/](https://nlp.stanford.edu/projects/glove/)
+
+## Usage
+
+You can run it through the REPL with
+`(train-convnet {:embedding-size 50 :batch-size 100 :test-size 100 :num-epoch 
10 :max-examples 1000})`
+
+or
+`JVM_OPTS="-Xmx1g" lein run` (cpu)
+
+You can control the devices you run on by doing:
+
+`lein run :cpu 2` - This will run on 2 cpu devices
+`lein run :gpu 1` - This will run on 1 gpu device
+`lein run :gpu 2` - This will run on 2 gpu devices
+
+
+The max-examples only loads 1000 each of the dataset to keep the time and 
memory down. To run all the examples, 
+change the main to be `(train-convnet {:embedding-size 50 :batch-size 100 
:test-size 1000 :num-epoch 10})`
+
+and then run
+
+- `lein uberjar`
+- `java -Xms1024m -Xmx2048m -jar 
target/cnn-text-classification-0.1.0-SNAPSHOT-standalone.jar`
+
+
 
 Review comment:
   Extra lines here




[GitHub] kurman commented on a change in pull request #11205: Clojure Contrib Package

2018-06-25 Thread GitBox
kurman commented on a change in pull request #11205: Clojure Contrib Package
URL: https://github.com/apache/incubator-mxnet/pull/11205#discussion_r197856623
 
 

 ##
 File path: contrib/clojure-package/README.md
 ##
 @@ -0,0 +1,203 @@
+# Clojure MXNet
+
+A clojure package to the MXNet Deep Learning library
+
+## Introduction
+
+MXNet is a first class, modern deep learning library that AWS has officially 
picked as its chosen library. It supports multiple languages on a first class 
basis and is incubating as an Apache project.
+
+The motivation for creating a Clojure package is to be able to open the deep 
learning library to the Clojure ecosystem and build bridges for future 
development and innovation for the community. It provides all the needed tools 
including low level and high level apis, dynamic graphs, and things like GAN 
and natural language support.
+
+For high leverage, the Clojure package has been built on the existing Scala 
package using interop. This has allowed rapid development and close parity with 
the Scala functionality. This also leaves the door open to directly developing 
code against the jni-bindings with Clojure in the future in an incremental 
fashion, using the test suites as a refactoring guide.
+
+## Current State and Plans
+
+The Clojure package is nearing the end of its first development milestone 
which is to achieve a close parity with the Scala package and to potentially be 
included into the main project for official Clojure language support.
+
+What is needed now is alpha testing on both OSX and Linux to discover any 
bugs, rough edges, and generally harden it before an official PR is opened on 
the main project.
+
+Help with this effort is greatly appreciated and contributors will be 
recognized in the project README.
+
+Testing instructions can be found in the Testing.md
+
+## Getting Started
+
+The following systems are supported:
+
+- OSX cpu
+- Linux cpu
+- Linux gpu
+
+There are two ways of getting going. The first way is the easiest and that is 
to use the pre-built jars from Maven. The second way is to build from source. 
In both cases, you will need to load the prereqs and dependencies (like 
opencv).
+
+It's been tested on AWS Deep Learning AMI and OSX High Sierra 10.13.4
+
+
+### Prerequisites
+
+**If you are using the AWS Deep Learning Ubuntu or Linux AMI you should be 
good to go without doing anything on this step.**
+
+
+Follow the instructions from 
https://mxnet.incubator.apache.org/install/osx_setup.html or 
https://mxnet.incubator.apache.org/install/ubuntu_setup.html
+about _Prepare Environment for GPU Installation_
+and _Install MXNet dependencies_
+
+
+#### Cloning the repo and running from source
+
+To use the prebuilt jars, you will need to replace the native version of the 
line in the project dependencies with your configuration.
+
+`[org.apache.mxnet/mxnet-full_2.11-linux-x86_64-gpu "1.2.0"]`
+or
+`[org.apache.mxnet/mxnet-full_2.11-linux-x86_64-cpu "1.2.0"]`
+or
+`[org.apache.mxnet/mxnet-full_2.11-osx-x86_64-cpu "1.2.0"]`
+
+
+```clojure
+
+(ns tutorial.ndarray
+  (:require [org.apache.clojure-mxnet.ndarray :as ndarray]
+            [org.apache.clojure-mxnet.context :as context]))
+
+;; Create NDArrays
+(def a (ndarray/zeros [100 50])) ;; all-zero array of dimension 100 x 50
+(def b (ndarray/ones [256 32 128 1])) ;; all-one array of dimension 256 x 32 x 128 x 1
+(def c (ndarray/array [1 2 3 4 5 6] [2 3])) ;; array with the given contents and shape 2 x 3
+
+;;; There are also ways to convert to a vec or get the shape as an object or vec
+(ndarray/->vec c) ;=> [1.0 2.0 3.0 4.0 5.0 6.0]
+```
+
+See the examples/tutorial section for more.
+
+
+The jars from Maven include the needed MXNet native binaries. On startup, the 
native libraries are extracted from the jar and copied into a temporary 
location on your path. On termination, they are deleted.
+
+If you want details on the flags (opencv version and cuda version of the 
jars), they are documented here 
https://cwiki.apache.org/confluence/display/MXNET/MXNet-Scala+Release+Process
+
+
+### Build from MXNet Source
+
+Check out the latest sha from the main package:
+
+`git clone --recursive https://github.com/dmlc/mxnet ~/mxnet`
+`cd ~/mxnet`
+
+
+`git checkout tags/1.2.0 -b release-1.2.0`
+
+`git submodule update --init --recursive`
+
+Sometimes it is useful to use this script to do a hard clean:
+https://gist.github.com/nicktoumpelis/11214362
+
+
+Go here to do the base package installation: 
https://mxnet.incubator.apache.org/install/index.html
+
+#### Run `make scalapkg` then `make scalainstall`
+
+Then replace the jar in project.clj with the correct one for your 
architecture, for example 
`[ml.dmlc.mxnet/mxnet-full_2.11-osx-x86_64-cpu "1.0.1-SNAPSHOT"]`
+
+#### Test your installation
+
+To test your installation, run `lein test`. This will run the (CPU) test 
suite for the Clojure package.
+
+
+#### Generation of NDArray and Symbol apis
+
+The bulk of the ndarray and symbol apis are generated via java reflection 

[GitHub] leezu opened a new pull request #11392: Document AdaGrad eps as initial history accumulator value

2018-06-25 Thread GitBox
leezu opened a new pull request #11392: Document AdaGrad eps as initial history 
accumulator value
URL: https://github.com/apache/incubator-mxnet/pull/11392
 
 
   See  #11223
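   For context, AdaGrad divides by the square root of a running sum of squared 
gradients, and the point of the PR is that `eps` effectively acts as the 
initial value of that accumulator. A minimal scalar sketch in plain Python 
(not MXNet's optimizer API; the function name and default values below are 
illustrative only):

```python
import math

def adagrad_step(w, grad, history, lr=0.01, eps=1e-7):
    """One AdaGrad step on a scalar weight.

    `history` accumulates squared gradients; adding `eps` under the square
    root is arithmetically the same as starting the accumulator at `eps`,
    which is the reading issue #11223 asks to have documented.
    """
    history = history + grad * grad
    w = w - lr * grad / math.sqrt(history + eps)
    return w, history
```

   With the accumulator started at zero the two readings coincide, which is 
why the parameter is easy to misdocument.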




[GitHub] marcoabreu commented on issue #11359: Flaky test test_io:test_ImageRecordIter_seed_augmentation

2018-06-25 Thread GitBox
marcoabreu commented on issue #11359: Flaky test 
test_io:test_ImageRecordIter_seed_augmentation
URL: 
https://github.com/apache/incubator-mxnet/issues/11359#issuecomment-400069936
 
 
   I would have to check, but that could be a good start for further 
investigation.




[GitHub] wenyangchu edited a comment on issue #11341: Deterministic cudnn algorithms

2018-06-25 Thread GitBox
wenyangchu edited a comment on issue #11341: Deterministic cudnn algorithms
URL: 
https://github.com/apache/incubator-mxnet/issues/11341#issuecomment-400075286
 
 
   Hi @DickJC123, I have little knowledge of the regression tests in MXNet. 
Could you please let me know how you ran the test? Thank you!




[GitHub] marcoabreu closed issue #11064: Flaky test: test_operator.test_op_roi_align

2018-06-25 Thread GitBox
marcoabreu closed issue #11064: Flaky test: test_operator.test_op_roi_align
URL: https://github.com/apache/incubator-mxnet/issues/11064
 
 
   




[GitHub] marcoabreu removed a comment on issue #11394: Flaky test on Python2 Windows

2018-06-25 Thread GitBox
marcoabreu removed a comment on issue #11394: Flaky test on Python2 Windows
URL: 
https://github.com/apache/incubator-mxnet/issues/11394#issuecomment-400093569
 
 
   Oh sorry, that run is a duplicate of 
https://github.com/apache/incubator-mxnet/issues/11064 
   
   I have reopened the issue for you. Please document your findings there.
   
   P.S. In the future, please paste the log into the ticket so that people do 
not have to access our website.
   




[GitHub] mkolod commented on a change in pull request #11325: Added TensorRT runtime integration

2018-06-25 Thread GitBox
mkolod commented on a change in pull request #11325: Added TensorRT runtime 
integration
URL: https://github.com/apache/incubator-mxnet/pull/11325#discussion_r197972135
 
 

 ##
 File path: src/executor/tensorrt_pass.cc
 ##
 @@ -0,0 +1,583 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * Copyright (c) 2018 by Contributors
+ * \file tensorrt_pass.cc
+ * \brief Replace TRT compatible subgraphs by TRT engines
+ * \author Clement Fuji Tsang
+ */
+
+#if MXNET_USE_TENSORRT
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+#include "./onnx_to_tensorrt.h"
+#include "./exec_pass.h"
+#include "../operator/contrib/nnvm_to_onnx-inl.h"
+
+namespace mxnet {
+namespace exec {
+
+using NodePtr = nnvm::NodePtr;
+
+/*!
+ * \brief Custom graph class, which will contain bi-directional nodes
+ * we need to compute DFS and reverse DFS for graph partitioning
+ */
+class BidirectionalGraph {
+ public:
+  struct Node {
+    nnvm::Node* nnvmptr;
+    std::vector<Node*> inputs;
+    std::vector<Node*> outputs;
+  };
+  std::vector<Node> nodes;
+  std::unordered_map<nnvm::Node*, uint32_t> nnvm2nid;
+  std::vector<Node*> outputs;
+  static const std::unordered_set<std::string> unconditionalTRTop;
+
+  explicit BidirectionalGraph(const Graph &g) {
+    auto& idx = g.indexed_graph();
+    auto num_nodes = idx.num_nodes();
+    nodes.reserve(num_nodes);
+    nnvm2nid.reserve(num_nodes);
+    outputs.reserve(idx.outputs().size());
+    DFSVisit(g.outputs, [this](const nnvm::NodePtr& n) {
+      BidirectionalGraph::Node new_node;
+      new_node.nnvmptr = n.get();
+      nnvm2nid[n.get()] = static_cast<uint32_t>(nodes.size());
+      nodes.emplace_back(std::move(new_node));
+    });
+    for (const auto& it : nnvm2nid) {
+      nnvm::Node* nnvmnode = it.first;
+      uint32_t nid = it.second;
+      for (auto& n : nnvmnode->inputs) {
+        uint32_t input_nid = nnvm2nid[n.node.get()];
+        nodes[input_nid].outputs.emplace_back(&nodes[nid]);
+        nodes[nid].inputs.emplace_back(&nodes[input_nid]);
+      }
+    }
+    for (auto& e : g.outputs) {
+      uint32_t nid = nnvm2nid[e.node.get()];
+      outputs.emplace_back(&nodes[nid]);
+    }
+  }
+
+  template <typename FVisit>
+  void DFS(const std::vector<Node*>& heads, bool reverse, FVisit fvisit) {
+    std::unordered_set<Node*> visited;
+    std::deque<Node*> stack(heads.begin(), heads.end());
+    visited.reserve(heads.size());
+    while (!stack.empty()) {
+      Node* vertex = stack.back();
+      stack.pop_back();
+      if (visited.count(vertex) == 0) {
+        visited.insert(vertex);
+        fvisit(vertex);
+        std::vector<Node*> nexts = reverse ? vertex->inputs : vertex->outputs;
+        for (Node* node : nexts) {
+          if (visited.count(node) == 0) {
+            stack.emplace_back(node);
+          }
+        }
+      }
+    }
+  }
+
+  using t_pairset = std::pair<std::unordered_set<Node*>, std::unordered_set<Node*>>;
+  using t_pairvec = std::pair<std::vector<Node*>, std::vector<Node*>>;
+  using t_uncomp_map = std::unordered_map<Node*, std::unordered_set<Node*>>;
+
+  std::unordered_set<Node*> naive_grow_subgraph(Node* head,
+                                                std::unordered_set<Node*>* set_unused,
+                                                t_uncomp_map* uncomp_map) {
+    std::unordered_set<Node*> subgraph;
+    std::unordered_set<Node*> uncomp_set;
+    std::deque<Node*> stack;
+    stack.emplace_back(head);
+    while (!stack.empty()) {
+      Node* vertex = stack.back();
+      stack.pop_back();
+      if (set_unused->count(vertex) && !uncomp_set.count(vertex)) {
+        set_unused->erase(vertex);
+        subgraph.insert(vertex);
+        uncomp_set.insert((*uncomp_map)[vertex].begin(), (*uncomp_map)[vertex].end());
+        for (Node* input : vertex->inputs) {
+          if (set_unused->count(input) && !uncomp_set.count(input)) {
+            stack.emplace_back(input);
+          }
+        }
+        for (Node* output : vertex->outputs) {
+          if (set_unused->count(output) && !uncomp_set.count(output)) {
+            stack.emplace_back(output);
+          }
+        }
+      }
+    }
+    return subgraph;
+  }
+
+  std::vector<std::unordered_set<Node*>> get_subsets(
+    std::unordered_map<std::string, NDArray>* const params_map) {
+    std::vector<std::unordered_set<Node*>> subgraphs;
+    std::unordered_set<Node*> set_nonTRTnodes;
+    std::unordered_set<Node*> set_allnodes(nodes.size());
+    std::vector<t_pairset> separation_sets;
+    for (Node& node : nodes) 
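The iterative, stack-based DFS in the quoted patch can be sketched in Python; 
here `neighbors` stands in for `vertex->inputs` or `vertex->outputs` depending 
on the `reverse` flag (a simplified sketch for illustration, not the MXNet 
implementation):

```python
from collections import deque

def dfs(heads, neighbors, visit):
    """Iterative depth-first traversal: pop a vertex from the stack, visit it
    once, then push its unvisited neighbors (mirrors BidirectionalGraph::DFS
    in the quoted patch)."""
    visited = set()
    stack = deque(heads)
    while stack:
        vertex = stack.pop()
        if vertex not in visited:
            visited.add(vertex)
            visit(vertex)
            for n in neighbors(vertex):
                if n not in visited:
                    stack.append(n)
```

The explicit stack avoids recursion-depth limits on deep graphs, which is why 
the patch uses the same shape in C++.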

[GitHub] gigasquid commented on a change in pull request #11205: Clojure Contrib Package

2018-06-25 Thread GitBox
gigasquid commented on a change in pull request #11205: Clojure Contrib Package
URL: https://github.com/apache/incubator-mxnet/pull/11205#discussion_r197973457
 
 

 ##
 File path: contrib/clojure-package/README.md
 ##
 @@ -0,0 +1,203 @@
+# Clojure MXNet
+
+A Clojure package for the MXNet Deep Learning library
+
+## Introduction
+
+MXNet is a first-class, modern deep learning library that AWS has officially 
picked as its deep learning library of choice. It supports multiple languages 
on a first-class basis and is incubating as an Apache project.
+
+The motivation for creating a Clojure package is to be able to open the deep 
learning library to the Clojure ecosystem and build bridges for future 
development and innovation for the community. It provides all the needed tools 
including low level and high level apis, dynamic graphs, and things like GAN 
and natural language support.
+
+For high leverage, the Clojure package has been built on the existing Scala 
package using interop. This has allowed rapid development and close parity with 
the Scala functionality. This also leaves the door open to directly developing 
code against the jni-bindings with Clojure in the future in an incremental 
fashion, using the test suites as a refactoring guide.
+
+## Current State and Plans
+
+The Clojure package is nearing the end of its first development milestone 
which is to achieve a close parity with the Scala package and to potentially be 
included into the main project for official Clojure language support.
+
+What is needed now is alpha testing on both OSX and Linux to discover any 
bugs, rough edges, and generally harden it before an official PR is opened on 
the main project.
+
+Help with this effort is greatly appreciated and contributors will be 
recognized in the project README.
+
+Testing instructions can be found in the Testing.md
+
+## Getting Started
+
+The following systems are supported:
+
+- OSX cpu
+- Linux cpu
+- Linux gpu
+
+There are two ways of getting going. The first way is the easiest and that is 
to use the pre-built jars from Maven. The second way is to build from source. 
In both cases, you will need to load the prereqs and dependencies (like 
opencv).
+
+It's been tested on AWS Deep Learning AMI and OSX High Sierra 10.13.4
+
+
+### Prerequisites
+
+**If you are using the AWS Deep Learning Ubuntu or Linux AMI you should be 
good to go without doing anything on this step.**
+
+
+Follow the instructions from 
https://mxnet.incubator.apache.org/install/osx_setup.html or 
https://mxnet.incubator.apache.org/install/ubuntu_setup.html
+about _Prepare Environment for GPU Installation_
+and _Install MXNet dependencies_
+
+
+#### Cloning the repo and running from source
+
+To use the prebuilt jars, you will need to replace the native version of the 
line in the project dependencies with your configuration.
+
+`[org.apache.mxnet/mxnet-full_2.11-linux-x86_64-gpu "1.2.0"]`
+or
+`[org.apache.mxnet/mxnet-full_2.11-linux-x86_64-cpu "1.2.0"]`
+or
+`[org.apache.mxnet/mxnet-full_2.11-osx-x86_64-cpu "1.2.0"]`
+
+
+```clojure
+
+(ns tutorial.ndarray
+  (:require [org.apache.clojure-mxnet.ndarray :as ndarray]
+            [org.apache.clojure-mxnet.context :as context]))
+
+;; Create NDArrays
+(def a (ndarray/zeros [100 50])) ;; all-zero array of dimension 100 x 50
+(def b (ndarray/ones [256 32 128 1])) ;; all-one array of dimension 256 x 32 x 128 x 1
+(def c (ndarray/array [1 2 3 4 5 6] [2 3])) ;; array with the given contents and shape 2 x 3
+
+;;; There are also ways to convert to a vec or get the shape as an object or vec
+(ndarray/->vec c) ;=> [1.0 2.0 3.0 4.0 5.0 6.0]
+```
+
+See the examples/tutorial section for more.
+
+
+The jars from Maven include the needed MXNet native binaries. On startup, the 
native libraries are extracted from the jar and copied into a temporary 
location on your path. On termination, they are deleted.
+
+If you want details on the flags (opencv version and cuda version of the 
jars), they are documented here 
https://cwiki.apache.org/confluence/display/MXNET/MXNet-Scala+Release+Process
+
+
+### Build from MXNet Source
+
+Check out the latest sha from the main package:
+
+`git clone --recursive https://github.com/dmlc/mxnet ~/mxnet`
+`cd ~/mxnet`
+
+
+`git checkout tags/1.2.0 -b release-1.2.0`
+
+`git submodule update --init --recursive`
+
+Sometimes it is useful to use this script to do a hard clean:
+https://gist.github.com/nicktoumpelis/11214362
+
+
+Go here to do the base package installation: 
https://mxnet.incubator.apache.org/install/index.html
+
+#### Run `make scalapkg` then `make scalainstall`
+
+Then replace the jar in project.clj with the correct one for your 
architecture, for example 
`[ml.dmlc.mxnet/mxnet-full_2.11-osx-x86_64-cpu "1.0.1-SNAPSHOT"]`
+
+#### Test your installation
+
+To test your installation, run `lein test`. This will run the (CPU) test 
suite for the Clojure package.
+
+
+#### Generation of NDArray and Symbol apis
+
+The bulk of the ndarray and symbol apis are generated via java reflection 

[GitHub] toddsundsted opened a new pull request #11397: Check Shape

2018-06-25 Thread GitBox
toddsundsted opened a new pull request #11397: Check Shape
URL: https://github.com/apache/incubator-mxnet/pull/11397
 
 
   ## Description ##
   The `NDArray` constructors do not ensure that shape dimensions are all 
positive numbers. In Python, at least, expressions like 
`nd.random_uniform(shape=[5, 5, -2])` and `nd.random_uniform(shape=[5, 5, 0])` 
cause the runtime to crash.
   
   ## Checklist ##
   ### Essentials ###
   - [X] Changes are complete (i.e. I finished coding on this PR)
   - [X] Code is well-documented: 
   - [X] To the best of my knowledge, examples are either not affected by this 
change, or have been fixed to be compatible with this change
   
   ### Changes ###
   - Create a method `CheckShape()` and invoke it in every constructor.
   
   ## Comments ##
   I didn't see unit tests for `NDArray`. I'd be happy to use or create unit 
tests, if that is desired.
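   A pure-Python sketch of the validation this PR adds (the `CheckShape()` 
name comes from the PR description; the exception type and message below are 
illustrative, not MXNet's actual behavior):

```python
def check_shape(shape):
    """Reject shapes with non-positive dimensions up front, instead of letting
    them crash the runtime later (cf. nd.random_uniform(shape=[5, 5, -2]))."""
    for dim in shape:
        if dim <= 0:
            raise ValueError("shape dimensions must be positive, got %r" % (shape,))
    return tuple(shape)
```

   Validating in every constructor turns a hard crash into a catchable error 
at the point where the bad shape is supplied.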
   




[GitHub] gigasquid commented on a change in pull request #11205: Clojure Contrib Package

2018-06-25 Thread GitBox
gigasquid commented on a change in pull request #11205: Clojure Contrib Package
URL: https://github.com/apache/incubator-mxnet/pull/11205#discussion_r197979725
 
 

 ##
 File path: contrib/clojure-package/README.md
 ##
 @@ -0,0 +1,203 @@
+# Clojure MXNet
+
+A Clojure package for the MXNet Deep Learning library
+
+## Introduction
+
+MXNet is a first-class, modern deep learning library that AWS has officially 
picked as its deep learning library of choice. It supports multiple languages 
on a first-class basis and is incubating as an Apache project.
+
+The motivation for creating a Clojure package is to be able to open the deep 
learning library to the Clojure ecosystem and build bridges for future 
development and innovation for the community. It provides all the needed tools 
including low level and high level apis, dynamic graphs, and things like GAN 
and natural language support.
+
+For high leverage, the Clojure package has been built on the existing Scala 
package using interop. This has allowed rapid development and close parity with 
the Scala functionality. This also leaves the door open to directly developing 
code against the jni-bindings with Clojure in the future in an incremental 
fashion, using the test suites as a refactoring guide.
+
+## Current State and Plans
+
+The Clojure package is nearing the end of its first development milestone 
which is to achieve a close parity with the Scala package and to potentially be 
included into the main project for official Clojure language support.
+
+What is needed now is alpha testing on both OSX and Linux to discover any 
bugs, rough edges, and generally harden it before an official PR is opened on 
the main project.
+
+Help with this effort is greatly appreciated and contributors will be 
recognized in the project README.
+
+Testing instructions can be found in the Testing.md
+
+## Getting Started
+
+The following systems are supported:
+
+- OSX cpu
+- Linux cpu
+- Linux gpu
+
+There are two ways of getting going. The first way is the easiest and that is 
to use the pre-built jars from Maven. The second way is to build from source. 
In both cases, you will need to load the prereqs and dependencies (like 
opencv).
+
+It's been tested on AWS Deep Learning AMI and OSX High Sierra 10.13.4
+
+
+### Prerequisites
+
+**If you are using the AWS Deep Learning Ubuntu or Linux AMI you should be 
good to go without doing anything on this step.**
+
+
+Follow the instructions from 
https://mxnet.incubator.apache.org/install/osx_setup.html or 
https://mxnet.incubator.apache.org/install/ubuntu_setup.html
+about _Prepare Environment for GPU Installation_
+and _Install MXNet dependencies_
+
+
+#### Cloning the repo and running from source
+
+To use the prebuilt jars, you will need to replace the native version of the 
line in the project dependencies with your configuration.
+
+`[org.apache.mxnet/mxnet-full_2.11-linux-x86_64-gpu "1.2.0"]`
+or
+`[org.apache.mxnet/mxnet-full_2.11-linux-x86_64-cpu "1.2.0"]`
+or
+`[org.apache.mxnet/mxnet-full_2.11-osx-x86_64-cpu "1.2.0"]`
+
+
+```clojure
+
+(ns tutorial.ndarray
+  (:require [org.apache.clojure-mxnet.ndarray :as ndarray]
+            [org.apache.clojure-mxnet.context :as context]))
+
+;; Create NDArrays
+(def a (ndarray/zeros [100 50])) ;; all-zero array of dimension 100 x 50
+(def b (ndarray/ones [256 32 128 1])) ;; all-one array of dimension 256 x 32 x 128 x 1
+(def c (ndarray/array [1 2 3 4 5 6] [2 3])) ;; array with the given contents and shape 2 x 3
+
+;;; There are also ways to convert to a vec or get the shape as an object or vec
+(ndarray/->vec c) ;=> [1.0 2.0 3.0 4.0 5.0 6.0]
+```
+
+See the examples/tutorial section for more.
+
+
+The jars from Maven include the needed MXNet native binaries. On startup, the 
native libraries are extracted from the jar and copied into a temporary 
location on your path. On termination, they are deleted.
+
+If you want details on the flags (opencv version and cuda version of the 
jars), they are documented here 
https://cwiki.apache.org/confluence/display/MXNET/MXNet-Scala+Release+Process
+
+
+### Build from MXNet Source
+
+Check out the latest sha from the main package:
+
+`git clone --recursive https://github.com/dmlc/mxnet ~/mxnet`
+`cd ~/mxnet`
+
+
+`git checkout tags/1.2.0 -b release-1.2.0`
+
+`git submodule update --init --recursive`
+
+Sometimes it is useful to use this script to do a hard clean:
+https://gist.github.com/nicktoumpelis/11214362
+
+
+Go here to do the base package installation: 
https://mxnet.incubator.apache.org/install/index.html
+
+#### Run `make scalapkg` then `make scalainstall`
+
+Then replace the jar in project.clj with the correct one for your 
architecture, for example 
`[ml.dmlc.mxnet/mxnet-full_2.11-osx-x86_64-cpu "1.0.1-SNAPSHOT"]`
+
+#### Test your installation
+
+To test your installation, run `lein test`. This will run the (CPU) test 
suite for the Clojure package.
+
+
+#### Generation of NDArray and Symbol apis
+
+The bulk of the ndarray and symbol apis are generated via java reflection 

[GitHub] ankkhedia commented on issue #10274: test_ndarray.test_reduce fails in v1.0.0

2018-06-25 Thread GitBox
ankkhedia commented on issue #10274: test_ndarray.test_reduce fails in v1.0.0
URL: 
https://github.com/apache/incubator-mxnet/issues/10274#issuecomment-400113596
 
 
   Tested on master at d6813efa2206afb5be98c2da16dd6e2efaf44cda using gcc-6 
(Ubuntu 6.4.0-17ubuntu1~16.04) 6.4.0 20180424; could not reproduce. 



[incubator-mxnet] branch master updated: Don't fail storing test results if test suite got aborted (#11363) (#11391)

2018-06-25 Thread zhasheng
This is an automated email from the ASF dual-hosted git repository.

zhasheng pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/master by this push:
 new cdb01fc  Don't fail storing test results if test suite got aborted 
(#11363) (#11391)
cdb01fc is described below

commit cdb01fc72ec5c8973a5ed48076380721db50ffa8
Author: Marco de Abreu 
AuthorDate: Tue Jun 26 00:26:41 2018 +0200

Don't fail storing test results if test suite got aborted (#11363) (#11391)

* Dont fail during artifact storage

* Update Jenkinsfile

* Update Jenkinsfile
---
 Jenkinsfile | 21 +
 1 file changed, 13 insertions(+), 8 deletions(-)

diff --git a/Jenkinsfile b/Jenkinsfile
index 44aad8e..10fdf1d 100644
--- a/Jenkinsfile
+++ b/Jenkinsfile
@@ -97,18 +97,23 @@ def publish_test_coverage() {
 }
 
 def collect_test_results_unix(original_file_name, new_file_name) {
-    echo 'Saving python test results for ' + new_file_name
-    // Rename file to make it distinguishable. Unfortunately, it's not possible to get STAGE_NAME in a parallel stage
-    sh 'cp ' + original_file_name + ' ' + new_file_name
-    archiveArtifacts artifacts: new_file_name
+    if (fileExists(original_file_name)) {
+        // Rename file to make it distinguishable. Unfortunately, it's not possible to get STAGE_NAME in a parallel stage
+        // Thus, we have to pick a name manually and rename the files so that they can be stored separately.
+        sh 'cp ' + original_file_name + ' ' + new_file_name
+        archiveArtifacts artifacts: new_file_name
+    }
 }
 
 def collect_test_results_windows(original_file_name, new_file_name) {
-    echo 'Saving python test results for ' + new_file_name
     // Rename file to make it distinguishable. Unfortunately, it's not possible to get STAGE_NAME in a parallel stage
-    bat 'xcopy ' + original_file_name + ' ' + new_file_name + '*'
-    archiveArtifacts artifacts: new_file_name
-}
+    // Thus, we have to pick a name manually and rename the files so that they can be stored separately.
+    if (fileExists(original_file_name)) {
+        bat 'xcopy ' + original_file_name + ' ' + new_file_name + '*'
+        archiveArtifacts artifacts: new_file_name
+    }
+}
+
 
 def docker_run(platform, function_name, use_nvidia, shared_mem = '500m') {
   def command = "ci/build.py --docker-registry ${env.DOCKER_CACHE_REGISTRY} 
%USE_NVIDIA% --platform %PLATFORM% --shm-size %SHARED_MEM% 
/work/runtime_functions.sh %FUNCTION_NAME%"
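The guard the patch introduces (only archive when the results file exists, so 
an aborted suite does not fail the post-build step) can be mirrored in a short 
Python sketch; the function name and return convention here are illustrative, 
not part of the Jenkins API:

```python
import os
import shutil

def collect_test_results(original, renamed, archive_dir):
    """Copy the test-result file under a distinguishable name and archive it,
    but skip silently when the suite aborted before writing any results."""
    if not os.path.exists(original):
        return False  # aborted run: nothing to archive, and no error either
    os.makedirs(archive_dir, exist_ok=True)
    shutil.copy(original, renamed)       # rename step (cp / xcopy in the patch)
    shutil.copy(renamed, archive_dir)    # stands in for archiveArtifacts
    return True
```

The key design choice is that a missing file is treated as a no-op rather than 
a failure, which is exactly what the `fileExists` check buys the Jenkinsfile.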



[GitHub] szha closed pull request #11391: Don't fail storing test results if test suite got aborted (#11363)

2018-06-25 Thread GitBox
szha closed pull request #11391: Don't fail storing test results if test suite 
got aborted (#11363)
URL: https://github.com/apache/incubator-mxnet/pull/11391
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git a/Jenkinsfile b/Jenkinsfile
index 44aad8e006e..10fdf1d6cfa 100644
--- a/Jenkinsfile
+++ b/Jenkinsfile
@@ -97,18 +97,23 @@ def publish_test_coverage() {
 }
 
 def collect_test_results_unix(original_file_name, new_file_name) {
-    echo 'Saving python test results for ' + new_file_name
-    // Rename file to make it distinguishable. Unfortunately, it's not possible to get STAGE_NAME in a parallel stage
-    sh 'cp ' + original_file_name + ' ' + new_file_name
-    archiveArtifacts artifacts: new_file_name
+    if (fileExists(original_file_name)) {
+        // Rename file to make it distinguishable. Unfortunately, it's not possible to get STAGE_NAME in a parallel stage
+        // Thus, we have to pick a name manually and rename the files so that they can be stored separately.
+        sh 'cp ' + original_file_name + ' ' + new_file_name
+        archiveArtifacts artifacts: new_file_name
+    }
 }
 
 def collect_test_results_windows(original_file_name, new_file_name) {
-    echo 'Saving python test results for ' + new_file_name
     // Rename file to make it distinguishable. Unfortunately, it's not possible to get STAGE_NAME in a parallel stage
-    bat 'xcopy ' + original_file_name + ' ' + new_file_name + '*'
-    archiveArtifacts artifacts: new_file_name
-}
+    // Thus, we have to pick a name manually and rename the files so that they can be stored separately.
+    if (fileExists(original_file_name)) {
+        bat 'xcopy ' + original_file_name + ' ' + new_file_name + '*'
+        archiveArtifacts artifacts: new_file_name
+    }
+}
+
 
 def docker_run(platform, function_name, use_nvidia, shared_mem = '500m') {
   def command = "ci/build.py --docker-registry ${env.DOCKER_CACHE_REGISTRY} 
%USE_NVIDIA% --platform %PLATFORM% --shm-size %SHARED_MEM% 
/work/runtime_functions.sh %FUNCTION_NAME%"


 




[GitHub] szha closed issue #11353: Flaky test test_gluon_trainer.test_trainer_reset_kv

2018-06-25 Thread GitBox
szha closed issue #11353: Flaky test test_gluon_trainer.test_trainer_reset_kv
URL: https://github.com/apache/incubator-mxnet/issues/11353
 
 
   




[incubator-mxnet] branch master updated: Fix #11353 (#11360)

2018-06-25 Thread zhasheng
This is an automated email from the ASF dual-hosted git repository.

zhasheng pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/master by this push:
 new 619e4bd  Fix #11353 (#11360)
619e4bd is described below

commit 619e4bded058e4bf77029bc30b395275d72f7907
Author: Haibin Lin 
AuthorDate: Mon Jun 25 15:34:54 2018 -0700

Fix #11353 (#11360)

* Update test_gluon_trainer.py

* Update test_gluon_trainer.py

* Update test_gluon_trainer.py

* Update test_gluon_trainer.py

* Update test_gluon_trainer.py

* trigger

* Run 10 times

* Update test_gluon_trainer.py

* run 10K times

* test_trainer_reset_kv didn't fail for 10K time . 2nd Trigger.

* test_trainer_reset_kv didn't fail for 10K times. 3rd Trigger.

* remove for loop
---
 tests/python/unittest/test_gluon_trainer.py | 1 +
 1 file changed, 1 insertion(+)

diff --git a/tests/python/unittest/test_gluon_trainer.py 
b/tests/python/unittest/test_gluon_trainer.py
index 1c59cea..eac9fad 100644
--- a/tests/python/unittest/test_gluon_trainer.py
+++ b/tests/python/unittest/test_gluon_trainer.py
@@ -190,6 +190,7 @@ def test_trainer_reset_kv():
     trainer.step(1)
     assert trainer._kvstore.type == kv
     # load would reset kvstore
+    mx.nd.waitall()
     params.load('test_trainer_reset_kv.params')
     assert trainer._kvstore is None
     assert trainer._kv_initialized is False
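The race that the added `mx.nd.waitall()` barrier closes can be illustrated 
without MXNet: saving parameters enqueues asynchronous engine work, so a load 
issued immediately afterwards may read before the write lands. A hedged 
stand-in using a background thread (the names here are illustrative, not 
MXNet's API):

```python
import threading
import time

def async_save(path, data, done_event):
    """Stand-in for an asynchronous save: the file write completes on a
    background thread, like pending engine ops behind a parameter save."""
    def worker():
        time.sleep(0.05)            # simulate pending engine work
        with open(path, "w") as f:
            f.write(data)
        done_event.set()
    threading.Thread(target=worker).start()
```

Here `done_event.wait()` plays the role of `mx.nd.waitall()`: without the 
barrier, loading right after `async_save` can observe a missing or partial 
file, which is the flakiness #11353 reported.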



[GitHub] szha closed pull request #11360: Fix #11353

2018-06-25 Thread GitBox
szha closed pull request #11360: Fix #11353
URL: https://github.com/apache/incubator-mxnet/pull/11360
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git a/tests/python/unittest/test_gluon_trainer.py 
b/tests/python/unittest/test_gluon_trainer.py
index 1c59ceaa093..eac9fad45f5 100644
--- a/tests/python/unittest/test_gluon_trainer.py
+++ b/tests/python/unittest/test_gluon_trainer.py
@@ -190,6 +190,7 @@ def check_trainer_reset_kv(kv):
 trainer.step(1)
 assert trainer._kvstore.type == kv
 # load would reset kvstore
+mx.nd.waitall()
 params.load('test_trainer_reset_kv.params')
 assert trainer._kvstore is None
 assert trainer._kv_initialized is False


 




[GitHub] mkolod commented on a change in pull request #11325: Added TensorRT runtime integration

2018-06-25 Thread GitBox
mkolod commented on a change in pull request #11325: Added TensorRT runtime 
integration
URL: https://github.com/apache/incubator-mxnet/pull/11325#discussion_r197966899
 
 

 ##
 File path: docs/api/python/contrib/tensorrt.md
 ##
 @@ -0,0 +1,117 @@
+# MxNet-TensorRT Runtime Integration
+## What is this?
+
+This document describes how to use the 
[MxNet](http://mxnet.incubator.apache.org/)-[TensorRT](https://developer.nvidia.com/tensorrt)
 runtime integration to accelerate model inference.
+
+## Why is TensorRT integration useful? 
+
+TensorRT can greatly speed up inference of deep learning models. One 
experiment on a Titan V (V100) GPU shows that with MxNet 1.2, we can get an 
approximately 3x speed-up when running inference of the ResNet-50 model on the 
CIFAR-10 dataset in single precision (fp32). As batch sizes and image sizes go 
up (for CNN inference), the benefit may be less, but in general, TensorRT helps 
especially in cases which have:
+- many bandwidth-bound layers (e.g. pointwise operations) that benefit from 
GPU kernel fusion
+- inference use cases which have tight latency requirements and where the 
client application can't wait for large batches to be queued up
+- embedded systems, where memory constraints are tighter than on servers
+- inference in reduced precision, especially integer (e.g. int8) inference
+
+In the past, the main hindrance for users wishing to benefit from TensorRT 
was that the model first needed to be exported from the framework. Once the 
model was exported through some means (NNVM-to-TensorRT graph rewrite, via 
ONNX, etc.), one then had to write a TensorRT client application to feed the 
data into the TensorRT engine. Since at that point the model was independent 
of the original framework, and since TensorRT could only compute the neural 
network layers while the user had to bring their own data pipeline, this 
increased the burden on the user and hurt reproducibility (e.g. different 
frameworks may have slightly different data pipelines, or differ in the 
flexibility of data pipeline operation ordering). Moreover, since frameworks 
typically support more operators than TensorRT, one might have to resort to 
TensorRT plugins for operations that aren't already available via the 
TensorRT graph API.
+
+The current experimental runtime integration of TensorRT with MxNet resolves 
the above concerns by ensuring that:
+- the graph is still executed by MxNet
+- the MxNet data pipeline is preserved
+- the TensorRT runtime integration logic partitions the graph into subgraphs 
that are either TensorRT compatible or incompatible
+- the graph partitioner collects the TensorRT-compatible subgraphs, hands them 
over to TensorRT, and substitutes the TensorRT compatible subgraph with a 
TensorRT library call, represented as a TensorRT node in NNVM.
+- if a node is not TensorRT compatible, it won't be extracted and substituted 
with a TensorRT call, and will still execute within MxNet
+
+The above points ensure that we find a compromise between the flexibility of 
MxNet, and fast inference in TensorRT, without putting a burden on the user to 
learn how TensorRT APIs work, without the need to write one's own client 
application and data pipeline, etc.
+
+## How do I build MxNet with TensorRT integration?
+
+Building MxNet together with TensorRT is somewhat complex. The recipe will 
hopefully be simplified in the near future, but for now, it's easiest to build 
a Docker container with an Ubuntu 16.04 base. This Dockerfile can be found 
under the ci subdirectory of the MxNet repository. You can build the container 
as follows:
+
+```
+docker build -f ci/docker/Dockerfile.build.ubuntu_gpu_tensorrt -t 
mxnet_with_tensorrt .
+```
+
+Next, we can run this container as follows (don't forget to install 
[nvidia-docker](https://github.com/NVIDIA/nvidia-docker)):
+
+```no-highlight
+nvidia-docker run -ti --rm mxnet_with_tensorrt
+```
+
+After starting the container, you will find yourself in the /opt/mxnet 
directory by default.
+
+## Running a "hello, world" model / unit test:
+
+You can then run the LeNet-5 unit test, which will train LeNet-5 on MNIST, 
run inference both in MxNet and via the MxNet-TensorRT runtime integration, 
and compare the results. The test can be run as follows:
+
+```no-highlight
+python tests/python/tensorrt/test_tensorrt_lenet5.py
+```
+
+You should get a result similar to the following:
+
+```no-highlight
+Running inference in MxNet
+[03:31:18] src/operator/nn/./cudnn/./cudnn_algoreg-inl.h:107: Running 
performance tests to find the best convolution algorithm, this can take a 
while... (setting env variable MXNET_CUDNN_AUTOTUNE_DEFAULT to 0 to disable)
+Running inference in MxNet-TensorRT
+[03:31:18] src/operator/contrib/nnvm_to_onnx.cc:152: ONNX graph construction 
complete.
+Building TensorRT engine, FP16 available:1
+Max batch size: 1024
+Max workspace size: 1024 
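The partitioning behaviour described in the doc excerpt above — collect TensorRT-compatible subgraphs, hand each to TensorRT as one node, and leave incompatible nodes to MxNet — can be sketched on a toy linear op sequence. The op names and the compatibility whitelist below are illustrative, not MXNet's actual operator lists, and a real pass works on a graph rather than a list.

```python
# Toy whitelist of "TensorRT-compatible" ops -- illustrative only.
TRT_COMPATIBLE = {"Convolution", "Activation", "Pooling", "FullyConnected"}

def partition(ops):
    """Group a linear op sequence into maximal runs, tagging each run
    as a TensorRT subgraph or a fall-back-to-MXNet subgraph."""
    segments = []
    for op in ops:
        compatible = op in TRT_COMPATIBLE
        if segments and segments[-1][0] == compatible:
            segments[-1][1].append(op)  # extend the current run
        else:
            segments.append((compatible, [op]))  # start a new run
    # A real pass would now hand each compatible run to TensorRT and
    # splice a single TensorRT node back into the NNVM graph.
    return [("TensorRT" if c else "MXNet", run) for c, run in segments]
```

For example, `partition(["Convolution", "Activation", "CustomOp", "Pooling"])` yields two TensorRT runs separated by an op that stays in MxNet, mirroring the "incompatible nodes still execute within MxNet" rule above.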

[GitHub] mkolod commented on a change in pull request #11325: Added TensorRT runtime integration

2018-06-25 Thread GitBox
mkolod commented on a change in pull request #11325: Added TensorRT runtime 
integration
URL: https://github.com/apache/incubator-mxnet/pull/11325#discussion_r197966796
 
 

 ##
 File path: src/operator/contrib/nnvm_to_onnx-inl.h
 ##
 @@ -0,0 +1,156 @@
+#ifndef MXNET_OPERATOR_CONTRIB_NNVM_TO_ONNX_INL_H_
+#define MXNET_OPERATOR_CONTRIB_NNVM_TO_ONNX_INL_H_
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * Copyright (c) 2018 by Contributors
+ * \file tensorrt-inl.h
+ * \brief TensorRT Operator
+ * \author Marek Kolodziej, Clement Fuji Tsang
+*/
+
+#if MXNET_USE_TENSORRT
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+#include 
+#include 
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+#include "./tensorrt-inl.h"
+#include "../operator_common.h"
+#include "../../common/utils.h"
+#include "../../common/serialization.h"
+
+namespace mxnet {
+namespace op {
+namespace nnvm_to_onnx {
+
+using namespace nnvm;
+using namespace ::onnx;
+using int64 = ::google::protobuf::int64;
+
+std::unordered_map GetPlaceholderShapes(const 
ShapeVector& shape_inputs,
+const nnvm::IndexedGraph& ig);
+
+std::unordered_map GetOutputLookup(const 
nnvm::IndexedGraph& ig);
+
+void ConvertPlaceholder(
+  const std::string& node_name,
+  const std::unordered_map& placeholder_shapes,
+  GraphProto* const graph_proto);
+
+void ConvertConstant(GraphProto* const graph_proto,
+  const std::string& node_name,
+  std::unordered_map* const shared_buffer);
+
+void ConvertOutput(op::tensorrt::InferenceMap_t* const trt_output_map,
+   GraphProto* const graph_proto,
+   const std::unordered_map::iterator& 
out_iter,
+   const std::string& node_name,
+   const nnvm::Graph& g,
+   const StorageTypeVector& storage_types,
+   const DTypeVector& dtypes);
+
+typedef void (*ConverterFunction)(NodeProto *node_proto,
+  const NodeAttrs ,
+  const nnvm::IndexedGraph ,
+  const array_view 
);
+
+
+// Forward declarations
+void ConvertConvolution(
+NodeProto *node_proto,
+const NodeAttrs ,
+const nnvm::IndexedGraph ,
+const array_view );
+
+
+void ConvertPooling(NodeProto *node_proto,
+const NodeAttrs ,
+const nnvm::IndexedGraph ,
+const array_view );
+
+void ConvertActivation(NodeProto *node_proto,
+   const NodeAttrs ,
+   const nnvm::IndexedGraph ,
+   const array_view );
+
+void ConvertFullyConnected(NodeProto *node_proto,
+   const NodeAttrs ,
+   const nnvm::IndexedGraph ,
+   const array_view );
+
+void ConvertSoftmaxOutput(NodeProto *node_proto,
+  const NodeAttrs ,
+  const nnvm::IndexedGraph ,
+  const array_view );
+
+void ConvertFlatten(NodeProto *node_proto,
+const NodeAttrs ,
+const nnvm::IndexedGraph ,
+const array_view );
+
+void ConvertBatchNorm(NodeProto *node_proto,
+const NodeAttrs ,
+const nnvm::IndexedGraph ,
+const array_view );
+
+void ConvertElementwiseAdd(NodeProto *node_proto,
+const NodeAttrs ,
+const nnvm::IndexedGraph ,
+const array_view );
+
+TRTParam ConvertNnvmGraphToOnnx(
+const nnvm::Graph ,
+std::unordered_map *const shared_buffer);
+
+static const std::unordered_map converter_map 
= {
 
 Review comment:
   @eric-haibin-lin Yes, so far. TensorRT supports more operators, so the list 
will be expanded once the initial integration is in place.



[GitHub] mkolod commented on a change in pull request #11325: Added TensorRT runtime integration

2018-06-25 Thread GitBox
mkolod commented on a change in pull request #11325: Added TensorRT runtime 
integration
URL: https://github.com/apache/incubator-mxnet/pull/11325#discussion_r197967101
 
 

 ##
 File path: src/common/serialization.h
 ##
 @@ -0,0 +1,526 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * Copyright (c) 2015 by Contributors
+ * \file serialization.h
+ * \brief Serialization of some STL and nnvm data-structures
+ * \author Clement Fuji Tsang
+ */
+
+#ifndef MXNET_COMMON_SERIALIZATION_H_
+#define MXNET_COMMON_SERIALIZATION_H_
+
+#include 
+#include 
+#include 
+#include 
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+
+namespace mxnet {
+namespace common {
+
+template
+inline size_t serialized_size(const T& obj);
+
+template
+inline size_t serialized_size(const nnvm::Tuple& obj);
+
+template
+inline size_t serialized_size(const std::vector& obj);
+
+template
+inline size_t serialized_size(const std::pair& obj);
+
+template
+inline size_t serialized_size(const std::map& obj);
+
+template
+inline size_t serialized_size(const std::unordered_map& obj);
+
+template
+inline size_t serialized_size(const std::set& obj);
+
+template
+inline size_t serialized_size(const std::unordered_set& obj);
+
+template<>
+inline size_t serialized_size(const std::string& obj);
+
+template
+inline size_t serialized_size(const std::tuple& obj);
+
+template
+inline void serialize(const T& obj, char** buffer);
+
+template
+inline void serialize(const nnvm::Tuple& obj, char** buffer);
+
+template
+inline void serialize(const std::vector& obj, char** buffer);
+
+template
+inline void serialize(const std::pair& obj, char** buffer);
+
+template
+inline void serialize(const std::map& obj, char** buffer);
+
+template
+inline void serialize(const std::unordered_map& obj, char** buffer);
+
+template
+inline void serialize(const std::set& obj, char** buffer);
+
+template
+inline void serialize(const std::unordered_set& obj, char** buffer);
+
+template<>
+inline void serialize(const std::string& obj, char** buffer);
+
+template
+inline void serialize(const std::tuple& obj, char** buffer);
+
+template
+inline void deserialize(T* obj, const std::string& buffer, size_t* curr_pos);
+
+template
+inline void deserialize(nnvm::Tuple* obj, const std::string& buffer, 
size_t* curr_pos);
+
+template
+inline void deserialize(std::vector* obj, const std::string& buffer, 
size_t* curr_pos);
+
+template
+inline void deserialize(std::pair* obj, const std::string& buffer, 
size_t* curr_pos);
+
+template
+inline void deserialize(std::map* obj, const std::string& buffer, 
size_t* curr_pos);
+
+template
+inline void deserialize(std::unordered_map* obj, const std::string& 
buffer, size_t* curr_pos);
+
+template
+inline void deserialize(std::set* obj, const std::string& buffer, size_t* 
curr_pos);
+
+template
+inline void deserialize(std::unordered_set* obj, const std::string& buffer, 
size_t* curr_pos);
+
+template<>
+inline void deserialize(std::string* obj, const std::string& buffer, size_t* 
curr_pos);
+
+template
+inline void deserialize(std::tuple* obj, const std::string& buffer, 
size_t* curr_pos);
+
+
+template
+struct is_cont {
+  static const bool value = !std::is_pod::value;
+};
+
+template
+inline size_t serialized_size(const T& obj) {
+  return sizeof(T);
+}
+
+template
+inline size_t serialized_size(const nnvm::Tuple& obj) {
+  if (is_cont::value) {
+size_t sum_val = 4;
+for (auto& el : obj) {
+  sum_val += serialized_size(el);
+}
+return sum_val;
+  } else {
+return 4 + (obj.ndim() * sizeof(T));
+  }
+}
+
+template
+inline size_t serialized_size(const std::vector& obj) {
+  if (is_cont::value) {
+size_t sum_val = 4;
+for (T i : obj) {
+  sum_val += serialized_size(i);
+}
+return sum_val;
+  } else {
+return sizeof(T) * obj.size() + 4;
+  }
+}
+
+template
+inline size_t serialized_size(const std::pair& obj) {
+  return serialized_size(obj.first) + serialized_size(obj.second);
+}
+
+template
+inline size_t serialized_size(const std::map& obj) {
+  size_t sum_val = 4;
+  if (is_cont::value && is_cont::value) {
+for (auto p : obj) {
+  sum_val += 
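The size-computation scheme declared in the header excerpt above — a fixed `sizeof` for POD types, and a 4-byte length prefix plus per-element sizes for containers, applied recursively — can be mirrored in a small Python sketch. The 4-byte length prefix follows the C++ excerpt; the scalar sizes and type dispatch are illustrative stand-ins, not the actual MXNet implementation.

```python
def serialized_size(obj):
    """Bytes needed for obj under a simple recursive scheme:
    fixed-size scalars, containers as a 4-byte length prefix
    plus the sizes of their elements."""
    if isinstance(obj, bool):
        return 1
    if isinstance(obj, int):
        return 8                      # stand-in for a 64-bit integer
    if isinstance(obj, float):
        return 8                      # C double
    if isinstance(obj, (bytes, str)):
        return 4 + len(obj)           # length prefix + one byte per element
    if isinstance(obj, (list, tuple, set)):
        return 4 + sum(serialized_size(el) for el in obj)
    if isinstance(obj, dict):
        return 4 + sum(serialized_size(k) + serialized_size(v)
                       for k, v in obj.items())
    raise TypeError("unsupported type: %r" % type(obj))
```

As in the C++ version, nested containers simply recurse: a `dict` of `str` to `list` of `float` is 4 bytes for the map size, plus each key's and value's own serialized size.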

[GitHub] mkolod commented on a change in pull request #11325: Added TensorRT runtime integration

2018-06-25 Thread GitBox
mkolod commented on a change in pull request #11325: Added TensorRT runtime 
integration
URL: https://github.com/apache/incubator-mxnet/pull/11325#discussion_r197969124
 
 

 ##
 File path: include/mxnet/executor.h
 ##
 @@ -152,19 +152,19 @@ class Executor {
   static Executor* SimpleBind(nnvm::Symbol symbol,
   const Context& default_ctx,
   const std::map& group2ctx,
-  const std::vector& in_arg_ctxes,
-  const std::vector& arg_grad_ctxes,
-  const std::vector& aux_state_ctxes,
-  const std::unordered_map& 
arg_shape_map,
-  const std::unordered_map& 
arg_dtype_map,
-  const std::unordered_map& 
arg_stype_map,
-  const std::vector& grad_req_types,
-  const std::unordered_set& 
param_names,
+  std::vector* in_arg_ctxes,
+  std::vector* arg_grad_ctxes,
+  std::vector* aux_state_ctxes,
+  std::unordered_map* 
arg_shape_map,
+  std::unordered_map* 
arg_dtype_map,
+  std::unordered_map* 
arg_stype_map,
+  std::vector* grad_req_types,
+  std::unordered_set* param_names,
   std::vector* in_args,
   std::vector* arg_grads,
   std::vector* aux_states,
   std::unordered_map*
-shared_data_arrays = nullptr,
+  shared_data_arrays = nullptr,
 
 Review comment:
   OK




[GitHub] mkolod commented on a change in pull request #11325: Added TensorRT runtime integration

2018-06-25 Thread GitBox
mkolod commented on a change in pull request #11325: Added TensorRT runtime 
integration
URL: https://github.com/apache/incubator-mxnet/pull/11325#discussion_r197972245
 
 

 ##
 File path: src/executor/tensorrt_pass.cc
 ##
 @@ -0,0 +1,583 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * Copyright (c) 2018 by Contributors
+ * \file tensorrt_pass.cc
+ * \brief Replace TRT compatible subgraphs by TRT engines
+ * \author Clement Fuji Tsang
+ */
+
+#if MXNET_USE_TENSORRT
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+#include "./onnx_to_tensorrt.h"
+#include "./exec_pass.h"
+#include "../operator/contrib/nnvm_to_onnx-inl.h"
+
+namespace mxnet {
+namespace exec {
+
+using NodePtr = nnvm::NodePtr;
+
+/*!
+ * \brief Custom graph class, which will contain bi-directional nodes
+ * we need to compute DFS and reverse DFS for graph partitioning
+ */
+class BidirectionalGraph {
+ public:
+  struct Node {
+nnvm::Node* nnvmptr;
+std::vector inputs;
+std::vector outputs;
+  };
+  std::vector nodes;
+  std::unordered_map nnvm2nid;
+  std::vector outputs;
+  static const std::unordered_set unconditionalTRTop;
+
+  explicit BidirectionalGraph(const Graph ) {
+auto& idx = g.indexed_graph();
+auto num_nodes = idx.num_nodes();
+nodes.reserve(num_nodes);
+nnvm2nid.reserve(num_nodes);
+outputs.reserve(idx.outputs().size());
+DFSVisit(g.outputs, [this](const nnvm::NodePtr& n) {
+  BidirectionalGraph::Node new_node;
+  new_node.nnvmptr = n.get();
+  nnvm2nid[n.get()] = static_cast(nodes.size());
+  nodes.emplace_back(std::move(new_node));
+});
+for (const auto& it : nnvm2nid) {
+  nnvm::Node* nnvmnode = it.first;
+  uint32_t nid = it.second;
+  for (auto& n : nnvmnode->inputs) {
+uint32_t input_nid = nnvm2nid[n.node.get()];
+nodes[input_nid].outputs.emplace_back([nid]);
+nodes[nid].inputs.emplace_back([input_nid]);
+  }
+}
+for (auto& e : g.outputs) {
+  uint32_t nid = nnvm2nid[e.node.get()];
+  outputs.emplace_back([nid]);
+}
+  }
+
+  template 
+  void DFS(const std::vector& heads, bool reverse, FVisit fvisit) {
+std::unordered_set visited;
+std::deque stack(heads.begin(), heads.end());
+visited.reserve(heads.size());
+while (!stack.empty()) {
+  Node* vertex = stack.back();
+  stack.pop_back();
+  if (visited.count(vertex) == 0) {
+visited.insert(vertex);
+fvisit(vertex);
+std::vector nexts = reverse ? vertex->inputs : vertex->outputs;
+for (Node* node : nexts) {
+  if (visited.count(node) == 0) {
+stack.emplace_back(node);
+  }
+}
+  }
+}
+  }
+
+  using t_pairset = std::pair, 
std::unordered_set>;
+  using t_pairvec = std::pair, std::vector>;
+  using t_uncomp_map = std::unordered_map>;
+
+  std::unordered_set naive_grow_subgraph(Node* head,
+std::unordered_set* 
set_unused,
+t_uncomp_map* uncomp_map) {
+std::unordered_set subgraph;
+std::unordered_set uncomp_set;
+std::deque stack;
+stack.emplace_back(head);
+while (!stack.empty()) {
+  Node* vertex = stack.back();
+  stack.pop_back();
+  if (set_unused->count(vertex) && !uncomp_set.count(vertex)) {
+set_unused->erase(vertex);
+subgraph.insert(vertex);
+uncomp_set.insert((*uncomp_map)[vertex].begin(), 
(*uncomp_map)[vertex].end());
+for (Node* input : vertex->inputs) {
+  if (set_unused->count(input) && !uncomp_set.count(input)) {
+stack.emplace_back(input);
+  }
+}
+for (Node* output : vertex->outputs) {
+  if (set_unused->count(output) && !uncomp_set.count(output)) {
+stack.emplace_back(output);
+  }
+}
+  }
+}
+return subgraph;
+  }
+
+  std::vector> get_subsets(
+std::unordered_map* const params_map) {
+std::vector> subgraphs;
+std::unordered_set set_nonTRTnodes;
+std::unordered_set set_allnodes(nodes.size());
+std::vector separation_sets;
+for (Node& node : nodes) 
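The iterative `DFS` in the excerpt above replaces recursion with an explicit stack and a visited set, and can walk the bidirectional graph in either direction (inputs for reverse DFS, outputs otherwise). A minimal Python rendering of the same pattern follows; the node structure and graph are illustrative, not MXNet's actual NNVM types.

```python
from collections import deque

class Node:
    def __init__(self, name):
        self.name = name
        self.inputs = []    # predecessor Nodes
        self.outputs = []   # successor Nodes

def dfs(heads, reverse, visit):
    """Iterative DFS from heads; follows inputs when reverse=True,
    outputs otherwise -- mirroring BidirectionalGraph::DFS."""
    visited = set()
    stack = deque(heads)
    while stack:
        vertex = stack.pop()          # LIFO pop, like C++ pop_back
        if vertex in visited:
            continue
        visited.add(vertex)
        visit(vertex)
        nexts = vertex.inputs if reverse else vertex.outputs
        for node in nexts:
            if node not in visited:
                stack.append(node)

# Tiny graph: a -> b -> c and a -> c
a, b, c = Node("a"), Node("b"), Node("c")
a.outputs = [b, c]; b.inputs = [a]; b.outputs = [c]; c.inputs = [a, b]

order = []
dfs([a], reverse=False, visit=lambda n: order.append(n.name))
```

Because nodes are marked visited on pop, a node reachable along several paths (like `c` here) is visited exactly once, which is what lets the partitioner grow each subgraph without revisiting work.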

[GitHub] szha closed pull request #11259: [MXNET-184] Fix flaky test test_operator.test_binary_op due to numerical errors

2018-06-25 Thread GitBox
szha closed pull request #11259: [MXNET-184] Fix flaky test 
test_operator.test_binary_op due to numerical errors
URL: https://github.com/apache/incubator-mxnet/pull/11259
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git a/tests/python/unittest/test_operator.py 
b/tests/python/unittest/test_operator.py
index 67426693436..c1f6ba0dcf9 100644
--- a/tests/python/unittest/test_operator.py
+++ b/tests/python/unittest/test_operator.py
@@ -60,14 +60,14 @@ def check_rnn_consistency(cell1, cell2, T, N, I, H, 
grad_req):
 
 dy = mx.random.uniform(shape=mod1.get_outputs()[0].shape)
 mod1.backward(out_grads=[dy])
-mod2.backward(out_grads=[dy])
+mod2.backward(out_grads=[dy])
 if grad_req != 'null':
 assert_allclose(mod1.get_input_grads()[0].asnumpy(), 
mod2.get_input_grads()[0].asnumpy(), rtol=1e-2, atol=1e-4)
 else:
 assert(mod1.get_input_grads()[0] == None)
 assert(mod2.get_input_grads()[0] == None)
-
-
+
+
 
 @with_seed()
 def test_lstm_sym():
@@ -77,7 +77,7 @@ def test_lstm_sym():
 stack.add(mx.rnn.LSTMCell(H, prefix='l0_'))
 stack.add(mx.rnn.LSTMCell(H, prefix='l1_'))
 stack.add(mx.rnn.LSTMCell(H, prefix='l2_'))
-
+
 check_rnn_consistency(fused, stack, T, N, I, H, 'write')
 check_rnn_consistency(fused, stack, T, N, I, H, 'add')
 check_rnn_consistency(fused, stack, T, N, I, H, 'null')
@@ -120,21 +120,21 @@ def test_gru_sym():
 @with_seed()
 def test_gru_bidirectional():
 T, N, I, H = 5, 20, 800, 800
-
+
 fused = mx.rnn.FusedRNNCell(H, num_layers=2, mode='gru',
 bidirectional=True, get_next_state=True, 
prefix='')
-
+
 stack = mx.rnn.SequentialRNNCell()
 stack.add(mx.rnn.BidirectionalCell(
 mx.rnn.GRUCell(H, prefix='l0_'),
 mx.rnn.GRUCell(H, prefix='r0_'),
-output_prefix='bi_gru_0_'))
-
+output_prefix='bi_gru_0_'))
+
 stack.add(mx.rnn.BidirectionalCell(
 mx.rnn.GRUCell(H, prefix='l1_'),
 mx.rnn.GRUCell(H, prefix='r1_'),
 output_prefix='bi_gru_1_'))
-
+
 check_rnn_consistency(fused, stack, T, N, I, H, 'write')
 check_rnn_consistency(fused, stack, T, N, I, H, 'add')
 check_rnn_consistency(fused, stack, T, N, I, H, 'null')
@@ -1553,6 +1553,7 @@ def gen_broadcast_data_int(idx):
 def gen_binary_data(dummy):
 ndim = np.random.randint(1, 6)
 shape = np.random.randint(1, 6, size=(ndim,))
+#print("gen shape {}".format(shape))
 return [np.random.random(shape), np.random.random(shape)]
 
 
@@ -1565,27 +1566,46 @@ def check_binary_op_forward(symbol, baseline, gen_data, 
rtol=1e-3, atol=1e-5, mx
 sample_num = 200
 for i in range(sample_num):
 d = gen_data(i)
-x = baseline(d[0], d[1])
 y = symbol.bind(default_context(), args={'a': mx.nd.array(d[0]), 'b': 
mx.nd.array(d[1])})
 y.forward(is_train=True)
 y = y.outputs[0].asnumpy()
+x = baseline(d[0], d[1]).astype(y.dtype)
+
+#np.set_printoptions(precision=20)
+
+a = d[0]
+b = d[1]
+#print("a: {} {}".format(a.dtype, a))
+#print("a: {} {}".format(b.dtype, b))
+
+#print("x: {} {}".format(x.dtype, x))
+#print("y: {} {}".format(y.dtype, y))
 if mx_nd_func is not None:
 d0 = mx.nd.array(d[0], dtype=d[0].dtype)
 d1 = mx.nd.array(d[1], dtype=d[1].dtype)
 assert_almost_equal(y, mx_nd_func(d0, d1).asnumpy(), rtol=rtol, 
atol=atol)
 idx = np.abs(x-y) > atol+rtol*np.abs(x)
 if idx.any():
-print('found precision problem')
+import binascii
+np.set_printoptions(precision=20)
+logging.error('found precision problem:')
 d[0] = np.broadcast_to(d[0], x.shape)
 d[1] = np.broadcast_to(d[1], x.shape)
-print('a: {}'.format(d[0][idx]))
-print('b: {}'.format(d[1][idx]))
-import struct
-print('a hex: {}'.format(struct.pack('d', 
d[0][idx]).encode('hex')))
-print('b hex: {}'.format(struct.pack('d', np.broadcast_to(d[1], 
x.shape)[idx]).encode('hex')))
-print('in baseline(a, b): {}'.format(x[idx]))
-print('in symbol(a, b): {}'.format(y[idx]))
-print('diff: {}'.format(np.abs(x-y)[idx] - 
atol-rtol*np.abs(x)[idx]))
+logging.error('input a: {}'.format(d[0][idx]))
+logging.error('input b: {}'.format(d[1][idx]))
+logging.error("output x: {} {}".format(x.dtype, x))
+logging.error("output y: {} {}".format(y.dtype, y))
+def ftohex(xs):
+import struct
+return 

[GitHub] szha closed issue #9853: Flaky test_operator.test_binary_op

2018-06-25 Thread GitBox
szha closed issue #9853: Flaky test_operator.test_binary_op
URL: https://github.com/apache/incubator-mxnet/issues/9853
 
 
   




[incubator-mxnet] branch master updated: Fix flaky test test_operator.test_binary_op due to numerical errors (#11259)

2018-06-25 Thread zhasheng
This is an automated email from the ASF dual-hosted git repository.

zhasheng pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/master by this push:
 new 21ff36b  Fix flaky test test_operator.test_binary_op due to numerical 
errors (#11259)
21ff36b is described below

commit 21ff36b06bf47ff2ac4145ce60ec1fe5dd14ce1d
Author: Pedro Larroy <928489+lar...@users.noreply.github.com>
AuthorDate: Mon Jun 25 16:35:23 2018 -0700

Fix flaky test test_operator.test_binary_op due to numerical errors (#11259)

Use float64 computations as the reference numpy implementation operates in 
double and not float.
f64(f32(f64(.))) % f64(f32(f64(.))) is not the same as f64(.) % f64(.) due 
to limited precision.

fixes #9853
---
 tests/python/unittest/test_operator.py | 50 --
 1 file changed, 36 insertions(+), 14 deletions(-)

diff --git a/tests/python/unittest/test_operator.py 
b/tests/python/unittest/test_operator.py
index fbd3886..287d830 100644
--- a/tests/python/unittest/test_operator.py
+++ b/tests/python/unittest/test_operator.py
@@ -1550,6 +1550,7 @@ def gen_broadcast_data_int(idx):
 def gen_binary_data(dummy):
 ndim = np.random.randint(1, 6)
 shape = np.random.randint(1, 6, size=(ndim,))
+#print("gen shape {}".format(shape))
 return [np.random.random(shape), np.random.random(shape)]
 
 
@@ -1562,27 +1563,46 @@ def check_binary_op_forward(symbol, baseline, gen_data, 
rtol=1e-3, atol=1e-5, mx
 sample_num = 200
 for i in range(sample_num):
 d = gen_data(i)
-x = baseline(d[0], d[1])
 y = symbol.bind(default_context(), args={'a': mx.nd.array(d[0]), 'b': 
mx.nd.array(d[1])})
 y.forward(is_train=True)
 y = y.outputs[0].asnumpy()
+x = baseline(d[0], d[1]).astype(y.dtype)
+
+#np.set_printoptions(precision=20)
+
+a = d[0]
+b = d[1]
+#print("a: {} {}".format(a.dtype, a))
+#print("a: {} {}".format(b.dtype, b))
+
+#print("x: {} {}".format(x.dtype, x))
+#print("y: {} {}".format(y.dtype, y))
 if mx_nd_func is not None:
 d0 = mx.nd.array(d[0], dtype=d[0].dtype)
 d1 = mx.nd.array(d[1], dtype=d[1].dtype)
 assert_almost_equal(y, mx_nd_func(d0, d1).asnumpy(), rtol=rtol, 
atol=atol)
 idx = np.abs(x-y) > atol+rtol*np.abs(x)
 if idx.any():
-print('found precision problem')
+import binascii
+np.set_printoptions(precision=20)
+logging.error('found precision problem:')
 d[0] = np.broadcast_to(d[0], x.shape)
 d[1] = np.broadcast_to(d[1], x.shape)
-print('a: {}'.format(d[0][idx]))
-print('b: {}'.format(d[1][idx]))
-import struct
-print('a hex: {}'.format(struct.pack('d', 
d[0][idx]).encode('hex')))
-print('b hex: {}'.format(struct.pack('d', np.broadcast_to(d[1], 
x.shape)[idx]).encode('hex')))
-print('in baseline(a, b): {}'.format(x[idx]))
-print('in symbol(a, b): {}'.format(y[idx]))
-print('diff: {}'.format(np.abs(x-y)[idx] - 
atol-rtol*np.abs(x)[idx]))
+logging.error('input a: {}'.format(d[0][idx]))
+logging.error('input b: {}'.format(d[1][idx]))
+logging.error("output x: {} {}".format(x.dtype, x))
+logging.error("output y: {} {}".format(y.dtype, y))
+def ftohex(xs):
+import struct
+return list(map(lambda x: binascii.hexlify(struct.pack('d', 
x)), xs.flatten()))
+logging.error('output x in baseline(a, b): {}'.format(x[idx]))
+logging.error('output y in symbol(a, b): {}'.format(y[idx]))
+logging.error('output x in baseline(a,b) hex: 
{}'.format(ftohex(x[idx])))
+logging.error('output y in symbol(a,b) hex: 
{}'.format(ftohex(y[idx])))
+logging.error('input a hex: {}'.format(ftohex(d[0][idx])))
+logging.error('input a hex: {}'.format(ftohex(d[1][idx])))
+
+logging.error('diff: {}'.format(np.abs(x-y)[idx] - 
atol-rtol*np.abs(x)[idx]))
 assert_allclose(y, x, rtol=rtol, atol=atol)
 
 
@@ -1641,10 +1661,13 @@ def test_binary_op():
 check_binary_op_backward(c, lambda g_out, a, b: (g_out / b, - g_out * 
a / (b * b)), gen_binary_data)
 
 def test_bmod(a, b):
-c = a % b
+# Python and numpy operate only in double so to avoid numerical errors 
we have to use
+# doubles as well. This was a flaky test before when using float32. 
seed 1688524483, 1768433044
+#c = a % b
+c = mx.sym.cast(a, dtype='float64') % mx.sym.cast(b, dtype='float64')
 # '%' is sensitive to the precision of the calculation.  Force numpy 
to match mxnet's float32.
-# Issue exposed with seed 1768433044
-

[GitHub] gigasquid commented on a change in pull request #11205: Clojure Contrib Package

2018-06-25 Thread GitBox
gigasquid commented on a change in pull request #11205: Clojure Contrib Package
URL: https://github.com/apache/incubator-mxnet/pull/11205#discussion_r197979462
 
 

 ##
 File path: contrib/clojure-package/README.md
 ##
 @@ -0,0 +1,203 @@
+# Clojure MXNet
+
+A clojure package to the MXNet Deep Learning library
+
+## Introduction
+
+MXNet is a first-class, modern deep learning library that AWS has officially 
picked as its deep learning library of choice. It supports multiple languages 
on a first-class basis and is incubating as an Apache project.
+
+The motivation for creating a Clojure package is to open the deep learning 
library to the Clojure ecosystem and build bridges for future development and 
innovation in the community. It provides all the needed tools, including 
low-level and high-level APIs, dynamic graphs, and support for things like 
GANs and natural language processing.
+
+For high leverage, the Clojure package has been built on the existing Scala 
package using interop. This has allowed rapid development and close parity with 
the Scala functionality. This also leaves the door open to directly developing 
code against the jni-bindings with Clojure in the future in an incremental 
fashion, using the test suites as a refactoring guide.
+
+## Current State and Plans
+
+The Clojure package is nearing the end of its first development milestone, which is to achieve close parity with the Scala package and to potentially be included in the main project for official Clojure language support.
+
+What is needed now is alpha testing on both OSX and Linux to discover any 
bugs, rough edges, and generally harden it before an official PR is opened on 
the main project.
+
+Help with this effort is greatly appreciated and contributors will be 
recognized in the project README.
+
+Testing instructions can be found in the Testing.md
+
+## Getting Started
+
+The following systems are supported:
+
+- OSX cpu
+- Linux cpu
+- Linux gpu
+
+There are two ways of getting going. The first, and easiest, is to use the pre-built jars from Maven. The second is to build from source. In both cases, you will need to load the prereqs and dependencies (like OpenCV).
+
+It's been tested on AWS Deep Learning AMI and OSX High Sierra 10.13.4
+
+
+### Prerequisites
+
+**If you are using the AWS Deep Learning Ubuntu or Linux AMI you should be 
good to go without doing anything on this step.**
+
+
+Follow the instructions from 
https://mxnet.incubator.apache.org/install/osx_setup.html or 
https://mxnet.incubator.apache.org/install/ubuntu_setup.html
+about _Prepare Environment for GPU Installation_
+and _Install MXNet dependencies_
+
+
+#### Cloning the repo and running from source
+
+To use the prebuilt jars, you will need to replace the native version of the 
line in the project dependencies with your configuration.
+
+`[org.apache.mxnet/mxnet-full_2.11-linux-x86_64-gpu "1.2.0"]`
+or
+`[org.apache.mxnet/mxnet-full_2.11-linux-x86_64-cpu "1.2.0"]`
+or
+`[org.apache.mxnet/mxnet-full_2.11-osx-x86_64-cpu "1.2.0"]`
+
+
+```clojure
+
+(ns tutorial.ndarray
+  (:require [org.apache.clojure-mxnet.ndarray :as ndarray]
+[org.apache.clojure-mxnet.context :as context]))
+
+;;Create NDArray
+(def a (ndarray/zeros [100 50])) ;;all zero array of dimension 100 x 50
+(def b (ndarray/ones [256 32 128 1])) ;; all one array of dimension
+(def c (ndarray/array [1 2 3 4 5 6] [2 3])) ;; array with contents of a shape 
2 x 3
+
+;;; There are also ways to convert to a vec or get the shape as an object or 
vec
+(ndarray/->vec c) ;=> [1.0 2.0 3.0 4.0 5.0 6.0]
+```
+
+See the examples/tutorial section for more.
+
+
+The jars from Maven include the needed MXNet native binaries. On startup, the native libraries are extracted from the jar and copied into a temporary location on your path. On termination, they are deleted.
+
+If you want details on the flags (OpenCV version and CUDA version of the jars), they are documented here: https://cwiki.apache.org/confluence/display/MXNET/MXNet-Scala+Release+Process
+
+
+### Build from MXNET Source
+
+Check out the latest sha from the main package:
+
+`git clone --recursive https://github.com/dmlc/mxnet ~/mxnet`
+`cd ~/mxnet`
+
+
+`git checkout tags/1.2.0 -b release-1.2.0`
+
+`git submodule update --init --recursive`
+
+Sometimes it is useful to use this script to do a hard clean:
+https://gist.github.com/nicktoumpelis/11214362
+
+
+Go here to do the base package installation 
https://mxnet.incubator.apache.org/install/index.html
+
+#### Run `make scalapkg` then `make scalainstall`
+
+then replace the correct jar for your architecture in the project.clj, example 
`[ml.dmlc.mxnet/mxnet-full_2.11-osx-x86_64-cpu "1.0.1-SNAPSHOT"]`
+
+#### Test your installation
+
+To test your installation, you should run `lein test`. This will run the test suite (CPU) for the Clojure package.
+
+
+#### Generation of NDArray and Symbol APIs
+
+The bulk of the ndarray and symbol apis are generated via java reflection 

[GitHub] gigasquid commented on a change in pull request #11205: Clojure Contrib Package

2018-06-25 Thread GitBox
gigasquid commented on a change in pull request #11205: Clojure Contrib Package
URL: https://github.com/apache/incubator-mxnet/pull/11205#discussion_r197981330
 
 

 ##
 File path: 
contrib/clojure-package/examples/cnn-text-classification/src/cnn_text_classification/classifier.clj
 ##
 @@ -0,0 +1,112 @@
+;;
+;; Licensed to the Apache Software Foundation (ASF) under one or more
+;; contributor license agreements.  See the NOTICE file distributed with
+;; this work for additional information regarding copyright ownership.
+;; The ASF licenses this file to You under the Apache License, Version 2.0
+;; (the "License"); you may not use this file except in compliance with
+;; the License.  You may obtain a copy of the License at
+;;
+;;http://www.apache.org/licenses/LICENSE-2.0
+;;
+;; Unless required by applicable law or agreed to in writing, software
+;; distributed under the License is distributed on an "AS IS" BASIS,
+;; WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+;; See the License for the specific language governing permissions and
+;; limitations under the License.
+;;
+
+(ns cnn-text-classification.classifier
+  (:require [cnn-text-classification.data-helper :as data-helper]
+[org.apache.clojure-mxnet.eval-metric :as eval-metric]
+[org.apache.clojure-mxnet.io :as mx-io]
+[org.apache.clojure-mxnet.module :as m]
+[org.apache.clojure-mxnet.ndarray :as ndarray]
+[org.apache.clojure-mxnet.optimizer :as optimizer]
+[org.apache.clojure-mxnet.symbol :as sym]
+[org.apache.clojure-mxnet.context :as context])
+  (:gen-class))
+
+(def mr-dataset-path "data/mr-data") ;; the MR polarity dataset path
+(def glove-file-path "data/glove/glove.6B.50d.txt")
+
+(defn shuffle-data [test-num {:keys [data label sentence-count sentence-size embedding-size]}]
+  (println "Shuffling the data and splitting into training and test sets")
+  (println {:sentence-count sentence-count
+:sentence-size sentence-size
+:embedding-size embedding-size})
+  (let [shuffled (shuffle (map (fn [d l] [d l]) data label))
+train-num (- (count shuffled) test-num)
+training (into [] (take train-num shuffled))
+test (into [] (drop train-num shuffled))]
+{:training {:data  (ndarray/array (into [] (flatten (mapv (fn [v] (first v)) training)))
 
 Review comment:
   I think `#()` here is preferable. Sometimes, when I spent too much time 
translating Scala code to Clojure, my brain got a bit fuzzy - will fix   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] toddsundsted opened a new issue #11398: Floating Point Exception after Array Creation

2018-06-25 Thread GitBox
toddsundsted opened a new issue #11398: Floating Point Exception after Array 
Creation
URL: https://github.com/apache/incubator-mxnet/issues/11398
 
 
   ## Description
   Expressions like `nd.random_uniform(shape=[5, 5, -2])` and `nd.random_uniform(shape=[5, 5, 0])` cause the runtime to crash (the former with `std::bad_alloc`, the latter with `Floating point exception: 8`). It's a problem in versions 1.1.0 to 1.1.3 (master).
   
   ## Environment info (Required)
   ```
   --Python Info--
   Version  : 3.6.4
   Compiler : GCC 4.2.1 Compatible Clang 4.0.1 (tags/RELEASE_401/final)
   Build: ('default', 'Jan 16 2018 12:04:33')
   Arch : ('64bit', '')
   Pip Info---
   Version  : 10.0.1
   Directory: /Users/tsundsted/miniconda3/lib/python3.6/site-packages/pip
   --MXNet Info---
   Version  : 1.1.0
   Directory: /Users/tsundsted/miniconda3/lib/python3.6/site-packages/mxnet
   Hashtag not found. Not installed from pre-built package.
   --System Info--
   Platform : Darwin-16.7.0-x86_64-i386-64bit
   system   : Darwin
   node : Todds-MacBook-Pro.local
   release  : 16.7.0
   version  : Darwin Kernel Version 16.7.0: Fri Apr 27 17:59:46 PDT 2018; 
root:xnu-3789.73.13~1/RELEASE_X86_64
   --Hardware Info--
   machine  : x86_64
   processor: i386
   b'machdep.cpu.extfeatures: SYSCALL XD 1GBPAGE EM64T LAHF LZCNT RDTSCP TSCI'
   b'machdep.cpu.leaf7_features: SMEP ERMS RDWRFSGS TSC_THREAD_OFFSET BMI1 AVX2 
BMI2 INVPCID FPU_CSDS'
   b'machdep.cpu.features: FPU VME DE PSE TSC MSR PAE MCE CX8 APIC SEP MTRR PGE 
MCA CMOV PAT PSE36 CLFSH DS ACPI MMX FXSR SSE SSE2 SS HTT TM PBE SSE3 PCLMULQDQ 
DTES64 MON DSCPL VMX SMX EST TM2 SSSE3 FMA CX16 TPR PDCM SSE4.1 SSE4.2 x2APIC 
MOVBE POPCNT AES PCID XSAVE OSXSAVE SEGLIM64 TSCTMR AVX1.0 RDRAND F16C'
   b'machdep.cpu.brand_string: Intel(R) Core(TM) i7-4960HQ CPU @ 2.60GHz'
   --Network Test--
   Setting timeout: 10
   Timing for MXNet: https://github.com/apache/incubator-mxnet, DNS: 0.0176 
sec, LOAD: 0.4752 sec.
   Timing for Gluon Tutorial(en): http://gluon.mxnet.io, DNS: 0.4092 sec, LOAD: 
0.6132 sec.
   Timing for Gluon Tutorial(cn): https://zh.gluon.ai, DNS: 3.4790 sec, LOAD: 
0.7560 sec.
   Timing for FashionMNIST: 
https://apache-mxnet.s3-accelerate.dualstack.amazonaws.com/gluon/dataset/fashion-mnist/train-labels-idx1-ubyte.gz,
 DNS: 0.1225 sec, LOAD: 0.8626 sec.
   Timing for PYPI: https://pypi.python.org/pypi/pip, DNS: 0.0179 sec, LOAD: 
0.4299 sec.
   Timing for Conda: https://repo.continuum.io/pkgs/free/, DNS: 0.0157 sec, 
LOAD: 0.3484 sec.
   ```
   
   I'm using Python.
   
   ## Error Message:
   ```
   $ python
   Python 3.6.4 |Anaconda, Inc.| (default, Jan 16 2018, 12:04:33)
   [GCC 4.2.1 Compatible Clang 4.0.1 (tags/RELEASE_401/final)] on darwin
   Type "help", "copyright", "credits" or "license" for more information.
   >>> import mxnet as mx
   >>> import mxnet.ndarray as nd
   >>> nd.random_uniform(shape=[5, 5, -2])
   Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
      File "/Users/tsundsted/miniconda3/lib/python3.6/site-packages/mxnet/ndarray/ndarray.py", line 182, in __repr__
        return '\n%s\n<%s %s @%s>' % (str(self.asnumpy()),
      File "/Users/tsundsted/miniconda3/lib/python3.6/site-packages/mxnet/ndarray/ndarray.py", line 1793, in asnumpy
    libc++abi.dylib: terminating with uncaught exception of type std::bad_alloc: std::bad_alloc
   Abort trap: 6
   ```
   ```
   $ python
   Python 3.6.4 |Anaconda, Inc.| (default, Jan 16 2018, 12:04:33)
   [GCC 4.2.1 Compatible Clang 4.0.1 (tags/RELEASE_401/final)] on darwin
   Type "help", "copyright", "credits" or "license" for more information.
   >>> import mxnet as mx
   >>> import mxnet.ndarray as nd
   >>> nd.random_uniform(shape=[5, 5, 0])
   Floating point exception: 8
   ```
   ## Minimum reproducible example
   Any shape with a non-positive dimension size: for example, 
`nd.random_uniform(shape=[5, 5, -2])` and `nd.random_uniform(shape=[5, 5, 0])`.
   
   ## Steps to reproduce
   1. run python
   2. `import mxnet as mx`
   3. `import mxnet.ndarray as nd`
   4. `nd.random_uniform(shape=[5, 5, 0])`
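For comparison (numpy, not MXNet): numpy validates the shape up front, returning an empty array for a zero-sized dimension and raising a Python exception for a negative one — roughly the behaviour a fix could mirror:

```python
import numpy as np

# A zero-sized dimension is legal and yields an empty array, not a crash.
x = np.random.uniform(size=(5, 5, 0))
assert x.shape == (5, 5, 0) and x.size == 0

# A negative dimension is rejected with a catchable ValueError.
try:
    np.random.uniform(size=(5, 5, -2))
except ValueError as err:
    print("rejected:", err)
```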
   
   ## What have you tried to solve it?
   Created https://github.com/apache/incubator-mxnet/pull/11397
   




[GitHub] TaoLv commented on issue #11399: [WIP] Add Fused Vanilla RNN and dropout

2018-06-25 Thread GitBox
TaoLv commented on issue #11399: [WIP] Add Fused Vanilla RNN and dropout
URL: https://github.com/apache/incubator-mxnet/pull/11399#issuecomment-400145788
 
 
   Please remove [WIP] from the title and add the JIRA number to it. 
https://issues.apache.org/jira/browse/MXNET-107




[GitHub] azai91 commented on issue #11129: [MXNET-497] fix bugs in MKLDNN operators to handle the kAddTo request

2018-06-25 Thread GitBox
azai91 commented on issue #11129: [MXNET-497] fix bugs in MKLDNN operators to 
handle the kAddTo request
URL: https://github.com/apache/incubator-mxnet/pull/11129#issuecomment-400145869
 
 
   @zheng-da @pengzhao-intel updated PR. please take a look when you have time.




[GitHub] hcho3 opened a new pull request #11396: Fix flaky test test_operator_gpu.test_batchnorm_with_type

2018-06-25 Thread GitBox
hcho3 opened a new pull request #11396: Fix flaky test 
test_operator_gpu.test_batchnorm_with_type
URL: https://github.com/apache/incubator-mxnet/pull/11396
 
 
   ## Description ##
   Addresses #10087. See [#9916 (comment-371736378)](https://github.com/apache/incubator-mxnet/issues/9916#issuecomment-371736378) for a justification for this change.
   
   ## Checklist ##
   ### Essentials ###
   Please feel free to remove inapplicable items for your PR.
   - [ ] The PR title starts with [MXNET-$JIRA_ID], where $JIRA_ID refers to 
the relevant [JIRA issue](https://issues.apache.org/jira/projects/MXNET/issues) 
created (except PRs with tiny changes)
   - [x] Changes are complete (i.e. I finished coding on this PR)
   




[GitHub] ankkhedia edited a comment on issue #10274: test_ndarray.test_reduce fails in v1.0.0

2018-06-25 Thread GitBox
ankkhedia edited a comment on issue #10274: test_ndarray.test_reduce fails in 
v1.0.0
URL: 
https://github.com/apache/incubator-mxnet/issues/10274#issuecomment-400113596
 
 
   Tested on master at d6813efa2206afb5be98c2da16dd6e2efaf44cda using gcc-6 (Ubuntu 6.4.0-17ubuntu1~16.04) 6.4.0 20180424; could not reproduce. It's probably an ARM-specific issue on edge devices (Raspberry Pi).




[GitHub] gigasquid commented on a change in pull request #11205: Clojure Contrib Package

2018-06-25 Thread GitBox
gigasquid commented on a change in pull request #11205: Clojure Contrib Package
URL: https://github.com/apache/incubator-mxnet/pull/11205#discussion_r197979848
 
 

 ##
 File path: contrib/clojure-package/examples/cnn-text-classification/README.md
 ##
 @@ -0,0 +1,38 @@
+# cnn-text-classification
+
+An example of text classification using CNN
+
+To use this example, you must download the MR polarity dataset and put it in the path specified by the mr-dataset-path.
+The dataset can be obtained here: 
[https://github.com/yoonkim/CNN_sentence](https://github.com/yoonkim/CNN_sentence).
 The two files `rt-polarity.neg`
+and `rt-polarity.pos` must be put in a directory. For example, 
`data/mr-data/rt-polarity.neg`.
+
+You also must download the GloVe word embeddings. The suggested one to use is the smaller 50-dimension one.
 
 Review comment:
   It was smaller and could fit into my laptop memory :)




[GitHub] szha commented on issue #11340: [MXNET-559] Scripts for running the Broken link checker job

2018-06-25 Thread GitBox
szha commented on issue #11340: [MXNET-559] Scripts for running the  Broken 
link checker job
URL: https://github.com/apache/incubator-mxnet/pull/11340#issuecomment-400118911
 
 
   @marcoabreu CI passed. Should this be merged?




[GitHub] mkolod commented on a change in pull request #11325: Added TensorRT runtime integration

2018-06-25 Thread GitBox
mkolod commented on a change in pull request #11325: Added TensorRT runtime 
integration
URL: https://github.com/apache/incubator-mxnet/pull/11325#discussion_r197971193
 
 

 ##
 File path: src/executor/tensorrt_pass.cc
 ##
 @@ -0,0 +1,583 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * Copyright (c) 2018 by Contributors
+ * \file tensorrt_pass.cc
+ * \brief Replace TRT compatible subgraphs by TRT engines
+ * \author Clement Fuji Tsang
+ */
+
+#if MXNET_USE_TENSORRT
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+#include "./onnx_to_tensorrt.h"
+#include "./exec_pass.h"
+#include "../operator/contrib/nnvm_to_onnx-inl.h"
+
+namespace mxnet {
+namespace exec {
+
+using NodePtr = nnvm::NodePtr;
+
+/*!
+ * \brief Custom graph class, which will contain bi-directional nodes
+ * we need to compute DFS and reverse DFS for graph partitioning
+ */
+class BidirectionalGraph {
+ public:
+  struct Node {
+    nnvm::Node* nnvmptr;
+    std::vector<Node*> inputs;
+    std::vector<Node*> outputs;
+  };
+  std::vector<Node> nodes;
+  std::unordered_map<nnvm::Node*, uint32_t> nnvm2nid;
+  std::vector<Node*> outputs;
+  static const std::unordered_set<std::string> unconditionalTRTop;
+
+  explicit BidirectionalGraph(const Graph &g) {
+    auto& idx = g.indexed_graph();
+    auto num_nodes = idx.num_nodes();
+    nodes.reserve(num_nodes);
+    nnvm2nid.reserve(num_nodes);
+    outputs.reserve(idx.outputs().size());
+    DFSVisit(g.outputs, [this](const nnvm::NodePtr& n) {
+      BidirectionalGraph::Node new_node;
+      new_node.nnvmptr = n.get();
+      nnvm2nid[n.get()] = static_cast<uint32_t>(nodes.size());
+      nodes.emplace_back(std::move(new_node));
+    });
+    for (const auto& it : nnvm2nid) {
+      nnvm::Node* nnvmnode = it.first;
+      uint32_t nid = it.second;
+      for (auto& n : nnvmnode->inputs) {
+        uint32_t input_nid = nnvm2nid[n.node.get()];
+        nodes[input_nid].outputs.emplace_back(&nodes[nid]);
+        nodes[nid].inputs.emplace_back(&nodes[input_nid]);
+      }
+    }
+    for (auto& e : g.outputs) {
+      uint32_t nid = nnvm2nid[e.node.get()];
+      outputs.emplace_back(&nodes[nid]);
+    }
+  }
+
+  template <typename FVisit>
+  void DFS(const std::vector<Node*>& heads, bool reverse, FVisit fvisit) {
+    std::unordered_set<Node*> visited;
+    std::deque<Node*> stack(heads.begin(), heads.end());
+    visited.reserve(heads.size());
+    while (!stack.empty()) {
+      Node* vertex = stack.back();
+      stack.pop_back();
+      if (visited.count(vertex) == 0) {
+        visited.insert(vertex);
+        fvisit(vertex);
+        std::vector<Node*> nexts = reverse ? vertex->inputs : vertex->outputs;
+        for (Node* node : nexts) {
+          if (visited.count(node) == 0) {
+            stack.emplace_back(node);
+          }
+        }
+      }
+    }
+  }
+
+  using t_pairset = std::pair<std::unordered_set<Node*>, std::unordered_set<Node*>>;
+  using t_pairvec = std::pair<std::vector<Node*>, std::vector<Node*>>;
+  using t_uncomp_map = std::unordered_map<Node*, std::unordered_set<Node*>>;
+
+  std::unordered_set<Node*> naive_grow_subgraph(Node* head,
+                                                std::unordered_set<Node*>* set_unused,
+                                                t_uncomp_map* uncomp_map) {
+    std::unordered_set<Node*> subgraph;
+    std::unordered_set<Node*> uncomp_set;
+    std::deque<Node*> stack;
+    stack.emplace_back(head);
+    while (!stack.empty()) {
+      Node* vertex = stack.back();
+      stack.pop_back();
+      if (set_unused->count(vertex) && !uncomp_set.count(vertex)) {
+        set_unused->erase(vertex);
+        subgraph.insert(vertex);
+        uncomp_set.insert((*uncomp_map)[vertex].begin(), (*uncomp_map)[vertex].end());
+        for (Node* input : vertex->inputs) {
+          if (set_unused->count(input) && !uncomp_set.count(input)) {
+            stack.emplace_back(input);
+          }
+        }
+        for (Node* output : vertex->outputs) {
+          if (set_unused->count(output) && !uncomp_set.count(output)) {
+            stack.emplace_back(output);
+          }
+        }
+      }
+    }
+    return subgraph;
+  }
+
+  std::vector<std::unordered_set<Node*>> get_subsets(
+    std::unordered_map<std::string, NDArray>* const params_map) {
+    std::vector<std::unordered_set<Node*>> subgraphs;
+    std::unordered_set<Node*> set_nonTRTnodes;
+    std::unordered_set<Node*> set_allnodes(nodes.size());
+    std::vector<t_pairset> separation_sets;
+    for (Node& node : nodes) 

[GitHub] mkolod commented on a change in pull request #11325: Added TensorRT runtime integration

2018-06-25 Thread GitBox
mkolod commented on a change in pull request #11325: Added TensorRT runtime 
integration
URL: https://github.com/apache/incubator-mxnet/pull/11325#discussion_r197972947
 
 

 ##
 File path: Makefile
 ##
 @@ -94,6 +94,14 @@ else
 endif
 CFLAGS += -I$(TPARTYDIR)/mshadow/ -I$(TPARTYDIR)/dmlc-core/include -fPIC 
-I$(NNVM_PATH)/include -I$(DLPACK_PATH)/include -I$(TPARTYDIR)/tvm/include 
-Iinclude $(MSHADOW_CFLAGS)
 LDFLAGS = -pthread $(MSHADOW_LDFLAGS) $(DMLC_LDFLAGS)
+
+
+ifeq ($(USE_TENSORRT), 1)
 
 Review comment:
   @KellenSunderland I agree. Should the CMake build be part of the initial PR 
or a subsequent one?




[GitHub] piiswrong closed pull request #10931: [MXNET-349] Histogram Operator

2018-06-25 Thread GitBox
piiswrong closed pull request #10931: [MXNET-349] Histogram Operator
URL: https://github.com/apache/incubator-mxnet/pull/10931
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git a/python/mxnet/ndarray/ndarray.py b/python/mxnet/ndarray/ndarray.py
index f017d7e65e7..002ce3ebbc2 100644
--- a/python/mxnet/ndarray/ndarray.py
+++ b/python/mxnet/ndarray/ndarray.py
@@ -46,7 +46,7 @@
            "ones", "add", "arange", "eye", "divide", "equal", "full", "greater", "greater_equal",
            "imdecode", "lesser", "lesser_equal", "logical_and", "logical_or", "logical_xor",
            "maximum", "minimum", "moveaxis", "modulo", "multiply", "not_equal", "onehot_encode",
-           "power", "subtract", "true_divide", "waitall", "_new_empty_handle"]
+           "power", "subtract", "true_divide", "waitall", "_new_empty_handle", "histogram"]
 
 _STORAGE_TYPE_UNDEFINED = -1
 _STORAGE_TYPE_DEFAULT = 0
@@ -3740,3 +3740,36 @@ def empty(shape, ctx=None, dtype=None):
 if dtype is None:
 dtype = mx_real_t
 return NDArray(handle=_new_alloc_handle(shape, ctx, False, dtype))
+
+
+# pylint: disable= redefined-builtin
+def histogram(a, bins=10, range=None):
+    """Compute the histogram of the input data.
+
+    Parameters
+    ----------
+    a : NDArray
+        Input data. The histogram is computed over the flattened array.
+    bins : int or sequence of scalars
+        If bins is an int, it defines the number of equal-width bins in the
+        given range (10, by default). If bins is a sequence, it defines the bin edges,
+        including the rightmost edge, allowing for non-uniform bin widths.
+    range : (float, float), optional
+        The lower and upper range of the bins. If not provided, range is simply (a.min(), a.max()).
+        Values outside the range are ignored. The first element of the range must be less than or
+        equal to the second. range affects the automatic bin computation as well, the range will
+        be equally divided by the number of bins.
+    """
+
+    # pylint: disable= no-member, protected-access
+    if isinstance(bins, NDArray):
+        return _internal._histogram(data=a, bins=bins)
+    elif isinstance(bins, integer_types):
+        if range is None:
+            warnings.warn("range is not specified, using numpy's result "
+                          "to ensure consistency with numpy")
+            res, bin_bounds = np.histogram(a.asnumpy(), bins=bins)
+            return array(res), array(bin_bounds)
+        return _internal._histogram(data=a, bin_cnt=bins, range=range)
+    raise ValueError("bins argument should be either an integer or an NDArray")
+    # pylint: enable= no-member, protected-access, redefined-builtin
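Since the wrapper above defers to `np.histogram` when `bins` is an int and no `range` is given, its semantics follow numpy's; a quick illustration with an explicit `range` (sample values are illustrative):

```python
import numpy as np

data = np.array([0.5, 1.5, 1.7, 2.5, 9.9])
counts, edges = np.histogram(data, bins=5, range=(0.0, 10.0))
# Five equal-width bins over [0, 10]; only the rightmost edge is inclusive.
print(edges.tolist())   # [0.0, 2.0, 4.0, 6.0, 8.0, 10.0]
print(counts.tolist())  # [3, 1, 0, 0, 1]
```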
diff --git a/python/mxnet/symbol/symbol.py b/python/mxnet/symbol/symbol.py
index 7e5b52770fe..c5e2f5cb77d 100644
--- a/python/mxnet/symbol/symbol.py
+++ b/python/mxnet/symbol/symbol.py
@@ -34,7 +34,7 @@
 
 from ..attribute import AttrScope
 from ..base import _LIB, numeric_types, c_array, c_array_buf, c_str, c_str_array, c_handle_array
-from ..base import mx_uint, py_str, string_types
+from ..base import mx_uint, py_str, string_types, integer_types
 from ..base import NDArrayHandle, ExecutorHandle, SymbolHandle
 from ..base import check_call, MXNetError, NotImplementedForSymbol
 from ..context import Context, current_context
@@ -47,7 +47,8 @@
 from ._internal import SymbolBase, _set_symbol_class
 
 __all__ = ["Symbol", "var", "Variable", "Group", "load", "load_json",
-           "pow", "maximum", "minimum", "hypot", "eye", "zeros", "ones", "full", "arange"]
+           "pow", "maximum", "minimum", "hypot", "eye", "zeros", "ones", "full", "arange",
+           "histogram"]
 
 
 class Symbol(SymbolBase):
@@ -2864,4 +2865,29 @@ def arange(start, stop=None, step=1.0, repeat=1, name=None, dtype=None):
     return _internal._arange(start=start, stop=stop, step=step, repeat=repeat,
                              name=name, dtype=dtype)
 
+def histogram(a, bins=10, range=None, **kwargs):
+    """Compute the histogram of the input data.
+
+    Parameters
+    ----------
+    a : NDArray
+        Input data. The histogram is computed over the flattened array.
+    bins : int or sequence of scalars
+        If bins is an int, it defines the number of equal-width bins in the
+        given range (10, by default). If bins is a sequence, it defines the bin edges,
+        including the rightmost edge, allowing for non-uniform bin widths.
+    range : (float, float), required if bins is an integer
+        The lower and upper range of the bins. If not provided, range is simply (a.min(), a.max()).
+        Values outside the range are ignored. The first element of the range must be less than or
+        equal to the 

[GitHub] spidyDev commented on issue #9857: C++ test Core dump DROPOUT_PERF.TimingGPU

2018-06-25 Thread GitBox
spidyDev commented on issue #9857: C++ test Core dump DROPOUT_PERF.TimingGPU
URL: 
https://github.com/apache/incubator-mxnet/issues/9857#issuecomment-400130297
 
 
   @marcoabreu I ran this test ~1000 times, couldn't replicate the failure. Can we close this issue?




[GitHub] gigasquid commented on a change in pull request #11205: Clojure Contrib Package

2018-06-25 Thread GitBox
gigasquid commented on a change in pull request #11205: Clojure Contrib Package
URL: https://github.com/apache/incubator-mxnet/pull/11205#discussion_r197980344
 
 

 ##
 File path: contrib/clojure-package/examples/cnn-text-classification/README.md
 ##
 @@ -0,0 +1,38 @@
+# cnn-text-classification
+
+An example of text classification using CNN
+
+To use this example, you must download the MR polarity dataset and put it in the path specified by the mr-dataset-path.
+The dataset can be obtained here: 
[https://github.com/yoonkim/CNN_sentence](https://github.com/yoonkim/CNN_sentence).
 The two files `rt-polarity.neg`
+and `rt-polarity.pos` must be put in a directory. For example, 
`data/mr-data/rt-polarity.neg`.
+
+You also must download the GloVe word embeddings. The suggested one to use is the smaller 50-dimension one.
 
 Review comment:
   Word2vec is available in the demo as well - but I haven't been able to test 
that yet. I can put that on the Needs Help page 
https://cwiki.apache.org/confluence/display/MXNET/Clojure+Package+Contribution+Needs




[GitHub] lihaofd opened a new pull request #11399: [WIP] Add Fused Vanilla RNN and dropout

2018-06-25 Thread GitBox
lihaofd opened a new pull request #11399: [WIP] Add Fused Vanilla RNN and 
dropout
URL: https://github.com/apache/incubator-mxnet/pull/11399
 
 
   ## Description ##
   In this PR, we create a fused vanilla RNN (tanh/relu) operator and add dropout support for GRU/LSTM/vRNN on CPU.
   @pengzhao-intel, @TaoLv 
   
   ## Feature changes ##
   ### New features ###
   - Single-layer/multi-layer and unidirectional/bidirectional vanilla RNN (tanh/relu), including both forward and backward computation.
   - Support dropout of GRU/LSTM/vRNN
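For reference, one vanilla-RNN step computes h_t = act(x_t @ W_x + h_{t-1} @ W_h + b) with act = tanh or relu; a minimal numpy sketch (names and shapes are illustrative, not MXNet's internal layout):

```python
import numpy as np

def rnn_step(x, h_prev, w_x, w_h, b, act=np.tanh):
    """One vanilla-RNN time step: new hidden state from input and previous state."""
    return act(x @ w_x + h_prev @ w_h + b)

rng = np.random.RandomState(0)
batch, input_size, hidden_size = 20, 800, 800  # DS2-like sizes from this PR
x = rng.randn(batch, input_size)
h = np.zeros((batch, hidden_size))
w_x = rng.randn(input_size, hidden_size) * 0.01
w_h = rng.randn(hidden_size, hidden_size) * 0.01
b = np.zeros(hidden_size)
h = rnn_step(x, h, w_x, w_h, b)
assert h.shape == (batch, hidden_size)
```

The fused implementation computes all time steps and layers in one operator call, which is where the speedups in the tables below come from.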
   
   ### Unit-test changes ###
   - Create new testcase in tests/python/unittests/test_operator.py.
   - Update testcase in example/rnn/bucketing/cudnn_rnn_bucketing.py.
   - Check consistency with original RNNCell implementation.
   
   ### Performance ###
   We have tested the performance of FusedRNN and the non-fused RNNCell on our local Skylake-8180 with 2 sockets and 56 cores, using MKL as the BLAS library in this performance test.
   Test input size is from the DS2 default parameters (seq_length = 300, batch_size = 20, input_size = 800, hidden_size = 800).
   
   Layer=1 bidirectional = False
   
   | API | Inference time (fwd, samples/sec) | Training time (fwd + bwd, samples/sec) |
   | --- | :-: | :-: |
   | rnn.RNNCell - NoFusedRNN(Tanh, CPU) | 492.61 | 198.02 |
   | this PR - FusedRNN(Tanh, CPU) | 952.38 | 318.98 |
   | speedup | 1.93x | 1.61x |
   
   
   | API | Inference time (fwd, samples/sec) | Training time (fwd + bwd, samples/sec) |
   | --- | :-: | :-: |
   | rnn.RNNCell - NoFusedRNN(Relu, CPU) | 277.78 | 104.17 |
   | this PR - FusedRNN(Relu, CPU) | 740.74 | 177 |
   | speedup | 2.67x | 1.7x |
   
   Layer=5 bidirectional = True
   
   | API | Inference time (fwd, samples/sec) | Training time (fwd + bwd, samples/sec) |
   | --- | :-: | :-: |
   | rnn.RNNCell - NoFusedRNN(Tanh, CPU) | 38.91 | 22.73 |
   | rnn.RNNCell (Tanh, cuda) | 47.85 | 26.95 |
   | rnn.RNNCell (Tanh, cudnn) | 208.33 | 81.63 |
   | this PR - FusedRNN(Tanh, CPU) | 104.17 | 34.01 |
   | speedup - this PR/RNNCell (Tanh, CPU) | 267.7% | 149.7% |
   | speedup - this PR/RNNCell (Tanh, cuda) | 217.7% | 126.2% |
   | speedup - this PR/RNNCell (Tanh, cudnn) | 50% | 41.7% |
   
   
   | API | Inference time(fwd, samples/sec)  |  Training time(fwd + bwd, 
samples/sec)   |
   |    | :-:  | :: 
  |
   | rnn.RNNCell - NoFusedRNN(Relu, CPU) | 40.73   |  22.6  
  |
   | rnn.RNNCell (Relu, cuda)  | 52.91   |  26.81   
 |
   | rnn.RNNCell (Relu, cudnn)  | 206.83   |  82.64 
 |
   | this PR - FusedRNN(Relu, CPU)  | 134.23  | 
 35.97 |
   | speedup -this PR/RNNCell (Relu, CPU)  | 329.5% 
 |  159.2%| 
   | speedup -this PR/RNNCell  (Relu, cuda)| 253.7% 
 |  134.2%| 
   | speedup -this PR/RNNCell  (Relu, cudnn)| 64.9% 
 |  43.5%   | 
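   A note on reading the tables: the percentage speedups are simply ratios of the samples/sec throughputs. A quick sketch, using the Layer=5 Tanh CPU inference numbers from above:

   ```python
   # Speedup figures in the tables are throughput ratios, reported either
   # as a multiplier ("1.93x") or a percentage ("267.7%").
   fused = 104.17      # this PR - FusedRNN (Tanh, CPU), inference samples/sec
   nonfused = 38.91    # rnn.RNNCell - NoFusedRNN (Tanh, CPU), inference samples/sec

   speedup_pct = fused / nonfused * 100
   print(round(speedup_pct, 1))  # 267.7, matching the table
   ```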
   
   ### Convergence Curve ###
   We tested the convergence of FusedGRU/LSTM (dropout = 0.5) on CPU (Skylake-8180, 2 sockets, 56 cores) and GPU (P100) using example/rnn/bucketing/cudnn_rnn_bucketing.py.
   Test configuration: layer = 3, batch_size = 32, num-embed = 800, num-hidden = 800, num-epochs = 20.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:


[GitHub] larroy edited a comment on issue #11205: Clojure Contrib Package

2018-06-25 Thread GitBox
larroy edited a comment on issue #11205: Clojure Contrib Package
URL: https://github.com/apache/incubator-mxnet/pull/11205#issuecomment-400152945
 
 
   Hi @gigasquid, CI runs stuff in Docker containers.
   
   The steps to add your code to CI would be the following:
   
   1. Add your clojure dependencies to `ci/docker/install/ubuntu_clojure.sh` (check how it's done with scala)
   
   2. Add one function to `ci/docker/runtime_functions.sh` that builds your package
   
   and then add a stage to the Jenkinsfile. See for example this PR, which adds an additional Android stage:
   
   https://github.com/apache/incubator-mxnet/pull/11382/files#diff-58231b16fdee45a03a4ee3cf94a9f2c3L486
   
   To test locally, run:
   
   `ci/build.py --platform ubuntu_cpu --shm-size 500m /work/runtime_functions.sh unittest_ubuntu_cpu_scala`
   
   but with the function for clojure that was just created.
   
   `ci/build.py --platform ubuntu_cpu -i` will put you inside the container, so you can check what steps you need to take and add them to the runtime_functions.sh script.
   
   Let me know if you have any issues.
   
   
   PD: to install docker on an ubuntu machine you can do the following:
   ```
   function install_docker_ubuntu() {
       #apt-get -y install docker docker.io
       export DEBIAN_FRONTEND=noninteractive
       curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
       add-apt-repository \
          "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
          $(lsb_release -cs) \
          stable"
       apt-get update
       apt-get -y install docker-ce

       # Nvidia docker
       wget -P /tmp https://github.com/NVIDIA/nvidia-docker/releases/download/v1.0.1/nvidia-docker_1.0.1-1_amd64.deb
       dpkg -i /tmp/nvidia-docker*.deb && rm /tmp/nvidia-docker*.deb

       # Restart docker
       service docker restart

       # Add ubuntu to docker group
       usermod -a -G docker ubuntu
   }
   ```
   




[GitHub] mkolod commented on a change in pull request #11325: Added TensorRT runtime integration

2018-06-25 Thread GitBox
mkolod commented on a change in pull request #11325: Added TensorRT runtime 
integration
URL: https://github.com/apache/incubator-mxnet/pull/11325#discussion_r197966380
 
 

 ##
 File path: python/mxnet/cuda_utils.py
 ##
 @@ -0,0 +1,90 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+# Copyright (c) 2015 by Contributors
+# File: serialization.h
+# Purpose: Functions to query GPU count, arch, etc.
+# Author: Dick Carter
+
+"""Provides information on the visible CUDA GPUs on the system."""
+# pylint: disable=broad-except
+# As a stand-alone program, it prints a list of unique cuda SM architectures
+import ctypes as C
+from ctypes.util import find_library
+
+def cint(init_val=0):
 
 Review comment:
   @eric-haibin-lin Good point, the Ctypes utils could just be moved to base, 
and then reused in cuda_utils.
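   For readers unfamiliar with the pattern under discussion: the diff excerpt above cuts off at `cint`, but ctypes out-parameter helpers of this kind generally look like the following. This is a sketch of the general idiom, not the PR's actual code; `cuint` and the `cuDeviceGetCount` call in the comment are illustrative assumptions.

   ```python
   import ctypes

   def cint(init_val=0):
       # Hypothetical helper: wrap an int as a mutable C int,
       # usable as an out-parameter for an FFI call.
       return ctypes.c_int(init_val)

   def cuint(init_val=0):
       # Same idea for unsigned int.
       return ctypes.c_uint(init_val)

   # A driver API would fill the wrapper via ctypes.byref, e.g.
   #   lib.cuDeviceGetCount(ctypes.byref(count))
   count = cint()
   print(count.value)  # 0 until a C call writes into it
   ```

   Moving such helpers into `base` (as suggested) avoids duplicating them in every module that talks to a C library.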




[GitHub] mkolod commented on a change in pull request #11325: Added TensorRT runtime integration

2018-06-25 Thread GitBox
mkolod commented on a change in pull request #11325: Added TensorRT runtime 
integration
URL: https://github.com/apache/incubator-mxnet/pull/11325#discussion_r197966636
 
 

 ##
 File path: src/common/serialization.h
 ##
 @@ -0,0 +1,526 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * Copyright (c) 2015 by Contributors
+ * \file serialization.h
+ * \brief Serialization of some STL and nnvm data-structures
+ * \author Clement Fuji Tsang
+ */
+
+#ifndef MXNET_COMMON_SERIALIZATION_H_
+#define MXNET_COMMON_SERIALIZATION_H_
+
+#include 
+#include 
+#include 
+#include 
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+
+namespace mxnet {
+namespace common {
+
+template
+inline size_t serialized_size(const T& obj);
 
 Review comment:
   @eric-haibin-lin It would make sense to increase test coverage for this 
independently. Will add it to the to-do list for polishing up the PR.




[GitHub] mkolod commented on a change in pull request #11325: Added TensorRT runtime integration

2018-06-25 Thread GitBox
mkolod commented on a change in pull request #11325: Added TensorRT runtime 
integration
URL: https://github.com/apache/incubator-mxnet/pull/11325#discussion_r197969467
 
 

 ##
 File path: include/mxnet/executor.h
 ##
 @@ -152,19 +152,19 @@ class Executor {
   static Executor* SimpleBind(nnvm::Symbol symbol,
                               const Context& default_ctx,
                               const std::map& group2ctx,
-                              const std::vector& in_arg_ctxes,
-                              const std::vector& arg_grad_ctxes,
-                              const std::vector& aux_state_ctxes,
-                              const std::unordered_map& arg_shape_map,
-                              const std::unordered_map& arg_dtype_map,
-                              const std::unordered_map& arg_stype_map,
-                              const std::vector& grad_req_types,
-                              const std::unordered_set& param_names,
+                              std::vector* in_arg_ctxes,
 
 Review comment:
   @reminisce  Because if things are to be mutated, they need to be pointers, 
not non-const references (per the linter rules). Given your earlier comments 
about SimpleBindEx rather than modifying SimpleBind, this will be addressed 
there rather than modifying it here.




[incubator-mxnet] branch master updated: [MXNET-349] Histogram Operator (#10931)

2018-06-25 Thread jxie
This is an automated email from the ASF dual-hosted git repository.

jxie pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/master by this push:
 new ed7e360  [MXNET-349] Histogram Operator (#10931)
ed7e360 is described below

commit ed7e3602a8046646582c0c681b70d9556f5fa0a4
Author: Hao Jin 
AuthorDate: Mon Jun 25 16:45:32 2018 -0700

[MXNET-349] Histogram Operator (#10931)

* implementation of histogram operator

* address code reviews and code re-design

* add exception for invalid inputs

* address code reviews

* add symbol and symbolic forward check for histogram
---
 python/mxnet/ndarray/ndarray.py  |  35 +-
 python/mxnet/symbol/symbol.py|  30 -
 src/common/cuda_utils.h  |  30 +
 src/operator/tensor/histogram-inl.h  | 172 +++
 src/operator/tensor/histogram.cc | 159 +
 src/operator/tensor/histogram.cu | 111 +
 src/operator/tensor/util/tensor_util-inl.cuh |   4 +-
 tests/python/unittest/test_operator.py   |  34 ++
 8 files changed, 571 insertions(+), 4 deletions(-)

diff --git a/python/mxnet/ndarray/ndarray.py b/python/mxnet/ndarray/ndarray.py
index f017d7e..002ce3e 100644
--- a/python/mxnet/ndarray/ndarray.py
+++ b/python/mxnet/ndarray/ndarray.py
@@ -46,7 +46,7 @@ __all__ = ["NDArray", "concatenate", "_DTYPE_NP_TO_MX", "_DTYPE_MX_TO_NP", "_GRA
            "ones", "add", "arange", "eye", "divide", "equal", "full", "greater", "greater_equal",
            "imdecode", "lesser", "lesser_equal", "logical_and", "logical_or", "logical_xor",
            "maximum", "minimum", "moveaxis", "modulo", "multiply", "not_equal", "onehot_encode",
-           "power", "subtract", "true_divide", "waitall", "_new_empty_handle"]
+           "power", "subtract", "true_divide", "waitall", "_new_empty_handle", "histogram"]
 
 _STORAGE_TYPE_UNDEFINED = -1
 _STORAGE_TYPE_DEFAULT = 0
@@ -3740,3 +3740,36 @@ def empty(shape, ctx=None, dtype=None):
     if dtype is None:
         dtype = mx_real_t
     return NDArray(handle=_new_alloc_handle(shape, ctx, False, dtype))
+
+
+# pylint: disable= redefined-builtin
+def histogram(a, bins=10, range=None):
+    """Compute the histogram of the input data.
+
+    Parameters
+    ----------
+    a : NDArray
+        Input data. The histogram is computed over the flattened array.
+    bins : int or sequence of scalars
+        If bins is an int, it defines the number of equal-width bins in the
+        given range (10, by default). If bins is a sequence, it defines the bin edges,
+        including the rightmost edge, allowing for non-uniform bin widths.
+    range : (float, float), optional
+        The lower and upper range of the bins. If not provided, range is simply (a.min(), a.max()).
+        Values outside the range are ignored. The first element of the range must be less than or
+        equal to the second. range affects the automatic bin computation as well, the range will
+        be equally divided by the number of bins.
+    """
+
+    # pylint: disable= no-member, protected-access
+    if isinstance(bins, NDArray):
+        return _internal._histogram(data=a, bins=bins)
+    elif isinstance(bins, integer_types):
+        if range is None:
+            warnings.warn("range is not specified, using numpy's result "
+                          "to ensure consistency with numpy")
+            res, bin_bounds = np.histogram(a.asnumpy(), bins=bins)
+            return array(res), array(bin_bounds)
+        return _internal._histogram(data=a, bin_cnt=bins, range=range)
+    raise ValueError("bins argument should be either an integer or an NDArray")
+# pylint: enable= no-member, protected-access, redefined-builtin
diff --git a/python/mxnet/symbol/symbol.py b/python/mxnet/symbol/symbol.py
index 7e5b527..c5e2f5c 100644
--- a/python/mxnet/symbol/symbol.py
+++ b/python/mxnet/symbol/symbol.py
@@ -34,7 +34,7 @@ import numpy as _numpy
 
 from ..attribute import AttrScope
 from ..base import _LIB, numeric_types, c_array, c_array_buf, c_str, c_str_array, c_handle_array
-from ..base import mx_uint, py_str, string_types
+from ..base import mx_uint, py_str, string_types, integer_types
 from ..base import NDArrayHandle, ExecutorHandle, SymbolHandle
 from ..base import check_call, MXNetError, NotImplementedForSymbol
 from ..context import Context, current_context
@@ -47,7 +47,8 @@ from . import op
 from ._internal import SymbolBase, _set_symbol_class
 
 __all__ = ["Symbol", "var", "Variable", "Group", "load", "load_json",
-           "pow", "maximum", "minimum", "hypot", "eye", "zeros", "ones", "full", "arange"]
+           "pow", "maximum", "minimum", "hypot", "eye", "zeros", "ones", "full", "arange",
+           "histogram"]
 
 
 class Symbol(SymbolBase):
@@ -2864,4 +2865,29 
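As the docstring above notes, `histogram` is kept consistent with `numpy.histogram`. The equal-width binning it describes can be sketched in plain Python (illustrative only — the operator's real kernel lives in the C++/CUDA files listed in the diffstat, and `equal_width_histogram` is a name invented here):

```python
def equal_width_histogram(values, bins=10, value_range=None):
    """Sketch of numpy-style equal-width binning.

    Values outside `value_range` are ignored, and the rightmost
    bin edge is inclusive, matching the docstring above.
    """
    lo, hi = value_range if value_range is not None else (min(values), max(values))
    width = (hi - lo) / bins
    counts = [0] * bins
    for v in values:
        if v < lo or v > hi:
            continue  # values outside the range are ignored
        idx = min(int((v - lo) / width), bins - 1)  # clamp so hi falls in the last bin
        counts[idx] += 1
    edges = [lo + i * width for i in range(bins + 1)]
    return counts, edges

counts, edges = equal_width_histogram([0.0, 0.5, 1.0, 2.5, 3.0], bins=3, value_range=(0.0, 3.0))
print(counts)  # [2, 1, 2]
print(edges)   # [0.0, 1.0, 2.0, 3.0]
```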

[GitHub] azai91 edited a comment on issue #11371: [MXNET-486] Create CPP test for concat MKLDNN operator

2018-06-25 Thread GitBox
azai91 edited a comment on issue #11371: [MXNET-486] Create CPP test for concat 
MKLDNN operator
URL: https://github.com/apache/incubator-mxnet/pull/11371#issuecomment-400086783
 
 
   @zheng-da @pengzhao-intel please review when you have time.




[GitHub] gigasquid commented on a change in pull request #11205: Clojure Contrib Package

2018-06-25 Thread GitBox
gigasquid commented on a change in pull request #11205: Clojure Contrib Package
URL: https://github.com/apache/incubator-mxnet/pull/11205#discussion_r197981747
 
 

 ##
 File path: 
contrib/clojure-package/examples/cnn-text-classification/src/cnn_text_classification/classifier.clj
 ##
 @@ -0,0 +1,112 @@
+;;
+;; Licensed to the Apache Software Foundation (ASF) under one or more
+;; contributor license agreements.  See the NOTICE file distributed with
+;; this work for additional information regarding copyright ownership.
+;; The ASF licenses this file to You under the Apache License, Version 2.0
+;; (the "License"); you may not use this file except in compliance with
+;; the License.  You may obtain a copy of the License at
+;;
+;;http://www.apache.org/licenses/LICENSE-2.0
+;;
+;; Unless required by applicable law or agreed to in writing, software
+;; distributed under the License is distributed on an "AS IS" BASIS,
+;; WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+;; See the License for the specific language governing permissions and
+;; limitations under the License.
+;;
+
+(ns cnn-text-classification.classifier
+  (:require [cnn-text-classification.data-helper :as data-helper]
+[org.apache.clojure-mxnet.eval-metric :as eval-metric]
+[org.apache.clojure-mxnet.io :as mx-io]
+[org.apache.clojure-mxnet.module :as m]
+[org.apache.clojure-mxnet.ndarray :as ndarray]
+[org.apache.clojure-mxnet.optimizer :as optimizer]
+[org.apache.clojure-mxnet.symbol :as sym]
+[org.apache.clojure-mxnet.context :as context])
+  (:gen-class))
+
+(def mr-dataset-path "data/mr-data") ;; the MR polarity dataset path
+(def glove-file-path "data/glove/glove.6B.50d.txt")
+
+(defn shuffle-data [test-num {:keys [data label sentence-count sentence-size embedding-size]}]
+  (println "Shuffling the data and splitting into training and test sets")
+  (println {:sentence-count sentence-count
+            :sentence-size sentence-size
+            :embedding-size embedding-size})
+  (let [shuffled (shuffle (map (fn [d l] [d l]) data label))
+        train-num (- (count shuffled) test-num)
+        training (into [] (take train-num shuffled))
+        test (into [] (drop train-num shuffled))]
+    {:training {:data  (ndarray/array (into [] (flatten (mapv (fn [v] (first v)) training)))
+                                      [train-num 1 sentence-size embedding-size]) ;; has to be channel x y
+                :label (ndarray/array (into [] (flatten (mapv (fn [v] (last v)) training)))
+                                      [train-num])}
+     :test {:data  (ndarray/array (into [] (flatten (mapv (fn [v] (first v)) test)))
+                                  [test-num 1 sentence-size embedding-size]) ;; has to be channel x y
+            :label (ndarray/array (into [] (flatten (mapv (fn [v] (last v)) test)))
+                                  [test-num])}}))
+
+;;; convnet with multiple filter sizes
+;; from Convolutional Neural Networks for Sentence Classification by Yoon Kim
+(defn get-multi-filter-convnet [num-embed sentence-size batch-size]
+  (let [filter-list [3 4 5]
+        num-filter 100
+        num-label 2
+        dropout 0.5
+        input-x (sym/variable "data")
+        polled-outputs (mapv (fn [filter-size]
+                               (as-> (sym/convolution {:data input-x
+                                                       :kernel [filter-size num-embed]
+                                                       :num-filter num-filter}) data
+                                 (sym/activation {:data data :act-type "relu"})
+                                 (sym/pooling {:data data
+                                               :pool-type "max"
 Review comment:
   Not sure exactly what you are proposing.




[incubator-mxnet-site] branch asf-site updated: Bump the publish timestamp.

2018-06-25 Thread zhasheng
This is an automated email from the ASF dual-hosted git repository.

zhasheng pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git


The following commit(s) were added to refs/heads/asf-site by this push:
 new 987250c  Bump the publish timestamp.
987250c is described below

commit 987250c67fabe01012540acd9181ec4dd0992730
Author: mxnet-ci 
AuthorDate: Tue Jun 26 01:37:16 2018 +

Bump the publish timestamp.
---
 date.txt | 1 +
 1 file changed, 1 insertion(+)

diff --git a/date.txt b/date.txt
new file mode 100644
index 000..6164f23
--- /dev/null
+++ b/date.txt
@@ -0,0 +1 @@
+Tue Jun 26 01:37:16 UTC 2018


