[GitHub] Ujjalbuet closed issue #9768: Regarding data distribution for multi-GPU implementation

2018-02-12 Thread GitBox
Ujjalbuet closed issue #9768: Regarding data distribution for multi-GPU 
implementation 
URL: https://github.com/apache/incubator-mxnet/issues/9768
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] ZiyueHuang opened a new pull request #9770: eye operator, for default storage type

2018-02-12 Thread GitBox
ZiyueHuang opened a new pull request #9770: eye operator, for default storage 
type
URL: https://github.com/apache/incubator-mxnet/pull/9770
 
 
   ## Description ##
   eye operator, for default storage type
   
   Required in 
https://discuss.mxnet.io/t/is-there-an-eye-function-in-the-ndarray-api/526/5
   
   cc @eric-haibin-lin 
   
   ## Checklist ##
   ### Essentials ###
   - [ ] Passed code style checking (`make lint`)
   - [ ] Changes are complete (i.e. I finished coding on this PR)
   - [ ] All changes have test coverage:
   - Unit tests are added for small changes to verify correctness (e.g. adding 
a new operator)
   - Nightly tests are added for complicated/long-running ones (e.g. changing 
distributed kvstore)
   - Build tests will be added for build configuration changes (e.g. adding a 
new build option with NCCL)
   - [ ] Code is well-documented: 
   - For user-facing API changes, API doc string has been updated. 
   - For new C++ functions in header files, their functionalities and arguments 
are documented. 
   - For new examples, README.md is added to explain what the example does, 
the source of the dataset, expected performance on the test set, and a reference to 
the original paper if applicable
   - [ ] To the best of my knowledge, examples are either not affected by this 
change, or have been fixed to be compatible with this change
   
   ### Changes ###
   - [ ] Feature1, tests, (and when applicable, API doc)
   - [ ] Feature2, tests, (and when applicable, API doc)
   
   ## Comments ##
   - If this change is a backward incompatible change, why must this change be 
made.
   - Interesting edge cases to note here
   




[incubator-mxnet] branch master updated: remove the extra @register (#9769)

2018-02-12 Thread haibin
This is an automated email from the ASF dual-hosted git repository.

haibin pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/master by this push:
 new 4a619ba  remove the extra @register (#9769)
4a619ba is described below

commit 4a619ba0e503e3eb6ff0b44c12a3c23bc5cabcc9
Author: lianwj 
AuthorDate: Mon Feb 12 19:25:08 2018 +0800

remove the extra @register (#9769)

Two stacked "@register" decorators lead to the warning "UserWarning: WARNING: New 
optimizer mxnet.optimizer.NAG is overriding existing optimizer mxnet.optimizer.NAG" 
(from Optimizer.opt_registry[name]) while importing mxnet.
---
 python/mxnet/optimizer.py | 1 -
 1 file changed, 1 deletion(-)

diff --git a/python/mxnet/optimizer.py b/python/mxnet/optimizer.py
index 0856a7f..0652772 100644
--- a/python/mxnet/optimizer.py
+++ b/python/mxnet/optimizer.py
@@ -893,7 +893,6 @@ class DCASGD(Optimizer):
 weight[:] += mom
 
 @register
-@register
 class NAG(Optimizer):
 """Nesterov accelerated SGD.
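The failure mode the commit fixes can be sketched with a minimal registry decorator (the names below are illustrative, not MXNet's actual implementation): stacking `@register` twice simply calls the function twice, so the second call sees the name already registered and warns.

```python
import warnings

# Hypothetical minimal optimizer registry mirroring the warning described
# in the commit message (not MXNet's actual code).
opt_registry = {}

def register(klass):
    """Register an optimizer class; warn if the name is already taken."""
    name = klass.__name__.lower()
    if name in opt_registry:
        warnings.warn("WARNING: New optimizer %s is overriding existing "
                      "optimizer %s" % (klass.__name__,
                                        opt_registry[name].__name__))
    opt_registry[name] = klass
    return klass

@register
@register   # the duplicated decorator: registers NAG a second time
class NAG:
    pass
```

Removing one of the two decorators removes the second registration and with it the warning.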
 

-- 
To stop receiving notification emails like this one, please contact
hai...@apache.org.


[GitHub] eric-haibin-lin closed pull request #9769: remove the extra @register

2018-02-12 Thread GitBox
eric-haibin-lin closed pull request #9769: remove the extra @register
URL: https://github.com/apache/incubator-mxnet/pull/9769
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:


diff --git a/python/mxnet/optimizer.py b/python/mxnet/optimizer.py
index 0856a7f894..06527723a2 100644
--- a/python/mxnet/optimizer.py
+++ b/python/mxnet/optimizer.py
@@ -892,7 +892,6 @@ def update(self, index, weight, grad, state):
 previous_weight[:] = weight
 weight[:] += mom
 
-@register
 @register
 class NAG(Optimizer):
 """Nesterov accelerated SGD.


 




[GitHub] ZiyueHuang commented on issue #9769: remove the extra @register

2018-02-12 Thread GitBox
ZiyueHuang commented on issue #9769: remove the extra @register
URL: https://github.com/apache/incubator-mxnet/pull/9769#issuecomment-364881699
 
 
   Thanks for the fix!




[GitHub] szha commented on issue #8312: Gradient function not returning enough gradients

2018-02-12 Thread GitBox
szha commented on issue #8312: Gradient function not returning enough gradients
URL: 
https://github.com/apache/incubator-mxnet/issues/8312#issuecomment-364908474
 
 
   @apache/mxnet-committers: This issue has been inactive for the past 90 days. 
It has no label and needs triage.
   
   For general "how-to" questions, our [user forum](https://discuss.mxnet.io/) 
(and [Chinese version](https://discuss.gluon.ai/)) is a good place to get help.




[GitHub] aseyboldt closed issue #8618: Segfault for custom op without inputs

2018-02-12 Thread GitBox
aseyboldt closed issue #8618: Segfault for custom op without inputs
URL: https://github.com/apache/incubator-mxnet/issues/8618
 
 
   




[GitHub] aseyboldt commented on issue #8618: Segfault for custom op without inputs

2018-02-12 Thread GitBox
aseyboldt commented on issue #8618: Segfault for custom op without inputs
URL: 
https://github.com/apache/incubator-mxnet/issues/8618#issuecomment-364898725
 
 
   Fixed in #7967.




[GitHub] parallelgithub opened a new pull request #9771: Modify NDArrayIter constructor to receive tuple (i.e. dict in Python)?

2018-02-12 Thread GitBox
parallelgithub opened a new pull request #9771: Modify NDArrayIter constructor 
to receive tuple (i.e. dict in Python)?
URL: https://github.com/apache/incubator-mxnet/pull/9771
 
 
   ## Description ##
   For multiple inputs or multiple labels in NDArrayIter, the data is given the 
IndexedSeq[NDArray] type, which is not as flexible for designing neural networks 
as the Python API, because the naming rule here is dataname_0, dataname_1, etc. 
In a more complex network you may want to give meaningful names like "user" and 
"item", as in the Matrix Factorization example.
   
   We modify the constructor to receive the IndexedSeq[(String, NDArray)] type 
to allow assigning custom names. We can initialize the NDArrayIter like this:
   ```scala
   val trainData1 = IndexedSeq(("user",
 NDArray.array(Array(1,2,3,4,5,6,3,2,7,1,6,9), shape = Shape(6,2))))
   val trainData2 = IndexedSeq(("item",
 NDArray.array(Array(1,2,3,4,5,6,3,2,6,1,6,9), shape = Shape(6,2))))
   val trainLabel = IndexedSeq(("label",
 NDArray.array(Array(5,11,17,7,9,24), shape = Shape(6))))
   val trainIter = new NDArrayIter(
 trainData1 ++ trainData2,
 trainLabel,
 1,
 false,
 "pad")
   ```
   
   In order to be compatible with the old argument list, we overload the 
constructor.
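The naming rule being replaced can be sketched in pure Python (an illustration of the naming behaviour only, not the Scala change itself): bare arrays get positional names, while (name, array) pairs keep caller-supplied names.

```python
# Sketch of the naming rule: bare arrays get positional names
# ("data_0", "data_1", ...), while (name, array) pairs keep meaningful
# caller-supplied names such as "user" and "item". Illustrative only;
# the actual change is in the Scala NDArrayIter constructor.

def name_inputs(inputs, default_name="data"):
    """Return (name, array) pairs, auto-naming bare arrays by position."""
    named = []
    for i, item in enumerate(inputs):
        if isinstance(item, tuple):      # already a (name, array) pair
            named.append(item)
        else:                            # bare array: fall back to data_i
            named.append(("%s_%d" % (default_name, i), item))
    return named

# Old-style call sites get meaningless positional names...
old_names = [n for n, _ in name_inputs([[1, 2], [3, 4]])]
# ...while the new (String, NDArray) form keeps meaningful ones.
new_names = [n for n, _ in name_inputs([("user", [1, 2]), ("item", [3, 4])])]
```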
   




[GitHub] calumleslie commented on a change in pull request #9678: [First cut] Scala Inference APIs

2018-02-12 Thread GitBox
calumleslie commented on a change in pull request #9678: [First cut] Scala 
Inference APIs
URL: https://github.com/apache/incubator-mxnet/pull/9678#discussion_r167597820
 
 

 ##
 File path: 
scala-package/infer/src/main/scala/ml/dmlc/mxnet/infer/MXNetHandler.scala
 ##
 @@ -0,0 +1,82 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package ml.dmlc.mxnet.infer
+
+import java.util.concurrent._
+
+trait MXNetHandler {
+
+  def execute[T](f: => T): T
+
+  val executor: ExecutorService
+
+}
+
+object MXNetHandlerType extends Enumeration {
+
+  type MXNetHandlerType = Value
+  val SingleThreadHandler = Value("MXNetSingleThreadHandler")
+  val OneThreadPerModelHandler = Value("MXNetOneThreadPerModelHandler")
 
 Review comment:
   Given that one-thread-per-model is (to the best of my knowledge) currently 
an unsafe operating mode, is there any point in supporting it just yet?




[GitHub] calumleslie commented on a change in pull request #9678: [First cut] Scala Inference APIs

2018-02-12 Thread GitBox
calumleslie commented on a change in pull request #9678: [First cut] Scala 
Inference APIs
URL: https://github.com/apache/incubator-mxnet/pull/9678#discussion_r167601431
 
 

 ##
 File path: 
scala-package/infer/src/main/scala/ml/dmlc/mxnet/infer/MXNetHandler.scala
 ##
 @@ -0,0 +1,82 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package ml.dmlc.mxnet.infer
+
+import java.util.concurrent._
+
+trait MXNetHandler {
+
+  def execute[T](f: => T): T
+
+  val executor: ExecutorService
 
 Review comment:
   Arguably this should not be on the trait at all.




[GitHub] calumleslie commented on a change in pull request #9678: [First cut] Scala Inference APIs

2018-02-12 Thread GitBox
calumleslie commented on a change in pull request #9678: [First cut] Scala 
Inference APIs
URL: https://github.com/apache/incubator-mxnet/pull/9678#discussion_r167598960
 
 

 ##
 File path: 
scala-package/infer/src/main/scala/ml/dmlc/mxnet/infer/MXNetHandler.scala
 ##
 @@ -0,0 +1,82 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package ml.dmlc.mxnet.infer
+
+import java.util.concurrent._
+
+trait MXNetHandler {
+
+  def execute[T](f: => T): T
+
+  val executor: ExecutorService
+
+}
+
+object MXNetHandlerType extends Enumeration {
+
+  type MXNetHandlerType = Value
+  val SingleThreadHandler = Value("MXNetSingleThreadHandler")
+  val OneThreadPerModelHandler = Value("MXNetOneThreadPerModelHandler")
+}
+
+class MXNetOneThreadPerModelHandler extends MXNetHandler {
+
+  private val threadFactory = new ThreadFactory {
+
+    override def newThread(r: Runnable): Thread = new Thread(r) {
+      setName(classOf[MXNetOneThreadPerModelHandler].getCanonicalName)
+    }
+  }

+  override val executor: ExecutorService =
+    Executors.newFixedThreadPool(10, threadFactory)
+
+  override def execute[T](f: => T): T = {
 
 Review comment:
   Do you want to support the recursive case? If you submit a task which in 
turn wants to submit a task to the MXNet thread, it will deadlock in this 
arrangement. You can avoid this by checking whether you're already on the 
managed thread and executing the code inline if so.
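The reviewer's suggestion — run the closure inline when already on the managed thread — can be sketched in Python (an illustrative analogue; the PR itself is Scala, and all names below are hypothetical):

```python
import threading
from concurrent.futures import ThreadPoolExecutor

class SingleThreadHandler:
    """Runs all submitted work on one dedicated thread; re-entrant
    submissions execute inline instead of deadlocking."""

    def __init__(self):
        self._executor = ThreadPoolExecutor(max_workers=1)
        # Record the worker thread's identity once, by asking the worker.
        self._worker_id = self._executor.submit(threading.get_ident).result()

    def execute(self, fn):
        if threading.get_ident() == self._worker_id:
            return fn()                       # already on the managed thread
        return self._executor.submit(fn).result()

handler = SingleThreadHandler()
# A task that submits another task: without the inline check, the inner
# submission would queue behind the task that made it and block forever
# on the single worker.
nested = handler.execute(lambda: handler.execute(lambda: 41 + 1))
```

The thread-identity check is what breaks the cycle: the nested call detects it is already on the worker and runs the closure directly.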




[GitHub] calumleslie commented on a change in pull request #9678: [First cut] Scala Inference APIs

2018-02-12 Thread GitBox
calumleslie commented on a change in pull request #9678: [First cut] Scala 
Inference APIs
URL: https://github.com/apache/incubator-mxnet/pull/9678#discussion_r167600563
 
 

 ##
 File path: 
scala-package/infer/src/main/scala/ml/dmlc/mxnet/infer/PredictBase.scala
 ##
 @@ -0,0 +1,200 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package ml.dmlc.mxnet.infer
+
+import ml.dmlc.mxnet.io.NDArrayIter
+import ml.dmlc.mxnet.{DataDesc, NDArray, Shape}
+import ml.dmlc.mxnet.module.Module
+
+import scala.collection.mutable.ListBuffer
+import org.slf4j.LoggerFactory
+
+/**
+  * Base Trait for MXNet Predictor classes.
+  */
+trait PredictBase {
+
+  /**
+   * This method takes input as an IndexedSeq of one-dimensional arrays and creates
+   * the NDArrays needed for inference. The arrays are reshaped based on the input
+   * descriptors.
+   * @param input An IndexedSeq of Java one-dimensional arrays; an IndexedSeq is
+   *              needed when the model has more than one input/output.
+   * @return An IndexedSeq of output arrays.
+   */
+  def predict(input: IndexedSeq[Array[Float]]): IndexedSeq[Array[Float]]
+
+  /**
+   * Predict using NDArrays as input. This method is useful when the input is a
+   * batch of data, or when multiple operations on the input/output have to be
+   * performed.
+   * Note: the user is responsible for managing allocation/deallocation of the
+   * NDArrays.
+   * @param input An IndexedSeq of NDArrays.
+   * @return The predictions as NDArrays.
+   */
+  def predictWithNDArray(input: IndexedSeq[NDArray]): IndexedSeq[NDArray]
+
+}
+
+/**
+  * Implementation of predict routines.
+  *
+  * @param modelPathPrefix PathPrefix from where to load the model.
+  *Example: file://model-dir/resnet-152(containing 
resnet-152-symbol.json,
+  *resnet-152-.params and optionally synset.txt).
+  *Supports model loading from various sources like 
local disk,
+  *hdfs, https and s3. file://, hdfs://, https://, 
s3://
+  * @param inputDescriptors Descriptors defining the input node names, shape,
+  * layout and Type parameters
+  * @param outputDescriptors Descriptors defining the output node names, shape,
+  *  layout and Type parameters
+  */
+class Predictor(modelPathPrefix: String,
+ protected val inputDescriptors: IndexedSeq[DataDesc],
+ protected var outputDescriptors:
+ Option[IndexedSeq[DataDesc]] = None) extends PredictBase {
+
+  private val logger = LoggerFactory.getLogger(classOf[Predictor])
+
+  protected var batchIndex = inputDescriptors(0).layout.indexOf('N')
+  protected var batchSize =
+    if (batchIndex != -1) inputDescriptors(0).shape(batchIndex) else 1

+  protected var iDescriptors = inputDescriptors

+  inputDescriptors.foreach((f: DataDesc) => require(f.layout.indexOf('N') == batchIndex,
+    "batch size should be in the same index for all inputs"))

+  if (batchIndex != -1) {
+    inputDescriptors.foreach((f: DataDesc) => require(f.shape(batchIndex) == batchSize,
+      "batch size should be same for all inputs"))
+  } else {
+    // TODO: this is assuming that the input needs a batch
+    iDescriptors = inputDescriptors.map((f: DataDesc) => new DataDesc(f.name,
+      Shape(1 +: f.shape.toVector), f.dtype, 'N' +: f.layout))
+    batchIndex = 1
+  }
+
+  protected val mxNetHandler = MXNetHandler()
+
+  protected val mod = loadModule()
+
+  /**
+   * This method takes input as an IndexedSeq of one-dimensional arrays and creates
+   * the NDArrays needed for inference. The arrays are reshaped based on the input
+   * descriptors.
+   *
+   * @param input An IndexedSeq of Java one-dimensional arrays; an IndexedSeq is
+   *              needed when the model has more than one input/output.
+   * @return An IndexedSeq of output arrays.
+   */
+  override def predict(input: IndexedSeq[Array[Float]]): 
IndexedSeq[Array[Float]] = {
+
+require(input.length == inputDescriptors.length, "number of inputs 
provided: %d" +
+  " do not match number of inputs in inputDescriptors: 
%d".format(input.length,
+ 

[GitHub] calumleslie commented on a change in pull request #9678: [First cut] Scala Inference APIs

2018-02-12 Thread GitBox
calumleslie commented on a change in pull request #9678: [First cut] Scala 
Inference APIs
URL: https://github.com/apache/incubator-mxnet/pull/9678#discussion_r167599886
 
 

 ##
 File path: 
scala-package/infer/src/main/scala/ml/dmlc/mxnet/infer/PredictBase.scala
 ##
 @@ -0,0 +1,200 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package ml.dmlc.mxnet.infer
+
+import ml.dmlc.mxnet.io.NDArrayIter
+import ml.dmlc.mxnet.{DataDesc, NDArray, Shape}
+import ml.dmlc.mxnet.module.Module
+
+import scala.collection.mutable.ListBuffer
+import org.slf4j.LoggerFactory
+
+/**
+  * Base Trait for MXNet Predictor classes.
+  */
+trait PredictBase {
+
+  /**
+   * This method takes input as an IndexedSeq of one-dimensional arrays and creates
+   * the NDArrays needed for inference. The arrays are reshaped based on the input
+   * descriptors.
+   * @param input An IndexedSeq of Java one-dimensional arrays; an IndexedSeq is
+   *              needed when the model has more than one input/output.
+   * @return An IndexedSeq of output arrays.
+   */
+  def predict(input: IndexedSeq[Array[Float]]): IndexedSeq[Array[Float]]
+
+  /**
+   * Predict using NDArrays as input. This method is useful when the input is a
+   * batch of data, or when multiple operations on the input/output have to be
+   * performed.
+   * Note: the user is responsible for managing allocation/deallocation of the
+   * NDArrays.
+   * @param input An IndexedSeq of NDArrays.
+   * @return The predictions as NDArrays.
+   */
+  def predictWithNDArray(input: IndexedSeq[NDArray]): IndexedSeq[NDArray]
+
+}
+
+/**
+  * Implementation of predict routines.
+  *
+  * @param modelPathPrefix PathPrefix from where to load the model.
+  *Example: file://model-dir/resnet-152(containing 
resnet-152-symbol.json,
+  *resnet-152-.params and optionally synset.txt).
+  *Supports model loading from various sources like 
local disk,
+  *hdfs, https and s3. file://, hdfs://, https://, 
s3://
+  * @param inputDescriptors Descriptors defining the input node names, shape,
+  * layout and Type parameters
+  * @param outputDescriptors Descriptors defining the output node names, shape,
+  *  layout and Type parameters
+  */
+class Predictor(modelPathPrefix: String,
+ protected val inputDescriptors: IndexedSeq[DataDesc],
+ protected var outputDescriptors:
+ Option[IndexedSeq[DataDesc]] = None) extends PredictBase {
+
+  private val logger = LoggerFactory.getLogger(classOf[Predictor])
+
+  protected var batchIndex = inputDescriptors(0).layout.indexOf('N')
+  protected var batchSize =
+    if (batchIndex != -1) inputDescriptors(0).shape(batchIndex) else 1

+  protected var iDescriptors = inputDescriptors

+  inputDescriptors.foreach((f: DataDesc) => require(f.layout.indexOf('N') == batchIndex,
+    "batch size should be in the same index for all inputs"))

+  if (batchIndex != -1) {
+    inputDescriptors.foreach((f: DataDesc) => require(f.shape(batchIndex) == batchSize,
+      "batch size should be same for all inputs"))
+  } else {
+    // TODO: this is assuming that the input needs a batch
+    iDescriptors = inputDescriptors.map((f: DataDesc) => new DataDesc(f.name,
+      Shape(1 +: f.shape.toVector), f.dtype, 'N' +: f.layout))
+    batchIndex = 1
+  }
+
+  protected val mxNetHandler = MXNetHandler()
+
+  protected val mod = loadModule()
+
+  /**
+   * This method takes input as an IndexedSeq of one-dimensional arrays and creates
+   * the NDArrays needed for inference. The arrays are reshaped based on the input
+   * descriptors.
+   *
+   * @param input An IndexedSeq of Java one-dimensional arrays; an IndexedSeq is
+   *              needed when the model has more than one input/output.
+   * @return An IndexedSeq of output arrays.
+   */
+  override def predict(input: IndexedSeq[Array[Float]]): 
IndexedSeq[Array[Float]] = {
+
+require(input.length == inputDescriptors.length, "number of inputs 
provided: %d" +
+  " do not match number of inputs in inputDescriptors: 
%d".format(input.length,
+ 

[GitHub] calumleslie commented on a change in pull request #9678: [First cut] Scala Inference APIs

2018-02-12 Thread GitBox
calumleslie commented on a change in pull request #9678: [First cut] Scala 
Inference APIs
URL: https://github.com/apache/incubator-mxnet/pull/9678#discussion_r167598332
 
 

 ##
 File path: 
scala-package/infer/src/main/scala/ml/dmlc/mxnet/infer/MXNetHandler.scala
 ##
 @@ -0,0 +1,82 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package ml.dmlc.mxnet.infer
+
+import java.util.concurrent._
+
+trait MXNetHandler {
+
+  def execute[T](f: => T): T
+
+  val executor: ExecutorService
+
+}
+
+object MXNetHandlerType extends Enumeration {
+
+  type MXNetHandlerType = Value
+  val SingleThreadHandler = Value("MXNetSingleThreadHandler")
+  val OneThreadPerModelHandler = Value("MXNetOneThreadPerModelHandler")
+}
+
+class MXNetOneThreadPerModelHandler extends MXNetHandler {
+
+  private val threadFactory = new ThreadFactory {
+
+    override def newThread(r: Runnable): Thread = new Thread(r) {
+      setName(classOf[MXNetOneThreadPerModelHandler].getCanonicalName)
+    }
+  }

+  override val executor: ExecutorService =
+    Executors.newFixedThreadPool(10, threadFactory)

+  override def execute[T](f: => T): T = {
+    val task = new Callable[T] {
+      override def call(): T = {
+        // scalastyle:off println
+        println("threadId: %s".format(Thread.currentThread().getId()))
+        // scalastyle:on println
+        f
+      }
+    }
+    val result = executor.submit(task)
+    try {
+      result.get()
+    }
+    catch {
+      case e: ExecutionException => throw e.getCause()
 
 Review comment:
   @yzhliu What's the purpose of adding a `@throws` annotation? These are 
normally disregarded in Scala.
   
   It would make sense to document _why_ the `ExecutionException` is being 
unwrapped here (the answer is "so it looks like the code was called inline").
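Python's `concurrent.futures` already behaves the way the unwrapping aims for: `Future.result()` re-raises the task's own exception, so the caller sees the failure as if the code had been called inline, rather than a wrapper exception like Java's `ExecutionException`. A small demonstration:

```python
from concurrent.futures import ThreadPoolExecutor

def boom():
    raise ValueError("original failure")

# future.result() re-raises the task's own exception, so the caller sees
# ValueError directly -- the behaviour the Scala code recreates by throwing
# e.getCause() out of the ExecutionException.
with ThreadPoolExecutor(max_workers=1) as pool:
    fut = pool.submit(boom)
    try:
        fut.result()
        caught = None
    except ValueError as e:
        caught = e
```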




[GitHub] calumleslie commented on a change in pull request #9678: [First cut] Scala Inference APIs

2018-02-12 Thread GitBox
calumleslie commented on a change in pull request #9678: [First cut] Scala 
Inference APIs
URL: https://github.com/apache/incubator-mxnet/pull/9678#discussion_r167599469
 
 

 ##
 File path: 
scala-package/infer/src/main/scala/ml/dmlc/mxnet/infer/MXNetHandler.scala
 ##
 @@ -0,0 +1,82 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package ml.dmlc.mxnet.infer
+
+import java.util.concurrent._
+
+trait MXNetHandler {
+
+  def execute[T](f: => T): T
+
+  val executor: ExecutorService
+
+}
+
+object MXNetHandlerType extends Enumeration {
+
+  type MXNetHandlerType = Value
+  val SingleThreadHandler = Value("MXNetSingleThreadHandler")
+  val OneThreadPerModelHandler = Value("MXNetOneThreadPerModelHandler")
+}
+
+class MXNetOneThreadPerModelHandler extends MXNetHandler {
+
+  private val threadFactory = new ThreadFactory {
+
+    override def newThread(r: Runnable): Thread = new Thread(r) {
+      setName(classOf[MXNetOneThreadPerModelHandler].getCanonicalName)
+    }
+  }

+  override val executor: ExecutorService =
+    Executors.newFixedThreadPool(10, threadFactory)
 
 Review comment:
   A fixed threadpool of 10 does not enforce the invariant of having a single 
thread for all MXNet use, which I believe to be the point of this approach. The 
thread count here should be fixed to 1 (or use 
`Executors.newSingleThreadExecutor`).
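The invariant the reviewer asks for — every task observes the same single thread — is exactly what a one-worker pool provides. A Python analogue of `Executors.newSingleThreadExecutor`, for illustration:

```python
import threading
from concurrent.futures import ThreadPoolExecutor

# A pool with one worker serializes all tasks onto a single thread; a pool
# of 10 (as in the patch) would not guarantee this.
with ThreadPoolExecutor(max_workers=1) as pool:
    worker_ids = {pool.submit(threading.get_ident).result() for _ in range(20)}
# All 20 tasks ran on the same worker thread, so the set has one element.
```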




[GitHub] calumleslie commented on a change in pull request #9678: [First cut] Scala Inference APIs

2018-02-12 Thread GitBox
calumleslie commented on a change in pull request #9678: [First cut] Scala 
Inference APIs
URL: https://github.com/apache/incubator-mxnet/pull/9678#discussion_r167596995
 
 

 ##
 File path: 
scala-package/infer/src/main/scala/ml/dmlc/mxnet/infer/MXNetHandler.scala
 ##
 @@ -0,0 +1,82 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package ml.dmlc.mxnet.infer
+
+import java.util.concurrent._
+
+trait MXNetHandler {
+
+  def execute[T](f: => T): T
+
+  val executor: ExecutorService
+
+}
+
+object MXNetHandlerType extends Enumeration {
+
+  type MXNetHandlerType = Value
+  val SingleThreadHandler = Value("MXNetSingleThreadHandler")
+  val OneThreadPerModelHandler = Value("MXNetOneThreadPerModelHandler")
+}
+
+class MXNetOneThreadPerModelHandler extends MXNetHandler {
+
+  private val threadFactory = new ThreadFactory {
+
+override def newThread(r: Runnable): Thread = new Thread(r) {
+  setName(classOf[MXNetOneThreadPerModelHandler].getCanonicalName)
 
 Review comment:
   The name here is a little confusing given that it is also used for 
MXNetSingleThreadHandler.




[GitHub] larroy commented on a change in pull request #8915: NVLink communication pattern updated

2018-02-12 Thread GitBox
larroy commented on a change in pull request #8915: NVLink communication 
pattern updated 
URL: https://github.com/apache/incubator-mxnet/pull/8915#discussion_r167594127
 
 

 ##
 File path: src/kvstore/comm.h
 ##
 @@ -644,38 +803,44 @@ class CommDevice : public Comm {
 CopyFromTo(src, out, priority);
   } else {
 CHECK_EQ(out->storage_type(), kRowSparseStorage)
- << "BroadcastRowSparse expects row_sparse dst NDArray";
+<< "BroadcastRowSparse expects row_sparse dst NDArray";
 
 const bool is_diff_ctx = out->ctx() != src.ctx();
-NDArray out_gpu = is_diff_ctx? NDArray(kRowSparseStorage, out->shape(),
-src.ctx(), true, out->dtype(), out->aux_types()) : *out;
+NDArray out_gpu =
+is_diff_ctx ? NDArray(kRowSparseStorage, out->shape(), src.ctx(),
+  true, out->dtype(), out->aux_types())
+: *out;
 
 CHECK_EQ(row_id.ctx(), src.ctx())
-<< "row_id and src are expected to be on the same context";
-
-Engine::Get()->PushAsync([=](RunContext rctx, Engine::CallbackOnComplete on_complete) {
-NDArray temp = out_gpu;
-const TBlob& indices = row_id.data();
-switch (temp.ctx().dev_mask()) {
-  case cpu::kDevMask: {
-mxnet::common::SparseRetainOpForwardRspWrapper<cpu>(rctx.get_stream<cpu>(),
-src, indices, kWriteTo, &temp);
-break;
-  }
+<< "row_id and src are expected to be on the same context";
+
+Engine::Get()->PushAsync(
+[=](RunContext rctx, Engine::CallbackOnComplete on_complete) {
+  NDArray temp = out_gpu;
+  const TBlob& indices = row_id.data();
+  switch (temp.ctx().dev_mask()) {
+case cpu::kDevMask: {
+  mxnet::common::SparseRetainOpForwardRspWrapper<cpu>(
+  rctx.get_stream<cpu>(), src, indices, kWriteTo, &temp);
+  break;
+}
 #if MXNET_USE_CUDA
-  case gpu::kDevMask: {
-mxnet::common::SparseRetainOpForwardRspWrapper<gpu>(rctx.get_stream<gpu>(),
-src, indices, kWriteTo, &temp);
-// wait for GPU operations to complete
-rctx.get_stream<gpu>()->Wait();
-break;
-  }
+case gpu::kDevMask: {
 
 Review comment:
   Wrong indentation?




[GitHub] piiswrong commented on a change in pull request #9770: eye operator, for default storage type

2018-02-12 Thread GitBox
piiswrong commented on a change in pull request #9770: eye operator, for 
default storage type
URL: https://github.com/apache/incubator-mxnet/pull/9770#discussion_r167662095
 
 

 ##
 File path: tests/python/unittest/test_ndarray.py
 ##
 @@ -736,6 +736,14 @@ def test_output():
 assert_almost_equal(out.asnumpy(), ones.asnumpy() * 2)
 arange_out = mx.nd.arange(0, 20, dtype='int64')
 assert_almost_equal(arange_out.asnumpy(), np.arange(0, 20))
+N_array = np.random.randint(1, high=3, size=3)
+M_array = np.random.randint(1, high=3, size=3)
+k_array = np.random.randint(-5, high=5, size=3)
 
 Review comment:
   Also, I think there is the case where M is 0.




[GitHub] piiswrong commented on a change in pull request #9770: eye operator, for default storage type

2018-02-12 Thread GitBox
piiswrong commented on a change in pull request #9770: eye operator, for 
default storage type
URL: https://github.com/apache/incubator-mxnet/pull/9770#discussion_r167662010
 
 

 ##
 File path: tests/python/unittest/test_ndarray.py
 ##
 @@ -736,6 +736,14 @@ def test_output():
 assert_almost_equal(out.asnumpy(), ones.asnumpy() * 2)
 arange_out = mx.nd.arange(0, 20, dtype='int64')
 assert_almost_equal(arange_out.asnumpy(), np.arange(0, 20))
+N_array = np.random.randint(1, high=3, size=3)
+M_array = np.random.randint(1, high=3, size=3)
+k_array = np.random.randint(-5, high=5, size=3)
 
 Review comment:
   Could you run some more trials, at least locally and with larger ranges, 
just to verify there are no edge cases?
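
   A plain-numpy sketch of the broader randomized check being requested, using
`np.eye` as the oracle and mirroring the kernel's
`(i % num_cols) == (i / num_cols) + k` predicate (`mx.nd.eye` itself is not
exercised here):

```python
import numpy as np

def eye_ref(N, M, k):
    # Reference fill mirroring the kernel predicate, row-major over N*M cells.
    M = M if M > 0 else N          # M == 0 defaults to N, per the operator doc
    out = np.zeros((N, M))
    for i in range(N * M):
        if i % M == i // M + k:
            out[i // M, i % M] = 1.0
    return out

rng = np.random.RandomState(0)
for _ in range(200):               # more trials, larger ranges
    N = rng.randint(1, 10)
    M = rng.randint(0, 10)         # includes the M == 0 edge case
    k = rng.randint(-10, 10)
    assert np.array_equal(eye_ref(N, M, k), np.eye(N, M if M > 0 else N, k))
```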




[GitHub] ashishlal opened a new issue #9772: ndarray indexing issues

2018-02-12 Thread GitBox
ashishlal opened a new issue #9772: ndarray indexing issues
URL: https://github.com/apache/incubator-mxnet/issues/9772
 
 
   I am new to mxnet. I just installed mxnet 1.0.0 and Python 3.5 on an Ubuntu 
14.04 machine with CUDA 8.0 and cuDNN 7.0.5.
   
   My code is given below. I am trying to store image data in an ndarray. (see 
https://github.com/ypwhs/DogBreed_gluon/blob/master/get_features_v3.ipynb for 
the original code) -
   
   X_224 = nd.zeros((n, 3, 224, 224))
   X_299 = nd.zeros((n, 3, 299, 299))
   
   mean = np.array([0.485, 0.456, 0.406])
   std = np.array([0.229, 0.224, 0.225])
   
   for i, (fname, breed) in tqdm(df.iterrows(), total=n):
   img = cv2.imread('data/train/%s.jpg' % fname)
   img_224 = ((cv2.resize(img, (224, 224))[:, :, ::-1] / 255.0 - mean) / 
std).transpose((2, 0, 1))
   img_299 = ((cv2.resize(img, (299, 299))[:, :, ::-1] / 255.0 - mean) / 
std).transpose((2, 0, 1))
   
   X_224[i] = nd.array(img_224) <-- I get error in this line
   X_299[i] = nd.array(img_299)
   ValueError: Indexing NDArray with index=0 and type=<class 'numpy.int64'> is 
not supported.
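   
   For reference, the root cause is visible in plain Python: `df.iterrows()`
yields `numpy.int64` row labels, and this MXNet build's NDArray indexing only
accepted plain Python ints. A likely workaround (an assumption, not verified
against that exact build) is casting before indexing, e.g.
`X_224[int(i)] = nd.array(img_224)`:

```python
import numpy as np

# numpy.int64 is not a Python int subclass on Python 3, which is why the
# NDArray __setitem__ type check rejects it; int() yields an accepted index.
i = np.int64(0)
print(isinstance(i, int))       # False
print(isinstance(int(i), int))  # True
```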
   
   




[GitHub] cjolivier01 commented on issue #9765: Installation with GPU on Fedora 27?

2018-02-12 Thread GitBox
cjolivier01 commented on issue #9765: Installation with GPU on Fedora 27?
URL: 
https://github.com/apache/incubator-mxnet/issues/9765#issuecomment-365043585
 
 
   @marcoabreu 




[GitHub] sxjscience commented on issue #9675: Add contrib.compute_accidental_hits operator for candidate sampling

2018-02-12 Thread GitBox
sxjscience commented on issue #9675: Add contrib.compute_accidental_hits 
operator for candidate sampling
URL: https://github.com/apache/incubator-mxnet/pull/9675#issuecomment-365041214
 
 
   Looks good once the problem about hashing is fixed.




[GitHub] marcoabreu commented on issue #9771: Modify NDArrayIter constructor to receive tuple (i.e. dict in Python)?

2018-02-12 Thread GitBox
marcoabreu commented on issue #9771: Modify NDArrayIter constructor to receive 
tuple (i.e. dict in Python)?
URL: https://github.com/apache/incubator-mxnet/pull/9771#issuecomment-365043493
 
 
   Please consider adding an additional constructor instead of modifying the 
existing constructor, to preserve backwards compatibility.




[GitHub] cjolivier01 commented on issue #9765: Installation with GPU on Fedora 27?

2018-02-12 Thread GitBox
cjolivier01 commented on issue #9765: Installation with GPU on Fedora 27?
URL: 
https://github.com/apache/incubator-mxnet/issues/9765#issuecomment-365044562
 
 
   Why would it think CUDA_CALL is a template?




[GitHub] charlieyou commented on issue #7503: log epoch number for tensorboard

2018-02-12 Thread GitBox
charlieyou commented on issue #7503: log epoch number for tensorboard
URL: https://github.com/apache/incubator-mxnet/pull/7503#issuecomment-365027273
 
 
   @szha Completely forgot about this, sorry! Would like to re-open. Very small 
change, anything else needed from me to merge this?




[GitHub] marcoabreu commented on issue #9765: Installation with GPU on Fedora 27?

2018-02-12 Thread GitBox
marcoabreu commented on issue #9765: Installation with GPU on Fedora 27?
URL: 
https://github.com/apache/incubator-mxnet/issues/9765#issuecomment-365042998
 
 
   @cjolivier01 




[GitHub] anirudh2290 commented on issue #9681: Better Exception Handling for Operators

2018-02-12 Thread GitBox
anirudh2290 commented on issue #9681: Better Exception Handling for Operators
URL: https://github.com/apache/incubator-mxnet/pull/9681#issuecomment-365019710
 
 
   @piiswrong: Do you have additional suggestions ?




[GitHub] cjolivier01 commented on issue #8972: Profiling enhancements, python API, vtune and chrome tracing objects, etc.

2018-02-12 Thread GitBox
cjolivier01 commented on issue #8972: Profiling enhancements, python API, vtune 
and chrome tracing objects, etc.
URL: https://github.com/apache/incubator-mxnet/pull/8972#issuecomment-364973585
 
 
   Got sucked into some other stuff, will get back to this in a while.




[GitHub] sxjscience commented on issue #9675: Add contrib.compute_accidental_hits operator for candidate sampling

2018-02-12 Thread GitBox
sxjscience commented on issue #9675: Add contrib.compute_accidental_hits 
operator for candidate sampling
URL: https://github.com/apache/incubator-mxnet/pull/9675#issuecomment-365001760
 
 
   Is the functionality something like `broadcast_equal`?




[GitHub] cjolivier01 commented on issue #9672: CMake CUDA fixes + NCCL

2018-02-12 Thread GitBox
cjolivier01 commented on issue #9672: CMake CUDA fixes + NCCL
URL: https://github.com/apache/incubator-mxnet/pull/9672#issuecomment-364999845
 
 
   Why are the Windows Python runs hanging?




[GitHub] sxjscience commented on issue #9675: Add contrib.compute_accidental_hits operator for candidate sampling

2018-02-12 Thread GitBox
sxjscience commented on issue #9675: Add contrib.compute_accidental_hits 
operator for candidate sampling
URL: https://github.com/apache/incubator-mxnet/pull/9675#issuecomment-365002076
 
 
   I see, it returns a CSR matrix.
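
   For readers following along, the collision semantics under discussion can
be sketched with a dense `broadcast_equal`-style check in numpy (the shapes
and the CSR packaging here are assumptions, not the operator's actual
contract):

```python
import numpy as np

# hits[i, j] is True when sampled candidate j collides with one of the
# true labels of batch row i -- an "accidental hit".
true_classes = np.array([[1, 7], [3, 5]])   # (batch, num_true)
sampled = np.array([7, 2, 3])               # (num_sampled,)
hits = (true_classes[:, :, None] == sampled[None, None, :]).any(axis=1)
print(hits.tolist())  # [[True, False, False], [False, False, True]]
```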




[GitHub] cjolivier01 closed pull request #9773: Incremental move from cmake_fix

2018-02-12 Thread GitBox
cjolivier01 closed pull request #9773: Incremental move from cmake_fix
URL: https://github.com/apache/incubator-mxnet/pull/9773
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git a/CMakeLists.txt b/CMakeLists.txt
index cddbf725c2..2a51a4bf9b 100644
--- a/CMakeLists.txt
+++ b/CMakeLists.txt
@@ -1,35 +1,31 @@
 cmake_minimum_required(VERSION 3.0.2)
-message(STATUS "CMake version '${CMAKE_VERSION}' using generator 
'${CMAKE_GENERATOR}'")
-if(((${CMAKE_GENERATOR} MATCHES "Visual Studio.*") OR (${CMAKE_GENERATOR} 
MATCHES "Xcode.*"))
-AND ((${CMAKE_VERSION} VERSION_GREATER "3.9.0") OR (${CMAKE_VERSION} 
VERSION_EQUAL "3.9.0")))
-  # Toolsets are only supported for Visual Studio and Xcode
-  set(FIRST_CUDA TRUE)
-else()
-  set(FIRST_CUDA FALSE)
+
+if(EXISTS ${CMAKE_CURRENT_SOURCE_DIR}/build/private/local_config.cmake)
+  include(${CMAKE_CURRENT_SOURCE_DIR}/build/private/local_config.cmake)
 endif()
+
 include(cmake/Utils.cmake)
 
 #Some things have order. This must be put in front alone
 mxnet_option(USE_CUDA "Build with CUDA support"   ON)
-mxnet_option(USE_OLDCMAKECUDA   "Build with old cmake cuda" OFF)
-if(USE_CUDA)
-  add_definitions(-DMSHADOW_USE_CUDA=1)
-  IF(FIRST_CUDA AND (NOT USE_OLDCMAKECUDA))
-set(__cuda_toolset "7.5" "8.0" "9.0")
-set(CUDA_TOOLSET "8.0" CACHE STRING "Select CUDA Version.")
-set_property( CACHE CUDA_TOOLSET PROPERTY STRINGS "" ${__cuda_toolset} )
-set(CMAKE_GENERATOR_TOOLSET "cuda=${CUDA_TOOLSET},host=x64")
+mxnet_option(USE_OLDCMAKECUDA "Build with old cmake cuda" OFF)
+
+if(USE_CUDA AND NOT USE_OLDCMAKECUDA)
+  message(STATUS "CMake version '${CMAKE_VERSION}' using generator 
'${CMAKE_GENERATOR}'")
+  if(((${CMAKE_GENERATOR} MATCHES "Visual Studio.*") OR (${CMAKE_GENERATOR} 
MATCHES "Xcode.*"))
+AND ((${CMAKE_VERSION} VERSION_GREATER "3.9.0") OR (${CMAKE_VERSION} 
VERSION_EQUAL "3.9.0")))
+set(FIRST_CUDA TRUE)
 project(mxnet C CXX CUDA)
   else()
-project(mxnet C CXX)
 set(FIRST_CUDA FALSE)
+set(USE_OLDCMAKECUDA TRUE)
+project(mxnet C CXX)
   endif()
 else()
   project(mxnet C CXX)
-  add_definitions(-DMSHADOW_USE_CUDA=0)
 endif()
 
-
+mxnet_option(USE_NCCL "Use NVidia NCCL with CUDA" OFF)
 mxnet_option(USE_OPENCV   "Build with OpenCV support" ON)
 mxnet_option(USE_OPENMP   "Build with Openmp support" ON)
 mxnet_option(USE_CUDNN"Build with cudnn support"  ON) # one could 
set CUDNN_ROOT for search path
@@ -37,7 +33,7 @@ mxnet_option(USE_LAPACK   "Build with lapack support" 
ON IF NOT MSVC)
 mxnet_option(USE_MKL_IF_AVAILABLE "Use MKL if found" ON)
 mxnet_option(USE_MKLML_MKL"Use MKLML variant of MKL (if MKL found)" ON 
IF USE_MKL_IF_AVAILABLE AND UNIX AND (NOT APPLE))
 mxnet_option(USE_MKL_EXPERIMENTAL "Use experimental MKL (if MKL enabled and 
found)" OFF)
-mxnet_option(USE_OPERATOR_TUNING  "Enable auto-tuning of operators" ON AND NOT 
MSVC)
+mxnet_option(USE_OPERATOR_TUNING  "Enable auto-tuning of operators" ON IF NOT 
MSVC)
 mxnet_option(USE_GPERFTOOLS   "Build with GPerfTools support (if found)" 
ON)
 mxnet_option(USE_JEMALLOC "Build with Jemalloc support"   ON)
 mxnet_option(USE_PROFILER "Build with Profiler support"   ON)
@@ -47,30 +43,25 @@ mxnet_option(USE_PLUGIN_CAFFE "Use Caffe Plugin" OFF)
 mxnet_option(USE_CPP_PACKAGE  "Build C++ Package" OFF)
 mxnet_option(USE_MXNET_LIB_NAMING "Use MXNet library naming conventions." ON)
 mxnet_option(USE_GPROF"Compile with gprof (profiling) flag" OFF)
+mxnet_option(USE_CXX14_IF_AVAILABLE "Build with C++14 if the compiler supports 
it" OFF)
 mxnet_option(USE_VTUNE"Enable use of Intel Amplifier XE (VTune)" 
OFF) # one could set VTUNE_ROOT for search path
 mxnet_option(ENABLE_CUDA_RTC  "Build with CUDA runtime compilation 
support" ON)
 mxnet_option(INSTALL_EXAMPLES "Install the example source files." OFF)
 mxnet_option(USE_SIGNAL_HANDLER   "Print stack traces on segfaults." OFF)
 
-
-
-if(EXISTS ${CMAKE_CURRENT_SOURCE_DIR}/build/private/local_config.cmake)
-  include(${CMAKE_CURRENT_SOURCE_DIR}/build/private/local_config.cmake)
-endif()
-
 if(MSVC)
   set(SYSTEM_ARCHITECTURE x86_64)
 else()
   EXECUTE_PROCESS( COMMAND uname -m COMMAND tr -d '\n' OUTPUT_VARIABLE 
SYSTEM_ARCHITECTURE)
 endif()
 
-set(CMAKE_MODULE_PATH 
"${PROJECT_SOURCE_DIR}/cmake/Modules;${CMAKE_MODULE_PATH}")
+set(CMAKE_MODULE_PATH 
"${CMAKE_CURRENT_SOURCE_DIR}/cmake/Modules;${CMAKE_MODULE_PATH}")
 
 SET(EXTRA_OPERATORS "" CACHE PATH "EXTRA OPERATORS PATH")
 
 if("$ENV{VERBOSE}" STREQUAL "1")
   message(STATUS " Verbose Makefile ACTIVATED")
-  set(CMAKE_VERBOISE_MAKEFILE ON)
+  set(CMAKE_VERBOSE_MAKEFILE ON)
 endif()
 
 
@@ -87,6 +78,9 @@ if(MSVC)
   

[GitHub] cjolivier01 commented on issue #9773: Incremental move from cmake_fix

2018-02-12 Thread GitBox
cjolivier01 commented on issue #9773: Incremental move from cmake_fix
URL: https://github.com/apache/incubator-mxnet/pull/9773#issuecomment-365094754
 
 
   The actual changes, once the problem was found, are here: 
https://github.com/apache/incubator-mxnet/pull/9672




[GitHub] cjolivier01 commented on issue #9672: CMake CUDA fixes + NCCL

2018-02-12 Thread GitBox
cjolivier01 commented on issue #9672: CMake CUDA fixes + NCCL
URL: https://github.com/apache/incubator-mxnet/pull/9672#issuecomment-365094329
 
 
   Turns out this was related to project() not being at the top, and thus MSVC 
not being turned on.






[GitHub] cjolivier01 opened a new pull request #9773: Incremental move from cmake_fix

2018-02-12 Thread GitBox
cjolivier01 opened a new pull request #9773: Incremental move from cmake_fix
URL: https://github.com/apache/incubator-mxnet/pull/9773
 
 
   ## Description ##
   (Brief description on what this PR is about)
   
   ## Checklist ##
   ### Essentials ###
   - [ ] Passed code style checking (`make lint`)
   - [ ] Changes are complete (i.e. I finished coding on this PR)
   - [ ] All changes have test coverage:
   - Unit tests are added for small changes to verify correctness (e.g. adding 
a new operator)
   - Nightly tests are added for complicated/long-running ones (e.g. changing 
distributed kvstore)
   - Build tests will be added for build configuration changes (e.g. adding a 
new build option with NCCL)
   - [ ] Code is well-documented: 
   - For user-facing API changes, API doc string has been updated. 
   - For new C++ functions in header files, their functionalities and arguments 
are documented. 
   - For new examples, README.md is added to explain what the example does, 
the source of the dataset, expected performance on the test set, and a 
reference to the original paper if applicable
   - [ ] To the best of my knowledge, examples are either not affected by this 
change, or have been fixed to be compatible with this change
   
   ### Changes ###
   - [ ] Feature1, tests, (and when applicable, API doc)
   - [ ] Feature2, tests, (and when applicable, API doc)
   
   ## Comments ##
   - If this change is a backward incompatible change, why must this change be 
made.
   - Interesting edge cases to note here
   




[GitHub] marcoabreu commented on issue #9672: CMake CUDA fixes + NCCL

2018-02-12 Thread GitBox
marcoabreu commented on issue #9672: CMake CUDA fixes + NCCL
URL: https://github.com/apache/incubator-mxnet/pull/9672#issuecomment-365046302
 
 
   I'd guess that the process is crashing due to some compilation errors, thus 
making nosetests fail. Maybe Jenkins does not pick this crash up. 
Interestingly, it's happening on all 4 Windows jobs.




[GitHub] parallelgithub commented on issue #9771: Modify NDArrayIter constructor to receive tuple (i.e. dict in Python)?

2018-02-12 Thread GitBox
parallelgithub commented on issue #9771: Modify NDArrayIter constructor to 
receive tuple (i.e. dict in Python)?
URL: https://github.com/apache/incubator-mxnet/pull/9771#issuecomment-365139987
 
 
   Adding an additional constructor is our first consideration. But after 
observing the original interface: 
   ```scala
   class NDArrayIter (data: IndexedSeq[NDArray], label: IndexedSeq[NDArray] = 
IndexedSeq.empty,
 private val dataBatchSize: Int = 1, shuffle: Boolean = 
false,
 lastBatchHandle: String = "pad",
 dataName: String = "data", labelName: String = "label") 
extends DataIter {...
   ```
   we found there is no position to place the custom names. Since each 
overloaded constructor has to call the primary constructor, keeping the 
original interface may not be possible if we want to make using multiple 
inputs flexible.
   
   Another reason we modified the existing constructor is that we observed the 
fields defined in the class:
   ```scala
   val initData: IndexedSeq[(String, NDArray)] = IO.initData(_dataList, false, 
dataName)
   val initLabel: IndexedSeq[(String, NDArray)] = IO.initData(_labelList, true, 
labelName)
   ```
   We guess that the type `IndexedSeq[(String, NDArray)]` may have been the 
original motivation at the beginning.
   
   In Python there is a 
[way](https://github.com/apache/incubator-mxnet/blob/master/python/mxnet/io.py#L490)
 to handle multiple inputs, but Scala is statically typed. Maybe we should 
rethink the API interface design.




[GitHub] anirudh2290 commented on issue #9776: Use get_bz2_data from test_utils for sparse_op script

2018-02-12 Thread GitBox
anirudh2290 commented on issue #9776: Use get_bz2_data from test_utils for 
sparse_op script
URL: https://github.com/apache/incubator-mxnet/pull/9776#issuecomment-365139835
 
 
   This change should have gone with #7799




[GitHub] sethah opened a new pull request #9777: Mx 9588

2018-02-12 Thread GitBox
sethah opened a new pull request #9777: Mx 9588
URL: https://github.com/apache/incubator-mxnet/pull/9777
 
 
   ## Description ##
   This PR adds a mixin class that F1 and other metrics like precision and 
recall can leverage in the future. It also provides a new option for the F1 
metric, `average`, which defines how the metric is aggregated across 
mini-batches. 
   
   ## Checklist ##
   ### Essentials ###
   - [X] Passed code style checking (`make lint`)
   - [X] Changes are complete (i.e. I finished coding on this PR)
   - [X] All changes have test coverage:
   - Unit tests are added for small changes to verify correctness (e.g. adding 
a new operator)
   - Nightly tests are added for complicated/long-running ones (e.g. changing 
distributed kvstore)
   - Build tests will be added for build configuration changes (e.g. adding a 
new build option with NCCL)
   - [X] Code is well-documented: 
   - For user-facing API changes, API doc string has been updated. 
   - For new C++ functions in header files, their functionalities and arguments 
are documented. 
   - For new examples, README.md is added to explain what the example does, 
the source of the dataset, expected performance on the test set, and a 
reference to the original paper if applicable
   - [X] To the best of my knowledge, examples are either not affected by this 
change, or have been fixed to be compatible with this change
   
   ## Approach
   
   The "micro" vs "macro" update strategy is not specific to F1 score. The 
macro update just takes an average of averages, which can be done for any 
metric. It may be best to design an abstraction where any metric can have the 
micro/macro update option, but I couldn't see a good way to do that here that 
would:
   
   * be easy to use for end users AND
   * maintain backward compatibility AND
   * maintain current semantics
   
   For now, the behavior for each type of update is hard coded into the 
`update` method of the `F1` class. We can discuss the approach.
   
   Please let me know if I have missed or overlooked anything :)




[GitHub] sethah commented on issue #9777: Mx 9588

2018-02-12 Thread GitBox
sethah commented on issue #9777: Mx 9588
URL: https://github.com/apache/incubator-mxnet/pull/9777#issuecomment-365147592
 
 
   Regarding other approaches, something I looked at was the following:
   
   ```python
   class MacroMetric(EvalMetric):
   
   def __init__(self, base_metric):
   super(MacroMetric, self).__init__("macro_" + base_metric.name, 
output_names=base_metric.output_names,
 
label_names=base_metric.label_names)
   self.base_metric = base_metric
   
   def update(self, labels, preds):
   self.base_metric.update(labels, preds)
   self.sum_metric += self.base_metric.get()[1]
   self.num_inst += 1
   self.base_metric.reset()
   ```
   
   Any metric that has defined the "micro" behavior can then be used as "macro" 
just by calling `metric = mx.metric.MacroMetric(mx.metric.F1())`, but that 
seems pretty awkward for the users, and also changes the default behavior to 
micro, which doesn't work for backwards compatibility. The current solution 
works well enough, and we could probably revisit in a later PR if needed.
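
   The micro/macro distinction above reduces to when the counts are pooled; a
plain-Python sketch of the arithmetic (illustrative only, not the
`mx.metric.F1` implementation):

```python
def f1(tp, fp, fn):
    # Standard F1 from confusion counts, guarding the zero denominators.
    p = tp / (tp + fp) if tp + fp else 0.0
    r = tp / (tp + fn) if tp + fn else 0.0
    return 2 * p * r / (p + r) if p + r else 0.0

batches = [(8, 2, 1), (1, 4, 5)]   # (tp, fp, fn) per mini-batch
# "micro": pool the raw counts across batches, then score once.
micro = f1(*map(sum, zip(*batches)))
# "macro": score each batch, then average the scores (average of averages).
macro = sum(f1(*b) for b in batches) / len(batches)
print(round(micro, 3), round(macro, 3))  # 0.6 0.512
```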




[GitHub] cjolivier01 commented on a change in pull request #9761: Don't use FIRST_CUDA on generators which don't support toolsets

2018-02-12 Thread GitBox
cjolivier01 commented on a change in pull request #9761: Don't use FIRST_CUDA 
on generators which don't support toolsets
URL: https://github.com/apache/incubator-mxnet/pull/9761#discussion_r167719518
 
 

 ##
 File path: CMakeLists.txt
 ##
 @@ -1,6 +1,8 @@
 cmake_minimum_required(VERSION 3.0.2)
-
-if((${CMAKE_VERSION} VERSION_GREATER "3.9.0") OR (${CMAKE_VERSION} 
VERSION_EQUAL "3.9.0"))
+message(STATUS "CMake version '${CMAKE_VERSION}' using generator 
'${CMAKE_GENERATOR}'")
+if(((${CMAKE_GENERATOR} MATCHES "Visual Studio.*") OR (${CMAKE_GENERATOR} 
MATCHES "Xcode.*"))
 
 Review comment:
   You left out "Unix Makefiles" and broke the Linux builds, it seems.




[GitHub] cjolivier01 commented on a change in pull request #9770: eye operator, for default storage type

2018-02-12 Thread GitBox
cjolivier01 commented on a change in pull request #9770: eye operator, for 
default storage type
URL: https://github.com/apache/incubator-mxnet/pull/9770#discussion_r167722707
 
 

 ##
 File path: src/operator/tensor/init_op.h
 ##
 @@ -63,6 +63,86 @@ struct InitOpParam : public dmlc::Parameter<InitOpParam> {
   }
 };
 
+struct EyeParam : public dmlc::Parameter<EyeParam> {
+  nnvm::dim_t N;
+  nnvm::dim_t M;
+  nnvm::dim_t k;
+  std::string ctx;
+  int dtype;
+
+  DMLC_DECLARE_PARAMETER(EyeParam) {
+DMLC_DECLARE_FIELD(N)
+.describe("Number of rows in the output.");
+DMLC_DECLARE_FIELD(M)
+.set_default(0)
+.describe("Number of columns in the output. If 0, defaults to N");
+DMLC_DECLARE_FIELD(k)
+.set_default(0)
+.describe("Index of the diagonal. 0 (the default) refers to the main 
diagonal."
+  "A positive value refers to an upper diagonal."
+  "A negative value to a lower diagonal.");
+DMLC_DECLARE_FIELD(ctx)
+.set_default("")
+.describe("Context of output, in format [cpu|gpu|cpu_pinned](n)."
+  "Only used for imperative calls.");
+DMLC_DECLARE_FIELD(dtype).set_default(mshadow::kFloat32)
+.add_enum("float32", mshadow::kFloat32)
+.add_enum("float64", mshadow::kFloat64)
+.add_enum("float16", mshadow::kFloat16)
+.add_enum("uint8", mshadow::kUint8)
+.add_enum("int32", mshadow::kInt32)
+.add_enum("int64", mshadow::kInt64)
+.describe("Target data type.");
+  }
+};
+
+template<typename ParamType>
+inline bool InitEyeShape(const nnvm::NodeAttrs& attrs,
+ std::vector<TShape> *in_attrs,
+ std::vector<TShape> *out_attrs) {
+  const ParamType& param = nnvm::get<ParamType>(attrs.parsed);
+  CHECK_EQ(in_attrs->size(), 0U);
+  CHECK_EQ(out_attrs->size(), 1U);
+  SHAPE_ASSIGN_CHECK(*out_attrs, 0, mshadow::Shape2(param.N, param.M > 0 ? 
param.M : param.N));
+  return true;
+}
+
+template<int req>
+struct eye_dns_fill {
+  template<typename DType>
+  MSHADOW_XINLINE static void Map(int i, DType* out_data, const nnvm::dim_t 
num_cols,
+  const nnvm::dim_t k) {
+if ((i % num_cols) == ((i / num_cols) + k)) {
 
 Review comment:
   Looks like two divides per value-fill. Would it be faster (at least for 
CPU) to fill with 0's and then "walk" across an offset (using only adds) to 
fill in the ones?
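
To illustrate the suggestion, here is a minimal NumPy sketch (hypothetical 
helper names, not the actual MXNet kernel) contrasting the per-element 
divide/modulo check with a zero-fill followed by a stride walk along the 
diagonal:

```python
import numpy as np

def eye_fill_divides(N, M, k):
    # per-element check: one modulo and one divide per value
    out = np.zeros(N * M)
    for i in range(N * M):
        if i % M == i // M + k:
            out[i] = 1.0
    return out.reshape(N, M)

def eye_fill_walk(N, M, k):
    # fill with zeros, then step along the k-th diagonal using only adds
    out = np.zeros(N * M)
    start = k if k >= 0 else -k * M   # flat index of the first one
    stride = M + 1                    # next diagonal element is one row down, one col right
    if k >= 0:
        n_ones = max(0, min(N, M - k))
    else:
        n_ones = max(0, min(N + k, M))
    idx = start
    for _ in range(n_ones):
        out[idx] = 1.0
        idx += stride
    return out.reshape(N, M)
```

Both agree with `np.eye(N, M, k)`; the walk variant touches only the diagonal 
entries after the zero fill.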




[GitHub] szha commented on issue #8601: Why is the accuracy overestimated?

2018-02-12 Thread GitBox
szha commented on issue #8601: Why is the accuracy overestimated?
URL: 
https://github.com/apache/incubator-mxnet/issues/8601#issuecomment-365109767
 
 
   @apache/mxnet-committers: This issue has been inactive for the past 90 days. 
It has no label and needs triage.
   
   For general "how-to" questions, our [user forum](https://discuss.mxnet.io/) 
(and [Chinese version](https://discuss.gluon.ai/)) is a good place to get help.




[GitHub] spidyDev opened a new issue #9775: MXNet doesn't seem to have "Squeeze" symbol support. Is this expected to be added ?

2018-02-12 Thread GitBox
spidyDev opened a new issue #9775: MXNet doesn't seem to have "Squeeze" symbol 
support. Is this expected to be added ?
URL: https://github.com/apache/incubator-mxnet/issues/9775
 
 
   Note: Providing complete information in the most concise form is the best 
way to get help. This issue template serves as a checklist for the essential 
information needed for most technical issues and bug reports. For non-technical 
issues and feature requests, feel free to present the information in whatever 
form you believe is best.
   
   For Q & A and discussion, please start a discussion thread at 
https://discuss.mxnet.io 
   
   ## Description
   (Brief description of the problem in no more than 2 sentences.)
   
   ## Environment info (Required)
   
   ```
   What to do:
   1. Download the diagnosis script from 
https://raw.githubusercontent.com/apache/incubator-mxnet/master/tools/diagnose.py
   2. Run the script using `python diagnose.py` and paste its output here.
   
   ```
   
   Package used (Python/R/Scala/Julia):
   (I'm using ...)
   
   For Scala user, please provide:
   1. Java version: (`java -version`)
   2. Maven version: (`mvn -version`)
   3. Scala runtime if applicable: (`scala -version`)
   
   For R user, please provide R `sessionInfo()`:
   
   ## Build info (Required if built from source)
   
   Compiler (gcc/clang/mingw/visual studio):
   
   MXNet commit hash:
   (Paste the output of `git rev-parse HEAD` here.)
   
   Build config:
   (Paste the content of config.mk, or the build command.)
   
   ## Error Message:
   (Paste the complete error message, including stack trace.)
   
   ## Minimum reproducible example
   (If you are using your own code, please provide a short script that 
reproduces the error. Otherwise, please provide link to the existing example.)
   
   ## Steps to reproduce
   (Paste the commands you ran that produced the error.)
   
   1.
   2.
   
   ## What have you tried to solve it?
   
   1.
   2.
   




[GitHub] szha commented on issue #9775: MXNet doesn't seem to have "Squeeze" symbol support. Is this expected to be added ?

2018-02-12 Thread GitBox
szha commented on issue #9775: MXNet doesn't seem to have "Squeeze" symbol 
support. Is this expected to be added ?
URL: 
https://github.com/apache/incubator-mxnet/issues/9775#issuecomment-365112617
 
 
   This is already provided by #9734 thanks to @reminisce 
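
For reference, the requested `squeeze` semantics (dropping size-1 axes, 
optionally restricted to given axes) match NumPy's:

```python
import numpy as np

x = np.zeros((1, 3, 1, 2))
print(np.squeeze(x).shape)          # all size-1 axes removed -> (3, 2)
print(np.squeeze(x, axis=2).shape)  # only axis 2 removed -> (1, 3, 2)
```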




[GitHub] szha closed issue #9775: MXNet doesn't seem to have "Squeeze" symbol support. Is this expected to be added ?

2018-02-12 Thread GitBox
szha closed issue #9775: MXNet doesn't seem to have "Squeeze" symbol support. 
Is this expected to be added ?
URL: https://github.com/apache/incubator-mxnet/issues/9775
 
 
   




[GitHub] sxjscience commented on a change in pull request #9747: Add contrib.rand_log_uniform

2018-02-12 Thread GitBox
sxjscience commented on a change in pull request #9747: Add 
contrib.rand_log_uniform
URL: https://github.com/apache/incubator-mxnet/pull/9747#discussion_r167737932
 
 

 ##
 File path: python/mxnet/ndarray/contrib.py
 ##
 @@ -18,9 +18,76 @@
 # coding: utf-8
 # pylint: disable=wildcard-import, unused-wildcard-import
 """Contrib NDArray API of MXNet."""
+import math
+from ..context import current_context
+from ..random import uniform
 try:
 from .gen_contrib import *
 except ImportError:
 pass
 
-__all__ = []
+__all__ = ["rand_log_uniform"]
+
+def rand_log_uniform(true_classes, num_sampled, range_max, ctx=None):
+"""Draw random samples from an approximately log-uniform or Zipfian 
distribution.
+
+This operation randomly samples *num_sampled* candidates from the range of 
integers [0, range_max).
+The elements of sampled_candidates are drawn with replacement from the 
base distribution.
+
+The base distribution for this operator is an approximately log-uniform or 
Zipfian distribution:
+
+P(class) = (log(class + 2) - log(class + 1)) / log(range_max + 1)
+
+This sampler is useful when the true classes approximately follow such a 
distribution.
+For example, if the classes represent words in a lexicon sorted in 
decreasing order of \
+frequency. If your classes are not ordered by decreasing frequency, do not 
use this op.
+
+Additionally, it also returns the number of times each of the \
+true classes and the sampled classes is expected to occur.
+
+Parameters
+----------
+true_classes : NDArray
+A 1-D NDArray of the target classes.
+num_sampled: int
+The number of classes to randomly sample.
+range_max: int
+The number of possible classes.
+ctx : Context
+Device context of output. Default is current context. Overridden by
+`mu.context` when `mu` is an NDArray.
+
+Returns
+-------
+list of NDArrays
+A 1-D `int64` `NDArray` for sampled candidate classes, a 1-D `float64` 
`NDArray` for \
+the expected count for true classes, and a 1-D `float64` `NDArray` for 
the \
+expected count for sampled classes.
 
 Review comment:
   We need to write the docstring as:
   ```log
    Returns
    -------
   samples : NDArray
   A 1-D `int64` `NDArray` for sampled candidate classes
   exp_count_true : NDArray
  ...
   exp_count_sample : NDArray
  ...
   ```




[GitHub] sxjscience commented on a change in pull request #9747: Add contrib.rand_log_uniform

2018-02-12 Thread GitBox
sxjscience commented on a change in pull request #9747: Add 
contrib.rand_log_uniform
URL: https://github.com/apache/incubator-mxnet/pull/9747#discussion_r167739501
 
 

 ##
 File path: python/mxnet/ndarray/contrib.py
 ##
 @@ -18,9 +18,76 @@
 # coding: utf-8
 # pylint: disable=wildcard-import, unused-wildcard-import
 """Contrib NDArray API of MXNet."""
+import math
+from ..context import current_context
+from ..random import uniform
 try:
 from .gen_contrib import *
 except ImportError:
 pass
 
-__all__ = []
+__all__ = ["rand_log_uniform"]
+
+def rand_log_uniform(true_classes, num_sampled, range_max, ctx=None):
+"""Draw random samples from an approximately log-uniform or Zipfian 
distribution.
+
+This operation randomly samples *num_sampled* candidates from the range of 
integers [0, range_max).
+The elements of sampled_candidates are drawn with replacement from the 
base distribution.
+
+The base distribution for this operator is an approximately log-uniform or 
Zipfian distribution:
+
+P(class) = (log(class + 2) - log(class + 1)) / log(range_max + 1)
+
+This sampler is useful when the true classes approximately follow such a 
distribution.
+For example, if the classes represent words in a lexicon sorted in 
decreasing order of \
+frequency. If your classes are not ordered by decreasing frequency, do not 
use this op.
+
+Additionally, it also returns the number of times each of the \
+true classes and the sampled classes is expected to occur.
+
+Parameters
+----------
+true_classes : NDArray
+A 1-D NDArray of the target classes.
+num_sampled: int
+The number of classes to randomly sample.
+range_max: int
+The number of possible classes.
+ctx : Context
+Device context of output. Default is current context. Overridden by
+`mu.context` when `mu` is an NDArray.
+
+Returns
+-------
+list of NDArrays
+A 1-D `int64` `NDArray` for sampled candidate classes, a 1-D `float64` 
`NDArray` for \
+the expected count for true classes, and a 1-D `float64` `NDArray` for 
the \
+expected count for sampled classes.
+
+Examples
+--------
+>>> true_cls = mx.nd.array([3])
+>>> samples, exp_count_true, exp_count_sample = 
mx.nd.contrib.rand_log_uniform(true_cls, 4, 5)
+>>> samples
+[1 3 3 3]
+
+>>> exp_count_true
+[ 0.12453879]
+
+>>> exp_count_sample
+[ 0.22629439  0.12453879  0.12453879  0.12453879]
+
+"""
+if ctx is None:
+ctx = current_context()
+log_range = math.log(range_max + 1)
+rand = uniform(0, log_range, shape=(num_sampled,), dtype='float64', 
ctx=ctx)
+# make sure sampled_classes are in the range of [0, range_max)
+sampled_classes = (rand.exp() - 1).astype('int64') % range_max
+
+true_classes = true_classes.as_in_context(ctx).astype('float64')
+expected_count_true = ((true_classes + 2.0) / (true_classes + 1.0)).log() 
/ log_range
+# cast sampled classes to fp64 to avoid integer division
+sampled_cls_fp64 = sampled_classes.astype('float64')
+expected_count_sampled = ((sampled_cls_fp64 + 2.0) / (sampled_cls_fp64 + 
1.0)).log() / log_range
+return [sampled_classes, expected_count_true, expected_count_sampled]
 
 Review comment:
   No need to return a list here.




[GitHub] sxjscience commented on a change in pull request #9747: Add contrib.rand_log_uniform

2018-02-12 Thread GitBox
sxjscience commented on a change in pull request #9747: Add 
contrib.rand_log_uniform
URL: https://github.com/apache/incubator-mxnet/pull/9747#discussion_r167740339
 
 

 ##
 File path: python/mxnet/ndarray/contrib.py
 ##
 @@ -18,9 +18,76 @@
 # coding: utf-8
 # pylint: disable=wildcard-import, unused-wildcard-import
 """Contrib NDArray API of MXNet."""
+import math
+from ..context import current_context
+from ..random import uniform
 try:
 from .gen_contrib import *
 except ImportError:
 pass
 
-__all__ = []
+__all__ = ["rand_log_uniform"]
+
+def rand_log_uniform(true_classes, num_sampled, range_max, ctx=None):
 
 Review comment:
   I think it should not be called `rand_log_uniform` because `LogUniform` 
has a specific meaning. It should be called something like `rand_zipfian`, or 
`log_uniform_candidate_sampler` as in TF.




[GitHub] sxjscience commented on a change in pull request #9747: Add contrib.rand_log_uniform

2018-02-12 Thread GitBox
sxjscience commented on a change in pull request #9747: Add 
contrib.rand_log_uniform
URL: https://github.com/apache/incubator-mxnet/pull/9747#discussion_r167741420
 
 

 ##
 File path: python/mxnet/ndarray/contrib.py
 ##
 @@ -18,9 +18,76 @@
 # coding: utf-8
 # pylint: disable=wildcard-import, unused-wildcard-import
 """Contrib NDArray API of MXNet."""
+import math
+from ..context import current_context
+from ..random import uniform
 try:
 from .gen_contrib import *
 except ImportError:
 pass
 
-__all__ = []
+__all__ = ["rand_log_uniform"]
+
+def rand_log_uniform(true_classes, num_sampled, range_max, ctx=None):
+"""Draw random samples from an approximately log-uniform or Zipfian 
distribution.
+
+This operation randomly samples *num_sampled* candidates from the range of 
integers [0, range_max).
+The elements of sampled_candidates are drawn with replacement from the 
base distribution.
+
+The base distribution for this operator is an approximately log-uniform or 
Zipfian distribution:
+
+P(class) = (log(class + 2) - log(class + 1)) / log(range_max + 1)
+
+This sampler is useful when the true classes approximately follow such a 
distribution.
+For example, if the classes represent words in a lexicon sorted in 
decreasing order of \
+frequency. If your classes are not ordered by decreasing frequency, do not 
use this op.
+
+Additionally, it also returns the number of times each of the \
+true classes and the sampled classes is expected to occur.
+
+Parameters
+----------
+true_classes : NDArray
+A 1-D NDArray of the target classes.
+num_sampled: int
+The number of classes to randomly sample.
+range_max: int
+The number of possible classes.
+ctx : Context
+Device context of output. Default is current context. Overridden by
+`mu.context` when `mu` is an NDArray.
+
+Returns
+-------
+list of NDArrays
+A 1-D `int64` `NDArray` for sampled candidate classes, a 1-D `float64` 
`NDArray` for \
+the expected count for true classes, and a 1-D `float64` `NDArray` for 
the \
+expected count for sampled classes.
+
+Examples
+--------
+>>> true_cls = mx.nd.array([3])
+>>> samples, exp_count_true, exp_count_sample = 
mx.nd.contrib.rand_log_uniform(true_cls, 4, 5)
+>>> samples
+[1 3 3 3]
+
+>>> exp_count_true
+[ 0.12453879]
+
+>>> exp_count_sample
+[ 0.22629439  0.12453879  0.12453879  0.12453879]
 
 Review comment:
   The example output looks suspicious as it does not sum up to 1.




[GitHub] sxjscience commented on a change in pull request #9747: Add contrib.rand_log_uniform

2018-02-12 Thread GitBox
sxjscience commented on a change in pull request #9747: Add 
contrib.rand_log_uniform
URL: https://github.com/apache/incubator-mxnet/pull/9747#discussion_r167741277
 
 

 ##
 File path: python/mxnet/ndarray/contrib.py
 ##
 @@ -18,9 +18,76 @@
 # coding: utf-8
 # pylint: disable=wildcard-import, unused-wildcard-import
 """Contrib NDArray API of MXNet."""
+import math
+from ..context import current_context
+from ..random import uniform
 try:
 from .gen_contrib import *
 except ImportError:
 pass
 
-__all__ = []
+__all__ = ["rand_log_uniform"]
+
+def rand_log_uniform(true_classes, num_sampled, range_max, ctx=None):
+"""Draw random samples from an approximately log-uniform or Zipfian 
distribution.
+
+This operation randomly samples *num_sampled* candidates from the range of 
integers [0, range_max).
+The elements of sampled_candidates are drawn with replacement from the 
base distribution.
+
+The base distribution for this operator is an approximately log-uniform or 
Zipfian distribution:
+
+P(class) = (log(class + 2) - log(class + 1)) / log(range_max + 1)
+
+This sampler is useful when the true classes approximately follow such a 
distribution.
+For example, if the classes represent words in a lexicon sorted in 
decreasing order of \
+frequency. If your classes are not ordered by decreasing frequency, do not 
use this op.
+
+Additionally, it also returns the number of times each of the \
+true classes and the sampled classes is expected to occur.
+
+Parameters
+----------
+true_classes : NDArray
+A 1-D NDArray of the target classes.
+num_sampled: int
+The number of classes to randomly sample.
+range_max: int
+The number of possible classes.
+ctx : Context
+Device context of output. Default is current context. Overridden by
+`mu.context` when `mu` is an NDArray.
+
+Returns
+-------
+list of NDArrays
+A 1-D `int64` `NDArray` for sampled candidate classes, a 1-D `float64` 
`NDArray` for \
+the expected count for true classes, and a 1-D `float64` `NDArray` for 
the \
+expected count for sampled classes.
+
+Examples
+--------
+>>> true_cls = mx.nd.array([3])
+>>> samples, exp_count_true, exp_count_sample = 
mx.nd.contrib.rand_log_uniform(true_cls, 4, 5)
+>>> samples
+[1 3 3 3]
+
+>>> exp_count_true
+[ 0.12453879]
+
+>>> exp_count_sample
+[ 0.22629439  0.12453879  0.12453879  0.12453879]
+
+"""
+if ctx is None:
+ctx = current_context()
+log_range = math.log(range_max + 1)
+rand = uniform(0, log_range, shape=(num_sampled,), dtype='float64', 
ctx=ctx)
+# make sure sampled_classes are in the range of [0, range_max)
+sampled_classes = (rand.exp() - 1).astype('int64') % range_max
+
+true_classes = true_classes.as_in_context(ctx).astype('float64')
+expected_count_true = ((true_classes + 2.0) / (true_classes + 1.0)).log() 
/ log_range
 
 Review comment:
   I think it should be `expected_count_true = ((true_classes + 2.0) / 
(true_classes + 1.0)).log() / log_range * num_sampled`. Otherwise it should be 
called something like `prob_true_class`.




[GitHub] sxjscience commented on a change in pull request #9747: Add contrib.rand_log_uniform

2018-02-12 Thread GitBox
sxjscience commented on a change in pull request #9747: Add 
contrib.rand_log_uniform
URL: https://github.com/apache/incubator-mxnet/pull/9747#discussion_r167741541
 
 

 ##
 File path: python/mxnet/ndarray/contrib.py
 ##
 @@ -18,9 +18,76 @@
 # coding: utf-8
 # pylint: disable=wildcard-import, unused-wildcard-import
 """Contrib NDArray API of MXNet."""
+import math
+from ..context import current_context
+from ..random import uniform
 try:
 from .gen_contrib import *
 except ImportError:
 pass
 
-__all__ = []
+__all__ = ["rand_log_uniform"]
+
+def rand_log_uniform(true_classes, num_sampled, range_max, ctx=None):
+"""Draw random samples from an approximately log-uniform or Zipfian 
distribution.
+
+This operation randomly samples *num_sampled* candidates from the range of 
integers [0, range_max).
+The elements of sampled_candidates are drawn with replacement from the 
base distribution.
+
+The base distribution for this operator is an approximately log-uniform or 
Zipfian distribution:
+
+P(class) = (log(class + 2) - log(class + 1)) / log(range_max + 1)
+
+This sampler is useful when the true classes approximately follow such a 
distribution.
+For example, if the classes represent words in a lexicon sorted in 
decreasing order of \
+frequency. If your classes are not ordered by decreasing frequency, do not 
use this op.
+
+Additionally, it also returns the number of times each of the \
+true classes and the sampled classes is expected to occur.
+
+Parameters
+----------
+true_classes : NDArray
+A 1-D NDArray of the target classes.
+num_sampled: int
+The number of classes to randomly sample.
+range_max: int
+The number of possible classes.
+ctx : Context
+Device context of output. Default is current context. Overridden by
+`mu.context` when `mu` is an NDArray.
+
+Returns
+-------
+list of NDArrays
+A 1-D `int64` `NDArray` for sampled candidate classes, a 1-D `float64` 
`NDArray` for \
+the expected count for true classes, and a 1-D `float64` `NDArray` for 
the \
+expected count for sampled classes.
+
+Examples
+--------
+>>> true_cls = mx.nd.array([3])
+>>> samples, exp_count_true, exp_count_sample = 
mx.nd.contrib.rand_log_uniform(true_cls, 4, 5)
+>>> samples
+[1 3 3 3]
+
+>>> exp_count_true
+[ 0.12453879]
+
+>>> exp_count_sample
+[ 0.22629439  0.12453879  0.12453879  0.12453879]
 
 Review comment:
   Sorry I've misunderstood the term. It should be correct.




[GitHub] sxjscience commented on a change in pull request #9747: Add contrib.rand_log_uniform

2018-02-12 Thread GitBox
sxjscience commented on a change in pull request #9747: Add 
contrib.rand_log_uniform
URL: https://github.com/apache/incubator-mxnet/pull/9747#discussion_r167741923
 
 

 ##
 File path: python/mxnet/ndarray/contrib.py
 ##
 @@ -18,9 +18,76 @@
 # coding: utf-8
 # pylint: disable=wildcard-import, unused-wildcard-import
 """Contrib NDArray API of MXNet."""
+import math
+from ..context import current_context
+from ..random import uniform
 try:
 from .gen_contrib import *
 except ImportError:
 pass
 
-__all__ = []
+__all__ = ["rand_log_uniform"]
+
+def rand_log_uniform(true_classes, num_sampled, range_max, ctx=None):
+"""Draw random samples from an approximately log-uniform or Zipfian 
distribution.
+
+This operation randomly samples *num_sampled* candidates from the range of 
integers [0, range_max).
+The elements of sampled_candidates are drawn with replacement from the 
base distribution.
+
+The base distribution for this operator is an approximately log-uniform or 
Zipfian distribution:
+
+P(class) = (log(class + 2) - log(class + 1)) / log(range_max + 1)
+
+This sampler is useful when the true classes approximately follow such a 
distribution.
+For example, if the classes represent words in a lexicon sorted in 
decreasing order of \
+frequency. If your classes are not ordered by decreasing frequency, do not 
use this op.
+
+Additionally, it also returns the number of times each of the \
+true classes and the sampled classes is expected to occur.
+
+Parameters
+----------
+true_classes : NDArray
+A 1-D NDArray of the target classes.
+num_sampled: int
+The number of classes to randomly sample.
+range_max: int
+The number of possible classes.
+ctx : Context
+Device context of output. Default is current context. Overridden by
+`mu.context` when `mu` is an NDArray.
+
+Returns
+-------
+list of NDArrays
+A 1-D `int64` `NDArray` for sampled candidate classes, a 1-D `float64` 
`NDArray` for \
+the expected count for true classes, and a 1-D `float64` `NDArray` for 
the \
+expected count for sampled classes.
+
+Examples
+--------
+>>> true_cls = mx.nd.array([3])
+>>> samples, exp_count_true, exp_count_sample = 
mx.nd.contrib.rand_log_uniform(true_cls, 4, 5)
+>>> samples
+[1 3 3 3]
+
+>>> exp_count_true
+[ 0.12453879]
+
+>>> exp_count_sample
+[ 0.22629439  0.12453879  0.12453879  0.12453879]
 
 Review comment:
   I felt it was suspicious at first glance because the exp_count of 1 is larger 
than the exp_count of 3. However, the sampling results show that 3 occurs much 
more often than 1. We need to sample multiple times and test whether the 
empirical expectation matches the true expectation.




[GitHub] anirudh2290 opened a new pull request #9776: Use get_bz2_data from test_utils for sparse_op script

2018-02-12 Thread GitBox
anirudh2290 opened a new pull request #9776: Use get_bz2_data from test_utils 
for sparse_op script
URL: https://github.com/apache/incubator-mxnet/pull/9776
 
 
   ## Description ##
   Fix the sparse_op script by making it use `get_bz2_data` from `test_utils`. 
   
   ## Checklist ##
   ### Essentials ###
   - [x] Passed code style checking (`make lint`)
   - [x] Changes are complete (i.e. I finished coding on this PR)
   - [x] All changes have test coverage:
   - Unit tests are added for small changes to verify correctness (e.g. adding 
a new operator)
   - Nightly tests are added for complicated/long-running ones (e.g. changing 
distributed kvstore)
   - Build tests will be added for build configuration changes (e.g. adding a 
new build option with NCCL)
   - [x] Code is well-documented: 
   - For user-facing API changes, API doc string has been updated. 
   - For new C++ functions in header files, their functionalities and arguments 
are documented. 
   - For new examples, README.md is added to explain what the example does, 
the source of the dataset, expected performance on the test set, and a 
reference to the original paper if applicable
   - [x] To the best of my knowledge, examples are either not affected by this 
change, or have been fixed to be compatible with this change
   
   ### Changes ###
   - [x] Feature1, tests, (and when applicable, API doc)
   
   @eric-haibin-lin 
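
For context, a minimal sketch of what such a download-and-decompress helper 
might look like (a hypothetical re-implementation, not the actual 
`mxnet.test_utils.get_bz2_data`):

```python
import bz2
import os
from urllib.request import urlretrieve

def get_bz2_data(data_dir, data_name, url, data_origin_name):
    """Download a .bz2 archive (if needed) and decompress it into data_dir."""
    data_path = os.path.join(data_dir, data_name)
    archive_path = os.path.join(data_dir, data_origin_name)
    if not os.path.exists(data_path):
        os.makedirs(data_dir, exist_ok=True)
        if not os.path.exists(archive_path):
            urlretrieve(url, archive_path)  # fetch the compressed archive
        with bz2.open(archive_path, "rb") as src, open(data_path, "wb") as dst:
            dst.write(src.read())           # decompress next to the archive
    return data_path
```

Both the download and the decompression are skipped when the corresponding 
file already exists, so repeated benchmark runs reuse the cached data.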




[GitHub] eric-haibin-lin commented on a change in pull request #9776: Use get_bz2_data from test_utils for sparse_op script

2018-02-12 Thread GitBox
eric-haibin-lin commented on a change in pull request #9776: Use get_bz2_data 
from test_utils for sparse_op script
URL: https://github.com/apache/incubator-mxnet/pull/9776#discussion_r167748653
 
 

 ##
 File path: benchmark/python/sparse/sparse_op.py
 ##
 @@ -24,7 +24,8 @@
 import argparse
 
 from mxnet.base import check_call, _LIB
-from util import get_data, estimate_density
+from mxnet.test_utils import get_bz2_data
 
 Review comment:
   when was this broken?




[GitHub] eric-haibin-lin commented on a change in pull request #9747: Add contrib.rand_log_uniform

2018-02-12 Thread GitBox
eric-haibin-lin commented on a change in pull request #9747: Add 
contrib.rand_log_uniform
URL: https://github.com/apache/incubator-mxnet/pull/9747#discussion_r167748997
 
 

 ##
 File path: python/mxnet/ndarray/contrib.py
 ##
 @@ -18,9 +18,76 @@
 # coding: utf-8
 # pylint: disable=wildcard-import, unused-wildcard-import
 """Contrib NDArray API of MXNet."""
+import math
+from ..context import current_context
+from ..random import uniform
 try:
 from .gen_contrib import *
 except ImportError:
 pass
 
-__all__ = []
+__all__ = ["rand_log_uniform"]
+
+def rand_log_uniform(true_classes, num_sampled, range_max, ctx=None):
+"""Draw random samples from an approximately log-uniform or Zipfian 
distribution.
+
+This operation randomly samples *num_sampled* candidates from the range of 
integers [0, range_max).
+The elements of sampled_candidates are drawn with replacement from the 
base distribution.
+
+The base distribution for this operator is an approximately log-uniform or 
Zipfian distribution:
+
+P(class) = (log(class + 2) - log(class + 1)) / log(range_max + 1)
+
+This sampler is useful when the true classes approximately follow such a 
distribution.
+For example, if the classes represent words in a lexicon sorted in 
decreasing order of \
+frequency. If your classes are not ordered by decreasing frequency, do not 
use this op.
+
+Additionally, it also returns the number of times each of the \
+true classes and the sampled classes is expected to occur.
+
+Parameters
+----------
+true_classes : NDArray
+A 1-D NDArray of the target classes.
+num_sampled: int
+The number of classes to randomly sample.
+range_max: int
+The number of possible classes.
+ctx : Context
+Device context of output. Default is current context. Overridden by
+`mu.context` when `mu` is an NDArray.
+
+Returns
+-------
+list of NDArrays
+A 1-D `int64` `NDArray` for sampled candidate classes, a 1-D `float64` 
`NDArray` for \
+the expected count for true classes, and a 1-D `float64` `NDArray` for 
the \
+expected count for sampled classes.
 
 Review comment:
   will do




[GitHub] eric-haibin-lin commented on a change in pull request #9747: Add contrib.rand_log_uniform

2018-02-12 Thread GitBox
eric-haibin-lin commented on a change in pull request #9747: Add 
contrib.rand_log_uniform
URL: https://github.com/apache/incubator-mxnet/pull/9747#discussion_r167749166
 
 

 ##
 File path: python/mxnet/ndarray/contrib.py
 ##
 @@ -18,9 +18,76 @@
 # coding: utf-8
 # pylint: disable=wildcard-import, unused-wildcard-import
 """Contrib NDArray API of MXNet."""
+import math
+from ..context import current_context
+from ..random import uniform
 try:
 from .gen_contrib import *
 except ImportError:
 pass
 
-__all__ = []
+__all__ = ["rand_log_uniform"]
+
+def rand_log_uniform(true_classes, num_sampled, range_max, ctx=None):
+"""Draw random samples from an approximately log-uniform or Zipfian 
distribution.
+
+This operation randomly samples *num_sampled* candidates from the range of 
integers [0, range_max).
+The elements of sampled_candidates are drawn with replacement from the 
base distribution.
+
+The base distribution for this operator is an approximately log-uniform or 
Zipfian distribution:
+
+P(class) = (log(class + 2) - log(class + 1)) / log(range_max + 1)
+
+This sampler is useful when the true classes approximately follow such a 
distribution.
+For example, if the classes represent words in a lexicon sorted in 
decreasing order of \
+frequency. If your classes are not ordered by decreasing frequency, do not 
use this op.
+
+Additionally, it also returns the number of times each of the \
+true classes and the sampled classes is expected to occur.
+
+Parameters
+----------
+true_classes : NDArray
+A 1-D NDArray of the target classes.
+num_sampled: int
+The number of classes to randomly sample.
+range_max: int
+The number of possible classes.
+ctx : Context
+Device context of output. Default is current context.
+
+Returns
+-------
+list of NDArrays
+A 1-D `int64` `NDArray` for sampled candidate classes, a 1-D `float64` 
`NDArray` for \
+the expected count for true classes, and a 1-D `float64` `NDArray` for 
the \
+expected count for sampled classes.
+
+Examples
+--------
+>>> true_cls = mx.nd.array([3])
+>>> samples, exp_count_true, exp_count_sample = 
mx.nd.contrib.rand_log_uniform(true_cls, 4, 5)
+>>> samples
+[1 3 3 3]
+
+>>> exp_count_true
+[ 0.12453879]
+
+>>> exp_count_sample
+[ 0.22629439  0.12453879  0.12453879  0.12453879]
+
+"""
+    if ctx is None:
+        ctx = current_context()
+    log_range = math.log(range_max + 1)
+    rand = uniform(0, log_range, shape=(num_sampled,), dtype='float64', ctx=ctx)
+    # make sure sampled_classes are in the range of [0, range_max)
+    sampled_classes = (rand.exp() - 1).astype('int64') % range_max
+
+    true_classes = true_classes.as_in_context(ctx).astype('float64')
+    expected_count_true = ((true_classes + 2.0) / (true_classes + 1.0)).log() / log_range
 
 Review comment:
   You are right, I should either multiply it by `num_sampled` or change the 
name. Will do an update.
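The exchange above can be made concrete with a small NumPy sketch (names are illustrative, not the mxnet operator): it draws samples via the same inverse-CDF trick as the diff, and computes an expected count that multiplies the per-draw probability by `num_sampled`, the fix the comment proposes.

```python
import math
import numpy as np

def log_uniform_sample(num_sampled, range_max, seed=0):
    """Draw with replacement from the approximately log-uniform distribution
    P(c) = (log(c + 2) - log(c + 1)) / log(range_max + 1), using the
    inverse-CDF trick from the diff: exp(U[0, log_range)) - 1."""
    rng = np.random.default_rng(seed)
    log_range = math.log(range_max + 1)
    rand = rng.uniform(0.0, log_range, size=num_sampled)
    # floor(exp(rand) - 1) already lies in [0, range_max); the modulo
    # mirrors the belt-and-suspenders guard in the PR code.
    return (np.exp(rand) - 1.0).astype(np.int64) % range_max

def expected_count(classes, num_sampled, range_max):
    """Expected occurrences over num_sampled draws: the per-draw
    probability scaled by num_sampled (the correction discussed above)."""
    c = np.asarray(classes, dtype=np.float64)
    per_draw = np.log((c + 2.0) / (c + 1.0)) / math.log(range_max + 1)
    return num_sampled * per_draw
```

With the docstring's numbers (true class 3, `num_sampled=4`, `range_max=5`), the per-draw probability is ≈0.1245, matching the quoted example output, so the expected count over four draws is ≈0.498. Note the per-draw probabilities over all of [0, range_max) telescope to exactly 1.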




[GitHub] rahul003 opened a new issue #9774: mx.io.ImageRecordIter does not respect dtype argument

2018-02-12 Thread GitBox
rahul003 opened a new issue #9774: mx.io.ImageRecordIter does not respect dtype 
argument
URL: https://github.com/apache/incubator-mxnet/issues/9774
 
 
   ## Description
   mx.io.ImageRecordIter (src/io/iter_image_recordio_2.cc) does not respect the 
dtype parameter it accepts. 
   It is designed to work only with float32, because the class is instantiated 
with the real_t dtype. Can we make it handle 
fp16 too? This is important for fp16 training.
   
   ## Environment info (Required)
   Mxnet 1.0
   Package used: Python
   
   ## Error Message:
   Silently generates fp32 data
   
   ## Minimum reproducible example
   N/A
   
   ## Steps to reproduce
   N/A
   
   ## What have you tried to solve it?
   Can we come up with a better way than creating a new operator that passes 
DType as fp16?
   
   @ptrendx 
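
   Until the iterator honors dtype, one workaround is to cast each batch after 
loading. A minimal sketch of the idea, with plain NumPy arrays standing in for 
the float32 batches an `ImageRecordIter` yields (the wrapper name is 
hypothetical):

```python
import numpy as np

def as_fp16(batch_iter):
    # Hypothetical wrapper: the underlying iterator silently yields float32
    # (the bug reported above), so cast every batch to float16 on the way out.
    for batch in batch_iter:
        yield batch.astype(np.float16)

fp32_batches = (np.ones((2, 3), dtype=np.float32) for _ in range(2))
fp16_batches = list(as_fp16(fp32_batches))
```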




[GitHub] szha commented on issue #9779: add missed optimizer docs

2018-02-12 Thread GitBox
szha commented on issue #9779: add missed optimizer docs
URL: https://github.com/apache/incubator-mxnet/pull/9779#issuecomment-365163434
 
 
   Updated doc can be found at 
http://mxnet-doc.s3-accelerate.dualstack.amazonaws.com/api/python/optimization/optimization.html#the-mxnet-optimizer-package




[GitHub] rimusolem opened a new issue #9780: Build error

2018-02-12 Thread GitBox
rimusolem opened a new issue #9780: Build error
URL: https://github.com/apache/incubator-mxnet/issues/9780
 
 
   I got the following error while building mxnet on Arch Linux with gcc 7.3.0. 
   ```
   make[3]: Entering directory 
'/home/xxx/LocalPackages/mxnet-without-cuda/src/incubator-mxnet/ps-lite/zeromq-4.1.4'
 CXX  src/libzmq_la-address.lo
   In file included from 
/usr/include/c++/7.3.0/x86_64-pc-linux-gnu/bits/os_defines.h:39:0,
from 
/usr/include/c++/7.3.0/x86_64-pc-linux-gnu/bits/c++config.h:533,
from /usr/include/c++/7.3.0/string:38,
from src/address.hpp:33,
from src/address.cpp:31:
   /usr/include/features.h:376:4: error: #warning _FORTIFY_SOURCE requires 
compiling with optimization (-O) [-Werror=cpp]
#  warning _FORTIFY_SOURCE requires compiling with optimization (-O)
   ^~~
   cc1plus: all warnings being treated as errors
   make[3]: *** [Makefile:2543: src/libzmq_la-address.lo] Error 1
   ```
   I used these options.
   ```
   USE_BLAS=mkl ADD_CFLAGS=-I/opt/intel/mkl/include 
ADD_LDFLAGS=-L/opt/intel/mkl/lib/intel64 USE_LAPACK=1 USE_CPP_PACKAGE=1 
USE_DIST_KVSTORE=1
   ```
   The compilation was successful without `USE_DIST_KVSTORE=1`.
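
   `_FORTIFY_SOURCE` (enabled by default on Arch) requires `-O` or higher, so 
one possible workaround, under the untested assumption that `ADD_CFLAGS` 
reaches the bundled zeromq build, is to add an optimization level to the flags 
already used above:

```shell
# Untested sketch: same invocation as above, plus -O2 so _FORTIFY_SOURCE
# no longer triggers a warning-promoted error in the bundled zeromq sources.
make USE_BLAS=mkl \
     ADD_CFLAGS="-I/opt/intel/mkl/include -O2" \
     ADD_LDFLAGS=-L/opt/intel/mkl/lib/intel64 \
     USE_LAPACK=1 USE_CPP_PACKAGE=1 USE_DIST_KVSTORE=1
```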




[GitHub] szha opened a new pull request #9779: add missed optimizer docs

2018-02-12 Thread GitBox
szha opened a new pull request #9779: add missed optimizer docs
URL: https://github.com/apache/incubator-mxnet/pull/9779
 
 
   ## Description ##
   Add the optimizers that are missing from the API doc
   
   ## Checklist ##
   ### Essentials ###
   - [x] To the best of my knowledge, examples are either not affected by this 
change, or have been fixed to be compatible with this change
   
   ### Changes ###
   - [x] Add missing optimizers to the API doc




[GitHub] rimusolem commented on issue #9780: Build error

2018-02-12 Thread GitBox
rimusolem commented on issue #9780: Build error
URL: 
https://github.com/apache/incubator-mxnet/issues/9780#issuecomment-365178385
 
 
   I have zeromq (4.2.2) installed. Is there a way that I can skip the zeromq 
build?




[GitHub] szha opened a new pull request #9778: Update loss.md

2018-02-12 Thread GitBox
szha opened a new pull request #9778: Update loss.md
URL: https://github.com/apache/incubator-mxnet/pull/9778
 
 
   ## Description ##
   Remove duplicate doc entry in gluon loss.
   
   ## Checklist ##
   ### Essentials ###
   - [x] To the best of my knowledge, examples are either not affected by this 
change, or have been fixed to be compatible with this change
   
   ### Changes ###
   - [x] Remove duplicate entry for BCE
   




[GitHub] szha commented on issue #9777: [MX-9588] Add micro averaging strategy for F1 metric

2018-02-12 Thread GitBox
szha commented on issue #9777: [MX-9588] Add micro averaging strategy for F1 
metric
URL: https://github.com/apache/incubator-mxnet/pull/9777#issuecomment-365153901
 
 
   This fixes #9588 




[GitHub] cjolivier01 closed pull request #9672: CMake CUDA fixes + NCCL

2018-02-12 Thread GitBox
cjolivier01 closed pull request #9672: CMake CUDA fixes + NCCL
URL: https://github.com/apache/incubator-mxnet/pull/9672
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git a/CMakeLists.txt b/CMakeLists.txt
index cddbf725c2..2edf3c23c9 100644
--- a/CMakeLists.txt
+++ b/CMakeLists.txt
@@ -1,35 +1,17 @@
 cmake_minimum_required(VERSION 3.0.2)
-message(STATUS "CMake version '${CMAKE_VERSION}' using generator 
'${CMAKE_GENERATOR}'")
-if(((${CMAKE_GENERATOR} MATCHES "Visual Studio.*") OR (${CMAKE_GENERATOR} 
MATCHES "Xcode.*"))
-AND ((${CMAKE_VERSION} VERSION_GREATER "3.9.0") OR (${CMAKE_VERSION} 
VERSION_EQUAL "3.9.0")))
-  # Toolsets are only supported for Visual Studio and Xcode
-  set(FIRST_CUDA TRUE)
-else()
-  set(FIRST_CUDA FALSE)
-endif()
-include(cmake/Utils.cmake)
 
-#Some things have order. This must be put in front alone
-mxnet_option(USE_CUDA "Build with CUDA support"   ON)
-mxnet_option(USE_OLDCMAKECUDA   "Build with old cmake cuda" OFF)
-if(USE_CUDA)
-  add_definitions(-DMSHADOW_USE_CUDA=1)
-  IF(FIRST_CUDA AND (NOT USE_OLDCMAKECUDA))
-set(__cuda_toolset "7.5" "8.0" "9.0")
-set(CUDA_TOOLSET "8.0" CACHE STRING "Select CUDA Version.")
-set_property( CACHE CUDA_TOOLSET PROPERTY STRINGS "" ${__cuda_toolset} )
-set(CMAKE_GENERATOR_TOOLSET "cuda=${CUDA_TOOLSET},host=x64")
-project(mxnet C CXX CUDA)
-  else()
-project(mxnet C CXX)
-set(FIRST_CUDA FALSE)
-  endif()
-else()
-  project(mxnet C CXX)
-  add_definitions(-DMSHADOW_USE_CUDA=0)
+project(mxnet C CXX)
+
+if(EXISTS ${CMAKE_CURRENT_SOURCE_DIR}/build/private/local_config.cmake)
+  include(${CMAKE_CURRENT_SOURCE_DIR}/build/private/local_config.cmake)
 endif()
 
+include(${CMAKE_CURRENT_SOURCE_DIR}/cmake/Utils.cmake)
 
+#Some things have order. This must be put in front alone
+mxnet_option(USE_CUDA "Build with CUDA support"   ON)
+mxnet_option(USE_OLDCMAKECUDA "Build with old cmake cuda" OFF)
+mxnet_option(USE_NCCL "Use NVidia NCCL with CUDA" OFF)
 mxnet_option(USE_OPENCV   "Build with OpenCV support" ON)
 mxnet_option(USE_OPENMP   "Build with Openmp support" ON)
 mxnet_option(USE_CUDNN"Build with cudnn support"  ON) # one could 
set CUDNN_ROOT for search path
@@ -37,7 +19,7 @@ mxnet_option(USE_LAPACK   "Build with lapack support" 
ON IF NOT MSVC)
 mxnet_option(USE_MKL_IF_AVAILABLE "Use MKL if found" ON)
 mxnet_option(USE_MKLML_MKL"Use MKLML variant of MKL (if MKL found)" ON 
IF USE_MKL_IF_AVAILABLE AND UNIX AND (NOT APPLE))
 mxnet_option(USE_MKL_EXPERIMENTAL "Use experimental MKL (if MKL enabled and 
found)" OFF)
-mxnet_option(USE_OPERATOR_TUNING  "Enable auto-tuning of operators" ON AND NOT 
MSVC)
+mxnet_option(USE_OPERATOR_TUNING  "Enable auto-tuning of operators" ON IF NOT 
MSVC)
 mxnet_option(USE_GPERFTOOLS   "Build with GPerfTools support (if found)" 
ON)
 mxnet_option(USE_JEMALLOC "Build with Jemalloc support"   ON)
 mxnet_option(USE_PROFILER "Build with Profiler support"   ON)
@@ -47,30 +29,49 @@ mxnet_option(USE_PLUGIN_CAFFE "Use Caffe Plugin" OFF)
 mxnet_option(USE_CPP_PACKAGE  "Build C++ Package" OFF)
 mxnet_option(USE_MXNET_LIB_NAMING "Use MXNet library naming conventions." ON)
 mxnet_option(USE_GPROF"Compile with gprof (profiling) flag" OFF)
+mxnet_option(USE_CXX14_IF_AVAILABLE "Build with C++14 if the compiler supports 
it" OFF)
 mxnet_option(USE_VTUNE"Enable use of Intel Amplifier XE (VTune)" 
OFF) # one could set VTUNE_ROOT for search path
 mxnet_option(ENABLE_CUDA_RTC  "Build with CUDA runtime compilation 
support" ON)
 mxnet_option(INSTALL_EXAMPLES "Install the example source files." OFF)
 mxnet_option(USE_SIGNAL_HANDLER   "Print stack traces on segfaults." OFF)
 
-
-
-if(EXISTS ${CMAKE_CURRENT_SOURCE_DIR}/build/private/local_config.cmake)
-  include(${CMAKE_CURRENT_SOURCE_DIR}/build/private/local_config.cmake)
+if(USE_CUDA AND NOT USE_OLDCMAKECUDA)
+  message(STATUS "CMake version '${CMAKE_VERSION}' using generator 
'${CMAKE_GENERATOR}'")
+  if(
+  (
+(${CMAKE_GENERATOR} MATCHES "Visual Studio.*")
+OR (${CMAKE_GENERATOR} MATCHES "Xcode.*")
+OR (${CMAKE_GENERATOR} STREQUAL "Unix Makefiles")
+  ) AND (
+(${CMAKE_VERSION} VERSION_GREATER "3.9.0") OR (${CMAKE_VERSION} 
VERSION_EQUAL "3.9.0")
+  )
+)
+set(FIRST_CUDA TRUE)
+project(mxnet C CXX CUDA)
+message(ERROR " foo")
+  else()
+set(FIRST_CUDA FALSE)
+set(USE_OLDCMAKECUDA TRUE)
+project(mxnet C CXX)
+  endif()
+else()
+  project(mxnet C CXX)
 endif()
 
+
 if(MSVC)
   set(SYSTEM_ARCHITECTURE x86_64)
 else()
-  EXECUTE_PROCESS( COMMAND uname -m COMMAND tr -d '\n' OUTPUT_VARIABLE 
SYSTEM_ARCHITECTURE)
+ 

[incubator-mxnet] branch master updated (4a619ba -> 83f6279)

2018-02-12 Thread cjolivier01
This is an automated email from the ASF dual-hosted git repository.

cjolivier01 pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from 4a619ba  remove the extra @register (#9769)
 add 83f6279  CMake CUDA fixes + NCCL (#9672)

No new revisions were added by this update.

Summary of changes:
 CMakeLists.txt   | 142 ---
 cmake/Modules/FindNCCL.cmake |  65 
 tests/CMakeLists.txt |   1 +
 3 files changed, 158 insertions(+), 50 deletions(-)
 create mode 100644 cmake/Modules/FindNCCL.cmake

-- 
To stop receiving notification emails like this one, please contact
cjolivie...@apache.org.