[GitHub] GoodJoey opened a new issue #9709: what will happen if one of the node reboot when doing the distribute training?

2018-02-05 Thread GitBox
GoodJoey opened a new issue #9709: what will happen if one of the node reboot 
when doing the distribute training?
URL: https://github.com/apache/incubator-mxnet/issues/9709
 
 
   If I'm doing distributed training and one of my nodes is rebooted (or 'dead' for 
some other reason), what will happen? Will the training fail, or will it find 
another node (if there is any) to take over instead?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] safrooze commented on issue #9705: Added unittest for benchmarking metric performance

2018-02-05 Thread GitBox
safrooze commented on issue #9705: Added unittest for benchmarking metric 
performance
URL: https://github.com/apache/incubator-mxnet/pull/9705#issuecomment-363328924
 
 
   The intention is to observe a measurable elapsed time (hence the large data 
size) and to amplify the difference between CPU and GPU processing (hence 
processing all the data in one batch). A valid alternative is to use a small 
batch size and iterate over multiple batches. 
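The trade-off can be sketched with a toy metric in plain Python (the `accuracy` helper below is an illustrative stand-in, not the mxnet metric under test): both strategies touch the same data, and timing them shows how batch size affects the measurable elapsed time.

```python
import time

def accuracy(labels, preds):
    """Toy stand-in for a metric update: fraction of exact matches."""
    correct = sum(1 for l, p in zip(labels, preds) if l == p)
    return correct / len(labels)

def bench(labels, preds, batch_size):
    """Time the metric over the data split into batches of batch_size."""
    start = time.perf_counter()
    total, n_batches = 0.0, 0
    for i in range(0, len(labels), batch_size):
        total += accuracy(labels[i:i + batch_size], preds[i:i + batch_size])
        n_batches += 1
    return time.perf_counter() - start, total / n_batches

labels = [i % 10 for i in range(100_000)]
preds = [i % 7 for i in range(100_000)]

one_big, acc_big = bench(labels, preds, batch_size=100_000)  # one 100k batch
many_small, acc_small = bench(labels, preds, batch_size=64)  # many batches of 64
print(f"single batch: {one_big:.4f}s, batched: {many_small:.4f}s")
```

Either way the metric sees all 100k samples; the single large batch concentrates the work into one measurable call, while the batched loop adds per-batch overhead that may itself be worth measuring.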




[GitHub] Davidrjx commented on issue #9489: terminate called after throwing an instance of 'dmlc::Error'

2018-02-05 Thread GitBox
Davidrjx commented on issue #9489: terminate called after throwing an instance 
of 'dmlc::Error'
URL: 
https://github.com/apache/incubator-mxnet/issues/9489#issuecomment-363327956
 
 
   I think you should run the container with nvidia-docker; at least that's what 
I do, and I run MXNet with GPU.




[GitHub] eric-haibin-lin commented on issue #9705: Added unittest for benchmarking metric performance

2018-02-05 Thread GitBox
eric-haibin-lin commented on issue #9705: Added unittest for benchmarking 
metric performance
URL: https://github.com/apache/incubator-mxnet/pull/9705#issuecomment-363326703
 
 
   Why not include a small batch size like 64? 100k is huge. 




[GitHub] eric-haibin-lin opened a new pull request #9708: Add code signing key

2018-02-05 Thread GitBox
eric-haibin-lin opened a new pull request #9708: Add code signing key
URL: https://github.com/apache/incubator-mxnet/pull/9708
 
 
   




[GitHub] eric-haibin-lin opened a new pull request #9707: Bump version to 1.1.0

2018-02-05 Thread GitBox
eric-haibin-lin opened a new pull request #9707: Bump version to 1.1.0
URL: https://github.com/apache/incubator-mxnet/pull/9707
 
 
   




[GitHub] Davidrjx commented on issue #212: crash when running python ./predict-with-pretrained-model.py

2018-02-05 Thread GitBox
Davidrjx commented on issue #212: crash when running python 
./predict-with-pretrained-model.py
URL: https://github.com/apache/incubator-mxnet/issues/212#issuecomment-363319926
 
 
   I ran into a similar problem when running 
   `mxnet.nd.ones((2,3),mx.gpu())`, but got the following error:
   ```
   terminate called after throwing an instance of 'dmlc::Error'
 what():  [05:55:58] 
/opt/incubator-mxnet/mshadow/mshadow/./tensor_gpu-inl.h:35: Check failed: e == 
cudaSuccess CUDA: unknown error
   
   Stack trace returned 9 entries:
   [bt] (0) 
/opt/incubator-mxnet/python/mxnet/../../lib/libmxnet.so(dmlc::StackTrace[abi:cxx11]()+0x5a)
 [0x7f38edde018a]
   [bt] (1) 
/opt/incubator-mxnet/python/mxnet/../../lib/libmxnet.so(dmlc::LogMessageFatal::~LogMessageFatal()+0x28)
 [0x7f38edde0d28]
   [bt] (2) /opt/incubator-mxnet/python/mxnet/../../lib/libmxnet.so(void 
mshadow::SetDevice(int)+0xd0) [0x7f38f094b080]
   [bt] (3) /opt/incubator-mxnet/python/mxnet/../../lib/libmxnet.so(void 
mxnet::engine::ThreadedEnginePerDevice::GPUWorker<(dmlc::ConcurrentQueueType)0>(mxnet::Context,
 bool, 
mxnet::engine::ThreadedEnginePerDevice::ThreadWorkerBlock<(dmlc::ConcurrentQueueType)0>*,
 std::shared_ptr const&)+0x87) 
[0x7f38f0954fe7]
   [bt] (4) 
/opt/incubator-mxnet/python/mxnet/../../lib/libmxnet.so(std::_Function_handler), 
mxnet::engine::ThreadedEnginePerDevice::PushToExecute(mxnet::engine::OprBlock*, 
bool)::{lambda()#3}::operator()() 
const::{lambda(std::shared_ptr)#1}>::_M_invoke(std::_Any_data
 const&, std::shared_ptr&&)+0x4e) 
[0x7f38f095529e]
   [bt] (5) 
/opt/incubator-mxnet/python/mxnet/../../lib/libmxnet.so(std::thread::_Impl 
(std::shared_ptr)> >::_M_run()+0x4a) 
[0x7f38f094e97a]
   [bt] (6) /usr/lib/x86_64-linux-gnu/libstdc++.so.6(+0xb8c80) [0x7f39851cac80]
   [bt] (7) /lib/x86_64-linux-gnu/libpthread.so.0(+0x76ba) [0x7f398aa206ba]
   [bt] (8) /lib/x86_64-linux-gnu/libc.so.6(clone+0x6d) [0x7f398a75641d]
   
   
   terminate called recursively
   Aborted (core dumped)
   ```
   
   Please suggest possible solutions, thanks!
   






[GitHub] eric-haibin-lin opened a new pull request #9706: Update years in NOTICE file to 2017-2018

2018-02-05 Thread GitBox
eric-haibin-lin opened a new pull request #9706: Update years in NOTICE file to 
2017-2018
URL: https://github.com/apache/incubator-mxnet/pull/9706
 
 
   




[GitHub] ptrendx commented on issue #9583: use nd for accuracy calculation

2018-02-05 Thread GitBox
ptrendx commented on issue #9583: use nd for accuracy calculation
URL: https://github.com/apache/incubator-mxnet/pull/9583#issuecomment-363306616
 
 
   @szha Did you do the performance experiments that @piiswrong asked you about 
(and what were the results)? With this commit we see a 20% performance regression 
on 8 Voltas in ResNet.
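One plausible mechanism behind such a regression (this is a toy model, not MXNet's actual engine) is that computing the metric on device arrays forces a synchronization point every batch, erasing the overlap between host-side work and queued device ops:

```python
import queue
import threading
import time

class ToyAsyncEngine:
    """Toy model of deferred execution: ops queue up and run on a
    background thread; wait_to_read() blocks until the queue drains."""
    def __init__(self):
        self.q = queue.Queue()
        threading.Thread(target=self._worker, daemon=True).start()

    def _worker(self):
        while True:
            fn = self.q.get()
            fn()
            self.q.task_done()

    def push(self, fn):
        self.q.put(fn)

    def wait_to_read(self):
        self.q.join()

def train(engine, batches, metric_on_device):
    start = time.perf_counter()
    for _ in range(batches):
        engine.push(lambda: time.sleep(0.002))  # queued "device" op
        if metric_on_device:
            engine.wait_to_read()               # metric reads outputs: sync
        time.sleep(0.002)                       # host work that could overlap
    engine.wait_to_read()                       # final sync either way
    return time.perf_counter() - start

engine = ToyAsyncEngine()
eager = train(engine, 50, metric_on_device=True)
overlapped = train(engine, 50, metric_on_device=False)
print(f"sync every batch: {eager:.3f}s, overlap: {overlapped:.3f}s")
```

With per-batch synchronization the host and "device" work run back to back; without it they overlap, roughly halving the wall-clock time in this sketch.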




[GitHub] CodingCat commented on a change in pull request #9678: [First cut] Scala Inference APIs

2018-02-05 Thread GitBox
CodingCat commented on a change in pull request #9678: [First cut] Scala 
Inference APIs
URL: https://github.com/apache/incubator-mxnet/pull/9678#discussion_r166181091
 
 

 ##
 File path: 
scala-package/infer/src/main/scala/ml/dmlc/mxnet/infer/MXNetHandler.scala
 ##
 @@ -0,0 +1,82 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package ml.dmlc.mxnet.infer
+
+import java.util.concurrent._
+
+trait MXNetHandler {
+
+  def execute[T](f: => T): T
+
+  val executor: ExecutorService
+
+}
+
+object MXNetHandlerType extends Enumeration {
+
+  type MXNetHandlerType = Value
+  val SingleThreadHandler = Value("MXNetSingleThreadHandler")
+  val OneThreadPerModelHandler = Value("MXNetOneThreadPerModelHandler")
+}
+
+class MXNetOneThreadPerModelHandler extends MXNetHandler {
+
+  private val threadFactory = new ThreadFactory {
 
 Review comment:
   Since you are working with Scala, you may want to use a more Scala-native 
concurrency facility, i.e. Future, which provides a more elegant way to handle 
errors, etc.
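The suggestion maps directly onto Python's `concurrent.futures` for readers who know that API better. This sketch (names are illustrative, not part of the PR) shows why a Future-returning `execute` simplifies error handling: the worker's exception is re-raised with its original type, with no `ExecutionException` unwrapping.

```python
from concurrent.futures import ThreadPoolExecutor

executor = ThreadPoolExecutor(max_workers=1)

def execute(fn):
    """Submit fn; .result() returns the value or re-raises the worker's
    exception as-is -- no manual cause-unwrapping needed."""
    return executor.submit(fn).result()

val = execute(lambda: 21 * 2)
print(val)

try:
    execute(lambda: 1 / 0)
    caught = None
except ZeroDivisionError as e:
    caught = e          # original exception type survives the thread hop

executor.shutdown()
```

Scala's `Future` offers the same property through `Try`/`onComplete`, which is what makes it more elegant than catching `ExecutionException` around a blocking `get()`.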
   
   




[GitHub] CodingCat commented on a change in pull request #9678: [First cut] Scala Inference APIs

2018-02-05 Thread GitBox
CodingCat commented on a change in pull request #9678: [First cut] Scala 
Inference APIs
URL: https://github.com/apache/incubator-mxnet/pull/9678#discussion_r166182245
 
 

 ##
 File path: 
scala-package/infer/src/main/scala/ml/dmlc/mxnet/infer/PredictBase.scala
 ##
 @@ -0,0 +1,200 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package ml.dmlc.mxnet.infer
+
+import ml.dmlc.mxnet.io.NDArrayIter
+import ml.dmlc.mxnet.{DataDesc, NDArray, Shape}
+import ml.dmlc.mxnet.module.Module
+
+import scala.collection.mutable.ListBuffer
+import org.slf4j.LoggerFactory
+
+/**
+  * Base Trait for MXNNet Predictor classes.
+  */
+trait PredictBase {
+
+  /**
+* This method will take input as IndexedSeq one dimensional arrays and 
creates
+* NDArray needed for inference. The array will be reshaped based on the 
input descriptors.
+* @param input: A IndexedSequence of Java one-dimensional array, An 
IndexedSequence is
+* is needed when the model has more than one input/output
+* @return IndexedSequence array of outputs.
+*/
+  def predict(input: IndexedSeq[Array[Float]]): IndexedSeq[Array[Float]]
+
+  /**
+* Predict using NDArray as input. This method is useful when the input is 
a batch of data
+* or when multiple operations on the input/output have to performed.
+* Note: User is responsible for managing allocation/deallocation of 
NDArrays.
+* @param input: IndexedSequence NDArrays.
+* @return output of Predictions as NDArrays.
+*/
+  def predictWithNDArray(input: IndexedSeq[NDArray]): IndexedSeq[NDArray]
+
+}
+
+/**
+  * Implementation of predict routines.
+  *
+  * @param modelPathPrefix PathPrefix from where to load the model.
+  *Example: file://model-dir/resnet-152(containing 
resnet-152-symbol.json,
+  *resnet-152-.params and optionally synset.txt).
+  *Supports model loading from various sources like 
local disk,
+  *hdfs, https and s3. file://, hdfs://, https://, 
s3://
+  * @param inputDescriptors Descriptors defining the input node names, shape,
+  * layout and Type parameters
+  * @param outputDescriptors Descriptors defining the output node names, shape,
+  *  layout and Type parameters
+  */
+class Predictor(modelPathPrefix: String,
+ protected val inputDescriptors: IndexedSeq[DataDesc],
+ protected var outputDescriptors:
+ Option[IndexedSeq[DataDesc]] = None) extends PredictBase {
+
+  private val logger = LoggerFactory.getLogger(classOf[Predictor])
+
+  protected var batchIndex = inputDescriptors(0).layout.indexOf('N')
+  protected var batchSize = if (batchIndex != -1 ) 
inputDescriptors(0).shape(batchIndex) else 1
+
+  protected var iDescriptors = inputDescriptors
+
+  inputDescriptors.foreach((f: DataDesc) => require(f.layout.indexOf('N') == 
batchIndex,
+"batch size should be in the same index for all inputs"))
+
+
+  if (batchIndex != -1) {
+inputDescriptors.foreach((f: DataDesc) => require(f.shape(batchIndex) == 
batchSize,
+  "batch size should be same for all inputs"))
+  } else {
+// TODO: this is assuming that the input needs a batch
+iDescriptors = inputDescriptors.map((f : DataDesc) => new DataDesc(f.name,
+Shape(1 +: f.shape.toVector), f.dtype, 'N' +: f.layout) )
+batchIndex = 1
+  }
+
+  protected val mxNetHandler = MXNetHandler()
+
+  protected val mod = loadModule()
+
+  /**
+* This method will take input as IndexedSeq one dimensional arrays and 
creates
+* NDArray needed for inference. The array will be reshaped based on the 
input descriptors.
+*
+* @param input : A IndexedSequence of Java one-dimensional array, An 
IndexedSequence is
+*  is needed when the model has more than one input/output
+* @return IndexedSequence array of outputs.
+*/
+  override def predict(input: IndexedSeq[Array[Float]]): 
IndexedSeq[Array[Float]] = {
+
+require(input.length == inputDescriptors.length, "number of inputs 
provided: %d" +
+  " do not match number of inputs in inputDescriptors: 
%d".format(input.length,
+   

[GitHub] CodingCat commented on a change in pull request #9678: [First cut] Scala Inference APIs

2018-02-05 Thread GitBox
CodingCat commented on a change in pull request #9678: [First cut] Scala 
Inference APIs
URL: https://github.com/apache/incubator-mxnet/pull/9678#discussion_r166181172
 
 

 ##
 File path: 
scala-package/infer/src/main/scala/ml/dmlc/mxnet/infer/MXNetHandler.scala
 ##
 @@ -0,0 +1,82 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package ml.dmlc.mxnet.infer
+
+import java.util.concurrent._
+
+trait MXNetHandler {
+
+  def execute[T](f: => T): T
+
+  val executor: ExecutorService
+
+}
+
+object MXNetHandlerType extends Enumeration {
+
+  type MXNetHandlerType = Value
+  val SingleThreadHandler = Value("MXNetSingleThreadHandler")
+  val OneThreadPerModelHandler = Value("MXNetOneThreadPerModelHandler")
+}
+
+class MXNetOneThreadPerModelHandler extends MXNetHandler {
+
+  private val threadFactory = new ThreadFactory {
+
+    override def newThread(r: Runnable): Thread = new Thread(r) {
+      setName(classOf[MXNetOneThreadPerModelHandler].getCanonicalName)
+    }
+  }
+
+  override val executor: ExecutorService = Executors.newFixedThreadPool(10, threadFactory)
+
+  override def execute[T](f: => T): T = {
+    val task = new Callable[T] {
+      override def call(): T = {
+        // scalastyle:off println
+        println("threadId: %s".format(Thread.currentThread().getId()))
+        // scalastyle:on println
+        f
+      }
+    }
+    val result = executor.submit(task)
+    try {
+      result.get()
+    }
+    catch {
+      case e: ExecutionException => throw e.getCause()
 
 Review comment:
   What about other uncaught exceptions? 
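The point is that fetching a result can fail in more ways than the task's own wrapped exception (e.g. interruption or cancellation). A Python analogue with `concurrent.futures` (hypothetical names, not the PR's code) makes the extra failure mode concrete:

```python
import time
from concurrent.futures import CancelledError, ThreadPoolExecutor

executor = ThreadPoolExecutor(max_workers=1)
blocker = executor.submit(time.sleep, 0.2)       # occupies the only worker
pending = executor.submit(lambda: "never runs")  # queued behind the blocker
assert pending.cancel()                          # cancellable while still queued

try:
    pending.result()
    outcome = "ran"
except CancelledError:
    outcome = "cancelled"  # not the task's own exception type
executor.shutdown()
print(outcome)
```

A handler that only catches the wrapped-task-exception case would let a cancellation (or, in Java, an `InterruptedException`) escape unhandled.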




[GitHub] CodingCat commented on a change in pull request #9678: [First cut] Scala Inference APIs

2018-02-05 Thread GitBox
CodingCat commented on a change in pull request #9678: [First cut] Scala 
Inference APIs
URL: https://github.com/apache/incubator-mxnet/pull/9678#discussion_r166181917
 
 

 ##
 File path: 
scala-package/infer/src/main/scala/ml/dmlc/mxnet/infer/PredictBase.scala
 ##
 @@ -0,0 +1,200 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package ml.dmlc.mxnet.infer
+
+import ml.dmlc.mxnet.io.NDArrayIter
+import ml.dmlc.mxnet.{DataDesc, NDArray, Shape}
+import ml.dmlc.mxnet.module.Module
+
+import scala.collection.mutable.ListBuffer
+import org.slf4j.LoggerFactory
+
+/**
+  * Base Trait for MXNNet Predictor classes.
+  */
+trait PredictBase {
+
+  /**
+* This method will take input as IndexedSeq one dimensional arrays and 
creates
+* NDArray needed for inference. The array will be reshaped based on the 
input descriptors.
+* @param input: A IndexedSequence of Java one-dimensional array, An 
IndexedSequence is
+* is needed when the model has more than one input/output
+* @return IndexedSequence array of outputs.
+*/
+  def predict(input: IndexedSeq[Array[Float]]): IndexedSeq[Array[Float]]
+
+  /**
+* Predict using NDArray as input. This method is useful when the input is 
a batch of data
+* or when multiple operations on the input/output have to performed.
+* Note: User is responsible for managing allocation/deallocation of 
NDArrays.
+* @param input: IndexedSequence NDArrays.
+* @return output of Predictions as NDArrays.
+*/
+  def predictWithNDArray(input: IndexedSeq[NDArray]): IndexedSeq[NDArray]
+
+}
+
+/**
+  * Implementation of predict routines.
+  *
+  * @param modelPathPrefix PathPrefix from where to load the model.
+  *Example: file://model-dir/resnet-152(containing 
resnet-152-symbol.json,
+  *resnet-152-.params and optionally synset.txt).
+  *Supports model loading from various sources like 
local disk,
+  *hdfs, https and s3. file://, hdfs://, https://, 
s3://
+  * @param inputDescriptors Descriptors defining the input node names, shape,
+  * layout and Type parameters
+  * @param outputDescriptors Descriptors defining the output node names, shape,
+  *  layout and Type parameters
+  */
+class Predictor(modelPathPrefix: String,
+ protected val inputDescriptors: IndexedSeq[DataDesc],
+ protected var outputDescriptors:
+ Option[IndexedSeq[DataDesc]] = None) extends PredictBase {
 
 Review comment:
   Coding format:
   
   ```scala
   class Predictor(
 modelPathPrefix: String,
 protected val inputDescriptors: IndexedSeq[DataDesc],
 protected var outputDescriptors: Option[IndexedSeq[DataDesc]] = None)
  extends PredictBase {
   ``` 




[GitHub] CodingCat commented on a change in pull request #9678: [First cut] Scala Inference APIs

2018-02-05 Thread GitBox
CodingCat commented on a change in pull request #9678: [First cut] Scala 
Inference APIs
URL: https://github.com/apache/incubator-mxnet/pull/9678#discussion_r166182225
 
 

 ##
 File path: 
scala-package/infer/src/main/scala/ml/dmlc/mxnet/infer/PredictBase.scala
 ##
 @@ -0,0 +1,200 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package ml.dmlc.mxnet.infer
+
+import ml.dmlc.mxnet.io.NDArrayIter
+import ml.dmlc.mxnet.{DataDesc, NDArray, Shape}
+import ml.dmlc.mxnet.module.Module
+
+import scala.collection.mutable.ListBuffer
+import org.slf4j.LoggerFactory
+
+/**
+  * Base Trait for MXNNet Predictor classes.
+  */
+trait PredictBase {
+
+  /**
+* This method will take input as IndexedSeq one dimensional arrays and 
creates
+* NDArray needed for inference. The array will be reshaped based on the 
input descriptors.
+* @param input: A IndexedSequence of Java one-dimensional array, An 
IndexedSequence is
+* is needed when the model has more than one input/output
+* @return IndexedSequence array of outputs.
+*/
+  def predict(input: IndexedSeq[Array[Float]]): IndexedSeq[Array[Float]]
+
+  /**
+* Predict using NDArray as input. This method is useful when the input is 
a batch of data
+* or when multiple operations on the input/output have to performed.
+* Note: User is responsible for managing allocation/deallocation of 
NDArrays.
+* @param input: IndexedSequence NDArrays.
+* @return output of Predictions as NDArrays.
+*/
+  def predictWithNDArray(input: IndexedSeq[NDArray]): IndexedSeq[NDArray]
+
+}
+
+/**
+  * Implementation of predict routines.
+  *
+  * @param modelPathPrefix PathPrefix from where to load the model.
+  *Example: file://model-dir/resnet-152(containing 
resnet-152-symbol.json,
+  *resnet-152-.params and optionally synset.txt).
+  *Supports model loading from various sources like 
local disk,
+  *hdfs, https and s3. file://, hdfs://, https://, 
s3://
+  * @param inputDescriptors Descriptors defining the input node names, shape,
+  * layout and Type parameters
+  * @param outputDescriptors Descriptors defining the output node names, shape,
+  *  layout and Type parameters
+  */
+class Predictor(modelPathPrefix: String,
+ protected val inputDescriptors: IndexedSeq[DataDesc],
+ protected var outputDescriptors:
+ Option[IndexedSeq[DataDesc]] = None) extends PredictBase {
+
+  private val logger = LoggerFactory.getLogger(classOf[Predictor])
+
+  protected var batchIndex = inputDescriptors(0).layout.indexOf('N')
+  protected var batchSize = if (batchIndex != -1 ) 
inputDescriptors(0).shape(batchIndex) else 1
+
+  protected var iDescriptors = inputDescriptors
+
+  inputDescriptors.foreach((f: DataDesc) => require(f.layout.indexOf('N') == 
batchIndex,
+"batch size should be in the same index for all inputs"))
+
+
+  if (batchIndex != -1) {
+inputDescriptors.foreach((f: DataDesc) => require(f.shape(batchIndex) == 
batchSize,
+  "batch size should be same for all inputs"))
+  } else {
+// TODO: this is assuming that the input needs a batch
+iDescriptors = inputDescriptors.map((f : DataDesc) => new DataDesc(f.name,
+Shape(1 +: f.shape.toVector), f.dtype, 'N' +: f.layout) )
+batchIndex = 1
+  }
+
+  protected val mxNetHandler = MXNetHandler()
+
+  protected val mod = loadModule()
+
+  /**
+* This method will take input as IndexedSeq one dimensional arrays and 
creates
+* NDArray needed for inference. The array will be reshaped based on the 
input descriptors.
+*
+* @param input : A IndexedSequence of Java one-dimensional array, An 
IndexedSequence is
+*  is needed when the model has more than one input/output
+* @return IndexedSequence array of outputs.
+*/
+  override def predict(input: IndexedSeq[Array[Float]]): 
IndexedSeq[Array[Float]] = {
+
+require(input.length == inputDescriptors.length, "number of inputs 
provided: %d" +
+  " do not match number of inputs in inputDescriptors: 
%d".format(input.length,
+   

[GitHub] CodingCat commented on a change in pull request #9678: [First cut] Scala Inference APIs

2018-02-05 Thread GitBox
CodingCat commented on a change in pull request #9678: [First cut] Scala 
Inference APIs
URL: https://github.com/apache/incubator-mxnet/pull/9678#discussion_r166182325
 
 

 ##
 File path: 
scala-package/infer/src/main/scala/ml/dmlc/mxnet/infer/PredictBase.scala
 ##
 @@ -0,0 +1,200 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package ml.dmlc.mxnet.infer
+
+import ml.dmlc.mxnet.io.NDArrayIter
+import ml.dmlc.mxnet.{DataDesc, NDArray, Shape}
+import ml.dmlc.mxnet.module.Module
+
+import scala.collection.mutable.ListBuffer
+import org.slf4j.LoggerFactory
+
+/**
+  * Base Trait for MXNNet Predictor classes.
+  */
+trait PredictBase {
+
+  /**
+* This method will take input as IndexedSeq one dimensional arrays and 
creates
+* NDArray needed for inference. The array will be reshaped based on the 
input descriptors.
+* @param input: A IndexedSequence of Java one-dimensional array, An 
IndexedSequence is
 
 Review comment:
   Java?




[GitHub] CodingCat commented on a change in pull request #9678: [First cut] Scala Inference APIs

2018-02-05 Thread GitBox
CodingCat commented on a change in pull request #9678: [First cut] Scala 
Inference APIs
URL: https://github.com/apache/incubator-mxnet/pull/9678#discussion_r166182205
 
 

 ##
 File path: 
scala-package/infer/src/main/scala/ml/dmlc/mxnet/infer/PredictBase.scala
 ##
 @@ -0,0 +1,200 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package ml.dmlc.mxnet.infer
+
+import ml.dmlc.mxnet.io.NDArrayIter
+import ml.dmlc.mxnet.{DataDesc, NDArray, Shape}
+import ml.dmlc.mxnet.module.Module
+
+import scala.collection.mutable.ListBuffer
+import org.slf4j.LoggerFactory
+
+/**
+  * Base Trait for MXNNet Predictor classes.
+  */
+trait PredictBase {
+
+  /**
+* This method will take input as IndexedSeq one dimensional arrays and 
creates
+* NDArray needed for inference. The array will be reshaped based on the 
input descriptors.
+* @param input: A IndexedSequence of Java one-dimensional array, An 
IndexedSequence is
+* is needed when the model has more than one input/output
+* @return IndexedSequence array of outputs.
+*/
+  def predict(input: IndexedSeq[Array[Float]]): IndexedSeq[Array[Float]]
+
+  /**
+* Predict using NDArray as input. This method is useful when the input is 
a batch of data
+* or when multiple operations on the input/output have to performed.
+* Note: User is responsible for managing allocation/deallocation of 
NDArrays.
+* @param input: IndexedSequence NDArrays.
+* @return output of Predictions as NDArrays.
+*/
+  def predictWithNDArray(input: IndexedSeq[NDArray]): IndexedSeq[NDArray]
+
+}
+
+/**
+  * Implementation of predict routines.
+  *
+  * @param modelPathPrefix PathPrefix from where to load the model.
+  *Example: file://model-dir/resnet-152(containing 
resnet-152-symbol.json,
+  *resnet-152-.params and optionally synset.txt).
+  *Supports model loading from various sources like 
local disk,
+  *hdfs, https and s3. file://, hdfs://, https://, 
s3://
+  * @param inputDescriptors Descriptors defining the input node names, shape,
+  * layout and Type parameters
+  * @param outputDescriptors Descriptors defining the output node names, shape,
+  *  layout and Type parameters
+  */
+class Predictor(modelPathPrefix: String,
+ protected val inputDescriptors: IndexedSeq[DataDesc],
+ protected var outputDescriptors:
+ Option[IndexedSeq[DataDesc]] = None) extends PredictBase {
+
+  private val logger = LoggerFactory.getLogger(classOf[Predictor])
+
+  protected var batchIndex = inputDescriptors(0).layout.indexOf('N')
+  protected var batchSize = if (batchIndex != -1 ) 
inputDescriptors(0).shape(batchIndex) else 1
+
+  protected var iDescriptors = inputDescriptors
+
+  inputDescriptors.foreach((f: DataDesc) => require(f.layout.indexOf('N') == 
batchIndex,
+"batch size should be in the same index for all inputs"))
+
+
+  if (batchIndex != -1) {
+inputDescriptors.foreach((f: DataDesc) => require(f.shape(batchIndex) == 
batchSize,
+  "batch size should be same for all inputs"))
+  } else {
+// TODO: this is assuming that the input needs a batch
+iDescriptors = inputDescriptors.map((f : DataDesc) => new DataDesc(f.name,
+Shape(1 +: f.shape.toVector), f.dtype, 'N' +: f.layout) )
+batchIndex = 1
+  }
+
+  protected val mxNetHandler = MXNetHandler()
+
+  protected val mod = loadModule()
+
+  /**
+* This method will take input as IndexedSeq one dimensional arrays and 
creates
+* NDArray needed for inference. The array will be reshaped based on the 
input descriptors.
+*
+* @param input : A IndexedSequence of Java one-dimensional array, An 
IndexedSequence is
+*  is needed when the model has more than one input/output
+* @return IndexedSequence array of outputs.
+*/
+  override def predict(input: IndexedSeq[Array[Float]]): 
IndexedSeq[Array[Float]] = {
+
+require(input.length == inputDescriptors.length, "number of inputs 
provided: %d" +
+  " do not match number of inputs in inputDescriptors: 
%d".format(input.length,
+   

[GitHub] CodingCat commented on a change in pull request #9678: [First cut] Scala Inference APIs

2018-02-05 Thread GitBox
CodingCat commented on a change in pull request #9678: [First cut] Scala 
Inference APIs
URL: https://github.com/apache/incubator-mxnet/pull/9678#discussion_r166181419
 
 

 ##
 File path: 
scala-package/infer/src/main/scala/ml/dmlc/mxnet/infer/PredictBase.scala
 ##
 @@ -0,0 +1,200 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package ml.dmlc.mxnet.infer
+
+import ml.dmlc.mxnet.io.NDArrayIter
+import ml.dmlc.mxnet.{DataDesc, NDArray, Shape}
+import ml.dmlc.mxnet.module.Module
+
+import scala.collection.mutable.ListBuffer
+import org.slf4j.LoggerFactory
+
+/**
+  * Base Trait for MXNNet Predictor classes.
+  */
 
 Review comment:
   I think in other places, we use javadoc-styled comments, i.e. 
   
   ```
   /**
    *
    */
   ```
   
   instead of 
   
   ```scala
   /**
     *
     */
   ```




[GitHub] CodingCat commented on a change in pull request #9678: [First cut] Scala Inference APIs

2018-02-05 Thread GitBox
CodingCat commented on a change in pull request #9678: [First cut] Scala 
Inference APIs
URL: https://github.com/apache/incubator-mxnet/pull/9678#discussion_r166181293
 
 

 ##
 File path: 
scala-package/infer/src/main/scala/ml/dmlc/mxnet/infer/PredictBase.scala
 ##
 @@ -0,0 +1,200 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package ml.dmlc.mxnet.infer
+
+import ml.dmlc.mxnet.io.NDArrayIter
+import ml.dmlc.mxnet.{DataDesc, NDArray, Shape}
+import ml.dmlc.mxnet.module.Module
+
+import scala.collection.mutable.ListBuffer
+import org.slf4j.LoggerFactory
+
+/**
+  * Base Trait for MXNNet Predictor classes.
 
 Review comment:
   MXNet




[GitHub] CodingCat commented on a change in pull request #9678: [First cut] Scala Inference APIs

2018-02-05 Thread GitBox
CodingCat commented on a change in pull request #9678: [First cut] Scala 
Inference APIs
URL: https://github.com/apache/incubator-mxnet/pull/9678#discussion_r166179638
 
 

 ##
 File path: scala-package/core/src/main/scala/ml/dmlc/mxnet/IO.scala
 ##
 @@ -230,6 +230,8 @@ abstract class DataPack() extends Iterable[DataBatch] {
 // Named data desc description contains name, shape, type and other extended 
attributes.
 case class DataDesc(name: String, shape: Shape,
 dtype: DType = Base.MX_REAL_TYPE, layout: String = "NCHW") 
{
+  require(shape.length == layout.length, "number of dimensions in shape should 
match the layout")
 
 Review comment:
   would you show the current length of both in the error msg?
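   A minimal sketch of what the reviewer is asking for (the function name and simplified signature are hypothetical, not the PR's actual code): interpolate both lengths into the `require` message so a mismatch is immediately diagnosable.
   
   ```scala
   // Hypothetical sketch: include both lengths in the require() message,
   // as suggested, so the failure reports the actual values.
   def checkShapeMatchesLayout(shape: Seq[Int], layout: String): Unit =
     require(shape.length == layout.length,
       s"number of dimensions in shape (${shape.length}) should match " +
         s"the layout length (${layout.length}): layout=$layout")
   ```
   
   With this, a `DataDesc(shape=(3, 224, 224), layout="NCHW")` fails with a message naming 3 vs 4 rather than a bare assertion.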




[GitHub] CodingCat commented on a change in pull request #9678: [First cut] Scala Inference APIs

2018-02-05 Thread GitBox
CodingCat commented on a change in pull request #9678: [First cut] Scala 
Inference APIs
URL: https://github.com/apache/incubator-mxnet/pull/9678#discussion_r166182216
 
 

 ##
 File path: 
scala-package/infer/src/main/scala/ml/dmlc/mxnet/infer/PredictBase.scala
 ##
 @@ -0,0 +1,200 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package ml.dmlc.mxnet.infer
+
+import ml.dmlc.mxnet.io.NDArrayIter
+import ml.dmlc.mxnet.{DataDesc, NDArray, Shape}
+import ml.dmlc.mxnet.module.Module
+
+import scala.collection.mutable.ListBuffer
+import org.slf4j.LoggerFactory
+
+/**
+  * Base Trait for MXNNet Predictor classes.
+  */
+trait PredictBase {
+
+  /**
+* This method will take input as IndexedSeq one dimensional arrays and 
creates
+* NDArray needed for inference. The array will be reshaped based on the 
input descriptors.
+* @param input: A IndexedSequence of Java one-dimensional array, An 
IndexedSequence is
+* is needed when the model has more than one input/output
+* @return IndexedSequence array of outputs.
+*/
+  def predict(input: IndexedSeq[Array[Float]]): IndexedSeq[Array[Float]]
+
+  /**
+* Predict using NDArray as input. This method is useful when the input is 
a batch of data
+* or when multiple operations on the input/output have to performed.
+* Note: User is responsible for managing allocation/deallocation of 
NDArrays.
+* @param input: IndexedSequence NDArrays.
+* @return output of Predictions as NDArrays.
+*/
+  def predictWithNDArray(input: IndexedSeq[NDArray]): IndexedSeq[NDArray]
+
+}
+
+/**
+  * Implementation of predict routines.
+  *
+  * @param modelPathPrefix PathPrefix from where to load the model.
+  *Example: file://model-dir/resnet-152(containing 
resnet-152-symbol.json,
+  *resnet-152-.params and optionally synset.txt).
+  *Supports model loading from various sources like 
local disk,
+  *hdfs, https and s3. file://, hdfs://, https://, 
s3://
+  * @param inputDescriptors Descriptors defining the input node names, shape,
+  * layout and Type parameters
+  * @param outputDescriptors Descriptors defining the output node names, shape,
+  *  layout and Type parameters
+  */
+class Predictor(modelPathPrefix: String,
+ protected val inputDescriptors: IndexedSeq[DataDesc],
+ protected var outputDescriptors:
+ Option[IndexedSeq[DataDesc]] = None) extends PredictBase {
+
+  private val logger = LoggerFactory.getLogger(classOf[Predictor])
+
+  protected var batchIndex = inputDescriptors(0).layout.indexOf('N')
+  protected var batchSize = if (batchIndex != -1 ) 
inputDescriptors(0).shape(batchIndex) else 1
+
+  protected var iDescriptors = inputDescriptors
+
+  inputDescriptors.foreach((f: DataDesc) => require(f.layout.indexOf('N') == 
batchIndex,
+"batch size should be in the same index for all inputs"))
+
+
+  if (batchIndex != -1) {
+inputDescriptors.foreach((f: DataDesc) => require(f.shape(batchIndex) == 
batchSize,
+  "batch size should be same for all inputs"))
+  } else {
+// TODO: this is assuming that the input needs a batch
+iDescriptors = inputDescriptors.map((f : DataDesc) => new DataDesc(f.name,
+Shape(1 +: f.shape.toVector), f.dtype, 'N' +: f.layout) )
+batchIndex = 1
+  }
+
+  protected val mxNetHandler = MXNetHandler()
+
+  protected val mod = loadModule()
+
+  /**
+* This method will take input as IndexedSeq one dimensional arrays and 
creates
+* NDArray needed for inference. The array will be reshaped based on the 
input descriptors.
+*
+* @param input : A IndexedSequence of Java one-dimensional array, An 
IndexedSequence is
+*  is needed when the model has more than one input/output
+* @return IndexedSequence array of outputs.
+*/
+  override def predict(input: IndexedSeq[Array[Float]]): 
IndexedSeq[Array[Float]] = {
+
+require(input.length == inputDescriptors.length, "number of inputs 
provided: %d" +
+  " do not match number of inputs in inputDescriptors: 
%d".format(input.length,
+   

[GitHub] CodingCat commented on a change in pull request #9678: [First cut] Scala Inference APIs

2018-02-05 Thread GitBox
CodingCat commented on a change in pull request #9678: [First cut] Scala 
Inference APIs
URL: https://github.com/apache/incubator-mxnet/pull/9678#discussion_r166180002
 
 

 ##
 File path: 
scala-package/infer/src/main/scala/ml/dmlc/mxnet/infer/MXNetHandler.scala
 ##
 @@ -0,0 +1,82 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package ml.dmlc.mxnet.infer
+
+import java.util.concurrent._
+
+trait MXNetHandler {
+
+  def execute[T](f: => T): T
+
+  val executor: ExecutorService
+
+}
+
+object MXNetHandlerType extends Enumeration {
 
 Review comment:
   if you are not certain about the stability of the API, you may want to make 
it private[infer]




[GitHub] CodingCat commented on a change in pull request #9678: [First cut] Scala Inference APIs

2018-02-05 Thread GitBox
CodingCat commented on a change in pull request #9678: [First cut] Scala 
Inference APIs
URL: https://github.com/apache/incubator-mxnet/pull/9678#discussion_r166182170
 
 

 ##
 File path: 
scala-package/infer/src/main/scala/ml/dmlc/mxnet/infer/PredictBase.scala
 ##
 @@ -0,0 +1,200 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package ml.dmlc.mxnet.infer
+
+import ml.dmlc.mxnet.io.NDArrayIter
+import ml.dmlc.mxnet.{DataDesc, NDArray, Shape}
+import ml.dmlc.mxnet.module.Module
+
+import scala.collection.mutable.ListBuffer
+import org.slf4j.LoggerFactory
+
+/**
+  * Base Trait for MXNNet Predictor classes.
+  */
+trait PredictBase {
+
+  /**
+* This method will take input as IndexedSeq one dimensional arrays and 
creates
+* NDArray needed for inference. The array will be reshaped based on the 
input descriptors.
+* @param input: A IndexedSequence of Java one-dimensional array, An 
IndexedSequence is
+* is needed when the model has more than one input/output
+* @return IndexedSequence array of outputs.
+*/
+  def predict(input: IndexedSeq[Array[Float]]): IndexedSeq[Array[Float]]
+
+  /**
+* Predict using NDArray as input. This method is useful when the input is 
a batch of data
+* or when multiple operations on the input/output have to performed.
+* Note: User is responsible for managing allocation/deallocation of 
NDArrays.
+* @param input: IndexedSequence NDArrays.
+* @return output of Predictions as NDArrays.
+*/
+  def predictWithNDArray(input: IndexedSeq[NDArray]): IndexedSeq[NDArray]
+
+}
+
+/**
+  * Implementation of predict routines.
+  *
+  * @param modelPathPrefix PathPrefix from where to load the model.
+  *Example: file://model-dir/resnet-152(containing 
resnet-152-symbol.json,
+  *resnet-152-.params and optionally synset.txt).
+  *Supports model loading from various sources like 
local disk,
+  *hdfs, https and s3. file://, hdfs://, https://, 
s3://
+  * @param inputDescriptors Descriptors defining the input node names, shape,
+  * layout and Type parameters
+  * @param outputDescriptors Descriptors defining the output node names, shape,
+  *  layout and Type parameters
+  */
+class Predictor(modelPathPrefix: String,
+ protected val inputDescriptors: IndexedSeq[DataDesc],
+ protected var outputDescriptors:
+ Option[IndexedSeq[DataDesc]] = None) extends PredictBase {
+
+  private val logger = LoggerFactory.getLogger(classOf[Predictor])
+
+  protected var batchIndex = inputDescriptors(0).layout.indexOf('N')
+  protected var batchSize = if (batchIndex != -1 ) 
inputDescriptors(0).shape(batchIndex) else 1
+
+  protected var iDescriptors = inputDescriptors
+
+  inputDescriptors.foreach((f: DataDesc) => require(f.layout.indexOf('N') == 
batchIndex,
+"batch size should be in the same index for all inputs"))
+
+
+  if (batchIndex != -1) {
+inputDescriptors.foreach((f: DataDesc) => require(f.shape(batchIndex) == 
batchSize,
+  "batch size should be same for all inputs"))
+  } else {
+// TODO: this is assuming that the input needs a batch
+iDescriptors = inputDescriptors.map((f : DataDesc) => new DataDesc(f.name,
+Shape(1 +: f.shape.toVector), f.dtype, 'N' +: f.layout) )
+batchIndex = 1
+  }
+
+  protected val mxNetHandler = MXNetHandler()
+
+  protected val mod = loadModule()
+
+  /**
+* This method will take input as IndexedSeq one dimensional arrays and 
creates
+* NDArray needed for inference. The array will be reshaped based on the 
input descriptors.
+*
+* @param input : A IndexedSequence of Java one-dimensional array, An 
IndexedSequence is
+*  is needed when the model has more than one input/output
+* @return IndexedSequence array of outputs.
+*/
+  override def predict(input: IndexedSeq[Array[Float]]): 
IndexedSeq[Array[Float]] = {
+
+require(input.length == inputDescriptors.length, "number of inputs 
provided: %d" +
+  " do not match number of inputs in inputDescriptors: 
%d".format(input.length,
+   

[GitHub] CodingCat commented on a change in pull request #9678: [First cut] Scala Inference APIs

2018-02-05 Thread GitBox
CodingCat commented on a change in pull request #9678: [First cut] Scala 
Inference APIs
URL: https://github.com/apache/incubator-mxnet/pull/9678#discussion_r166181996
 
 

 ##
 File path: 
scala-package/infer/src/main/scala/ml/dmlc/mxnet/infer/PredictBase.scala
 ##
 @@ -0,0 +1,200 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package ml.dmlc.mxnet.infer
+
+import ml.dmlc.mxnet.io.NDArrayIter
+import ml.dmlc.mxnet.{DataDesc, NDArray, Shape}
+import ml.dmlc.mxnet.module.Module
+
+import scala.collection.mutable.ListBuffer
+import org.slf4j.LoggerFactory
+
+/**
+  * Base Trait for MXNNet Predictor classes.
+  */
+trait PredictBase {
+
+  /**
+* This method will take input as IndexedSeq one dimensional arrays and 
creates
+* NDArray needed for inference. The array will be reshaped based on the 
input descriptors.
+* @param input: A IndexedSequence of Java one-dimensional array, An 
IndexedSequence is
+* is needed when the model has more than one input/output
+* @return IndexedSequence array of outputs.
+*/
+  def predict(input: IndexedSeq[Array[Float]]): IndexedSeq[Array[Float]]
+
+  /**
+* Predict using NDArray as input. This method is useful when the input is 
a batch of data
+* or when multiple operations on the input/output have to performed.
+* Note: User is responsible for managing allocation/deallocation of 
NDArrays.
+* @param input: IndexedSequence NDArrays.
+* @return output of Predictions as NDArrays.
+*/
+  def predictWithNDArray(input: IndexedSeq[NDArray]): IndexedSeq[NDArray]
+
+}
+
+/**
+  * Implementation of predict routines.
+  *
+  * @param modelPathPrefix PathPrefix from where to load the model.
+  *Example: file://model-dir/resnet-152(containing 
resnet-152-symbol.json,
+  *resnet-152-.params and optionally synset.txt).
+  *Supports model loading from various sources like 
local disk,
+  *hdfs, https and s3. file://, hdfs://, https://, 
s3://
+  * @param inputDescriptors Descriptors defining the input node names, shape,
+  * layout and Type parameters
+  * @param outputDescriptors Descriptors defining the output node names, shape,
+  *  layout and Type parameters
+  */
+class Predictor(modelPathPrefix: String,
+ protected val inputDescriptors: IndexedSeq[DataDesc],
+ protected var outputDescriptors:
+ Option[IndexedSeq[DataDesc]] = None) extends PredictBase {
+
+  private val logger = LoggerFactory.getLogger(classOf[Predictor])
+
+  protected var batchIndex = inputDescriptors(0).layout.indexOf('N')
 
 Review comment:
   you need to validate inputDescriptor's size, and also you may use 
inputDescriptors.head
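   A hedged sketch of the suggested validation (the stand-in `DataDesc` and helper name are hypothetical, trimmed down from the PR's class): fail fast on an empty descriptor sequence, then read the batch axis from `inputDescriptors.head` instead of `inputDescriptors(0)`.
   
   ```scala
   // Simplified stand-in for ml.dmlc.mxnet.DataDesc, for illustration only.
   case class DataDesc(name: String, layout: String)
   
   // Hypothetical sketch: validate the sequence is non-empty before
   // dereferencing, and use .head rather than (0) for the first element.
   def batchAxis(inputDescriptors: IndexedSeq[DataDesc]): Int = {
     require(inputDescriptors.nonEmpty, "inputDescriptors cannot be empty")
     inputDescriptors.head.layout.indexOf('N')
   }
   ```
   
   An empty sequence then raises a descriptive `IllegalArgumentException` instead of an `IndexOutOfBoundsException` from `inputDescriptors(0)`.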




[GitHub] CodingCat commented on a change in pull request #9678: [First cut] Scala Inference APIs

2018-02-05 Thread GitBox
CodingCat commented on a change in pull request #9678: [First cut] Scala 
Inference APIs
URL: https://github.com/apache/incubator-mxnet/pull/9678#discussion_r166182192
 
 

 ##
 File path: 
scala-package/infer/src/main/scala/ml/dmlc/mxnet/infer/PredictBase.scala
 ##
 @@ -0,0 +1,200 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package ml.dmlc.mxnet.infer
+
+import ml.dmlc.mxnet.io.NDArrayIter
+import ml.dmlc.mxnet.{DataDesc, NDArray, Shape}
+import ml.dmlc.mxnet.module.Module
+
+import scala.collection.mutable.ListBuffer
+import org.slf4j.LoggerFactory
+
+/**
+  * Base Trait for MXNNet Predictor classes.
+  */
+trait PredictBase {
+
+  /**
+* This method will take input as IndexedSeq one dimensional arrays and 
creates
+* NDArray needed for inference. The array will be reshaped based on the 
input descriptors.
+* @param input: A IndexedSequence of Java one-dimensional array, An 
IndexedSequence is
+* is needed when the model has more than one input/output
+* @return IndexedSequence array of outputs.
+*/
+  def predict(input: IndexedSeq[Array[Float]]): IndexedSeq[Array[Float]]
+
+  /**
+* Predict using NDArray as input. This method is useful when the input is 
a batch of data
+* or when multiple operations on the input/output have to performed.
+* Note: User is responsible for managing allocation/deallocation of 
NDArrays.
+* @param input: IndexedSequence NDArrays.
+* @return output of Predictions as NDArrays.
+*/
+  def predictWithNDArray(input: IndexedSeq[NDArray]): IndexedSeq[NDArray]
+
+}
+
+/**
+  * Implementation of predict routines.
+  *
+  * @param modelPathPrefix PathPrefix from where to load the model.
+  *Example: file://model-dir/resnet-152(containing 
resnet-152-symbol.json,
+  *resnet-152-.params and optionally synset.txt).
+  *Supports model loading from various sources like 
local disk,
+  *hdfs, https and s3. file://, hdfs://, https://, 
s3://
+  * @param inputDescriptors Descriptors defining the input node names, shape,
+  * layout and Type parameters
+  * @param outputDescriptors Descriptors defining the output node names, shape,
+  *  layout and Type parameters
+  */
+class Predictor(modelPathPrefix: String,
+ protected val inputDescriptors: IndexedSeq[DataDesc],
+ protected var outputDescriptors:
+ Option[IndexedSeq[DataDesc]] = None) extends PredictBase {
+
+  private val logger = LoggerFactory.getLogger(classOf[Predictor])
+
+  protected var batchIndex = inputDescriptors(0).layout.indexOf('N')
+  protected var batchSize = if (batchIndex != -1 ) 
inputDescriptors(0).shape(batchIndex) else 1
+
+  protected var iDescriptors = inputDescriptors
+
+  inputDescriptors.foreach((f: DataDesc) => require(f.layout.indexOf('N') == 
batchIndex,
+"batch size should be in the same index for all inputs"))
+
+
+  if (batchIndex != -1) {
+inputDescriptors.foreach((f: DataDesc) => require(f.shape(batchIndex) == 
batchSize,
+  "batch size should be same for all inputs"))
+  } else {
+// TODO: this is assuming that the input needs a batch
+iDescriptors = inputDescriptors.map((f : DataDesc) => new DataDesc(f.name,
+Shape(1 +: f.shape.toVector), f.dtype, 'N' +: f.layout) )
+batchIndex = 1
+  }
+
+  protected val mxNetHandler = MXNetHandler()
+
+  protected val mod = loadModule()
+
+  /**
+* This method will take input as IndexedSeq one dimensional arrays and 
creates
+* NDArray needed for inference. The array will be reshaped based on the 
input descriptors.
+*
+* @param input : A IndexedSequence of Java one-dimensional array, An 
IndexedSequence is
+*  is needed when the model has more than one input/output
+* @return IndexedSequence array of outputs.
+*/
+  override def predict(input: IndexedSeq[Array[Float]]): 
IndexedSeq[Array[Float]] = {
+
+require(input.length == inputDescriptors.length, "number of inputs 
provided: %d" +
+  " do not match number of inputs in inputDescriptors: 
%d".format(input.length,
+   

[GitHub] CodingCat commented on a change in pull request #9678: [First cut] Scala Inference APIs

2018-02-05 Thread GitBox
CodingCat commented on a change in pull request #9678: [First cut] Scala 
Inference APIs
URL: https://github.com/apache/incubator-mxnet/pull/9678#discussion_r166179771
 
 

 ##
 File path: scala-package/infer/pom.xml
 ##
 @@ -0,0 +1,124 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<project xmlns="http://maven.apache.org/POM/4.0.0"
+         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
+    <parent>
+        <artifactId>mxnet-parent_2.11</artifactId>
+        <groupId>ml.dmlc.mxnet</groupId>
+        <version>1.0.1-SNAPSHOT</version>
+    </parent>
+    <modelVersion>4.0.0</modelVersion>
+
+    <artifactId>mxnet-infer</artifactId>
+    <name>MXNet Scala Package - Inference</name>
+
+    <profiles>
+        <profile>
+            <id>release</id>
+            <build>
+                <plugins>
+                    <plugin>
+                        <groupId>org.apache.maven.plugins</groupId>
+                        <artifactId>maven-source-plugin</artifactId>
+                        <configuration>
+                            <skipSource>true</skipSource>
+                        </configuration>
+                    </plugin>
+                    <plugin>
+                        <groupId>org.apache.maven.plugins</groupId>
+                        <artifactId>maven-javadoc-plugin</artifactId>
+                        <configuration>
+                            <skip>true</skip>
+                        </configuration>
+                    </plugin>
+                    <plugin>
+                        <groupId>org.apache.maven.plugins</groupId>
+                        <artifactId>maven-gpg-plugin</artifactId>
+                        <configuration>
+                            <skip>true</skip>
+                        </configuration>
+                    </plugin>
+                    <plugin>
+                        <groupId>org.sonatype.plugins</groupId>
+                        <artifactId>nexus-staging-maven-plugin</artifactId>
+                        <configuration>
+                            <skipNexusStagingDeployMojo>true</skipNexusStagingDeployMojo>
+                        </configuration>
+                    </plugin>
+                </plugins>
+            </build>
+        </profile>
+    </profiles>
 
 Review comment:
   all of the above things can be inherited from parent pom.xml
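   A sketch of what the reviewer suggests: if the parent pom's release profile already carries the plugin-skip configuration, the child module can shrink to little more than its coordinates. The coordinates below are taken from the quoted diff; the exact parent layout is an assumption.
   
   ```xml
   <project xmlns="http://maven.apache.org/POM/4.0.0"
            xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
            xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
       <modelVersion>4.0.0</modelVersion>
       <parent>
           <groupId>ml.dmlc.mxnet</groupId>
           <artifactId>mxnet-parent_2.11</artifactId>
           <version>1.0.1-SNAPSHOT</version>
       </parent>
       <artifactId>mxnet-infer</artifactId>
       <name>MXNet Scala Package - Inference</name>
       <!-- release profile and plugin skips are inherited from mxnet-parent_2.11 -->
   </project>
   ```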




[GitHub] CodingCat commented on a change in pull request #9678: [First cut] Scala Inference APIs

2018-02-05 Thread GitBox
CodingCat commented on a change in pull request #9678: [First cut] Scala 
Inference APIs
URL: https://github.com/apache/incubator-mxnet/pull/9678#discussion_r166182196
 
 

 ##
 File path: 
scala-package/infer/src/main/scala/ml/dmlc/mxnet/infer/PredictBase.scala
 ##
 @@ -0,0 +1,200 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package ml.dmlc.mxnet.infer
+
+import ml.dmlc.mxnet.io.NDArrayIter
+import ml.dmlc.mxnet.{DataDesc, NDArray, Shape}
+import ml.dmlc.mxnet.module.Module
+
+import scala.collection.mutable.ListBuffer
+import org.slf4j.LoggerFactory
+
+/**
+  * Base Trait for MXNNet Predictor classes.
+  */
+trait PredictBase {
+
+  /**
+* This method will take input as IndexedSeq one dimensional arrays and 
creates
+* NDArray needed for inference. The array will be reshaped based on the 
input descriptors.
+* @param input: A IndexedSequence of Java one-dimensional array, An 
IndexedSequence is
+* is needed when the model has more than one input/output
+* @return IndexedSequence array of outputs.
+*/
+  def predict(input: IndexedSeq[Array[Float]]): IndexedSeq[Array[Float]]
+
+  /**
+* Predict using NDArray as input. This method is useful when the input is 
a batch of data
+* or when multiple operations on the input/output have to performed.
+* Note: User is responsible for managing allocation/deallocation of 
NDArrays.
+* @param input: IndexedSequence NDArrays.
+* @return output of Predictions as NDArrays.
+*/
+  def predictWithNDArray(input: IndexedSeq[NDArray]): IndexedSeq[NDArray]
+
+}
+
+/**
+  * Implementation of predict routines.
+  *
+  * @param modelPathPrefix PathPrefix from where to load the model.
+  *Example: file://model-dir/resnet-152(containing 
resnet-152-symbol.json,
+  *resnet-152-.params and optionally synset.txt).
+  *Supports model loading from various sources like 
local disk,
+  *hdfs, https and s3. file://, hdfs://, https://, 
s3://
+  * @param inputDescriptors Descriptors defining the input node names, shape,
+  * layout and Type parameters
+  * @param outputDescriptors Descriptors defining the output node names, shape,
+  *  layout and Type parameters
+  */
+class Predictor(modelPathPrefix: String,
+ protected val inputDescriptors: IndexedSeq[DataDesc],
+ protected var outputDescriptors:
+ Option[IndexedSeq[DataDesc]] = None) extends PredictBase {
+
+  private val logger = LoggerFactory.getLogger(classOf[Predictor])
+
+  protected var batchIndex = inputDescriptors(0).layout.indexOf('N')
+  protected var batchSize = if (batchIndex != -1 ) 
inputDescriptors(0).shape(batchIndex) else 1
+
+  protected var iDescriptors = inputDescriptors
+
+  inputDescriptors.foreach((f: DataDesc) => require(f.layout.indexOf('N') == 
batchIndex,
+"batch size should be in the same index for all inputs"))
+
+
+  if (batchIndex != -1) {
+inputDescriptors.foreach((f: DataDesc) => require(f.shape(batchIndex) == 
batchSize,
+  "batch size should be same for all inputs"))
+  } else {
+// TODO: this is assuming that the input needs a batch
+iDescriptors = inputDescriptors.map((f : DataDesc) => new DataDesc(f.name,
+Shape(1 +: f.shape.toVector), f.dtype, 'N' +: f.layout) )
+batchIndex = 1
+  }
+
+  protected val mxNetHandler = MXNetHandler()
+
+  protected val mod = loadModule()
+
+  /**
+* This method will take input as IndexedSeq one dimensional arrays and 
creates
+* NDArray needed for inference. The array will be reshaped based on the 
input descriptors.
+*
+* @param input : A IndexedSequence of Java one-dimensional array, An 
IndexedSequence is
+*  is needed when the model has more than one input/output
+* @return IndexedSequence array of outputs.
+*/
+  override def predict(input: IndexedSeq[Array[Float]]): 
IndexedSeq[Array[Float]] = {
+
+require(input.length == inputDescriptors.length, "number of inputs 
provided: %d" +
+  " do not match number of inputs in inputDescriptors: 
%d".format(input.length,
+   
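
The batch-axis logic in the `Predictor` constructor quoted above (locate the 'N' axis in the layout; if absent, prepend a batch axis of size 1) can be sketched in plain Scala. `Desc` here is a simplified, hypothetical stand-in for MXNet's `DataDesc`:

```scala
// Simplified stand-in for DataDesc (name, shape, layout string such as "NCHW").
case class Desc(name: String, shape: Vector[Int], layout: String)

// Mirrors the constructor logic above: keep descriptors that already carry
// an 'N' (batch) axis, otherwise prepend a batch axis of size 1.
def withBatchAxis(d: Desc): Desc = {
  val batchIndex = d.layout.indexOf('N')
  if (batchIndex != -1) d
  else Desc(d.name, 1 +: d.shape, "N" + d.layout)
}
```

A descriptor such as `Desc("data", Vector(3, 224, 224), "CHW")` would come out as `Desc("data", Vector(1, 3, 224, 224), "NCHW")`, while one that already has an 'N' axis is returned unchanged.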

[GitHub] CodingCat commented on a change in pull request #9678: [First cut] Scala Inference APIs

2018-02-05 Thread GitBox
CodingCat commented on a change in pull request #9678: [First cut] Scala 
Inference APIs
URL: https://github.com/apache/incubator-mxnet/pull/9678#discussion_r166181466
 
 

 ##
 File path: 
scala-package/infer/src/main/scala/ml/dmlc/mxnet/infer/MXNetHandler.scala
 ##
 @@ -0,0 +1,82 @@
+package ml.dmlc.mxnet.infer
+
+import java.util.concurrent._
+
+trait MXNetHandler {
+
+  def execute[T](f: => T): T
+
+  val executor: ExecutorService
+
+}
+
+object MXNetHandlerType extends Enumeration {
+
+  type MXNetHandlerType = Value
+  val SingleThreadHandler = Value("MXNetSingleThreadHandler")
+  val OneThreadPerModelHandler = Value("MXNetOneThreadPerModelHandler")
+}
+
+class MXNetOneThreadPerModelHandler extends MXNetHandler {
+
+  private val threadFactory = new ThreadFactory {
+
+override def newThread(r: Runnable): Thread = new Thread(r) {
+  setName(classOf[MXNetOneThreadPerModelHandler].getCanonicalName)
+}
+  }
+
+  override val executor: ExecutorService = Executors.newFixedThreadPool(10, 
threadFactory)
+
+  override def execute[T](f: => T): T = {
+val task = new Callable[T] {
+  override def call(): T = {
+// scalastyle:off println
+println("threadId: %s".format(Thread.currentThread().getId()))
+// scalastyle:on println
+f
+  }
+}
+val result = executor.submit(task)
+try {
+  result.get()
+}
+catch {
+  case e: ExecutionException => throw e.getCause()
+}
+  }
+
+}
+
+object MXNetSingleThreadHandler extends MXNetOneThreadPerModelHandler {
+
+}
+
+object MXNetHandler {
 
 Review comment:
   same for other classes


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services
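
The handler under review funnels every native call through dedicated threads. A self-contained sketch of the single-thread variant of that pattern, built only on `java.util.concurrent` (the object name and the `mxnet-worker` thread name are illustrative, not the PR's API):

```scala
import java.util.concurrent._

object SingleThreadHandler {
  // One dedicated thread; every call to execute() runs on it.
  private val executor: ExecutorService = Executors.newSingleThreadExecutor(
    new ThreadFactory {
      override def newThread(r: Runnable): Thread = {
        val t = new Thread(r)
        t.setName("mxnet-worker") // illustrative name
        t.setDaemon(true)
        t
      }
    })

  def execute[T](f: => T): T = {
    val task = new Callable[T] { override def call(): T = f }
    try executor.submit(task).get()
    catch {
      // Unwrap so callers see the original failure, as in the quoted code.
      case e: ExecutionException => throw e.getCause
    }
  }
}
```

Compared with the fixed pool of 10 threads in the quoted `MXNetOneThreadPerModelHandler`, a single-thread executor guarantees that submitted calls never overlap, which is the safer default when the underlying engine is not thread-safe.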


[GitHub] CodingCat commented on a change in pull request #9678: [First cut] Scala Inference APIs

2018-02-05 Thread GitBox
CodingCat commented on a change in pull request #9678: [First cut] Scala 
Inference APIs
URL: https://github.com/apache/incubator-mxnet/pull/9678#discussion_r166179973
 
 

 ##
 File path: 
scala-package/infer/src/main/scala/ml/dmlc/mxnet/infer/MXNetHandler.scala
 ##
 @@ -0,0 +1,82 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package ml.dmlc.mxnet.infer
+
+import java.util.concurrent._
+
+trait MXNetHandler {
 
 Review comment:
   if you are not certain about the stability of the API, you may want to make 
it private[infer]




[GitHub] CodingCat commented on a change in pull request #9678: [First cut] Scala Inference APIs

2018-02-05 Thread GitBox
CodingCat commented on a change in pull request #9678: [First cut] Scala 
Inference APIs
URL: https://github.com/apache/incubator-mxnet/pull/9678#discussion_r166182175
 
 

 ##
 File path: 
scala-package/infer/src/main/scala/ml/dmlc/mxnet/infer/PredictBase.scala
 ##

[GitHub] CodingCat commented on a change in pull request #9678: [First cut] Scala Inference APIs

2018-02-05 Thread GitBox
CodingCat commented on a change in pull request #9678: [First cut] Scala 
Inference APIs
URL: https://github.com/apache/incubator-mxnet/pull/9678#discussion_r166182830
 
 

 ##
 File path: 
scala-package/infer/src/main/scala/ml/dmlc/mxnet/infer/PredictBase.scala
 ##

[GitHub] CodingCat commented on a change in pull request #9678: [First cut] Scala Inference APIs

2018-02-05 Thread GitBox
CodingCat commented on a change in pull request #9678: [First cut] Scala 
Inference APIs
URL: https://github.com/apache/incubator-mxnet/pull/9678#discussion_r166181244
 
 

 ##
 File path: 
scala-package/infer/src/main/scala/ml/dmlc/mxnet/infer/MXNetHandler.scala
 ##
+object MXNetHandler {
 
 Review comment:
   again, you may want to minimize the accessibility if you are not certain 
about the stability of the API




[GitHub] CodingCat commented on a change in pull request #9678: [First cut] Scala Inference APIs

2018-02-05 Thread GitBox
CodingCat commented on a change in pull request #9678: [First cut] Scala 
Inference APIs
URL: https://github.com/apache/incubator-mxnet/pull/9678#discussion_r166182180
 
 

 ##
 File path: 
scala-package/infer/src/main/scala/ml/dmlc/mxnet/infer/PredictBase.scala
 ##

[GitHub] safrooze commented on issue #9705: Added unittest for benchmarking metric performance

2018-02-05 Thread GitBox
safrooze commented on issue #9705: Added unittest for benchmarking metric 
performance
URL: https://github.com/apache/incubator-mxnet/pull/9705#issuecomment-363302529
 
 
   @szha Please review.




[GitHub] safrooze opened a new pull request #9705: Added unittest for benchmarking metric performance

2018-02-05 Thread GitBox
safrooze opened a new pull request #9705: Added unittest for benchmarking 
metric performance
URL: https://github.com/apache/incubator-mxnet/pull/9705
 
 
   Output of the benchmark is sent to stderr
   
   ## Description ##
   Benchmark loops through two batch-sizes (100,000 and 1,000,000) and two 
output dimensions (100 and 500) and generates random data on CPU and GPU and 
calls metric.update() on a list of metrics with the generated date.
   
   ## Checklist ##
   ### Essentials ###
   - [x] Passed code style checking (`make lint`)
   - [x] Changes are complete (i.e. I finished coding on this PR)
   - [x] All changes have test coverage:
   - [x] Code is well-documented: 
   - [x] To the best of my knowledge, examples are either not affected by this change, or have been fixed to be compatible with this change
   
   ### Changes ###
   - [ ] Added unit-test for benchmarking metric performance. 
   
   ## Comments ##
   - Unit-test passes without GPU, but fails if GPU memory allocation fails
   - The output looks like this on a p2.x instance
   ```
   mx.metric benchmarks
   Metric Ctx   Batch Size Output Dim Elapsed Time
   --
   acccpu(0)10 1000.069804
   accgpu(0)10 1000.0055592
   --
   acccpu(0)10 5000.29323
   accgpu(0)10 5000.034261
   --
   acccpu(0)1001000.66856
   accgpu(0)1001000.057442
   --
   acccpu(0)1005002.9239
   accgpu(0)1005000.27827
   --
   top_k_acc  cpu(0)10 1000.39707
   top_k_acc  gpu(0)10 1000.39684
   --
   top_k_acc  cpu(0)10 5002.6537
   top_k_acc  gpu(0)10 5002.6574
   --
   top_k_acc  cpu(0)1001004.0662
   top_k_acc  gpu(0)1001004.0537
   --
   top_k_acc  cpu(0)10050026.581
   top_k_acc  gpu(0)10050026.594
   --
   F1 cpu(0)10 2  0.2515
   F1 gpu(0)10 2  0.25105
   --
   F1 cpu(0)10 2  0.25086
   F1 gpu(0)10 2  0.24956
   --
   F1 cpu(0)1002  2.509
   F1 gpu(0)1002  2.5127
   --
   F1 cpu(0)1002  2.5107
   F1 gpu(0)1002  2.5094
   --
   Perplexity cpu(0)10 1000.0058115
   Perplexity gpu(0)10 1000.0030518
   --
   Perplexity cpu(0)10 5000.0054376
   Perplexity gpu(0)10 5000.0070541
   --
   Perplexity cpu(0)1001000.042403
   Perplexity gpu(0)1001000.003443
   --
   Perplexity cpu(0)1005000.041232
   Perplexity gpu(0)1005000.051778
   --
   MAEcpu(0)10 1000.058175
   MAEgpu(0)10 1000.056117
   --
   MAEcpu(0)10 5000.26928
   MAEgpu(0)10 5000.26553
   --
   MAEcpu(0)100100  

[GitHub] GoodJoey closed issue #9430: when i use k8s to distribute training, i get the error on scheduler?(my_node_.port) != (-1) bind failed

2018-02-05 Thread GitBox
GoodJoey closed issue #9430: when i use k8s to distribute training, i get the 
error on scheduler?(my_node_.port) != (-1) bind failed
URL: https://github.com/apache/incubator-mxnet/issues/9430
 
 
   




[GitHub] Godricly opened a new issue #9704: Misleading Document for conv1d and max1d in gluon

2018-02-05 Thread GitBox
Godricly opened a new issue #9704: Misleading Document for conv1d and max1d in 
gluon
URL: https://github.com/apache/incubator-mxnet/issues/9704
 
 
   The document claims to support the 'NWC' layout, while it actually does not.
   My current MXNet version is mxnet-cu80 (1.0.0.post4).
   
   @zhreshold  @mli 
   ```
   layout (str, default 'NCW') - Dimension ordering of data and weight. Can be 'NCW', 'NWC', etc. 'N', 'C', 'W' stands for batch, channel, and width (time) dimensions
   ```
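   
   Until the layout claim is corrected or implemented, NWC-ordered data can be permuted to NCW before feeding the layer. A minimal sketch on plain nested arrays (not MXNet NDArrays; a workaround suggestion, not the library's API):
   
   ```scala
   // Swap the W and C axes of each batch element: (N, W, C) -> (N, C, W).
   def nwcToNcw(x: Array[Array[Array[Float]]]): Array[Array[Array[Float]]] =
     x.map(_.transpose)
   ```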




[GitHub] zhaodongsun opened a new issue #9703: Can't run mx.nd.smooth_l1

2018-02-05 Thread GitBox
zhaodongsun opened a new issue #9703: Can't run mx.nd.smooth_l1
URL: https://github.com/apache/incubator-mxnet/issues/9703
 
 
   ## Description
   (Brief description of the problem in no more than 2 sentences.)
   Can't run mx.nd.smooth_l1
   
   ## Environment info (Required)
   --Python Info--
   Version  : 3.5.2
   Compiler : MSC v.1900 64 bit (AMD64)
   Build: ('v3.5.2:4def2a2901a5', 'Jun 25 2016 22:18:55')
   Arch : ('64bit', 'WindowsPE')
   Pip Info---
   Version  : 9.0.1
   Directory: 
C:\Users\67009\AppData\Local\Programs\Python\Python35\lib\site-packages\pip
   --MXNet Info---
   Version  : 1.0.0
   Directory: 
C:\Users\67009\AppData\Local\Programs\Python\Python35\lib\site-packages\mxnet
   Hashtag not found. Not installed from pre-built package.
   --System Info--
   Platform : Windows-10-10.0.16299-SP0
   system   : Windows
   node : DESKTOP-NRACBB8
   release  : 10
   version  : 10.0.16299
   --Hardware Info--
   machine  : AMD64
   processor: Intel64 Family 6 Model 60 Stepping 3, GenuineIntel
   Name
   Intel(R) Core(TM) i7-4710HQ CPU @ 2.50GHz
   
   --Network Test--
   Setting timeout: 10
   Timing for PYPI: https://pypi.python.org/pypi/pip, DNS: 0.0393 sec, LOAD: 
1.9675 sec.
   Timing for Conda: https://repo.continuum.io/pkgs/free/, DNS: 0.0338 sec, 
LOAD: 1.6308 sec.
   Timing for Gluon Tutorial(en): http://gluon.mxnet.io, DNS: 0.0321 sec, LOAD: 
0.4048 sec.
   Timing for FashionMNIST: 
https://apache-mxnet.s3-accelerate.dualstack.amazonaws.com/gluon/dataset/fashion-mnist/train-labels-idx1-ubyte.gz,
 DNS: 0.2478 sec, LOAD: 1.1301 sec.
   Timing for Gluon Tutorial(cn): https://zh.gluon.ai, DNS: 0.0341 sec, LOAD: 
1.5128 sec.
   Timing for MXNet: https://github.com/apache/incubator-mxnet, DNS: 0.0072 
sec, LOAD: 1.3811 sec.
   
   
   ## Build info (Required if built from source)
   N/A
   
   ## Error Message:
   Traceback (most recent call last):
 File "", line 1, in 
   b=mx.nd.smooth_l1(a)
 File "", line 48, in smooth_l1
 File 
"C:\Users\67009\AppData\Local\Programs\Python\Python35\lib\site-packages\mxnet\_ctypes\ndarray.py",
 line 92, in _imperative_invoke
   ctypes.byref(out_stypes)))
   OSError: [WinError -529697949] Windows Error 0xe06d7363
   
   ## Steps to reproduce
   
   ```
   a=mx.nd.array([5,9,4])
   b=mx.nd.smooth_l1(a)
   ```
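   
   For reference, smooth-L1 (assuming a sigma of 1) is applied elementwise as 0.5*x^2 for |x| < 1 and |x| - 0.5 otherwise, so the expected output for the input above would be [4.5, 8.5, 3.5]. A scalar sketch of that definition:
   
   ```scala
   // Scalar smooth-L1 with sigma = 1: quadratic near zero, linear in the tails.
   def smoothL1(x: Double): Double =
     if (math.abs(x) < 1.0) 0.5 * x * x
     else math.abs(x) - 0.5
   ```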
   
   




[GitHub] zihaolucky commented on a change in pull request #9195: [WIP]NCE loss gluon

2018-02-05 Thread GitBox
zihaolucky commented on a change in pull request #9195: [WIP]NCE loss gluon
URL: https://github.com/apache/incubator-mxnet/pull/9195#discussion_r166171054
 
 

 ##
 File path: python/mxnet/gluon/data/sampler.py
 ##
 @@ -136,3 +138,74 @@ def __len__(self):
 raise ValueError(
 "last_batch must be one of 'keep', 'discard', or 'rollover', " \
 "but got %s"%self._last_batch)
+
+
+class AliasMethodSampler(object):
 
 Review comment:
   @piiswrong As the `NoiseContrastiveEstimationLoss` uses it, where should I put the code?




[GitHub] cjolivier01 commented on a change in pull request #9677: Refactor operators and add MKLDNN

2018-02-05 Thread GitBox
cjolivier01 commented on a change in pull request #9677: Refactor operators and 
add MKLDNN
URL: https://github.com/apache/incubator-mxnet/pull/9677#discussion_r166170702
 
 

 ##
 File path: tests/cpp/include/test_core_op.h
 ##
 @@ -209,6 +209,13 @@ class CoreOpExecutor : public 
test::op::OperatorDataInitializer
   requested.emplace_back(r);
 } else if (req.type == ResourceRequest::kRandom) {
   
requested.emplace_back(ResourceManager::Get()->Request(ctx->run_ctx.ctx, req));
+} else if (req.type == ResourceRequest::kParallelRandom) {
+  Resource rm = ResourceManager::Get()->Request(ctx->run_ctx.ctx, req);
+  if (ctx->run_ctx.ctx.dev_mask() == Context::kCPU) {
+common::random::RandGenerator::AllocState(
 
 Review comment:
   you define the seed just like you'd define it in the normal code; there's a C API for it




[GitHub] mbaijal opened a new issue #9702: [Post 1.1.0] Apply PR #9701 to the master branch

2018-02-05 Thread GitBox
mbaijal opened a new issue #9702: [Post 1.1.0] Apply PR #9701 to the master 
branch
URL: https://github.com/apache/incubator-mxnet/issues/9702
 
 
   The changes to the top-level LICENSE file were reverted on the release branch for the 1.1.0 release in PR #9701.
   Once these changes are approved, create a PR to the master branch.




[GitHub] mbaijal opened a new pull request #9701: Revert "[Review Required] Fixing Licenses: Cleaning up the Top Level LICENSE file (#9484)"

2018-02-05 Thread GitBox
mbaijal opened a new pull request #9701: Revert "[Review Required] Fixing 
Licenses: Cleaning up the Top Level LICENSE file (#9484)"
URL: https://github.com/apache/incubator-mxnet/pull/9701
 
 
   This reverts commit 8930d96b265560a797c5554a9617f607cea7740f.
   
   ## Description ##
   As discussed on the general vote thread.
   
   ## Checklist ##
   ### Essentials ###
   - [ ] Passed code style checking (`make lint`)
   - [ ] Changes are complete (i.e. I finished coding on this PR)
   - [ ] All changes have test coverage:
   - Unit tests are added for small changes to verify correctness (e.g. adding 
a new operator)
   - Nightly tests are added for complicated/long-running ones (e.g. changing 
distributed kvstore)
   - Build tests will be added for build configuration changes (e.g. adding a 
new build option with NCCL)
   - [ ] Code is well-documented: 
   
   ### Changes ###
   - [ ] Reverts PR #9484 
   
   ## Comments ##
   Further changes may be necessary.
   




[GitHub] TaoLv commented on issue #9545: Profiling discussion

2018-02-05 Thread GitBox
TaoLv commented on issue #9545: Profiling discussion
URL: 
https://github.com/apache/incubator-mxnet/issues/9545#issuecomment-363281815
 
 
   > If you have better code to determine the number of "real" cores, I will use it.
   > Right now, we use Intel OMP (at least when building with cmake on Linux) and it reports that it has bound the first N/2 threads to the respective "real" cores.
   
   OK, I will look into it. I will submit a new PR after finishing that.




[GitHub] anirudh2290 commented on a change in pull request #9681: Better Exception Handling for Operators

2018-02-05 Thread GitBox
anirudh2290 commented on a change in pull request #9681: Better Exception 
Handling for Operators
URL: https://github.com/apache/incubator-mxnet/pull/9681#discussion_r166166236
 
 

 ##
 File path: src/engine/threaded_engine.h
 ##
 @@ -338,33 +346,46 @@ class ThreadedEngine : public Engine {
 #endif
 CallbackOnComplete callback = this->CreateCallback(
 ThreadedEngine::OnCompleteStatic, opr_block);
+CallbackOnComplete on_start_callback = this->CreateCallback(
 
 Review comment:
   Called OnStart directly.




[GitHub] pengzhao-intel commented on a change in pull request #9677: Refactor operators and add MKLDNN

2018-02-05 Thread GitBox
pengzhao-intel commented on a change in pull request #9677: Refactor operators 
and add MKLDNN
URL: https://github.com/apache/incubator-mxnet/pull/9677#discussion_r166163300
 
 

 ##
 File path: tests/cpp/include/test_core_op.h
 ##
 @@ -209,6 +209,13 @@ class CoreOpExecutor : public 
test::op::OperatorDataInitializer
   requested.emplace_back(r);
 } else if (req.type == ResourceRequest::kRandom) {
   
requested.emplace_back(ResourceManager::Get()->Request(ctx->run_ctx.ctx, req));
+} else if (req.type == ResourceRequest::kParallelRandom) {
+  Resource rm = ResourceManager::Get()->Request(ctx->run_ctx.ctx, req);
+  if (ctx->run_ctx.ctx.dev_mask() == Context::kCPU) {
+common::random::RandGenerator::AllocState(
 
 Review comment:
   @cjolivier01 could you help comment for pre-set seed? 




[GitHub] zhanghang1989 commented on issue #1363: Error in FCN

2018-02-05 Thread GitBox
zhanghang1989 commented on issue #1363: Error in FCN
URL: 
https://github.com/apache/incubator-mxnet/issues/1363#issuecomment-363274148
 
 
   Got the same error using Gluon, any updates?




[GitHub] reminisce opened a new pull request #9700: Squeeze op

2018-02-05 Thread GitBox
reminisce opened a new pull request #9700: Squeeze op
URL: https://github.com/apache/incubator-mxnet/pull/9700
 
 
   ## Description ##
   This PR implemented squeeze op in MXNet as numpy.squeeze. Requested by @szha 
.
   https://docs.scipy.org/doc/numpy/reference/generated/numpy.squeeze.html
   
   @eric-haibin-lin 
   
   ## Checklist ##
   ### Essentials ###
   - [x] Passed code style checking (`make lint`)
   - [x] Changes are complete (i.e. I finished coding on this PR)
   - [x] All changes have test coverage:
   - Unit tests are added for small changes to verify correctness (e.g. adding 
a new operator)
   - Nightly tests are added for complicated/long-running ones (e.g. changing 
distributed kvstore)
   - Build tests will be added for build configuration changes (e.g. adding a 
new build option with NCCL)
   - [x] Code is well-documented: 
   - For user-facing API changes, API doc string has been updated. 
   - For new C++ functions in header files, their functionalities and arguments 
are documented. 
   - For new examples, README.md is added to explain the what the example does, 
the source of the dataset, expected performance on test set and reference to 
the original paper if applicable
   - [x] To my best knowledge, examples are either not affected by this change, or have been fixed to be compatible with this change
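For readers unfamiliar with the semantics being mirrored, `numpy.squeeze` removes axes of length 1, either all of them or a chosen one:

```python
import numpy as np

x = np.zeros((1, 3, 1, 2))

# All length-1 axes are removed by default.
assert np.squeeze(x).shape == (3, 2)

# A specific unit axis can be targeted with `axis`.
assert np.squeeze(x, axis=0).shape == (3, 1, 2)

# Squeezing an axis whose length is not 1 raises ValueError.
try:
    np.squeeze(x, axis=1)
except ValueError:
    pass
```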




[GitHub] szha commented on issue #8552: Problem in mx.io.ImageRecordIter

2018-02-05 Thread GitBox
szha commented on issue #8552: Problem in mx.io.ImageRecordIter
URL: 
https://github.com/apache/incubator-mxnet/issues/8552#issuecomment-363267754
 
 
   @apache/mxnet-committers: This issue has been inactive for the past 90 days. 
It has no label and needs triage.
   
   For general "how-to" questions, our [user forum](https://discuss.mxnet.io/) 
(and [Chinese version](https://discuss.gluon.ai/)) is a good place to get help.




[GitHub] szha commented on issue #5475: Difference between ImageIter and ImageRecordIter

2018-02-05 Thread GitBox
szha commented on issue #5475: Difference between ImageIter  and ImageRecordIter
URL: 
https://github.com/apache/incubator-mxnet/issues/5475#issuecomment-363267747
 
 
   @apache/mxnet-committers: This issue has been inactive for the past 90 days. 
It has no label and needs triage.
   
   For general "how-to" questions, our [user forum](https://discuss.mxnet.io/) 
(and [Chinese version](https://discuss.gluon.ai/)) is a good place to get help.




[GitHub] szha commented on issue #8560: Upsampling: lack of arguments crashes the kernel

2018-02-05 Thread GitBox
szha commented on issue #8560: Upsampling: lack of arguments crashes the kernel
URL: 
https://github.com/apache/incubator-mxnet/issues/8560#issuecomment-363267742
 
 
   @apache/mxnet-committers: This issue has been inactive for the past 90 days. 
It has no label and needs triage.
   
   For general "how-to" questions, our [user forum](https://discuss.mxnet.io/) 
(and [Chinese version](https://discuss.gluon.ai/)) is a good place to get help.




[GitHub] eric-haibin-lin commented on issue #9698: Fixed 4 broken links

2018-02-05 Thread GitBox
eric-haibin-lin commented on issue #9698: Fixed 4 broken links
URL: https://github.com/apache/incubator-mxnet/pull/9698#issuecomment-363264070
 
 
   Can you also update the broken link in `docs/faq/multi_devices.md` for 
ps-lite to be http://ps-lite.readthedocs.io/en/latest/




[GitHub] thinksanky commented on a change in pull request #9698: Fixed 4 broken links

2018-02-05 Thread GitBox
thinksanky commented on a change in pull request #9698: Fixed 4 broken links
URL: https://github.com/apache/incubator-mxnet/pull/9698#discussion_r166151717
 
 

 ##
 File path: docs/tutorials/index.md
 ##
 @@ -174,7 +174,7 @@ The Gluon and Module tutorials are in Python, but you can 
also find a variety of
 
 
 
-- [Connectionist Temporal 
Classification](http://mxnet.incubator.apache.org/tutorials/speech_recognition/ctc.html)
+- [Connectionist Temporal 
Classification](../tutorials/speech_recognition/ctc.html)
 
 Review comment:
   Yes, this is good practice. I will scan the others and carefully do this in a different PR.




svn commit: r24717 - in /dev/incubator/mxnet: 0.11.0.rc0/ 0.11.0.rc1/ 0.11.0.rc2/ 0.11.0.rc3/ 0.12.0.rc0/ 0.12.1.rc0/ 0.12.1/ 1.0.0.rc0/ 1.0.0.rc1/

2018-02-05 Thread haibin
Author: haibin
Date: Mon Feb  5 22:47:06 2018
New Revision: 24717

Log:
remove older releases

Removed:
dev/incubator/mxnet/0.11.0.rc0/
dev/incubator/mxnet/0.11.0.rc1/
dev/incubator/mxnet/0.11.0.rc2/
dev/incubator/mxnet/0.11.0.rc3/
dev/incubator/mxnet/0.12.0.rc0/
dev/incubator/mxnet/0.12.1/
dev/incubator/mxnet/0.12.1.rc0/
dev/incubator/mxnet/1.0.0.rc0/
dev/incubator/mxnet/1.0.0.rc1/



svn commit: r24716 - in /release/incubator/mxnet: 0.11.0/ 0.12.0/ 0.12.1/

2018-02-05 Thread haibin
Author: haibin
Date: Mon Feb  5 22:45:12 2018
New Revision: 24716

Log:
remove older releases

Removed:
release/incubator/mxnet/0.11.0/
release/incubator/mxnet/0.12.0/
release/incubator/mxnet/0.12.1/



[GitHub] zheng-da commented on a change in pull request #9677: Refactor operators and add MKLDNN

2018-02-05 Thread GitBox
zheng-da commented on a change in pull request #9677: Refactor operators and 
add MKLDNN
URL: https://github.com/apache/incubator-mxnet/pull/9677#discussion_r166135426
 
 

 ##
 File path: tests/cpp/include/test_core_op.h
 ##
 @@ -209,6 +209,13 @@ class CoreOpExecutor : public 
test::op::OperatorDataInitializer
   requested.emplace_back(r);
 } else if (req.type == ResourceRequest::kRandom) {
   
requested.emplace_back(ResourceManager::Get()->Request(ctx->run_ctx.ctx, req));
+} else if (req.type == ResourceRequest::kParallelRandom) {
+  Resource rm = ResourceManager::Get()->Request(ctx->run_ctx.ctx, req);
+  if (ctx->run_ctx.ctx.dev_mask() == Context::kCPU) {
+common::random::RandGenerator::AllocState(
 
 Review comment:
   I don't know. I got the code from Chris.




[GitHub] reminisce commented on issue #9552: [REQUEST FOR REVIEW | DO NOT MERGE] Model Quantization with Calibration

2018-02-05 Thread GitBox
reminisce commented on issue #9552: [REQUEST FOR REVIEW | DO NOT MERGE] Model 
Quantization with Calibration
URL: https://github.com/apache/incubator-mxnet/pull/9552#issuecomment-363243755
 
 
   @KellenSunderland cudnn6 should be alright.




[GitHub] reminisce commented on a change in pull request #9552: [REQUEST FOR REVIEW | DO NOT MERGE] Model Quantization with Calibration

2018-02-05 Thread GitBox
reminisce commented on a change in pull request #9552: [REQUEST FOR REVIEW | DO 
NOT MERGE] Model Quantization with Calibration
URL: https://github.com/apache/incubator-mxnet/pull/9552#discussion_r166132741
 
 

 ##
 File path: Jenkinsfile
 ##
 @@ -442,6 +479,42 @@ try {
 }
   }
 },
+'Python2: Quantize CPU': {
+  node('mxnetlinux-gpu-p3') {
 
 Review comment:
   I see. I will change that. Thanks for pointing it out.




[GitHub] KellenSunderland commented on a change in pull request #9552: [REQUEST FOR REVIEW | DO NOT MERGE] Model Quantization with Calibration

2018-02-05 Thread GitBox
KellenSunderland commented on a change in pull request #9552: [REQUEST FOR 
REVIEW | DO NOT MERGE] Model Quantization with Calibration
URL: https://github.com/apache/incubator-mxnet/pull/9552#discussion_r166132342
 
 

 ##
 File path: Jenkinsfile
 ##
 @@ -442,6 +479,42 @@ try {
 }
   }
 },
+'Python2: Quantize CPU': {
+  node('mxnetlinux-gpu-p3') {
 
 Review comment:
   Exactly, generally if the test is cpu-only we'd run it on a 'mxnetlinux-cpu' 
node.




[GitHub] KellenSunderland commented on issue #9552: [REQUEST FOR REVIEW | DO NOT MERGE] Model Quantization with Calibration

2018-02-05 Thread GitBox
KellenSunderland commented on issue #9552: [REQUEST FOR REVIEW | DO NOT MERGE] 
Model Quantization with Calibration
URL: https://github.com/apache/incubator-mxnet/pull/9552#issuecomment-363242452
 
 
   @reminisce Do you know what version of cudnn you need for this?  Is cudnn6 
alright, or do you need 7?




[GitHub] anirudh2290 commented on issue #9681: Better Exception Handling for Operators

2018-02-05 Thread GitBox
anirudh2290 commented on issue #9681: Better Exception Handling for Operators
URL: https://github.com/apache/incubator-mxnet/pull/9681#issuecomment-363241899
 
 
   @piiswrong Trying to understand your comment. 
   
Let's say we have a code snippet like the one below:
   
   ```
   try:
  x, y, z = op()
  x.asnumpy()
   except:
  handle_exc()
   y = op2(y)
   y.asnumpy()   
   ```
   
   If we clear the exception_ptr corresponding to the var y when x.asnumpy() is executed, y may have some garbage value in it. op2 may end up executing fine, and after the last line `y.asnumpy()` we have no exception thrown. Shouldn't all the vars and ops in the chain following a failed op also fail? Not doing this will lead to the vars in the dependency chain following a failed op having non-deterministic garbage values, depending on how the failed op and the following ops behave.
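The dependency-chain behavior described above can be mimicked outside MXNet with explicitly stored exceptions: each variable carries an exception slot, an op propagates any input's exception to its outputs without running, and the error only surfaces at the blocking read (the analogue of `asnumpy()`). This is a toy model of the proposal, with made-up names (`Var`, `run_op`), not the engine's actual code:

```python
class Var(object):
    """Toy deferred variable: holds a value or a captured exception."""
    def __init__(self, value=None, exc=None):
        self.value, self.exc = value, exc

    def asnumpy(self):
        # Blocking read: surface any exception captured along the chain.
        if self.exc is not None:
            raise self.exc
        return self.value

def run_op(fn, *inputs):
    """Run fn; if any input already failed, propagate its exception instead."""
    for v in inputs:
        if v.exc is not None:
            return Var(exc=v.exc)      # poisoned input: skip execution entirely
    try:
        return Var(value=fn(*(v.value for v in inputs)))
    except Exception as e:
        return Var(exc=e)              # capture now, raise only at the read

# A failing op poisons its outputs; later ops keep propagating the error
# rather than computing on garbage values.
x = run_op(lambda: 1 // 0)             # fails, but nothing raises yet
y = run_op(lambda v: v + 1, x)         # op2 never executes on garbage
try:
    y.asnumpy()                        # the error finally surfaces here
except ZeroDivisionError:
    print("propagated")
```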




[GitHub] cjolivier01 commented on a change in pull request #9625: sparse regression operators

2018-02-05 Thread GitBox
cjolivier01 commented on a change in pull request #9625: sparse regression 
operators
URL: https://github.com/apache/incubator-mxnet/pull/9625#discussion_r166129856
 
 

 ##
 File path: src/operator/regression_output.cc
 ##
 @@ -90,7 +90,7 @@ The parameter `grad_scale` can be used to change this scale 
to `grad_scale/m`.
 )code" ADD_FILELINE);
 
 MXNET_OPERATOR_REGISTER_REGRESSION_BWD(_backward_linear_reg_out, 
mshadow_op::minus, true);
-
+/*
 
 Review comment:
   Will this code be re-enabled before merge?




[GitHub] reminisce commented on a change in pull request #9552: [REQUEST FOR REVIEW | DO NOT MERGE] Model Quantization with Calibration

2018-02-05 Thread GitBox
reminisce commented on a change in pull request #9552: [REQUEST FOR REVIEW | DO 
NOT MERGE] Model Quantization with Calibration
URL: https://github.com/apache/incubator-mxnet/pull/9552#discussion_r166128812
 
 

 ##
 File path: Jenkinsfile
 ##
 @@ -442,6 +479,42 @@ try {
 }
   }
 },
+'Python2: Quantize CPU': {
+  node('mxnetlinux-gpu-p3') {
 
 Review comment:
   I'm not familiar with this setup. Are we supposed not to run cpu tests on a 
gpu node? If that's the case, I would have to submit the CPU tests under 
`mxnetlinux-cpu`. Thanks.




[GitHub] sandeep-krishnamurthy opened a new issue #9699: MXNet pip package installs older version of numpy (1.13)

2018-02-05 Thread GitBox
sandeep-krishnamurthy opened a new issue #9699: MXNet pip package installs 
older version of numpy (1.13)
URL: https://github.com/apache/incubator-mxnet/issues/9699
 
 
   MXNet pip package installs numpy (<=1.13). We need to test and upgrade to 
the latest (1.14).
   Some of the issues users can face:
   1. Have a conda environment with numpy, scipy, etc., which is usually set up by a DL practitioner.
   2. Do `pip install mxnet` in the environment. This installs numpy 1.13.
   
   Later user will fall into the following issue:
   
   ```
   RuntimeError: module compiled against API version 0xc but this version of 
numpy is 0xb
   ImportError: numpy.core.multiarray failed to import
   ImportError: numpy.core.umath failed to import
   ImportError: numpy.core.umath failed to import
   ```
   
   due to conflicting versions of Numpy.
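The mismatch above ("API version 0xc" vs "0xb") means the compiled extension expects NumPy 1.14's C API while the environment provides 1.13's. A generic runtime guard can make the failure explicit; `numpy_at_least` is an illustrative helper, not part of MXNet:

```python
import numpy as np

def numpy_at_least(required):
    """True if the running NumPy is at least `required` (major.minor compare)."""
    have = tuple(int(p) for p in np.__version__.split(".")[:2])
    want = tuple(int(p) for p in required.split(".")[:2])
    return have >= want

# Binaries built against NumPy 1.14 need at least that version at runtime.
if not numpy_at_least("1.14"):
    raise ImportError("NumPy %s is too old; upgrade with "
                      "`pip install --upgrade numpy`" % np.__version__)
```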
   
   




[incubator-mxnet] branch master updated: Use argmax instead of argmax_channel in Accuracy to keep dimention (#8245)

2018-02-05 Thread liuyizhi
This is an automated email from the ASF dual-hosted git repository.

liuyizhi pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/master by this push:
 new 3e79aef  Use argmax instead of argmax_channel in Accuracy to keep 
dimention (#8245)
3e79aef is described below

commit 3e79aefba36889d800d56c2048e6dd9ff0adbe54
Author: Benoît Quartier 
AuthorDate: Mon Feb 5 22:43:53 2018 +0100

Use argmax instead of argmax_channel in Accuracy to keep dimention (#8245)

Fix github issue 8129
---
 scala-package/core/src/main/scala/ml/dmlc/mxnet/EvalMetric.scala | 6 +-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/scala-package/core/src/main/scala/ml/dmlc/mxnet/EvalMetric.scala 
b/scala-package/core/src/main/scala/ml/dmlc/mxnet/EvalMetric.scala
index 98a09d2..ed99a1f 100644
--- a/scala-package/core/src/main/scala/ml/dmlc/mxnet/EvalMetric.scala
+++ b/scala-package/core/src/main/scala/ml/dmlc/mxnet/EvalMetric.scala
@@ -107,7 +107,11 @@ class Accuracy extends EvalMetric("accuracy") {
   "labels and predictions should have the same length.")
 
 for ((pred, label) <- preds zip labels) {
-  val predLabel = NDArray.argmax_channel(pred)
+  val predLabel = if (pred.shape == label.shape) {
+NDArray.argmax(Map("axis" -> 1, "keepdims" -> true))(pred)
+  } else {
+NDArray.argmax_channel(pred)
+  }
   require(label.shape == predLabel.shape,
 s"label ${label.shape} and prediction ${predLabel.shape}" +
 s"should have the same length.")

-- 
To stop receiving notification emails like this one, please contact
liuyi...@apache.org.


[GitHub] yzhliu closed pull request #8245: Use argmax instead of argmax_channel in Accuracy to keep dimention

2018-02-05 Thread GitBox
yzhliu closed pull request #8245: Use argmax instead of argmax_channel in 
Accuracy to keep dimention
URL: https://github.com/apache/incubator-mxnet/pull/8245
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git a/scala-package/core/src/main/scala/ml/dmlc/mxnet/EvalMetric.scala 
b/scala-package/core/src/main/scala/ml/dmlc/mxnet/EvalMetric.scala
index 98a09d2250..ed99a1f90e 100644
--- a/scala-package/core/src/main/scala/ml/dmlc/mxnet/EvalMetric.scala
+++ b/scala-package/core/src/main/scala/ml/dmlc/mxnet/EvalMetric.scala
@@ -107,7 +107,11 @@ class Accuracy extends EvalMetric("accuracy") {
   "labels and predictions should have the same length.")
 
 for ((pred, label) <- preds zip labels) {
-  val predLabel = NDArray.argmax_channel(pred)
+  val predLabel = if (pred.shape == label.shape) {
+NDArray.argmax(Map("axis" -> 1, "keepdims" -> true))(pred)
+  } else {
+NDArray.argmax_channel(pred)
+  }
   require(label.shape == predLabel.shape,
 s"label ${label.shape} and prediction ${predLabel.shape}" +
 s"should have the same length.")


 




[GitHub] yzhliu commented on issue #8530: Fatal JVM Error due to Exception in CustomOpProp#inferTypeEntry

2018-02-05 Thread GitBox
yzhliu commented on issue #8530: Fatal JVM Error due to Exception in 
CustomOpProp#inferTypeEntry
URL: 
https://github.com/apache/incubator-mxnet/issues/8530#issuecomment-363230784
 
 
   @tutnixzursache Could you shoot a fix for this?




[GitHub] piiswrong commented on issue #9573: Added two functions to the C API

2018-02-05 Thread GitBox
piiswrong commented on issue #9573: Added two functions to the C API
URL: https://github.com/apache/incubator-mxnet/pull/9573#issuecomment-363213889
 
 
   The C API is meant to follow the hourglass model. It's like a small kernel. If something can be done by composing existing APIs, we shouldn't add a new API for it. It should be done in the frontend.
   
   This API feels like a convenience wrapper rather than core functionality.




[GitHub] KellenSunderland commented on a change in pull request #9552: [REQUEST FOR REVIEW | DO NOT MERGE] Model Quantization with Calibration

2018-02-05 Thread GitBox
KellenSunderland commented on a change in pull request #9552: [REQUEST FOR 
REVIEW | DO NOT MERGE] Model Quantization with Calibration
URL: https://github.com/apache/incubator-mxnet/pull/9552#discussion_r166103566
 
 

 ##
 File path: Jenkinsfile
 ##
 @@ -442,6 +479,42 @@ try {
 }
   }
 },
+'Python2: Quantize CPU': {
+  node('mxnetlinux-gpu-p3') {
 
 Review comment:
   Possible copy-paste error. Is it intended that CPU tests are running on a GPU node?
   Same comment on the test directly below.




[GitHub] KellenSunderland commented on a change in pull request #9552: [REQUEST FOR REVIEW | DO NOT MERGE] Model Quantization with Calibration

2018-02-05 Thread GitBox
KellenSunderland commented on a change in pull request #9552: [REQUEST FOR 
REVIEW | DO NOT MERGE] Model Quantization with Calibration
URL: https://github.com/apache/incubator-mxnet/pull/9552#discussion_r166103566
 
 

 ##
 File path: Jenkinsfile
 ##
 @@ -442,6 +479,42 @@ try {
 }
   }
 },
+'Python2: Quantize CPU': {
+  node('mxnetlinux-gpu-p3') {
 
 Review comment:
   Possible copy-paste error. Is it intended that CPU tests are running on a GPU node?




[GitHub] eric-haibin-lin commented on issue #9698: Fixed 4 broken links

2018-02-05 Thread GitBox
eric-haibin-lin commented on issue #9698: Fixed 4 broken links
URL: https://github.com/apache/incubator-mxnet/pull/9698#issuecomment-363211869
 
 
   Please fix lint




[GitHub] eric-haibin-lin commented on a change in pull request #9698: Fixed 4 broken links

2018-02-05 Thread GitBox
eric-haibin-lin commented on a change in pull request #9698: Fixed 4 broken 
links
URL: https://github.com/apache/incubator-mxnet/pull/9698#discussion_r166102767
 
 

 ##
 File path: docs/tutorials/index.md
 ##
 @@ -174,7 +174,7 @@ The Gluon and Module tutorials are in Python, but you can 
also find a variety of
 
 
 
-- [Connectionist Temporal 
Classification](http://mxnet.incubator.apache.org/tutorials/speech_recognition/ctc.html)
+- [Connectionist Temporal 
Classification](../tutorials/speech_recognition/ctc.html)
 
 Review comment:
   Maybe we should use relative links to avoid version issues and keep it as a 
general guideline for all other links? 




[GitHub] eric-haibin-lin commented on a change in pull request #9552: [REQUEST FOR REVIEW | DO NOT MERGE] Model Quantization with Calibration

2018-02-05 Thread GitBox
eric-haibin-lin commented on a change in pull request #9552: [REQUEST FOR 
REVIEW | DO NOT MERGE] Model Quantization with Calibration
URL: https://github.com/apache/incubator-mxnet/pull/9552#discussion_r166097177
 
 

 ##
 File path: example/quantization/common
 ##
 @@ -0,0 +1 @@
+../image-classification/common
 
 Review comment:
   Is this a sym link? Does it work on windows?




[GitHub] eric-haibin-lin commented on a change in pull request #9552: [REQUEST FOR REVIEW | DO NOT MERGE] Model Quantization with Calibration

2018-02-05 Thread GitBox
eric-haibin-lin commented on a change in pull request #9552: [REQUEST FOR 
REVIEW | DO NOT MERGE] Model Quantization with Calibration
URL: https://github.com/apache/incubator-mxnet/pull/9552#discussion_r166097853
 
 

 ##
 File path: example/quantization/imagenet_gen_qsym.py
 ##
 @@ -0,0 +1,192 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+import argparse
+from common import modelzoo
+import mxnet as mx
+from mxnet.contrib.quantization import *
+
+
+def download_calib_dataset(dataset_url, calib_dataset, logger=None):
+    if logger is not None:
+        logger.info('Downloading calibration dataset from %s to %s'
+                    % (dataset_url, calib_dataset))
+    mx.test_utils.download(dataset_url, calib_dataset)
+
+
+def download_model(model_name, logger=None):
+    dir_path = os.path.dirname(os.path.realpath(__file__))
+    model_path = os.path.join(dir_path, 'model')
+    if logger is not None:
+        logger.info('Downloading model %s... into path %s' % (model_name, model_path))
+    return modelzoo.download_model(args.model, os.path.join(dir_path, 'model'))
+
+
+def save_symbol(fname, sym, logger=None):
+    if logger is not None:
+        logger.info('Saving symbol into file at %s' % fname)
+    sym.save(fname)
+
+
+def save_params(fname, arg_params, aux_params, logger=None):
+    if logger is not None:
+        logger.info('Saving params into file at %s' % fname)
+    save_dict = {('arg:%s' % k): v.as_in_context(cpu()) for k, v in arg_params.items()}
+    save_dict.update({('aux:%s' % k): v.as_in_context(cpu()) for k, v in aux_params.items()})
+    mx.nd.save(fname, save_dict)
+
+
+if __name__ == '__main__':
+    parser = argparse.ArgumentParser(description='Generate a calibrated quantized model from a FP32 model')
+    parser.add_argument('--model', type=str, required=True,
+                        help='currently only supports imagenet1k-resnet-152 or imagenet1k-inception-bn')
 
 Review comment:
   Consider using `choices` option for argparse: 
https://docs.python.org/2/library/argparse.html#choices




[GitHub] eric-haibin-lin commented on a change in pull request #9552: [REQUEST FOR REVIEW | DO NOT MERGE] Model Quantization with Calibration

2018-02-05 Thread GitBox
eric-haibin-lin commented on a change in pull request #9552: [REQUEST FOR 
REVIEW | DO NOT MERGE] Model Quantization with Calibration
URL: https://github.com/apache/incubator-mxnet/pull/9552#discussion_r166099294
 
 

 ##
 File path: python/mxnet/contrib/quantization.py
 ##
 @@ -0,0 +1,502 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+"""Quantization module for generating quantized (INT8) models from FP32 
models."""
+
+from __future__ import absolute_import
+
+try:
+from scipy import stats
+except ImportError:
+stats = None
+
+import ctypes
+import logging
+import os
+import numpy as np
+from ..base import _LIB, check_call
+from ..base import c_array, c_str, mx_uint, c_str_array
+from ..base import NDArrayHandle, SymbolHandle
+from ..symbol import Symbol
+from ..symbol import load as sym_load
+from .. import ndarray
+from ..ndarray import load as nd_load
+from ..ndarray import NDArray
+from ..io import DataIter
+from ..context import cpu, Context
+from ..module import Module
+
+
+def _quantize_params(qsym, params):
+    """Given a quantized symbol and a dict of params that have not been quantized,
+    generate quantized params. Currently only supports quantizing the arg_params
+    with names of `weight` or `bias`, not aux_params. If `qsym` contains symbols
+    that are excluded from being quantized, their corresponding params will
+    not be quantized, but saved together with quantized params of the symbols that
+    have been quantized.
+
+    Parameters
+    ----------
+    qsym : Symbol
+        Quantized symbol from FP32 symbol.
+    params : dict of str->NDArray
+    """
+    inputs_name = qsym.list_arguments()
+    quantized_params = {}
+    for name in inputs_name:
+        if name.endswith(('weight_quantize', 'bias_quantize')):
+            original_name = name[:-len('_quantize')]
+            param = params[original_name]
+            val, vmin, vmax = ndarray.contrib.quantize(data=param,
+                                                       min_range=ndarray.min(param),
+                                                       max_range=ndarray.max(param),
+                                                       out_type='int8')
+            quantized_params[name] = val
+            quantized_params[name+'_min'] = vmin
+            quantized_params[name+'_max'] = vmax
+        elif name in params:
+            quantized_params[name] = params[name]
+    return quantized_params
+
+
+def _quantize_symbol(sym, excluded_symbols=None, offline_params=None):
+    """Given a symbol object representing a neural network of data type FP32,
+    quantize it into a INT8 network.
+
+    Parameters
+    ----------
+    sym : Symbol
+        FP32 neural network symbol.
+    excluded_symbols : list of symbols
+        Nodes in the network that users do not want to replace with a symbol of INT8 data type.
+    offline_params : list of strs
+        Names of the parameters that users want to quantize offline. It's always recommended to
+        quantize parameters offline so that quantizing parameters during the inference can be
+        avoided.
+    """
+    num_excluded_symbols = 0
+    excluded_handles = []
+    if excluded_symbols is not None:
+        assert isinstance(excluded_symbols, list)
+        num_excluded_symbols = len(excluded_symbols)
+        for s in excluded_symbols:
+            excluded_handles.append(s.handle)
+
+    num_offline = 0
+    offline = []
+    if offline_params is not None:
+        num_offline = len(offline_params)
+        for k in offline_params:
+            offline.append(c_str(k))
+
+    out = SymbolHandle()
+    check_call(_LIB.MXQuantizeSymbol(sym.handle,
+                                     ctypes.byref(out),
+                                     mx_uint(num_excluded_symbols),
+                                     c_array(SymbolHandle, excluded_handles),
+                                     mx_uint(num_offline),
+                                     c_array(ctypes.c_char_p, offline)))
+    return Symbol(out)
+
+
+class _LayerOutputCollector(object):
+"""Saves layer output NDArray in a dict with layer names as keys and lists 
of NDArrays as
+values. The collected NDArrays will be 

[GitHub] eric-haibin-lin commented on a change in pull request #9552: [REQUEST FOR REVIEW | DO NOT MERGE] Model Quantization with Calibration

2018-02-05 Thread GitBox
eric-haibin-lin commented on a change in pull request #9552: [REQUEST FOR 
REVIEW | DO NOT MERGE] Model Quantization with Calibration
URL: https://github.com/apache/incubator-mxnet/pull/9552#discussion_r166098419
 
 

 ##
 File path: include/mxnet/op_attr_types.h
 ##
 @@ -261,6 +260,10 @@ using FInferStorageType = std::function<bool (const NodeAttrs& attrs,
                                               const int dev_mask,
                                               DispatchMode* dispatch_mode,
                                               std::vector<int>* in_attrs,
                                               std::vector<int>* out_attrs)>;
 
+using FQuantizedOp = std::function<nnvm::NodePtr (const NodeAttrs& attrs)>;
 
 Review comment:
   Missing Doc?




[GitHub] eric-haibin-lin commented on a change in pull request #9552: [REQUEST FOR REVIEW | DO NOT MERGE] Model Quantization with Calibration

2018-02-05 Thread GitBox
eric-haibin-lin commented on a change in pull request #9552: [REQUEST FOR 
REVIEW | DO NOT MERGE] Model Quantization with Calibration
URL: https://github.com/apache/incubator-mxnet/pull/9552#discussion_r166098311
 
 

 ##
 File path: include/mxnet/c_api.h
 ##
 @@ -1237,8 +1237,28 @@ MXNET_DLL int MXSymbolInferType(SymbolHandle sym,
 const int **aux_type_data,
 int *complete);
 
-
-
+MXNET_DLL int MXQuantizeSymbol(SymbolHandle sym_handle,
 
 Review comment:
   Missing documentation for this function?




[GitHub] eric-haibin-lin commented on a change in pull request #9552: [REQUEST FOR REVIEW | DO NOT MERGE] Model Quantization with Calibration

2018-02-05 Thread GitBox
eric-haibin-lin commented on a change in pull request #9552: [REQUEST FOR 
REVIEW | DO NOT MERGE] Model Quantization with Calibration
URL: https://github.com/apache/incubator-mxnet/pull/9552#discussion_r166100616
 
 

 ##
 File path: src/operator/quantization/quantization_utils.h
 ##
 @@ -0,0 +1,186 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ *  Copyright (c) 2017 by Contributors
+ * \file quantization_utils-inl.h
+ */
+#ifndef MXNET_OPERATOR_QUANTIZATION_QUANTIZATION_UTILS_H_
+#define MXNET_OPERATOR_QUANTIZATION_QUANTIZATION_UTILS_H_
+
+#include 
+#include 
+#include "../mxnet_op.h"
+
+namespace mxnet {
+namespace op {
+
+
+template<typename T>
+MSHADOW_XINLINE int Sign(T val) {
+  return (val > T(0)) - (val < T(0));
+}
+
+template<typename T>
+MSHADOW_XINLINE T Abs(T a) {
+#ifdef __CUDACC__
+  return ::abs(a);
+#else
+  return std::abs(a);
+#endif
+}
+
+template<typename T>
+MSHADOW_XINLINE T Max(T a, T b) {
+#ifdef __CUDACC__
+  return ::max(a, b);
+#else
+  return std::max(a, b);
+#endif
+}
+
+template<typename T>
+MSHADOW_XINLINE T Min(T a, T b) {
+#ifdef __CUDACC__
+  return ::min(a, b);
+#else
+  return std::min(a, b);
+#endif
+}
+
+template<typename T>
+MSHADOW_XINLINE float MaxAbs(T a, T b) {
+  return Max(Abs(static_cast<float>(a)), Abs(static_cast<float>(b)));
+}
+
+template<typename T>
+MSHADOW_XINLINE float MinAbs(T a, T b) {
+  return Min(Abs(static_cast<float>(a)), Abs(static_cast<float>(b)));
+}
+
+#if 0
 
 Review comment:
   Is this not used?
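For context, quantization helpers such as `MaxAbs` in the diff above typically feed a symmetric int8 scheme: values are scaled by `127 / max(|min|, |max|)` and rounded. A hedged sketch of that arithmetic follows, written in Java for brevity; the class and method names are illustrative and this mirrors the general scheme, not the exact MXNet kernel.

```java
// Hedged sketch of symmetric int8 quantization: map the range
// [-max(|min|,|max|), +max(|min|,|max|)] linearly onto [-127, 127].
public class Int8Quantize {
    public static byte[] quantize(float[] data, float min, float max) {
        // Symmetric range derived from the calibrated min/max, as MaxAbs() would compute.
        float range = Math.max(Math.abs(min), Math.abs(max));
        float scale = 127.0f / range;
        byte[] out = new byte[data.length];
        for (int i = 0; i < data.length; i++) {
            // Scale and round each value into the int8 domain.
            out[i] = (byte) Math.round(data[i] * scale);
        }
        return out;
    }
}
```

With `min = -1` and `max = 1`, a value of `1.0f` maps to `127` and `-1.0f` to `-127`.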




[GitHub] Hebali commented on issue #9647: Gluon in Core API

2018-02-05 Thread GitBox
Hebali commented on issue #9647: Gluon in Core API
URL: 
https://github.com/apache/incubator-mxnet/issues/9647#issuecomment-363209250
 
 
   Core functionality for supported languages (namely Python, Scala, R, Julia, 
C++ and Perl) is delivered through bindings to MXNet's C API (see 
[c_api.h](https://github.com/apache/incubator-mxnet/blob/master/include/mxnet/c_api.h)
 and 
[c_predict_api.h](https://github.com/apache/incubator-mxnet/blob/master/include/mxnet/c_predict_api.h)).
 
   
   Each language's wrapper (such as 
[R-package](https://github.com/apache/incubator-mxnet/tree/master/R-package), 
[cpp-package](https://github.com/apache/incubator-mxnet/tree/master/cpp-package),
 etc) adds its own language-specific conveniences. For instance, the C++ 
package adds some OOP friendliness to the C functionality. 
   
   Naturally, it does not make sense and in many cases would not be possible to 
include those language-specific conveniences in the C API. So, admittedly, 
there are numerous aspects of Gluon that cannot be promoted to the core C API.
   
   But, it seems like Gluon is going in a direction that will lead to a lot of 
great functionality that could and should be included in the C API. For 
instance, 
[rnn_cell.py](https://github.com/apache/incubator-mxnet/blob/master/python/mxnet/gluon/rnn/rnn_cell.py)
 and 
[rnn_layer.py](https://github.com/apache/incubator-mxnet/blob/master/python/mxnet/gluon/rnn/rnn_layer.py).
 RNNs are highly relevant to ML work being done in any of the supported 
languages. Sure, someone using one of those languages could implement 
equivalent functionality for themselves. But, I think such a concession 
diminishes the overall value of MXNet / Gluon.
   
   As I said in my original post, TensorFlow has already made this mistake. 
Hopefully, they will back out of it. But why make the same one in MXNet? Python 
is clearly a preferred language for ML. I use it a lot and am a fan. But there 
are some important reasons for using MXNet in the other supported languages. 
Support within C and/or C++ means that native applications can take direct 
advantage of the functionality. The current thinking seems to be: "ML 
researchers develop models in Python, serialize them and then (maybe) serve 
them from C/C++." But that seems like somewhat short-term thinking. For MXNet's 
long-term flexibility and relevance, I think it is important to include as much 
as possible in the core C API.




[GitHub] yzhliu commented on a change in pull request #9678: [First cut] Scala Inference APIs

2018-02-05 Thread GitBox
yzhliu commented on a change in pull request #9678: [First cut] Scala Inference 
APIs
URL: https://github.com/apache/incubator-mxnet/pull/9678#discussion_r166084734
 
 

 ##
 File path: 
scala-package/infer/src/main/scala/ml/dmlc/mxnet/infer/MXNetHandler.scala
 ##
 @@ -0,0 +1,82 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package ml.dmlc.mxnet.infer
+
+import java.util.concurrent._
+
+trait MXNetHandler {
+
+  def execute[T](f: => T): T
+
+  val executor: ExecutorService
+
+}
+
+object MXNetHandlerType extends Enumeration {
+
+  type MXNetHandlerType = Value
+  val SingleThreadHandler = Value("MXNetSingleThreadHandler")
+  val OneThreadPerModelHandler = Value("MXNetOneThreadPerModelHandler")
+}
+
+class MXNetOneThreadPerModelHandler extends MXNetHandler {
+
+  private val threadFactory = new ThreadFactory {
+
+override def newThread(r: Runnable): Thread = new Thread(r) {
+  setName(classOf[MXNetOneThreadPerModelHandler].getCanonicalName)
+}
+  }
+
+  override val executor: ExecutorService = Executors.newFixedThreadPool(10, 
threadFactory)
+
+  override def execute[T](f: => T): T = {
+val task = new Callable[T] {
+  override def call(): T = {
+// scalastyle:off println
+println("threadId: %s".format(Thread.currentThread().getId()))
 
 Review comment:
   better to use Log
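The reviewer's point, sketched on the JVM with `java.util.logging` (the Scala code itself would more likely use slf4j); the class and method names below are illustrative, not the actual MXNet classes:

```java
import java.util.logging.Level;
import java.util.logging.Logger;

// Illustrative stand-in for the handler class in the diff: record the worker
// thread id through a logger at a debug-style level instead of println/stdout.
public class HandlerLogging {
    private static final Logger logger =
            Logger.getLogger(HandlerLogging.class.getName());

    public static void logThreadId() {
        // Parameterized message avoids string concatenation when FINE is disabled.
        logger.log(Level.FINE, "threadId: {0}", Thread.currentThread().getId());
    }

    public static void main(String[] args) {
        logThreadId();
    }
}
```

Unlike `println`, the log call can be filtered or silenced by level configuration without touching the code.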




[GitHub] yzhliu commented on a change in pull request #9678: [First cut] Scala Inference APIs

2018-02-05 Thread GitBox
yzhliu commented on a change in pull request #9678: [First cut] Scala Inference 
APIs
URL: https://github.com/apache/incubator-mxnet/pull/9678#discussion_r166088215
 
 

 ##
 File path: 
scala-package/infer/src/main/scala/ml/dmlc/mxnet/infer/MXNetHandler.scala
 ##
 @@ -0,0 +1,82 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package ml.dmlc.mxnet.infer
+
+import java.util.concurrent._
+
+trait MXNetHandler {
+
+  def execute[T](f: => T): T
+
+  val executor: ExecutorService
 
 Review comment:
   `private`




[GitHub] yzhliu commented on a change in pull request #9678: [First cut] Scala Inference APIs

2018-02-05 Thread GitBox
yzhliu commented on a change in pull request #9678: [First cut] Scala Inference 
APIs
URL: https://github.com/apache/incubator-mxnet/pull/9678#discussion_r166089036
 
 

 ##
 File path: scala-package/infer/src/main/scala/ml/dmlc/mxnet/infer/package.scala
 ##
 @@ -0,0 +1,24 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package ml.dmlc.mxnet
+
+import ml.dmlc.mxnet.infer.MXNetHandlerType.MXNetHandlerType
+
+package object infer {
+  private[mxnet] val handlerType: MXNetHandlerType = 
MXNetHandlerType.SingleThreadHandler
 
 Review comment:
   can we change this variable anywhere?




[GitHub] yzhliu commented on a change in pull request #9678: [First cut] Scala Inference APIs

2018-02-05 Thread GitBox
yzhliu commented on a change in pull request #9678: [First cut] Scala Inference 
APIs
URL: https://github.com/apache/incubator-mxnet/pull/9678#discussion_r166085995
 
 

 ##
 File path: 
scala-package/infer/src/main/scala/ml/dmlc/mxnet/infer/MXNetHandler.scala
 ##
 @@ -0,0 +1,82 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package ml.dmlc.mxnet.infer
+
+import java.util.concurrent._
+
+trait MXNetHandler {
+
+  def execute[T](f: => T): T
+
+  val executor: ExecutorService
+
+}
+
+object MXNetHandlerType extends Enumeration {
+
+  type MXNetHandlerType = Value
+  val SingleThreadHandler = Value("MXNetSingleThreadHandler")
+  val OneThreadPerModelHandler = Value("MXNetOneThreadPerModelHandler")
+}
+
+class MXNetOneThreadPerModelHandler extends MXNetHandler {
+
+  private val threadFactory = new ThreadFactory {
+
+override def newThread(r: Runnable): Thread = new Thread(r) {
+  setName(classOf[MXNetOneThreadPerModelHandler].getCanonicalName)
+}
+  }
+
+  override val executor: ExecutorService = Executors.newFixedThreadPool(10, 
threadFactory)
 
 Review comment:
   make `10` configurable through constructor.
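A minimal sketch of the suggestion, in Java since the Scala code builds directly on `java.util.concurrent`; class and method names are illustrative, not the actual MXNet Scala classes:

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// The pool size is taken as a constructor parameter instead of the
// hard-coded 10 from the review diff.
public class ConfigurablePoolHandler {
    private final ExecutorService executor;

    public ConfigurablePoolHandler(int poolSize) {
        this.executor = Executors.newFixedThreadPool(poolSize);
    }

    public <T> T execute(Callable<T> task) {
        try {
            // Submit the task and block for its result.
            return executor.submit(task).get();
        } catch (InterruptedException | ExecutionException e) {
            throw new RuntimeException(e);
        }
    }

    public void shutdown() {
        executor.shutdown();
    }

    public static void main(String[] args) {
        ConfigurablePoolHandler handler = new ConfigurablePoolHandler(4);
        System.out.println(handler.execute(() -> 21 * 2));
        handler.shutdown();
    }
}
```

Callers can then size the pool per model rather than relying on a fixed constant.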




[GitHub] yzhliu commented on a change in pull request #9678: [First cut] Scala Inference APIs

2018-02-05 Thread GitBox
yzhliu commented on a change in pull request #9678: [First cut] Scala Inference 
APIs
URL: https://github.com/apache/incubator-mxnet/pull/9678#discussion_r166087684
 
 

 ##
 File path: 
scala-package/infer/src/main/scala/ml/dmlc/mxnet/infer/MXNetHandler.scala
 ##
 @@ -0,0 +1,82 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package ml.dmlc.mxnet.infer
+
+import java.util.concurrent._
+
+trait MXNetHandler {
+
+  def execute[T](f: => T): T
+
+  val executor: ExecutorService
+
+}
+
+object MXNetHandlerType extends Enumeration {
+
+  type MXNetHandlerType = Value
+  val SingleThreadHandler = Value("MXNetSingleThreadHandler")
+  val OneThreadPerModelHandler = Value("MXNetOneThreadPerModelHandler")
+}
+
+class MXNetOneThreadPerModelHandler extends MXNetHandler {
+
+  private val threadFactory = new ThreadFactory {
+
+override def newThread(r: Runnable): Thread = new Thread(r) {
+  setName(classOf[MXNetOneThreadPerModelHandler].getCanonicalName)
+}
+  }
+
+  override val executor: ExecutorService = Executors.newFixedThreadPool(10, 
threadFactory)
+
+  override def execute[T](f: => T): T = {
+val task = new Callable[T] {
+  override def call(): T = {
+// scalastyle:off println
+println("threadId: %s".format(Thread.currentThread().getId()))
+// scalastyle:on println
+f
+  }
+}
+val result = executor.submit(task)
+try {
+  result.get()
+}
+catch {
+  case e: ExecutionException => throw e.getCause()
 
 Review comment:
   Add `throw ExecutionException` to the signature of this method, and place 
`catch` on the same line as `}`.
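The pattern under review, sketched in Java (the Scala handler wraps the same `java.util.concurrent` APIs): `Future.get()` wraps a task's failure in an `ExecutionException`, and the handler rethrows the original cause. Declaring `throws Throwable` is one way to make that rethrown cause visible in the signature; class names are illustrative.

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class UnwrapExample {
    private static final ExecutorService executor = Executors.newSingleThreadExecutor();

    // `throws Throwable` because ExecutionException#getCause is a Throwable.
    public static <T> T execute(Callable<T> task) throws Throwable {
        try {
            return executor.submit(task).get();
        } catch (ExecutionException e) {
            throw e.getCause();  // surface the task's own exception, not the wrapper
        }
    }

    public static void shutdown() {
        executor.shutdown();
    }
}
```

A caller that submits a failing task sees the task's original exception type rather than `ExecutionException`.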




[GitHub] yzhliu commented on a change in pull request #9678: [First cut] Scala Inference APIs

2018-02-05 Thread GitBox
yzhliu commented on a change in pull request #9678: [First cut] Scala Inference 
APIs
URL: https://github.com/apache/incubator-mxnet/pull/9678#discussion_r166085310
 
 

 ##
 File path: 
scala-package/infer/src/main/scala/ml/dmlc/mxnet/infer/MXNetHandler.scala
 ##
 @@ -0,0 +1,82 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package ml.dmlc.mxnet.infer
+
+import java.util.concurrent._
+
+trait MXNetHandler {
+
+  def execute[T](f: => T): T
+
+  val executor: ExecutorService
+
+}
+
+object MXNetHandlerType extends Enumeration {
+
+  type MXNetHandlerType = Value
+  val SingleThreadHandler = Value("MXNetSingleThreadHandler")
+  val OneThreadPerModelHandler = Value("MXNetOneThreadPerModelHandler")
+}
+
+class MXNetOneThreadPerModelHandler extends MXNetHandler {
+
+  private val threadFactory = new ThreadFactory {
+
+override def newThread(r: Runnable): Thread = new Thread(r) {
+  setName(classOf[MXNetOneThreadPerModelHandler].getCanonicalName)
+}
+  }
+
+  override val executor: ExecutorService = Executors.newFixedThreadPool(10, 
threadFactory)
+
+  override def execute[T](f: => T): T = {
+val task = new Callable[T] {
+  override def call(): T = {
+// scalastyle:off println
+println("threadId: %s".format(Thread.currentThread().getId()))
+// scalastyle:on println
+f
+  }
+}
+val result = executor.submit(task)
+try {
+  result.get()
+}
+catch {
+  case e: ExecutionException => throw e.getCause()
+}
+  }
+
+}
+
+object MXNetSingleThreadHandler extends MXNetOneThreadPerModelHandler {
 
 Review comment:
   Why do we need an `object` that extends something? Are there any `static` 
members to be accessed?
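For readers less familiar with Scala: `object X extends Y` boils down on the JVM to roughly a single shared instance of the parent class exposed through a static field. A hedged Java sketch of that shape (names and the `poolSize` field are illustrative, not the real classes):

```java
public class SingletonSketch {
    // Stand-in for the parent handler class from the diff.
    static class Handler {
        final int poolSize;
        Handler(int poolSize) { this.poolSize = poolSize; }
    }

    // One process-wide instance, analogous to the Scala `object`.
    static final Handler SINGLE_THREAD_HANDLER = new Handler(1);
}
```

So the `object` here gives a shared single-threaded handler instance rather than static members.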




[GitHub] szha commented on a change in pull request #9698: Fixed 4 broken links

2018-02-05 Thread GitBox
szha commented on a change in pull request #9698: Fixed 4 broken links
URL: https://github.com/apache/incubator-mxnet/pull/9698#discussion_r166088717
 
 

 ##
 File path: python/mxnet/gluon/trainer.py
 ##
 @@ -34,7 +34,7 @@ class Trainer(object):
 The set of parameters to optimize.
 optimizer : str or Optimizer
 The optimizer to use. See
-`help 
`_
+`help 
`_
 on Optimizer for a list of available optimizers.
 
 Review comment:
   Fix lint. Line is too long




[GitHub] szha commented on a change in pull request #9662: Gluon PReLU, ELU, SELU, Swish

2018-02-05 Thread GitBox
szha commented on a change in pull request #9662: Gluon PReLU, ELU, SELU, Swish
URL: https://github.com/apache/incubator-mxnet/pull/9662#discussion_r166087641
 
 

 ##
 File path: src/operator/leaky_relu-inl.h
 ##
 @@ -225,7 +242,11 @@ class LeakyReLUProp : public OperatorProperty {
 const TShape  = in_shape->at(leakyrelu::kData);
 if (dshape.ndim() == 0) return false;
 if (param_.act_type == leakyrelu::kPReLU) {
-  in_shape->at(leakyrelu::kGamma) = TShape(Shape1(dshape[1]));
+  const TShape  = in_shape->at(leakyrelu::kGamma);
+  if (gshape.Size() != 1)
 
 Review comment:
   So, I should check for both ndim and shape_[0] then. How do I check whether 
it's undefined?




[GitHub] szha commented on a change in pull request #9662: Gluon PReLU, ELU, SELU, Swish

2018-02-05 Thread GitBox
szha commented on a change in pull request #9662: Gluon PReLU, ELU, SELU, Swish
URL: https://github.com/apache/incubator-mxnet/pull/9662#discussion_r166086915
 
 

 ##
 File path: python/mxnet/gluon/nn/activations.py
 ##
 @@ -0,0 +1,215 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+# coding: utf-8
+# pylint: disable= arguments-differ
+"""Basic neural network layers."""
+__all__ = ['Activation', 'LeakyReLU', 'PReLU', 'ELU', 'SELU', 'Swish']
+
+from ... import initializer
+from ..block import HybridBlock
+
+
+class Activation(HybridBlock):
+r"""Applies an activation function to input.
+
+Parameters
+--
+activation : str
+Name of activation function to use.
+See :func:`~mxnet.ndarray.Activation` for available choices.
+
+
+Inputs:
+- **data**: input tensor with arbitrary shape.
+
+Outputs:
+- **out**: output tensor with the same shape as `data`.
+"""
+def __init__(self, activation, **kwargs):
+self._act_type = activation
+super(Activation, self).__init__(**kwargs)
+
+def _alias(self):
+return self._act_type
+
+def hybrid_forward(self, F, x):
+return F.Activation(x, act_type=self._act_type, name='fwd')
+
+def __repr__(self):
+s = '{name}({_act_type})'
+return s.format(name=self.__class__.__name__,
+**self.__dict__)
+
+
+class LeakyReLU(HybridBlock):
+r"""Leaky version of a Rectified Linear Unit.
+
+It allows a small gradient when the unit is not active
+
+.. math::
+
+f\left(x\right) = \left\{
+\begin{array}{lr}
+   \alpha x & : x \lt 0 \\
+  x & : x \geq 0 \\
+\end{array}
+\right.\\
+
+Parameters
+--
+alpha : float
+slope coefficient for the negative half axis. Must be >= 0.
+
+
+Inputs:
+- **data**: input tensor with arbitrary shape.
+
+Outputs:
+- **out**: output tensor with the same shape as `data`.
+"""
+def __init__(self, alpha, **kwargs):
+assert alpha >= 0, "Slope coefficient for LeakyReLU must be no less than 0."
+super(LeakyReLU, self).__init__(**kwargs)
+self._alpha = alpha
+
+def hybrid_forward(self, F, x):
+return F.LeakyReLU(x, act_type='leaky', slope=self._alpha, name='fwd')
+
+def __repr__(self):
+s = '{name}({alpha})'
+return s.format(name=self.__class__.__name__,
+alpha=self._alpha)
+
+
+class PReLU(HybridBlock):
+r"""Parametric leaky version of a Rectified Linear Unit.
+`"Delving Deep into Rectifiers" <https://arxiv.org/abs/1502.01852>`_ paper.
+
+It learns a gradient when the unit is not active
+
+.. math::
+
+f\left(x\right) = \left\{
+\begin{array}{lr}
+   \alpha x & : x \lt 0 \\
+  x & : x \geq 0 \\
+\end{array}
+\right.\\
+
+where alpha is a learned parameter.
+
+Parameters
+--
+alpha_initializer : Initializer
+Initializer for the `embeddings` matrix.
+
+
+Inputs:
+- **data**: input tensor with arbitrary shape.
+
+Outputs:
+- **out**: output tensor with the same shape as `data`.
+"""
+def __init__(self, alpha_initializer=initializer.Constant(0.25), *args):
+super(PReLU, self).__init__(*args)
+with self.name_scope():
+self.alpha = self.params.get('alpha', shape=(1,), init=alpha_initializer)
+
+def hybrid_forward(self, F, x, alpha):
+return F.LeakyReLU(x, gamma=alpha, act_type='prelu', name='fwd')
+
+def __repr__(self):
+s = '{name}'
 
 Review comment:
   Yes




[GitHub] piiswrong commented on a change in pull request #9662: Gluon PReLU, ELU, SELU, Swish

2018-02-05 Thread GitBox
piiswrong commented on a change in pull request #9662: Gluon PReLU, ELU, SELU, 
Swish
URL: https://github.com/apache/incubator-mxnet/pull/9662#discussion_r166086739
 
 

 ##
 File path: src/operator/leaky_relu-inl.h
 ##
 @@ -225,7 +242,11 @@ class LeakyReLUProp : public OperatorProperty {
const TShape &dshape = in_shape->at(leakyrelu::kData);
 if (dshape.ndim() == 0) return false;
 if (param_.act_type == leakyrelu::kPReLU) {
-  in_shape->at(leakyrelu::kGamma) = TShape(Shape1(dshape[1]));
+  const TShape &gshape = in_shape->at(leakyrelu::kGamma);
+  if (gshape.Size() != 1)
 
 Review comment:
   gshape could be empty, in which case Size is undefined.
   Also if gshape.Size is 1, it could be (1,1), which is invalid
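The shape cases listed in this review can be sketched in plain Python (a hedged illustration using tuples instead of the real `TShape` API; `is_valid_prelu_gamma` is a hypothetical helper, not MXNet code):

```python
def is_valid_prelu_gamma(gshape, dshape):
    """Hypothetical helper mirroring the review discussion: gamma must be
    undefined (empty, to be inferred as (channels,)), exactly (1,), or
    (channels,).  A shape like (1, 1) has total size 1 but is invalid."""
    if len(gshape) == 0:              # ndim == 0: undefined, infer later
        return "infer"
    return gshape in ((1,), (dshape[1],))

# a (1, 1) gamma is rejected even though its total size is 1
print(is_valid_prelu_gamma((1, 1), (32, 64, 7, 7)))  # -> False
```

The point of the `(1, 1)` case is that checking `Size() != 1` alone is not enough; the ndim has to be checked as well.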




[GitHub] piiswrong commented on a change in pull request #9662: Gluon PReLU, ELU, SELU, Swish

2018-02-05 Thread GitBox
piiswrong commented on a change in pull request #9662: Gluon PReLU, ELU, SELU, 
Swish
URL: https://github.com/apache/incubator-mxnet/pull/9662#discussion_r166086384
 
 

 ##
 File path: tests/python/unittest/test_gluon.py
 ##
 @@ -719,6 +719,42 @@ def test_inline():
 assert len_1 == len_2 + 2
 
 
+def test_activations():
+point_to_validate = mx.nd.array([-0.1, 0.1])
+
+swish = mx.gluon.nn.Swish()
+def swish_test(x):
+return x * mx.nd.sigmoid(x)
+
+for test_point, ref_point in zip(swish_test(point_to_validate), swish(point_to_validate)):
+assert test_point == ref_point
+
+elu = mx.gluon.nn.ELU()
+def elu_test(x):
+def elu(x):
+return 1.0 * (mx.nd.exp(x) - 1) if x < 0 else x
+return [elu(x_i) for x_i in x]
+
+for test_point, ref_point in zip(elu_test(point_to_validate), elu(point_to_validate)):
+assert test_point == ref_point
+
+selu = mx.gluon.nn.SELU()
+def selu_test(x):
+def selu(x):
+scale, alpha = 1.0507009873554804934193349852946, 1.6732632423543772848170429916717
+return scale * x if x >= 0 else alpha * mx.nd.exp(x) - alpha
+return [selu(x_i) for x_i in x]
+
+for test_point, ref_point in zip(selu_test(point_to_validate), selu(point_to_validate)):
+assert test_point == ref_point
+
+prelu = mx.gluon.nn.PReLU()
+prelu.initialize()
+x = point_to_validate.reshape((1, 1, 2))
 
 Review comment:
   use a different input shape that can catch the infershape problem
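For reference, the activations exercised by this test can be written against plain NumPy (a hedged sketch of the mathematical definitions only, independent of Gluon; the SELU constants are the ones quoted in the test above):

```python
import numpy as np

SELU_SCALE = 1.0507009873554804934193349852946
SELU_ALPHA = 1.6732632423543772848170429916717

def swish(x):
    # x * sigmoid(x)
    return x / (1.0 + np.exp(-x))

def elu(x, alpha=1.0):
    # alpha * (exp(x) - 1) for x < 0, identity otherwise
    return np.where(x < 0, alpha * (np.exp(x) - 1), x)

def selu(x):
    # scaled ELU with the fixed self-normalizing constants
    return SELU_SCALE * np.where(x >= 0, x, SELU_ALPHA * (np.exp(x) - 1))

def prelu(x, alpha=0.25):
    # leaky ReLU whose negative slope is (normally) a learned parameter
    return np.where(x < 0, alpha * x, x)

x = np.array([-0.1, 0.1])
print(swish(x), elu(x), selu(x), prelu(x))
```

Such reference implementations make it easy to validate the Gluon blocks on inputs of any shape, which is what the review asks for.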




[GitHub] thinksanky commented on issue #9698: Fixed 4 broken links

2018-02-05 Thread GitBox
thinksanky commented on issue #9698: Fixed 4 broken links
URL: https://github.com/apache/incubator-mxnet/pull/9698#issuecomment-363192917
 
 
   @eric-haibin-lin - please review and merge




[GitHub] thinksanky opened a new pull request #9698: Fixed 4 broken links

2018-02-05 Thread GitBox
thinksanky opened a new pull request #9698: Fixed 4 broken links
URL: https://github.com/apache/incubator-mxnet/pull/9698
 
 
   ## Description ##
   * Fixed the remaining broken links reported on Feb 4th 2018




[GitHub] piiswrong commented on a change in pull request #9662: Gluon PReLU, ELU, SELU, Swish

2018-02-05 Thread GitBox
piiswrong commented on a change in pull request #9662: Gluon PReLU, ELU, SELU, 
Swish
URL: https://github.com/apache/incubator-mxnet/pull/9662#discussion_r166084419
 
 

 ##
 File path: python/mxnet/gluon/nn/activations.py
 ##
 @@ -0,0 +1,215 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+# coding: utf-8
+# pylint: disable= arguments-differ
+"""Basic neural network layers."""
+__all__ = ['Activation', 'LeakyReLU', 'PReLU', 'ELU', 'SELU', 'Swish']
+
+from ... import initializer
+from ..block import HybridBlock
+
+
+class Activation(HybridBlock):
+r"""Applies an activation function to input.
+
+Parameters
+--
+activation : str
+Name of activation function to use.
+See :func:`~mxnet.ndarray.Activation` for available choices.
+
+
+Inputs:
+- **data**: input tensor with arbitrary shape.
+
+Outputs:
+- **out**: output tensor with the same shape as `data`.
+"""
+def __init__(self, activation, **kwargs):
+self._act_type = activation
+super(Activation, self).__init__(**kwargs)
+
+def _alias(self):
+return self._act_type
+
+def hybrid_forward(self, F, x):
+return F.Activation(x, act_type=self._act_type, name='fwd')
+
+def __repr__(self):
+s = '{name}({_act_type})'
+return s.format(name=self.__class__.__name__,
+**self.__dict__)
+
+
+class LeakyReLU(HybridBlock):
+r"""Leaky version of a Rectified Linear Unit.
+
+It allows a small gradient when the unit is not active
+
+.. math::
+
+f\left(x\right) = \left\{
+\begin{array}{lr}
+   \alpha x & : x \lt 0 \\
+  x & : x \geq 0 \\
+\end{array}
+\right.\\
+
+Parameters
+--
+alpha : float
+slope coefficient for the negative half axis. Must be >= 0.
+
+
+Inputs:
+- **data**: input tensor with arbitrary shape.
+
+Outputs:
+- **out**: output tensor with the same shape as `data`.
+"""
+def __init__(self, alpha, **kwargs):
+assert alpha >= 0, "Slope coefficient for LeakyReLU must be no less than 0."
+super(LeakyReLU, self).__init__(**kwargs)
+self._alpha = alpha
+
+def hybrid_forward(self, F, x):
+return F.LeakyReLU(x, act_type='leaky', slope=self._alpha, name='fwd')
+
+def __repr__(self):
+s = '{name}({alpha})'
+return s.format(name=self.__class__.__name__,
+alpha=self._alpha)
+
+
+class PReLU(HybridBlock):
+r"""Parametric leaky version of a Rectified Linear Unit.
+`"Delving Deep into Rectifiers" <https://arxiv.org/abs/1502.01852>`_ paper.
+
+It learns a gradient when the unit is not active
+
+.. math::
+
+f\left(x\right) = \left\{
+\begin{array}{lr}
+   \alpha x & : x \lt 0 \\
+  x & : x \geq 0 \\
+\end{array}
+\right.\\
+
+where alpha is a learned parameter.
+
+Parameters
+--
+alpha_initializer : Initializer
+Initializer for the `embeddings` matrix.
+
+
+Inputs:
+- **data**: input tensor with arbitrary shape.
+
+Outputs:
+- **out**: output tensor with the same shape as `data`.
+"""
+def __init__(self, alpha_initializer=initializer.Constant(0.25), *args):
+super(PReLU, self).__init__(*args)
+with self.name_scope():
+self.alpha = self.params.get('alpha', shape=(1,), init=alpha_initializer)
+
+def hybrid_forward(self, F, x, alpha):
+return F.LeakyReLU(x, gamma=alpha, act_type='prelu', name='fwd')
+
+def __repr__(self):
+s = '{name}'
 
 Review comment:
   Can we handle these trivial cases in base class?




[GitHub] piiswrong commented on issue #9681: Better Exception Handling for Operators

2018-02-05 Thread GitBox
piiswrong commented on issue #9681: Better Exception Handling for Operators
URL: https://github.com/apache/incubator-mxnet/pull/9681#issuecomment-363190218
 
 
   suppose an operator has three outputs x, y and z, and it raises an exception.
   then x.asnumpy() would raise an error.
   Then y.asnumpy() would raise the same error again.
   
   I think an error should only be raised once. After it's raised, it should be cleared from all arrays that are pointing to that error.
   
   This can be achieved by setting the object referenced by exception_ptr to an invalid value.
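The raise-once behaviour being proposed can be sketched with a small stand-in for the shared exception slot (hedged: `SharedError` and its method names are hypothetical illustrations, not engine API):

```python
class SharedError:
    """Hypothetical stand-in for the engine's shared exception slot:
    several output arrays reference one error, and the first access
    raises it and clears it for all of them."""
    def __init__(self, exc):
        self._exc = exc

    def check(self):                           # what asnumpy() would call
        if self._exc is not None:
            exc, self._exc = self._exc, None   # clear before raising
            raise exc

shared = SharedError(RuntimeError("op failed"))
x_err = y_err = shared          # x and y share the same slot

try:
    x_err.check()               # first access raises
except RuntimeError:
    pass
y_err.check()                   # slot already cleared: no second raise
```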




[GitHub] piiswrong commented on a change in pull request #9681: Better Exception Handling for Operators

2018-02-05 Thread GitBox
piiswrong commented on a change in pull request #9681: Better Exception 
Handling for Operators
URL: https://github.com/apache/incubator-mxnet/pull/9681#discussion_r166079802
 
 

 ##
 File path: src/engine/threaded_engine.h
 ##
 @@ -338,33 +346,46 @@ class ThreadedEngine : public Engine {
 #endif
 CallbackOnComplete callback = this->CreateCallback(
 ThreadedEngine::OnCompleteStatic, opr_block);
+CallbackOnComplete on_start_callback = this->CreateCallback(
 
 Review comment:
   I think this is unnecessary overhead. Call OnStart directly if possible




[GitHub] piiswrong closed pull request #9694: Remove '+=' in inception-resnet-v2

2018-02-05 Thread GitBox
piiswrong closed pull request #9694: Remove '+=' in inception-resnet-v2
URL: https://github.com/apache/incubator-mxnet/pull/9694
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git a/example/image-classification/symbols/inception-resnet-v2.py b/example/image-classification/symbols/inception-resnet-v2.py
index 5f313351ea..866d8106ba 100644
--- a/example/image-classification/symbols/inception-resnet-v2.py
+++ b/example/image-classification/symbols/inception-resnet-v2.py
@@ -48,7 +48,7 @@ def block35(net, input_num_channels, scale=1.0, with_act=True, act_type='relu',
 tower_out = ConvFactory(
 tower_mixed, input_num_channels, (1, 1), with_act=False)
 
-net += scale * tower_out
+net = net + scale * tower_out
 if with_act:
 act = mx.symbol.Activation(
 data=net, act_type=act_type, attr=mirror_attr)
@@ -65,7 +65,7 @@ def block17(net, input_num_channels, scale=1.0, with_act=True, act_type='relu',
 tower_mixed = mx.symbol.Concat(*[tower_conv, tower_conv1_2])
 tower_out = ConvFactory(
 tower_mixed, input_num_channels, (1, 1), with_act=False)
-net += scale * tower_out
+net = net + scale * tower_out
 if with_act:
 act = mx.symbol.Activation(
 data=net, act_type=act_type, attr=mirror_attr)
@@ -82,7 +82,7 @@ def block8(net, input_num_channels, scale=1.0, with_act=True, act_type='relu', m
 tower_mixed = mx.symbol.Concat(*[tower_conv, tower_conv1_2])
 tower_out = ConvFactory(
 tower_mixed, input_num_channels, (1, 1), with_act=False)
-net += scale * tower_out
+net = net + scale * tower_out
 if with_act:
 act = mx.symbol.Activation(
 data=net, act_type=act_type, attr=mirror_attr)


 




[incubator-mxnet] branch master updated: Remove '+=' in inception-resnet-v2 (#9694)

2018-02-05 Thread jxie
This is an automated email from the ASF dual-hosted git repository.

jxie pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/master by this push:
 new 921fbe9  Remove '+=' in inception-resnet-v2 (#9694)
921fbe9 is described below

commit 921fbe9cd7ebb4defef0f9c5bf1c9818ae1759f5
Author: solin319 
AuthorDate: Tue Feb 6 03:05:03 2018 +0800

Remove '+=' in inception-resnet-v2 (#9694)

Symbol does not support '+='.
```
def __iadd__(self, other):
raise NotImplementedForSymbol(self.__iadd__, '+=', other, 1)
```
---
 example/image-classification/symbols/inception-resnet-v2.py | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/example/image-classification/symbols/inception-resnet-v2.py b/example/image-classification/symbols/inception-resnet-v2.py
index 5f31335..866d810 100644
--- a/example/image-classification/symbols/inception-resnet-v2.py
+++ b/example/image-classification/symbols/inception-resnet-v2.py
@@ -48,7 +48,7 @@ def block35(net, input_num_channels, scale=1.0, with_act=True, act_type='relu',
 tower_out = ConvFactory(
 tower_mixed, input_num_channels, (1, 1), with_act=False)
 
-net += scale * tower_out
+net = net + scale * tower_out
 if with_act:
 act = mx.symbol.Activation(
 data=net, act_type=act_type, attr=mirror_attr)
@@ -65,7 +65,7 @@ def block17(net, input_num_channels, scale=1.0, with_act=True, act_type='relu',
 tower_mixed = mx.symbol.Concat(*[tower_conv, tower_conv1_2])
 tower_out = ConvFactory(
 tower_mixed, input_num_channels, (1, 1), with_act=False)
-net += scale * tower_out
+net = net + scale * tower_out
 if with_act:
 act = mx.symbol.Activation(
 data=net, act_type=act_type, attr=mirror_attr)
@@ -82,7 +82,7 @@ def block8(net, input_num_channels, scale=1.0, with_act=True, act_type='relu', m
 tower_mixed = mx.symbol.Concat(*[tower_conv, tower_conv1_2])
 tower_out = ConvFactory(
 tower_mixed, input_num_channels, (1, 1), with_act=False)
-net += scale * tower_out
+net = net + scale * tower_out
 if with_act:
 act = mx.symbol.Activation(
 data=net, act_type=act_type, attr=mirror_attr)

-- 
To stop receiving notification emails like this one, please contact
j...@apache.org.
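The reason `+=` fails on Symbol while `net = net + x` works can be reproduced with a tiny class that, like `mxnet.symbol.Symbol`, defines `__add__` but makes `__iadd__` raise (a hedged toy illustration, not the real Symbol):

```python
class ToySymbol:
    """Minimal mimic of Symbol's operator behaviour: plain addition
    builds a new graph node, while in-place addition is disabled."""
    def __init__(self, name):
        self.name = name

    def __add__(self, other):
        return ToySymbol("(%s + %s)" % (self.name, other.name))

    def __iadd__(self, other):
        raise TypeError("+= is not supported; use net = net + x instead")

net = ToySymbol("net")
net = net + ToySymbol("tower_out")   # fine: rebinds net to a new node
print(net.name)                      # -> (net + tower_out)
```

Since Python falls back to `__add__` only when `__iadd__` is absent, explicitly defining a raising `__iadd__` is what forces the `net = net + x` spelling in the fixed example code.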


[GitHub] piiswrong closed pull request #9697: fix NAG if multi_precision = true

2018-02-05 Thread GitBox
piiswrong closed pull request #9697: fix NAG if multi_precision = true
URL: https://github.com/apache/incubator-mxnet/pull/9697
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git a/python/mxnet/optimizer.py b/python/mxnet/optimizer.py
index d3eeba1b50..0856a7f894 100644
--- a/python/mxnet/optimizer.py
+++ b/python/mxnet/optimizer.py
@@ -129,10 +129,11 @@ def register(klass):
 assert(isinstance(klass, type))
 name = klass.__name__.lower()
 if name in Optimizer.opt_registry:
-warnings.warn('WARNING: New optimizer %s.%s is overriding existing '
-  'optimizer %s.%s', klass.__module__, klass.__name__,
-  Optimizer.opt_registry[name].__module__,
-  Optimizer.opt_registry[name].__name__)
+warnings.warn('WARNING: New optimizer %s.%s is overriding '
+  'existing optimizer %s.%s' %
+  (klass.__module__, klass.__name__,
+   Optimizer.opt_registry[name].__module__,
+   Optimizer.opt_registry[name].__name__))
 Optimizer.opt_registry[name] = klass
 return klass
 
@@ -892,7 +893,8 @@ def update(self, index, weight, grad, state):
 weight[:] += mom
 
 @register
-class NAG(SGD):
+@register
+class NAG(Optimizer):
 """Nesterov accelerated SGD.
 
 This optimizer updates each weight by::
@@ -900,10 +902,26 @@ class NAG(SGD):
 state = momentum * state + grad + wd * weight
 weight = weight - (lr * (grad + momentum * state))
 
-This optimizer accepts the same arguments as :class:`.SGD`.
+Parameters
+--
+momentum : float, optional
+   The momentum value.
+multi_precision: bool, optional
+   Flag to control the internal precision of the optimizer.
+   ``False`` results in using the same precision as the weights (default),
+   ``True`` makes internal 32-bit copy of the weights and applies gradients \
+   in 32-bit precision even if actual weights used in the model have lower precision. \
+   Turning this on can improve convergence and accuracy when training with float16.
 """
-def __init__(self, **kwargs):
+def __init__(self, momentum=0.0, **kwargs):
 super(NAG, self).__init__(**kwargs)
+self.momentum = momentum
+
+def create_state(self, index, weight):
+momentum = None
+if self.momentum != 0.0:
+momentum = zeros(weight.shape, weight.context, dtype=weight.dtype)
+return momentum
 
 def update(self, index, weight, grad, state):
 assert(isinstance(weight, NDArray))
diff --git a/tests/python/unittest/test_optimizer.py b/tests/python/unittest/test_optimizer.py
index 26ff48babc..f7ee6d0653 100644
--- a/tests/python/unittest/test_optimizer.py
+++ b/tests/python/unittest/test_optimizer.py
@@ -357,6 +357,118 @@ def test_std_sparse_sgd():
  w_stype='row_sparse', g_stype='row_sparse')
 
 
+class PyNAG(PySGD):
+def __init__(self, **kwargs):
+super(PyNAG, self).__init__(**kwargs)
+
+def create_state(self, index, weight):
+"""Create additional optimizer state: momentum
+
+Parameters
+--
+weight : NDArray
+The weight data
+
+"""
+momentum = None
+weight_master_copy = None
+do_multi_precision = self.multi_precision and weight.dtype == np.float16
+if do_multi_precision:
+if self.momentum != 0.0:
+momentum = mx.nd.zeros(weight.shape, weight.context, dtype=np.float32)
+weight_master_copy = array(weight, ctx=weight.context, dtype=np.float32)
+return (weight_master_copy, momentum)
+else:
+if self.momentum != 0.0:
+momentum = mx.nd.zeros(weight.shape, weight.context, dtype=weight.dtype)
+return momentum
+
+def create_state_multi_precision(self, index, weight):
+return self.create_state(index, weight)
+
+def update(self, index, weight, grad, state):
+"""Update the parameters.
+
+Parameters
+--
+index : int
+An unique integer key used to index the parameters
+
+weight : NDArray
+weight ndarray
+
+grad : NDArray
+grad ndarray
+
+state : NDArray or other objects returned by init_state
+The auxiliary state used in optimization.
+"""
+lr = self._get_lr(index)
+wd = self._get_wd(index)
+self._update_count(index)
+use_multi_precision = isinstance(state, list) or isinstance(state, tuple)
+if not use_multi_precision:
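The NAG rule quoted in the docstring above can be sketched in plain NumPy (hedged: a minimal single-tensor transcription of the two update equations, ignoring multi-precision, gradient rescaling, and clipping):

```python
import numpy as np

def nag_update(weight, grad, state, lr, momentum, wd):
    """One NAG step, transcribing the docstring's rule literally:
        state  = momentum * state + grad + wd * weight
        weight = weight - lr * (grad + momentum * state)
    """
    state = momentum * state + grad + wd * weight
    weight = weight - lr * (grad + momentum * state)
    return weight, state

w, s = np.array([1.0]), np.array([0.0])
w, s = nag_update(w, np.array([0.5]), s, lr=0.1, momentum=0.9, wd=0.0)
```

With `momentum=0` and `wd=0` this degenerates to plain SGD, which is a quick sanity check for any reimplementation.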

[GitHub] cjolivier01 commented on issue #9545: Profiling discussion

2018-02-05 Thread GitBox
cjolivier01 commented on issue #9545: Profiling discussion
URL: 
https://github.com/apache/incubator-mxnet/issues/9545#issuecomment-363178424
 
 
   I added a place for associated performance-related JIRA tasks: 
   
   
https://issues.apache.org/jira/secure/RapidBoard.jspa?rapidView=211=MXNET=planning.nodetail=visible=MXNET-10




[GitHub] cjolivier01 commented on issue #9625: sparse regression operators

2018-02-05 Thread GitBox
cjolivier01 commented on issue #9625: sparse regression operators
URL: https://github.com/apache/incubator-mxnet/pull/9625#issuecomment-363174023
 
 
   Looks like it compiles now :)




[GitHub] cjolivier01 commented on issue #9545: Profiling discussion

2018-02-05 Thread GitBox
cjolivier01 commented on issue #9545: Profiling discussion
URL: 
https://github.com/apache/incubator-mxnet/issues/9545#issuecomment-363168613
 
 
   If you have better code to determine the number of "real" cores, I will use it.
   Right now, we use Intel OMP (at least when building with cmake on Linux) and it reports that it has bound the first N/2 threads to the respective "real" cores.
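One way to count physical (non-hyperthreaded) cores on Linux, as opposed to what `os.cpu_count()` reports, is to parse `/proc/cpuinfo`. The sketch below runs the parser on an inline sample so it stays self-contained (hedged: a real system would read the actual file, and non-Linux platforms need a different mechanism entirely):

```python
def count_physical_cores(cpuinfo_text):
    """Count distinct (physical id, core id) pairs in /proc/cpuinfo
    content; each pair is one real core, shared by its hyperthreads."""
    cores = set()
    physical_id = None
    for line in cpuinfo_text.splitlines():
        key, _, value = line.partition(":")
        key = key.strip()
        if key == "physical id":
            physical_id = value.strip()
        elif key == "core id":
            cores.add((physical_id, value.strip()))
    return len(cores)

# sample content for a 1-socket, 2-core, 4-thread machine
SAMPLE = """\
processor : 0
physical id : 0
core id : 0
processor : 1
physical id : 0
core id : 1
processor : 2
physical id : 0
core id : 0
processor : 3
physical id : 0
core id : 1
"""
print(count_physical_cores(SAMPLE))  # -> 2 real cores for 4 logical CPUs
```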



