[GitHub] haojin2 commented on a change in pull request #10550: [MXNET-320] Support elemwise_add/sub/max/min/hypot between dense and csr tensors

2018-04-13 Thread GitBox
haojin2 commented on a change in pull request #10550: [MXNET-320] Support 
elemwise_add/sub/max/min/hypot between dense and csr tensors
URL: https://github.com/apache/incubator-mxnet/pull/10550#discussion_r181541780
 
 

 ##
 File path: src/operator/elemwise_op_common.h
 ##
 @@ -73,6 +73,12 @@ inline bool ElemwiseStorageAttr(const nnvm::NodeAttrs& attrs,
     dispatched = storage_type_assign(out_attrs, kCSRStorage,
                                      dispatch_mode, dispatch_ex);
   }
 
 Review comment:
   I think I mentioned in the comments that there are already some unit tests written for this; please check that part out.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] eric-haibin-lin commented on a change in pull request #10550: [MXNET-320] Support elemwise_add/sub/max/min/hypot between dense and csr tensors

2018-04-13 Thread GitBox
eric-haibin-lin commented on a change in pull request #10550: [MXNET-320] 
Support elemwise_add/sub/max/min/hypot between dense and csr tensors
URL: https://github.com/apache/incubator-mxnet/pull/10550#discussion_r181541724
 
 

 ##
 File path: src/operator/elemwise_op_common.h
 ##
 @@ -73,6 +73,12 @@ inline bool ElemwiseStorageAttr(const nnvm::NodeAttrs& attrs,
     dispatched = storage_type_assign(out_attrs, kCSRStorage,
                                      dispatch_mode, dispatch_ex);
   }
 
 Review comment:
   As usual, please add unit tests




[GitHub] eric-haibin-lin commented on a change in pull request #10550: [MXNET-320] Support elemwise_add/sub/max/min/hypot between dense and csr tensors

2018-04-13 Thread GitBox
eric-haibin-lin commented on a change in pull request #10550: [MXNET-320] 
Support elemwise_add/sub/max/min/hypot between dense and csr tensors
URL: https://github.com/apache/incubator-mxnet/pull/10550#discussion_r181541551
 
 

 ##
 File path: src/operator/elemwise_op_common.h
 ##
 @@ -73,6 +73,12 @@ inline bool ElemwiseStorageAttr(const nnvm::NodeAttrs& attrs,
     dispatched = storage_type_assign(out_attrs, kCSRStorage,
                                      dispatch_mode, dispatch_ex);
   }
 
 Review comment:
   As usual, please update the documentation for the sparse operators during 
registration




[GitHub] eric-haibin-lin commented on a change in pull request #10550: [MXNET-320] Support elemwise_add/sub/max/min/hypot between dense and csr tensors

2018-04-13 Thread GitBox
eric-haibin-lin commented on a change in pull request #10550: [MXNET-320] 
Support elemwise_add/sub/max/min/hypot between dense and csr tensors
URL: https://github.com/apache/incubator-mxnet/pull/10550#discussion_r181541710
 
 

 ##
 File path: src/operator/elemwise_op_common.h
 ##
 @@ -73,6 +73,12 @@ inline bool ElemwiseStorageAttr(const nnvm::NodeAttrs& attrs,
     dispatched = storage_type_assign(out_attrs, kCSRStorage,
                                      dispatch_mode, dispatch_ex);
   }
+  if (!dispatched && (((*in_attrs)[0] == kDefaultStorage && (*in_attrs)[1] == kCSRStorage) ||
+                      ((*in_attrs)[0] == kCSRStorage && (*in_attrs)[1] == kDefaultStorage))) {
+    // dense, csr -> csr
+    dispatched = storage_type_assign(out_attrs, kDefaultStorage,
 
 Review comment:
   1. Changing `ElemwiseStorageAttr` will affect ALL operators using this function.
   2. Unary ops also use this function, so there's no guarantee that len(in_attrs) > 1.




[GitHub] indhub closed issue #10539: how to save 'gluon net' after calling hybirdize?

2018-04-13 Thread GitBox
indhub closed issue #10539: how to save 'gluon net' after calling hybirdize?
URL: https://github.com/apache/incubator-mxnet/issues/10539
 
 
   




[GitHub] indhub commented on issue #10539: how to save 'gluon net' after calling hybirdize?

2018-04-13 Thread GitBox
indhub commented on issue #10539: how to save 'gluon net' after calling 
hybirdize?
URL: https://github.com/apache/incubator-mxnet/issues/10539#issuecomment-381305688
 
 
   Please use the [user forum](https://discuss.mxnet.io/) for how-to questions; you will get a quicker response there. When you post, include details such as the stack trace and a minimal reproducible example to get the best answers. Thanks!




[GitHub] guoswang commented on issue #10521: How can I get the output of the net's last layer- symbol.LinearRegressionOutput?

2018-04-13 Thread GitBox
guoswang commented on issue #10521: How can I get the output of the net's last 
layer- symbol.LinearRegressionOutput?
URL: https://github.com/apache/incubator-mxnet/issues/10521#issuecomment-381304006
 
 
   Why did you close the question? I haven't got an answer that solves it yet.
   @nswamy




[GitHub] eric-haibin-lin commented on issue #10366: fix bug in sgd

2018-04-13 Thread GitBox
eric-haibin-lin commented on issue #10366: fix bug in sgd
URL: https://github.com/apache/incubator-mxnet/pull/10366#issuecomment-380550452
 
 
   Tracked in #10509 




[GitHub] fevemania commented on issue #10541: Keep illegal hardware instruction. How can I build mxnet from source without avx support?

2018-04-13 Thread GitBox
fevemania commented on issue #10541: Keep illegal hardware instruction. How can 
I build mxnet from source without avx support?
URL: https://github.com/apache/incubator-mxnet/issues/10541#issuecomment-381303237
 
 
   And after that, my other virtualenv, which installs mxnet-cu80==1.1.0 and was originally stable, also hits `illegal hardware instruction`.






[GitHub] haojin2 commented on a change in pull request #10550: [MXNET-320] Support elemwise_add/sub/max/min/hypot between dense and csr tensors

2018-04-13 Thread GitBox
haojin2 commented on a change in pull request #10550: [MXNET-320] Support 
elemwise_add/sub/max/min/hypot between dense and csr tensors
URL: https://github.com/apache/incubator-mxnet/pull/10550#discussion_r181541934
 
 

 ##
 File path: src/operator/elemwise_op_common.h
 ##
 @@ -73,6 +73,12 @@ inline bool ElemwiseStorageAttr(const nnvm::NodeAttrs& attrs,
     dispatched = storage_type_assign(out_attrs, kCSRStorage,
                                      dispatch_mode, dispatch_ex);
   }
+  if (!dispatched && (((*in_attrs)[0] == kDefaultStorage && (*in_attrs)[1] == kCSRStorage) ||
+                      ((*in_attrs)[0] == kCSRStorage && (*in_attrs)[1] == kDefaultStorage))) {
+    // dense, csr -> csr
+    dispatched = storage_type_assign(out_attrs, kDefaultStorage,
 
 Review comment:
   How about adding a check on the number of in_attrs and out_attrs?
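For illustration, the arity guard being proposed could look like the following Python model of the storage-type inference (the real implementation is C++ in `elemwise_op_common.h`; the names and the exact rules below are simplified assumptions, not MXNet's actual code):

```python
K_DEFAULT = "default"  # stands in for kDefaultStorage
K_CSR = "csr"          # stands in for kCSRStorage

def infer_binary_storage(in_attrs):
    """Pick an output storage type for an elemwise op, or None if no rule applies.

    The len(in_attrs) guard is the point under review: the same attr
    function is shared with unary ops, so indexing in_attrs[1]
    unconditionally would be unsafe.
    """
    if len(in_attrs) != 2:
        return None  # unary or other arity: fall back to a generic path
    a, b = in_attrs
    if a == K_CSR and b == K_CSR:
        return K_CSR       # csr, csr -> csr
    if {a, b} == {K_DEFAULT, K_CSR}:
        return K_DEFAULT   # dense + csr -> dense output
    if a == K_DEFAULT and b == K_DEFAULT:
        return K_DEFAULT
    return None
```

With a guard like this, the dense/csr branch can never index past the end of `in_attrs` when the function is reused by unary operators.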




[GitHub] eric-haibin-lin opened a new pull request #10551: Add "next sections" to sparse tutorials

2018-04-13 Thread GitBox
eric-haibin-lin opened a new pull request #10551: Add "next sections" to sparse 
tutorials
URL: https://github.com/apache/incubator-mxnet/pull/10551
 
 
   ## Description ##
   (Brief description on what this PR is about)
   
   ## Checklist ##
   ### Essentials ###
   Please feel free to remove inapplicable items for your PR.
   - [ ] The PR title starts with [MXNET-$JIRA_ID], where $JIRA_ID refers to 
the relevant [JIRA issue](https://issues.apache.org/jira/projects/MXNET/issues) 
created (except PRs with tiny changes)
   - [ ] Changes are complete (i.e. I finished coding on this PR)
   - [ ] All changes have test coverage:
   - Unit tests are added for small changes to verify correctness (e.g. adding 
a new operator)
   - Nightly tests are added for complicated/long-running ones (e.g. changing 
distributed kvstore)
   - Build tests will be added for build configuration changes (e.g. adding a 
new build option with NCCL)
   - [ ] Code is well-documented: 
   - For user-facing API changes, API doc string has been updated. 
   - For new C++ functions in header files, their functionalities and arguments 
are documented. 
   - For new examples, README.md is added to explain what the example does, the source of the dataset, expected performance on the test set, and a reference to the original paper if applicable
   - Check the API doc at 
http://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html
   - [ ] To the best of my knowledge, examples are either not affected by this change, or have been fixed to be compatible with this change
   
   ### Changes ###
   - [ ] Feature1, tests, (and when applicable, API doc)
   - [ ] Feature2, tests, (and when applicable, API doc)
   
   ## Comments ##
   - If this change is a backward incompatible change, why must this change be 
made.
   - Interesting edge cases to note here
   




[GitHub] Roshrini commented on a change in pull request #10546: [MXNET-319] Add Autocomplete Macros in Scala

2018-04-13 Thread GitBox
Roshrini commented on a change in pull request #10546: [MXNET-319] Add 
Autocomplete Macros in Scala
URL: https://github.com/apache/incubator-mxnet/pull/10546#discussion_r181501517
 
 

 ##
 File path: 
scala-package/macros/src/main/scala/org/apache/mxnet/SymbolBaseMacro.scala
 ##
 @@ -0,0 +1,224 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.mxnet
+
+import scala.collection.mutable.{HashMap, ListBuffer}
+import org.apache.mxnet.init.Base._
+
+
 
 Review comment:
   Add comments explaining the functionality of this macro




[GitHub] Roshrini commented on a change in pull request #10546: [MXNET-319] Add Autocomplete Macros in Scala

2018-04-13 Thread GitBox
Roshrini commented on a change in pull request #10546: [MXNET-319] Add 
Autocomplete Macros in Scala
URL: https://github.com/apache/incubator-mxnet/pull/10546#discussion_r181501607
 
 

 ##
 File path: 
scala-package/macros/src/main/scala/org/apache/mxnet/SymbolBaseMacro.scala
 ##
 @@ -0,0 +1,224 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.mxnet
+
+import scala.collection.mutable.{HashMap, ListBuffer}
+import org.apache.mxnet.init.Base._
+
+
+private[mxnet] object SymbolDocMacros {
+
+  case class SymbolFunction(handle: SymbolHandle, paramStr: String)
+
+  def addDefs() : Unit = {
+val baseDir = System.getProperty("user.dir")
+val targetPath = baseDir + 
"/core/src/main/scala/org/apache/mxnet/SymbolBase.scala"
+SEImpl(targetPath)
+  }
+
+  def SEImpl(FILE_PATH : String) : Unit = {
+var symbolFunctions: List[SymbolFunction] = initSymbolModule()
+import java.io._
+val pw = new PrintWriter(new File(FILE_PATH))
+// scalastyle:off
+pw.write("/*\n* Licensed to the Apache Software Foundation (ASF) under one 
or more\n* contributor license agreements.  See the NOTICE file distributed 
with\n* this work for additional information regarding copyright ownership.\n* 
The ASF licenses this file to You under the Apache License, Version 2.0\n* (the 
\"License\"); you may not use this file except in compliance with\n* the 
License.  You may obtain a copy of the License at\n*\n*
http://www.apache.org/licenses/LICENSE-2.0\n*\n* Unless required by applicable 
law or agreed to in writing, software\n* distributed under the License is 
distributed on an \"AS IS\" BASIS,\n* WITHOUT WARRANTIES OR CONDITIONS OF ANY 
KIND, either express or implied.\n* See the License for the specific language 
governing permissions and\n* limitations under the License.\n*/\n\npackage 
org.apache.mxnet\n")
+// scalastyle:on
+pw.write(s"\ntrait SymbolBase {\n\n")
+pw.write(s"  // scalastyle:off\n")
+symbolFunctions = symbolFunctions.distinct
+for (ele <- symbolFunctions) {
+  val temp = ele.paramStr + "\n\n"
+  pw.write(temp)
+}
+pw.write(s"\n\n}")
+pw.close()
+  }
+
+
+  /*
+Code copies from the SymbolMacros Class
+   */
+  private def initSymbolModule(): List[SymbolFunction] = {
+var opNames = ListBuffer.empty[String]
+_LIB.mxListAllOpNames(opNames)
+opNames = opNames.distinct
+val result : ListBuffer[SymbolFunction] = ListBuffer[SymbolFunction]()
+opNames.foreach(opName => {
+  val opHandle = new RefLong
+  // printf(opName)
+  _LIB.nnGetOpHandle(opName, opHandle)
+  makeAtomicSymbolFunction(opHandle.value, opName, result)
+})
+
+result.toList
+  }
+
+  private def makeAtomicSymbolFunction(handle: SymbolHandle,
+   aliasName: String, result : 
ListBuffer[SymbolFunction])
+  : Unit = {
+val name = new RefString
+val desc = new RefString
+val keyVarNumArgs = new RefString
+val returnType = new RefString
+val numArgs = new RefInt
+val argNames = ListBuffer.empty[String]
+val argTypes = ListBuffer.empty[String]
+val argDescs = ListBuffer.empty[String]
+
+_LIB.mxSymbolGetAtomicSymbolInfo(
+  handle, name, desc, numArgs, argNames, argTypes, argDescs, 
keyVarNumArgs, returnType)
+
+if (name.value.charAt(0) == '_') {
+  // Internal function
+} else {
+  val paramStr =
+traitgen(name.value, desc.value, argNames, argTypes, argDescs, 
returnType.value)
+  val extraDoc: String = if (keyVarNumArgs.value != null && 
keyVarNumArgs.value.length > 0) {
+s"This function support variable length of positional input 
(${keyVarNumArgs.value})."
+  } else {
+""
+  }
+  result +=  SymbolFunction(handle, paramStr)
+}
+  }
+
+
+  def traitgen(functionName : String,
+   functionDesc : String,
+   argNames : Seq[String],
+   argTypes : Seq[String],
+   argDescs : Seq[String],
+   returnType : String) : String = {
+val desc = functionDesc.split("\n") map { currStr =>
+  s"  * $currStr"
+}
+val params =
+  (argNames zip argTypes zip argDescs) 

[GitHub] Roshrini commented on a change in pull request #10546: [MXNET-319] Add Autocomplete Macros in Scala

2018-04-13 Thread GitBox
Roshrini commented on a change in pull request #10546: [MXNET-319] Add 
Autocomplete Macros in Scala
URL: https://github.com/apache/incubator-mxnet/pull/10546#discussion_r181501563
 
 

 ##
 File path: 
scala-package/macros/src/main/scala/org/apache/mxnet/SymbolBaseMacro.scala
 ##
 @@ -0,0 +1,224 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.mxnet
+
+import scala.collection.mutable.{HashMap, ListBuffer}
+import org.apache.mxnet.init.Base._
+
+
+private[mxnet] object SymbolDocMacros {
+
+  case class SymbolFunction(handle: SymbolHandle, paramStr: String)
+
+  def addDefs() : Unit = {
+val baseDir = System.getProperty("user.dir")
+val targetPath = baseDir + 
"/core/src/main/scala/org/apache/mxnet/SymbolBase.scala"
+SEImpl(targetPath)
+  }
+
+  def SEImpl(FILE_PATH : String) : Unit = {
+var symbolFunctions: List[SymbolFunction] = initSymbolModule()
+import java.io._
+val pw = new PrintWriter(new File(FILE_PATH))
+// scalastyle:off
+pw.write("/*\n* Licensed to the Apache Software Foundation (ASF) under one 
or more\n* contributor license agreements.  See the NOTICE file distributed 
with\n* this work for additional information regarding copyright ownership.\n* 
The ASF licenses this file to You under the Apache License, Version 2.0\n* (the 
\"License\"); you may not use this file except in compliance with\n* the 
License.  You may obtain a copy of the License at\n*\n*
http://www.apache.org/licenses/LICENSE-2.0\n*\n* Unless required by applicable 
law or agreed to in writing, software\n* distributed under the License is 
distributed on an \"AS IS\" BASIS,\n* WITHOUT WARRANTIES OR CONDITIONS OF ANY 
KIND, either express or implied.\n* See the License for the specific language 
governing permissions and\n* limitations under the License.\n*/\n\npackage 
org.apache.mxnet\n")
+// scalastyle:on
+pw.write(s"\ntrait SymbolBase {\n\n")
+pw.write(s"  // scalastyle:off\n")
+symbolFunctions = symbolFunctions.distinct
+for (ele <- symbolFunctions) {
+  val temp = ele.paramStr + "\n\n"
+  pw.write(temp)
+}
+pw.write(s"\n\n}")
+pw.close()
+  }
+
+
+  /*
+Code copies from the SymbolMacros Class
+   */
+  private def initSymbolModule(): List[SymbolFunction] = {
+var opNames = ListBuffer.empty[String]
+_LIB.mxListAllOpNames(opNames)
+opNames = opNames.distinct
+val result : ListBuffer[SymbolFunction] = ListBuffer[SymbolFunction]()
+opNames.foreach(opName => {
+  val opHandle = new RefLong
+  // printf(opName)
+  _LIB.nnGetOpHandle(opName, opHandle)
+  makeAtomicSymbolFunction(opHandle.value, opName, result)
+})
+
+result.toList
+  }
+
+  private def makeAtomicSymbolFunction(handle: SymbolHandle,
+   aliasName: String, result : 
ListBuffer[SymbolFunction])
+  : Unit = {
+val name = new RefString
+val desc = new RefString
+val keyVarNumArgs = new RefString
+val returnType = new RefString
+val numArgs = new RefInt
+val argNames = ListBuffer.empty[String]
+val argTypes = ListBuffer.empty[String]
+val argDescs = ListBuffer.empty[String]
+
+_LIB.mxSymbolGetAtomicSymbolInfo(
+  handle, name, desc, numArgs, argNames, argTypes, argDescs, 
keyVarNumArgs, returnType)
+
+if (name.value.charAt(0) == '_') {
+  // Internal function
+} else {
+  val paramStr =
+traitgen(name.value, desc.value, argNames, argTypes, argDescs, 
returnType.value)
+  val extraDoc: String = if (keyVarNumArgs.value != null && 
keyVarNumArgs.value.length > 0) {
+s"This function support variable length of positional input 
(${keyVarNumArgs.value})."
+  } else {
+""
+  }
+  result +=  SymbolFunction(handle, paramStr)
+}
+  }
+
+
+  def traitgen(functionName : String,
+   functionDesc : String,
+   argNames : Seq[String],
+   argTypes : Seq[String],
+   argDescs : Seq[String],
+   returnType : String) : String = {
+val desc = functionDesc.split("\n") map { currStr =>
+  s"  * $currStr"
+}
+val params =
+  (argNames zip argTypes zip argDescs) 

[GitHub] Roshrini commented on issue #10543: Failed to build from source when set USE_CPP_PACKAGE = 1, fatal error C1083: unabel to open file: “mxnet-cpp/op.h”: No such file or directory

2018-04-13 Thread GitBox
Roshrini commented on issue #10543: Failed to build from source when set 
USE_CPP_PACKAGE = 1, fatal error C1083: unabel to open file: “mxnet-cpp/op.h”: 
No such file or directory
URL: https://github.com/apache/incubator-mxnet/issues/10543#issuecomment-381215800
 
 
   @nswamy Can you please add the labels CPP package and Installation?




[GitHub] zhanghang1989 opened a new issue #10544: name_scope/prefix doesn't work

2018-04-13 Thread GitBox
zhanghang1989 opened a new issue #10544: name_scope/prefix doesn't work
URL: https://github.com/apache/incubator-mxnet/issues/10544
 
 
   ## Description
   
   name_scope/prefix doesn't work
   @piiswrong 
   
   ## Minimum reproducible example
   
   ```python
   import gluonvision
   net = gluonvision.model_zoo.FCN(10)
   net2 = gluonvision.model_zoo.FCN(10)
   net.save_params('test.params') 
   net2.load_params('test.params')
   ```
   
   ## Error Message:
   (Paste the complete error message, including stack trace.)
   
   ```bash
   ---------------------------------------------------------------------------
   TypeError                                 Traceback (most recent call last)
   <ipython-input> in <module>()
         4 net2 = gluonvision.model_zoo.FCN(10)
         5 net.save_params('test.params')
   ----> 6 net2.load_params('test.params')
   
   
~/anaconda3/lib/python3.6/site-packages/mxnet-1.2.0-py3.6.egg/mxnet/gluon/block.py in load_params(self, filename, ctx, allow_missing, ignore_extra)
   316 """
       317         self.collect_params().load(filename, ctx, allow_missing, ignore_extra,
   --> 318                                    self.prefix)
   319 
   320 def register_child(self, block):
   
   
~/anaconda3/lib/python3.6/site-packages/mxnet-1.2.0-py3.6.egg/mxnet/gluon/parameter.py in load(self, filename, ctx, allow_missing, ignore_extra, restore_prefix)
       775                 "Parameter '%s' is missing in file '%s', which contains parameters: %s. " \
       776                 "Please make sure source and target networks have the same prefix."%(
   --> 777                     name[lprefix:], filename, _brief_print_list(arg_dict.keys()))
   778 for name in arg_dict:
   779 if name not in self._params:
   
   
~/anaconda3/lib/python3.6/site-packages/mxnet-1.2.0-py3.6.egg/mxnet/gluon/parameter.py in _brief_print_list(lst, limit)
   502 """Print at most `limit` elements of list."""
   503 if len(lst) > limit:
   --> 504         return _brief_print_list(lst[:limit//2], limit) + ', ..., ' + \
       505                _brief_print_list(lst[-limit//2:], limit)
   506 return ', '.join(["'%s'"%str(i) for i in lst])
   
   TypeError: 'dict_keys' object is not subscriptable
   ```
   
   ## other output
   ```python
   import mxnet as mx
   params = mx.nd.load('test.params')
   for key, val in params.items():
   print (key)  
   ```
   Terminal output:
   ```bash
   ..
   fcn0__fcnhead0_batchnorm0_gamma
   fcn0__fcnhead0_batchnorm0_beta
   fcn0__fcnhead0_batchnorm0_running_mean
   fcn0__fcnhead0_batchnorm0_running_var
   fcn0__fcnhead0_conv0_weight
   fcn0__fcnhead0_conv0_bias
   fcn0__fcnhead0_batchnorm1_gamma
   fcn0__fcnhead0_batchnorm1_beta
   fcn0__fcnhead0_batchnorm1_running_mean
   fcn0__fcnhead0_batchnorm1_running_var
   fcn0__fcnhead0_conv1_weight
   fcn0__fcnhead0_conv1_bias
   ```
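The `TypeError` in the trace above comes from slicing a `dict_keys` view, which does not support `[:n]`. A minimal sketch of the helper with the usual fix (materializing the view with `list(...)` first) might look like this; it mirrors `_brief_print_list` for illustration and is not the actual patch:

```python
def brief_print_list(lst, limit=9):
    """Print at most `limit` elements of a list-like.

    Converting to a real list first avoids the
    "'dict_keys' object is not subscriptable" error above.
    """
    lst = list(lst)  # dict views don't support slicing
    if len(lst) > limit:
        return (brief_print_list(lst[:limit // 2], limit) + ', ..., ' +
                brief_print_list(lst[-limit // 2:], limit))
    return ', '.join("'%s'" % str(i) for i in lst)

params = {'fcn0_conv%d_weight' % i: None for i in range(12)}
print(brief_print_list(params.keys()))  # works even though keys() is a view
```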




[GitHub] bradcar commented on issue #9662: Gluon PReLU, ELU, SELU, Swish

2018-04-13 Thread GitBox
bradcar commented on issue #9662: Gluon PReLU, ELU, SELU, Swish
URL: https://github.com/apache/incubator-mxnet/pull/9662#issuecomment-381225149
 
 
   @piiswrong @szha What is the status of #9662 and PReLU? When I naively put PReLU into a hybrid block (mxnet 1.2.0) and look at the source (activations.py), it seems that PReLU only has one learnable alpha per layer. Shouldn't each 'neuron' have its own learnable alpha?




[GitHub] marcoabreu commented on issue #10545: [WIP] Add NEWS and README

2018-04-13 Thread GitBox
marcoabreu commented on issue #10545: [WIP] Add NEWS and README
URL: https://github.com/apache/incubator-mxnet/pull/10545#issuecomment-381260926
 
 
   @KellenSunderland @larroy Shall we mention that there's now a new way to build for ARM and Jetson, and that we output wheels?




[GitHub] anirudh2290 opened a new pull request #10547: Remove empty file from examples

2018-04-13 Thread GitBox
anirudh2290 opened a new pull request #10547: Remove empty file from examples
URL: https://github.com/apache/incubator-mxnet/pull/10547
 
 
   ## Description ##
   This file fails the RAT license check; removing it since it is empty.
   
   ## Checklist ##
   ### Essentials ###
   Please feel free to remove inapplicable items for your PR.
   - [ ] The PR title starts with [MXNET-$JIRA_ID], where $JIRA_ID refers to 
the relevant [JIRA issue](https://issues.apache.org/jira/projects/MXNET/issues) 
created (except PRs with tiny changes)
   - [ ] Changes are complete (i.e. I finished coding on this PR)
   - [ ] All changes have test coverage:
   - Unit tests are added for small changes to verify correctness (e.g. adding 
a new operator)
   - Nightly tests are added for complicated/long-running ones (e.g. changing 
distributed kvstore)
   - Build tests will be added for build configuration changes (e.g. adding a 
new build option with NCCL)
   - [ ] Code is well-documented: 
   - For user-facing API changes, API doc string has been updated. 
   - For new C++ functions in header files, their functionalities and arguments 
are documented. 
   - For new examples, README.md is added to explain what the example does, the source of the dataset, expected performance on the test set, and a reference to the original paper if applicable
   - Check the API doc at 
http://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html
   - [ ] To the best of my knowledge, examples are either not affected by this change, or have been fixed to be compatible with this change
   
   ### Changes ###
   - [ ] Feature1, tests, (and when applicable, API doc)
   - [ ] Feature2, tests, (and when applicable, API doc)
   
   ## Comments ##
   - If this change is a backward incompatible change, why must this change be 
made.
   - Interesting edge cases to note here
   




[GitHub] anirudhacharya commented on a change in pull request #10545: [WIP] Add NEWS and README

2018-04-13 Thread GitBox
anirudhacharya commented on a change in pull request #10545: [WIP] Add NEWS and 
README
URL: https://github.com/apache/incubator-mxnet/pull/10545#discussion_r181503057
 
 

 ##
 File path: NEWS.md
 ##
 @@ -1,5 +1,126 @@
 MXNet Change Log
 
+## 1.2.0
+### New Features - Added Scala Inference APIs
+- Implemented new [Scala Inference 
APIs](https://cwiki.apache.org/confluence/display/MXNET/MXNetScalaInferenceAPI) 
which offer an easy-to-use, and Scala Idiomatic and thread-safe high level APIs 
for performing predictions with deep learning models trained with MXNet 
(#9678). Implemented new ImageClassifier class provides APIs for classification 
tasks on a Java BufferedImage using a pre-trained model you provide (#10054). 
Implemented new ObjectDetector class provides APIs for object and boundary 
detections on a Java BufferedImage using a pre-trained model you provide 
(#10229).
+
+### New Features - Added module to import ONNX models into MXNet
+- Implemented new ONNX module in MXNet offers an easy to use API to import 
ONNX models into MXNet's symbolic interface (#9963). Checkout the example on 
how you could use this API to import ONNX models and perform inference on 
MXNet. 
 
 Review comment:
   And in the "Checkout the example ..." part, the example should link to one of these two: https://github.com/apache/incubator-mxnet/blob/master/example/onnx/super_resolution.py or https://github.com/apache/incubator-mxnet/blob/master/docs/tutorials/onnx/super_resolution.md




[GitHub] anirudhacharya commented on a change in pull request #10545: [WIP] Add NEWS and README

2018-04-13 Thread GitBox
anirudhacharya commented on a change in pull request #10545: [WIP] Add NEWS and 
README
URL: https://github.com/apache/incubator-mxnet/pull/10545#discussion_r181503799
 
 

 ##
 File path: NEWS.md
 ##
 @@ -1,5 +1,126 @@
 MXNet Change Log
 
+## 1.2.0
+### New Features - Added Scala Inference APIs
+- Implemented new [Scala Inference 
APIs](https://cwiki.apache.org/confluence/display/MXNET/MXNetScalaInferenceAPI) 
which offer an easy-to-use, and Scala Idiomatic and thread-safe high level APIs 
for performing predictions with deep learning models trained with MXNet 
(#9678). Implemented new ImageClassifier class provides APIs for classification 
tasks on a Java BufferedImage using a pre-trained model you provide (#10054). 
Implemented new ObjectDetector class provides APIs for object and boundary 
detections on a Java BufferedImage using a pre-trained model you provide 
(#10229).
+
+### New Features - Added module to import ONNX models into MXNet
+- Implemented new ONNX module in MXNet offers an easy to use API to import 
ONNX models into MXNet's symbolic interface (#9963). Checkout the example on 
how you could use this API to import ONNX models and perform inference on 
MXNet. 
 
 Review comment:
   link the API to this - 
https://cwiki.apache.org/confluence/display/MXNET/ONNX-MXNet+API+Design




[GitHub] anirudhacharya commented on a change in pull request #10545: [WIP] Add NEWS and README

2018-04-13 Thread GitBox
anirudhacharya commented on a change in pull request #10545: [WIP] Add NEWS and 
README
URL: https://github.com/apache/incubator-mxnet/pull/10545#discussion_r181499542
 
 

 ##
 File path: NEWS.md
 ##
 @@ -1,5 +1,126 @@
 MXNet Change Log
 
+## 1.2.0
+### New Features - Added Scala Inference APIs
+- Implemented new [Scala Inference 
APIs](https://cwiki.apache.org/confluence/display/MXNET/MXNetScalaInferenceAPI) 
which offer an easy-to-use, and Scala Idiomatic and thread-safe high level APIs 
for performing predictions with deep learning models trained with MXNet 
(#9678). Implemented new ImageClassifier class provides APIs for classification 
tasks on a Java BufferedImage using a pre-trained model you provide (#10054). 
Implemented new ObjectDetector class provides APIs for object and boundary 
detections on a Java BufferedImage using a pre-trained model you provide 
(#10229).
+
+### New Features - Added module to import ONNX models into MXNet
+- Implemented new ONNX module in MXNet offers an easy to use API to import 
ONNX models into MXNet's symbolic interface (#9963). Checkout the example on 
how you could use this API to import ONNX models and perform inference on 
MXNet. 
 
 Review comment:
   grammar mistake. It should be - "Implemented new ONNX module in **MXNet. It 
offers** an easy ...".




[GitHub] jaredleekatzman opened a new pull request #10548: Fix bug in mx.symbol.Group

2018-04-13 Thread GitBox
jaredleekatzman opened a new pull request #10548: Fix bug in mx.symbol.Group
URL: https://github.com/apache/incubator-mxnet/pull/10548
 
 
   
   ## Description ##
   Providing mx.symbol.Group an empty list causes a segmentation fault. This 
fix adds a check for empty arrays. 
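A minimal sketch of the guard this fix describes, in plain Python with a hypothetical `group` function standing in for the real binding (the actual code hands symbol handles to the C API):

```python
def group(symbols):
    # Sketch of the added check: an empty list previously reached the
    # C API, where the empty handle array caused a segmentation fault.
    # Failing fast with a Python exception is the fix's intent.
    if not symbols:
        raise ValueError("Group requires at least one symbol")
    # The real implementation calls into the backend here; we just
    # return the grouped symbols to keep the sketch self-contained.
    return tuple(symbols)
```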
   
   ## Checklist ##
   ### Essentials ###
   Please feel free to remove inapplicable items for your PR.
   - [ ] The PR title starts with [MXNET-$JIRA_ID], where $JIRA_ID refers to 
the relevant [JIRA issue](https://issues.apache.org/jira/projects/MXNET/issues) 
created (except PRs with tiny changes)
   - [ ] Changes are complete (i.e. I finished coding on this PR)
   - [ ] All changes have test coverage:
   - Unit tests are added for small changes to verify correctness (e.g. adding 
a new operator)
   - Nightly tests are added for complicated/long-running ones (e.g. changing 
distributed kvstore)
   - Build tests will be added for build configuration changes (e.g. adding a 
new build option with NCCL)
   - [ ] Code is well-documented: 
   - For user-facing API changes, API doc string has been updated. 
   - For new C++ functions in header files, their functionalities and arguments 
are documented. 
   - For new examples, README.md is added to explain what the example does, the source of the dataset, expected performance on the test set, and a reference to the original paper if applicable
   - Check the API doc at 
http://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html
   - [ ] To the best of my knowledge, examples are either not affected by this change, or have been fixed to be compatible with this change
   
   




[GitHub] marcoabreu commented on issue #10445: Make the numpy version compatible with Official Mac OS system python-2.7.10's numpy

2018-04-13 Thread GitBox
marcoabreu commented on issue #10445: Make the numpy version compatible with 
Official Mac OS system python-2.7.10's numpy
URL: https://github.com/apache/incubator-mxnet/pull/10445#issuecomment-381197108
 
 
   I see, thanks. Is there anything keeping people from updating Python on 
their Mac?




[GitHub] Roshrini commented on issue #10542: Intel MKL-DNN RNN Support

2018-04-13 Thread GitBox
Roshrini commented on issue #10542: Intel MKL-DNN RNN Support
URL: 
https://github.com/apache/incubator-mxnet/issues/10542#issuecomment-381218788
 
 
   @eric-haibin-lin Can you please add labels: FeatureRequest, RNN




[GitHub] bradcar commented on issue #9662: Gluon PReLU, ELU, SELU, Swish

2018-04-13 Thread GitBox
bradcar commented on issue #9662: Gluon PReLU, ELU, SELU, Swish
URL: https://github.com/apache/incubator-mxnet/pull/9662#issuecomment-381224686
 
 
   @piiswrong @szha what is the status of #9662 and PReLU? When I naively put PReLU into a hybrid block and look at this code, it seems that PReLU only has one learnable alpha per layer.




[GitHub] lanking520 opened a new pull request #10546: [MXNET-319] Add Autocomplete Macros in Scala

2018-04-13 Thread GitBox
lanking520 opened a new pull request #10546: [MXNET-319] Add Autocomplete 
Macros in Scala
URL: https://github.com/apache/incubator-mxnet/pull/10546
 
 
   ## Description ##
   Add a SymbolBaseMacros file: it will generate a SymbolBase trait with full documentation on the Symbol APIs. Also enable the "return_type" parameter, which will be helpful for generating the docs.
   
   @nswamy @yzhliu @Roshrini 
   
   ## Checklist ##
   ### Essentials ###
   Please feel free to remove inapplicable items for your PR.
   - [ ] The PR title starts with [MXNET-$JIRA_ID], where $JIRA_ID refers to 
the relevant [JIRA issue](https://issues.apache.org/jira/projects/MXNET/issues) 
created (except PRs with tiny changes)
   - [ ] Changes are complete (i.e. I finished coding on this PR)
   - [ ] All changes have test coverage:
   - Unit tests are added for small changes to verify correctness (e.g. adding 
a new operator)
   - Nightly tests are added for complicated/long-running ones (e.g. changing 
distributed kvstore)
   - Build tests will be added for build configuration changes (e.g. adding a 
new build option with NCCL)
   - [ ] Code is well-documented: 
   - For user-facing API changes, API doc string has been updated. 
   - For new C++ functions in header files, their functionalities and arguments 
are documented. 
   - For new examples, README.md is added to explain what the example does, the source of the dataset, expected performance on the test set, and a reference to the original paper if applicable
   - Check the API doc at 
http://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html
   - [ ] To the best of my knowledge, examples are either not affected by this change, or have been fixed to be compatible with this change
   
   ### Changes ###
   - [ ] Feature1, tests, (and when applicable, API doc)
   - [ ] Feature2, tests, (and when applicable, API doc)
   




[GitHub] rahul003 opened a new pull request #10538: [MXNET-318] Allow dot for fp16 on GPU

2018-04-13 Thread GitBox
rahul003 opened a new pull request #10538: [MXNET-318] Allow dot for fp16 on GPU
URL: https://github.com/apache/incubator-mxnet/pull/10538
 
 
   ## Description ##
   Dot for fp16 on GPU is supported internally by the [BLASEngine](https://github.com/dmlc/mshadow/blob/0b4cedd7015cc69191f8338a8feaacda90697758/mshadow/dot_engine-inl.h#L417), but a check was blocking it.
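As a rough, runnable analogue of what the unblocked path permits, here is a NumPy sketch of a half-precision dot; NumPy is an assumption here, standing in for `mx.nd.dot` on GPU, which can't be demonstrated portably:

```python
import numpy as np

# Half-precision operands; this change lets mx.nd.dot accept these on
# GPU, where the BLAS engine handles the fp16 GEMM. NumPy's CPU
# fallback is used here only so the sketch runs anywhere.
a = np.arange(6, dtype=np.float16).reshape(2, 3)
b = np.ones((3, 4), dtype=np.float16)
c = a.dot(b)   # result stays fp16; c[0, 0] == 0 + 1 + 2 == 3.0
```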
   
   ## Checklist ##
   ### Essentials ###
   Please feel free to remove inapplicable items for your PR.
   - [x] The PR title starts with [MXNET-$JIRA_ID], where $JIRA_ID refers to 
the relevant [JIRA issue](https://issues.apache.org/jira/projects/MXNET/issues) 
created (except PRs with tiny changes)
   - [x] Changes are complete (i.e. I finished coding on this PR)
   - [x] All changes have test coverage:
   - Unit tests are added for small changes to verify correctness (e.g. adding 
a new operator)
   - Nightly tests are added for complicated/long-running ones (e.g. changing 
distributed kvstore)
   - Build tests will be added for build configuration changes (e.g. adding a 
new build option with NCCL)
   - [x] Code is well-documented: 
   - For user-facing API changes, API doc string has been updated. 
   - For new C++ functions in header files, their functionalities and arguments 
are documented. 
   - For new examples, README.md is added to explain what the example does, the source of the dataset, expected performance on the test set, and a reference to the original paper if applicable
   - Check the API doc at 
http://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html
   - [x] To the best of my knowledge, examples are either not affected by this change, or have been fixed to be compatible with this change
   
   ### Changes ###
   - [x] Changed CHECK and type switches 
   - [x] Added test for fp16
   




[GitHub] rahul003 commented on issue #10538: [MXNET-318] Allow dot for fp16 on GPU

2018-04-13 Thread GitBox
rahul003 commented on issue #10538: [MXNET-318] Allow dot for fp16 on GPU
URL: https://github.com/apache/incubator-mxnet/pull/10538#issuecomment-381224946
 
 
   Updated




[GitHub] marcoabreu commented on issue #10530: Jetson build with cmake and CUDA

2018-04-13 Thread GitBox
marcoabreu commented on issue #10530: Jetson build with cmake and CUDA
URL: https://github.com/apache/incubator-mxnet/pull/10530#issuecomment-381230763
 
 
   Please add a jira since this is not a minor change




[GitHub] gautamkmr commented on issue #10445: Make the numpy version compatible with Official Mac OS system python-2.7.10's numpy

2018-04-13 Thread GitBox
gautamkmr commented on issue #10445: Make the numpy version compatible with 
Official Mac OS system python-2.7.10's numpy
URL: https://github.com/apache/incubator-mxnet/pull/10445#issuecomment-381194887
 
 
   @marcoabreu  the issue is with python 2.7.10
   
   @cjolivier01  yes, there is github issue, will update here.




[GitHub] eric-haibin-lin commented on a change in pull request #10435: [MXNET-289] Update example to resize data iterator to fix hang in dist sync for last epoch

2018-04-13 Thread GitBox
eric-haibin-lin commented on a change in pull request #10435: [MXNET-289] 
Update example to resize data iterator to fix hang in dist sync for last epoch
URL: https://github.com/apache/incubator-mxnet/pull/10435#discussion_r181479187
 
 

 ##
 File path: example/image-classification/common/fit.py
 ##
 @@ -155,9 +159,16 @@ def fit(args, network, data_loader, **kwargs):
 head = '%(asctime)-15s Node[' + str(kv.rank) + '] %(message)s'
 logging.basicConfig(level=logging.DEBUG, format=head)
 logging.info('start with arguments %s', args)
+
+epoch_size = get_epoch_size(args, kv)
 
 # data iterators
 (train, val) = data_loader(args, kv)
+if 'dist' in args.kv_store and 'sync' in args.kv_store:
 
 Review comment:
   Please check "async" instead 
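The point of the suggestion, sketched in plain Python (a hypothetical `needs_resize` helper, not code from the PR): `'sync'` is a substring of `'async'`, so the condition in the diff also matches `dist_async` kvstore names; testing for the absence of `'async'` avoids that.

```python
def needs_resize(kv_store):
    # The diff's condition, 'dist' in kv_store and 'sync' in kv_store,
    # also matches 'dist_async' because 'sync' is a substring of
    # 'async'. Requiring that 'async' be absent fixes the false match.
    return 'dist' in kv_store and 'async' not in kv_store

assert needs_resize('dist_sync')        # resize for synchronous training
assert not needs_resize('dist_async')   # asynchronous mode: no resize
assert not needs_resize('local')        # single machine: no resize
```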




[GitHub] haojin2 commented on a change in pull request #10371: [MXNET-263] [WIP] Support for dot(dns, csr) = dns and dot(dns, csr.T) = dns on GPU

2018-04-13 Thread GitBox
haojin2 commented on a change in pull request #10371: [MXNET-263] [WIP] Support 
for dot(dns, csr) = dns and dot(dns, csr.T) = dns on GPU
URL: https://github.com/apache/incubator-mxnet/pull/10371#discussion_r181446729
 
 

 ##
 File path: tests/python/unittest/test_sparse_operator.py
 ##
 @@ -1286,10 +1309,18 @@ def test_sparse_dot_zero_output(lhs_shape, trans_lhs, 
rhs_num_cols):
 test_dot_csr(lhs_shape, (lhs_shape[1], rnd.randint(5, 10)), 'default', 
False, lhs_d, rhs_d)  # test gpu SpMM
 test_dot_csr(lhs_shape, (lhs_shape[0], rnd.randint(5, 10)), 'default', 
True, lhs_d, rhs_d)  # (scalar kernel)
 test_dot_dns_csr(lhs_shape, (lhs_shape[1], rnd.randint(50, 200)), 
lhs_d, lhs_d)
+test_dot_dns_csr(lhs_shape, (rnd.randint(50, 200), lhs_shape[1]), 
lhs_d, lhs_d, trans_rhs=True)
 for rhs_d in density:
 test_dot_csr(lhs_shape, (lhs_shape[1], rnd.randint(1, 10)), 
'row_sparse', False, lhs_d, rhs_d)
 test_dot_csr(lhs_shape, (lhs_shape[0], rnd.randint(1, 10)), 
'row_sparse', True, lhs_d, rhs_d)
-
+test_infer_forward_stype(lhs_shape, (lhs_shape[1], rnd.randint(10, 
20)),
+ lhs_d, rhs_d, False, False)
+test_infer_forward_stype(lhs_shape, (rnd.randint(10, 20), 
lhs_shape[1]),
+ lhs_d, rhs_d, False, True)
+test_infer_forward_stype(lhs_shape, (lhs_shape[0], rnd.randint(10, 
20)),
+ lhs_d, rhs_d, True, False)
+test_infer_forward_stype(lhs_shape, (rnd.randint(10, 20), 
lhs_shape[0]),
+ lhs_d, rhs_d, True, True)
 
 Review comment:
   Sure




[GitHub] zhanghang1989 commented on issue #10511: add naming tutorial

2018-04-13 Thread GitBox
zhanghang1989 commented on issue #10511: add naming tutorial
URL: https://github.com/apache/incubator-mxnet/pull/10511#issuecomment-381218359
 
 
   Still have problems with save and load: https://github.com/apache/incubator-mxnet/issues/10544




[GitHub] anirudh2290 commented on issue #10545: [WIP] Add NEWS and README

2018-04-13 Thread GitBox
anirudh2290 commented on issue #10545: [WIP] Add NEWS and README
URL: https://github.com/apache/incubator-mxnet/pull/10545#issuecomment-381222884
 
 
   @nswamy @spidyDev @anirudhacharya @reminisce @zheng-da @rahul003 
@cjolivier01 @eric-haibin-lin @aaronmarkham @piiswrong Can you please help 
review ?




[GitHub] szha commented on issue #9662: Gluon PReLU, ELU, SELU, Swish

2018-04-13 Thread GitBox
szha commented on issue #9662: Gluon PReLU, ELU, SELU, Swish
URL: https://github.com/apache/incubator-mxnet/pull/9662#issuecomment-381229927
 
 
   @bradcar the leaky relu operator in 'prelu' mode supports any broadcastable alpha shape. Since it's impossible to infer the shape of the parameter until it sees the first input, we chose to put the simplest case in the constructor.
   
   For your use case, when you need more than one alpha parameter, you can simply use the operator.
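A plain NumPy sketch of the point (NumPy is an assumption here, standing in for the leaky relu operator in 'prelu' mode): alpha broadcasts against the input, so it may be a single learnable slope or, say, one per channel.

```python
import numpy as np

def prelu(x, alpha):
    # prelu(x) = x for x > 0, alpha * x otherwise. alpha can be a
    # scalar (one learnable slope, the simplest case the Gluon block
    # constructs) or any shape broadcastable against x, e.g. one
    # slope per channel.
    return np.where(x > 0, x, alpha * x)

x = np.array([[-1.0, 2.0], [-3.0, 4.0]])        # (batch, channel)
single = prelu(x, 0.1)                          # one alpha for everything
per_channel = prelu(x, np.array([0.1, 0.5]))    # one alpha per channel
```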




[GitHub] piiswrong closed pull request #10540: Fix typo in image.py

2018-04-13 Thread GitBox
piiswrong closed pull request #10540: Fix typo in image.py
URL: https://github.com/apache/incubator-mxnet/pull/10540
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git a/python/mxnet/ndarray/image.py b/python/mxnet/ndarray/image.py
index 0afab24326c..627c0d47437 100644
--- a/python/mxnet/ndarray/image.py
+++ b/python/mxnet/ndarray/image.py
@@ -19,7 +19,7 @@
 # pylint: disable=wildcard-import, unused-wildcard-import
 """Image NDArray API of MXNet."""
 try:
-from .gen_iamge import *
+from .gen_image import *
 except ImportError:
 pass
 


 




[incubator-mxnet] branch master updated: Fix typo in image.py (#10540)

2018-04-13 Thread jxie
This is an automated email from the ASF dual-hosted git repository.

jxie pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/master by this push:
 new f220ad0  Fix typo in image.py (#10540)
f220ad0 is described below

commit f220ad07bb0cf572e66392f52acf6215ef0d1190
Author: daquexian 
AuthorDate: Sat Apr 14 01:15:10 2018 +0800

Fix typo in image.py (#10540)

Fix typo in image.py
---
 python/mxnet/ndarray/image.py | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/python/mxnet/ndarray/image.py b/python/mxnet/ndarray/image.py
index 0afab24..627c0d4 100644
--- a/python/mxnet/ndarray/image.py
+++ b/python/mxnet/ndarray/image.py
@@ -19,7 +19,7 @@
 # pylint: disable=wildcard-import, unused-wildcard-import
 """Image NDArray API of MXNet."""
 try:
-from .gen_iamge import *
+from .gen_image import *
 except ImportError:
 pass
 

-- 
To stop receiving notification emails like this one, please contact
j...@apache.org.


[GitHub] gautamkmr commented on issue #10445: Make the numpy version compatible with Official Mac OS system python-2.7.10's numpy

2018-04-13 Thread GitBox
gautamkmr commented on issue #10445: Make the numpy version compatible with 
Official Mac OS system python-2.7.10's numpy
URL: https://github.com/apache/incubator-mxnet/pull/10445#issuecomment-381219462
 
 
   @cjolivier01 https://github.com/apache/incubator-mxnet/issues/9949




[GitHub] anirudh2290 opened a new pull request #10545: [WIP] Add NEWS and README

2018-04-13 Thread GitBox
anirudh2290 opened a new pull request #10545: [WIP] Add NEWS and README
URL: https://github.com/apache/incubator-mxnet/pull/10545
 
 
   ## Description ##
   Added NEWS and README for the 1.2.0 release.
   
   @mbaijal Let me know about the decision from legal-discuss@ so we can update the release notes with known issues for the license.
   




[GitHub] zhanghang1989 commented on issue #10544: name_scope/prefix doesn't work

2018-04-13 Thread GitBox
zhanghang1989 commented on issue #10544: name_scope/prefix doesn't work
URL: 
https://github.com/apache/incubator-mxnet/issues/10544#issuecomment-381232441
 
 
   Resolved with help from Aston and Sheng. 




[GitHub] zhanghang1989 closed issue #10544: name_scope/prefix doesn't work

2018-04-13 Thread GitBox
zhanghang1989 closed issue #10544: name_scope/prefix doesn't work
URL: https://github.com/apache/incubator-mxnet/issues/10544
 
 
   




[GitHub] piiswrong closed pull request #10511: add naming tutorial

2018-04-13 Thread GitBox
piiswrong closed pull request #10511: add naming tutorial
URL: https://github.com/apache/incubator-mxnet/pull/10511
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git a/docs/tutorials/gluon/datasets.md b/docs/tutorials/gluon/datasets.md
index 248ea02f5c1..0c9b5375d20 100644
--- a/docs/tutorials/gluon/datasets.md
+++ b/docs/tutorials/gluon/datasets.md
@@ -33,7 +33,7 @@ print(sample)
 
 (
  [ 0.4375872   0.29753461  0.89177299]
- , 
+ ,
  [ 0.83261985]
  )
 
@@ -60,7 +60,7 @@ for X_batch, y_batch in data_loader:
 X_batch has shape (5, 3), and y_batch has shape (5, 1)
 
 
-We can see 2 mini-batches of data (and labels), each with 5 samples, which 
makes sense given we started with a dataset of 10 samples. When comparing the 
shape of the batches to the samples returned by the 
[`Dataset`](https://mxnet.incubator.apache.org/api/python/gluon/data.html?highlight=dataset#mxnet.gluon.data.Dataset),
 we've gained an extra dimension at the start which is sometimes called the 
batch axis. 
+We can see 2 mini-batches of data (and labels), each with 5 samples, which 
makes sense given we started with a dataset of 10 samples. When comparing the 
shape of the batches to the samples returned by the 
[`Dataset`](https://mxnet.incubator.apache.org/api/python/gluon/data.html?highlight=dataset#mxnet.gluon.data.Dataset),
 we've gained an extra dimension at the start which is sometimes called the 
batch axis.
 
 Our `data_loader` loop will stop when every sample of `dataset` has been 
returned as part of a batch. Sometimes the dataset length isn't divisible by 
the mini-batch size, leaving a final batch with a smaller number of samples. 
[`DataLoader`](https://mxnet.incubator.apache.org/api/python/gluon/data.html?highlight=dataloader#mxnet.gluon.data.DataLoader)'s
 default behavior is to return this smaller mini-batch, but this can be changed 
by setting the `last_batch` parameter to `discard` (which ignores the last 
batch) or `rollover` (which starts the next epoch with the remaining samples).
 
@@ -137,7 +137,7 @@ def construct_net():
 ctx = mx.cpu()
 net = construct_net()
 net.hybridize()
-net.collect_params().initialize(mx.init.Xavier())
+net.initialize(mx.init.Xavier())
 # define loss and trainer.
 criterion = gluon.loss.SoftmaxCrossEntropyLoss()
 trainer = gluon.Trainer(net.collect_params(), 'sgd', {'learning_rate': 0.1})
@@ -159,7 +159,7 @@ for epoch in range(epochs):
 cumulative_train_loss += loss.sum()
 training_samples += data.shape[0]
 train_loss = cumulative_train_loss.asscalar()/training_samples
-
+
 # validation loop
 cumulative_valid_loss = mx.nd.array([0])
 valid_samples = 0
@@ -171,7 +171,7 @@ for epoch in range(epochs):
 cumulative_valid_loss += loss.sum()
 valid_samples += data.shape[0]
 valid_loss = cumulative_valid_loss.asscalar()/valid_samples
-
+
 print("Epoch {}, training loss: {:.2f}, validation loss: 
{:.2f}".format(epoch, train_loss, valid_loss))
 ```
 
@@ -184,7 +184,7 @@ for epoch in range(epochs):
 
 # Using own data with included `Dataset`s
 
-Gluon has a number of different 
[`Dataset`](https://mxnet.incubator.apache.org/api/python/gluon/data.html?highlight=dataset#mxnet.gluon.data.Dataset)
 classes for working with your own image data straight out-of-the-box. You can 
get started quickly using the 
[`mxnet.gluon.data.vision.datasets.ImageFolderDataset`](https://mxnet.incubator.apache.org/api/python/gluon/data.html?highlight=imagefolderdataset#mxnet.gluon.data.vision.datasets.ImageFolderDataset)
 which loads images directly from a user-defined folder, and infers the label 
(i.e. class) from the folders. 
+Gluon has a number of different 
[`Dataset`](https://mxnet.incubator.apache.org/api/python/gluon/data.html?highlight=dataset#mxnet.gluon.data.Dataset)
 classes for working with your own image data straight out-of-the-box. You can 
get started quickly using the 
[`mxnet.gluon.data.vision.datasets.ImageFolderDataset`](https://mxnet.incubator.apache.org/api/python/gluon/data.html?highlight=imagefolderdataset#mxnet.gluon.data.vision.datasets.ImageFolderDataset)
 which loads images directly from a user-defined folder, and infers the label 
(i.e. class) from the folders.
 
 We will run through an example for image classification, but a similar process 
applies for other vision tasks. If you already have your own collection of 
images to work with you should partition your data into training and test sets, 
and place all objects of the same class into seperate folders. Similar to:
 
@@ -307,4 +307,4 @@ data_iter_loader = DataIterLoader(data_iter)
 for X_batch, y_batch in data_iter_loader:
 assert X_batch.shape == (5, 3)
 assert y_batch.shape == (5, 1)
-```
\ No newline at end of file

[incubator-mxnet] branch master updated: add naming tutorial (#10511)

2018-04-13 Thread jxie
This is an automated email from the ASF dual-hosted git repository.

jxie pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/master by this push:
 new b929892  add naming tutorial (#10511)
b929892 is described below

commit b9298924043b9c8316613f5bdf9bc467cb140197
Author: Eric Junyuan Xie 
AuthorDate: Fri Apr 13 10:06:39 2018 -0700

add naming tutorial (#10511)

* add naming tutorial

* fix

* Update naming.md

* Update index.md

* fix save load

* fix

* fix

* fix

* fix
---
 docs/tutorials/gluon/datasets.md|  14 +-
 docs/tutorials/gluon/gluon.md   |   2 +-
 docs/tutorials/gluon/hybrid.md  |   4 +-
 docs/tutorials/gluon/mnist.md   |   5 +-
 docs/tutorials/gluon/naming.md  | 255 
 docs/tutorials/index.md |   2 +
 docs/tutorials/onnx/fine_tuning_gluon.md|  24 +--
 example/gluon/embedding_learning/train.py   |   2 +-
 example/gluon/kaggle_k_fold_cross_validation.py |   2 +-
 example/gluon/learning_rate_manipulation.py |   6 +-
 example/gluon/lstm_crf.py   |   2 +-
 example/gluon/style_transfer/main.py|  16 +-
 example/gluon/super_resolution.py   |   2 +-
 example/gluon/tree_lstm/main.py |   4 +-
 python/mxnet/gluon/block.py | 132 
 python/mxnet/gluon/contrib/nn/basic_layers.py   |   4 +-
 python/mxnet/gluon/nn/basic_layers.py   |  21 +-
 python/mxnet/gluon/parameter.py |  37 ++--
 python/mxnet/gluon/rnn/rnn_cell.py  |  28 +--
 python/mxnet/gluon/utils.py |   7 +
 tests/python/unittest/test_gluon.py |  14 +-
 21 files changed, 451 insertions(+), 132 deletions(-)

diff --git a/docs/tutorials/gluon/datasets.md b/docs/tutorials/gluon/datasets.md
index 248ea02..0c9b537 100644
--- a/docs/tutorials/gluon/datasets.md
+++ b/docs/tutorials/gluon/datasets.md
@@ -33,7 +33,7 @@ print(sample)
 
 (
  [ 0.4375872   0.29753461  0.89177299]
- , 
+ ,
  [ 0.83261985]
  )
 
@@ -60,7 +60,7 @@ for X_batch, y_batch in data_loader:
 X_batch has shape (5, 3), and y_batch has shape (5, 1)
 
 
-We can see 2 mini-batches of data (and labels), each with 5 samples, which 
makes sense given we started with a dataset of 10 samples. When comparing the 
shape of the batches to the samples returned by the 
[`Dataset`](https://mxnet.incubator.apache.org/api/python/gluon/data.html?highlight=dataset#mxnet.gluon.data.Dataset),
 we've gained an extra dimension at the start which is sometimes called the 
batch axis. 
+We can see 2 mini-batches of data (and labels), each with 5 samples, which 
makes sense given we started with a dataset of 10 samples. When comparing the 
shape of the batches to the samples returned by the 
[`Dataset`](https://mxnet.incubator.apache.org/api/python/gluon/data.html?highlight=dataset#mxnet.gluon.data.Dataset),
 we've gained an extra dimension at the start which is sometimes called the 
batch axis.
 
 Our `data_loader` loop will stop when every sample of `dataset` has been 
returned as part of a batch. Sometimes the dataset length isn't divisible by 
the mini-batch size, leaving a final batch with a smaller number of samples. 
[`DataLoader`](https://mxnet.incubator.apache.org/api/python/gluon/data.html?highlight=dataloader#mxnet.gluon.data.DataLoader)'s
 default behavior is to return this smaller mini-batch, but this can be changed 
by setting the `last_batch` parameter to `discard` (which [...]
 
@@ -137,7 +137,7 @@ def construct_net():
 ctx = mx.cpu()
 net = construct_net()
 net.hybridize()
-net.collect_params().initialize(mx.init.Xavier())
+net.initialize(mx.init.Xavier())
 # define loss and trainer.
 criterion = gluon.loss.SoftmaxCrossEntropyLoss()
 trainer = gluon.Trainer(net.collect_params(), 'sgd', {'learning_rate': 0.1})
@@ -159,7 +159,7 @@ for epoch in range(epochs):
 cumulative_train_loss += loss.sum()
 training_samples += data.shape[0]
 train_loss = cumulative_train_loss.asscalar()/training_samples
-
+
 # validation loop
 cumulative_valid_loss = mx.nd.array([0])
 valid_samples = 0
@@ -171,7 +171,7 @@ for epoch in range(epochs):
 cumulative_valid_loss += loss.sum()
 valid_samples += data.shape[0]
 valid_loss = cumulative_valid_loss.asscalar()/valid_samples
-
+
 print("Epoch {}, training loss: {:.2f}, validation loss: 
{:.2f}".format(epoch, train_loss, valid_loss))
 ```
 
@@ -184,7 +184,7 @@ for epoch in range(epochs):
 
 # Using own data with included `Dataset`s
 
-Gluon has a number of different 

[GitHub] Roshrini commented on issue #10541: Keep illegal hardware instruction. How can I build mxnet from source without avx support?

2018-04-13 Thread GitBox
Roshrini commented on issue #10541: Keep illegal hardware instruction. How can 
I build mxnet from source without avx support?
URL: 
https://github.com/apache/incubator-mxnet/issues/10541#issuecomment-381217831
 
 
   @nswamy Can you please add the label: Installation


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] bradcar commented on issue #9662: Gluon PReLU, ELU, SELU, Swish

2018-04-13 Thread GitBox
bradcar commented on issue #9662: Gluon PReLU, ELU, SELU, Swish
URL: https://github.com/apache/incubator-mxnet/pull/9662#issuecomment-381224686
 
 
   @piiswrong @szha what is the status of #9662 and PReLU working? When I 
naively put PReLU into a hybrid block and look at this code, it seems that 
PReLU only has one learnable alpha per layer.
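   For reference, the difference between a single shared alpha and per-channel alphas can be seen in a tiny NumPy sketch of PReLU (illustrative only, not the Gluon implementation):

```python
import numpy as np

def prelu(x, alpha):
    """Parametric ReLU: identity for x > 0, alpha * x otherwise.

    alpha may be a scalar (one learnable slope shared by the whole layer)
    or an array broadcastable against x (e.g. one slope per channel).
    """
    return np.where(x > 0, x, alpha * x)

x = np.array([[-1.0, 2.0], [-3.0, 4.0]])        # shape (batch, channels)
shared = prelu(x, 0.25)                         # one alpha for every channel
per_channel = prelu(x, np.array([0.1, 0.5]))    # one alpha per channel
```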




[GitHub] rahul003 commented on a change in pull request #10435: [MXNET-289] Update example to resize data iterator to fix hang in dist sync for last epoch

2018-04-13 Thread GitBox
rahul003 commented on a change in pull request #10435: [MXNET-289] Update 
example to resize data iterator to fix hang in dist sync for last epoch
URL: https://github.com/apache/incubator-mxnet/pull/10435#discussion_r181481901
 
 

 ##
 File path: example/image-classification/common/fit.py
 ##
 @@ -155,9 +159,16 @@ def fit(args, network, data_loader, **kwargs):
 head = '%(asctime)-15s Node[' + str(kv.rank) + '] %(message)s'
 logging.basicConfig(level=logging.DEBUG, format=head)
 logging.info('start with arguments %s', args)
+
+epoch_size = get_epoch_size(args, kv)
 
 # data iterators
 (train, val) = data_loader(args, kv)
+if 'dist' in args.kv_store and 'sync' in args.kv_store:
 
 Review comment:
   Good catch Haibin, fixed it!
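   The fix above relies on every worker running the same number of batches per epoch: in `dist_sync` mode, a worker that exhausts its data shard early stops reaching the synchronization barrier and the remaining workers hang. A minimal sketch of the per-worker epoch-size computation (the real helper is `get_epoch_size(args, kv)`; the standalone signature below is an illustrative assumption):

```python
import math

def epoch_size_per_worker(num_examples, batch_size, num_workers):
    """Number of batches each worker runs per epoch when the dataset is
    sharded across workers. Resizing the data iterator to exactly this
    many batches keeps all workers in lockstep in dist_sync mode."""
    return int(math.ceil(num_examples / num_workers / batch_size))

# e.g. 1000 examples sharded over 3 workers with batch size 32
# -> each worker runs 11 batches per epoch
```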




[GitHub] haojin2 opened a new pull request #10550: [MXNET-320] Support elemwise_add/sub/max/min/hypot between dense and csr tensors

2018-04-13 Thread GitBox
haojin2 opened a new pull request #10550: [MXNET-320] Support 
elemwise_add/sub/max/min/hypot between dense and csr tensors
URL: https://github.com/apache/incubator-mxnet/pull/10550
 
 
   ## Description ##
   Support elemwise_add/sub/max/min/hypot between dense and csr
   
   ## Checklist ##
   ### Essentials ###
   Please feel free to remove inapplicable items for your PR.
   - [x] The PR title starts with [MXNET-320]
   - [x] Changes are complete (i.e. I finished coding on this PR)
   - [x] All changes have test coverage:
   - Unit tests are added for small changes to verify correctness (e.g. adding 
a new operator)
   - Nightly tests are added for complicated/long-running ones (e.g. changing 
distributed kvstore)
   - Build tests will be added for build configuration changes (e.g. adding a 
new build option with NCCL)
   - [x] Code is well-documented: 
   - For user-facing API changes, API doc string has been updated. 
   - For new C++ functions in header files, their functionalities and arguments 
are documented. 
   - For new examples, README.md is added to explain what the example does, 
the source of the dataset, expected performance on test set and reference to 
the original paper if applicable
   - Check the API doc at 
http://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html
   - [x] To my best knowledge, examples are either not affected by this 
change, or have been fixed to be compatible with this change
   
   ### Changes ###
   - [x] Support elemwise_add/sub/max/min/hypot between dense and csr matrices
   




[GitHub] eric-haibin-lin commented on a change in pull request #10545: [WIP] Add NEWS and README

2018-04-13 Thread GitBox
eric-haibin-lin commented on a change in pull request #10545: [WIP] Add NEWS 
and README
URL: https://github.com/apache/incubator-mxnet/pull/10545#discussion_r181539201
 
 

 ##
 File path: NEWS.md
 ##
 @@ -1,5 +1,126 @@
 MXNet Change Log
 
+## 1.2.0
+### New Features - Added Scala Inference APIs
+- Implemented new [Scala Inference 
APIs](https://cwiki.apache.org/confluence/display/MXNET/MXNetScalaInferenceAPI) 
which offer an easy-to-use, Scala Idiomatic and thread-safe high level APIs for 
performing predictions with deep learning models trained with MXNet (#9678). 
Implemented a new ImageClassifier class which provides APIs for classification 
tasks on a Java BufferedImage using a pre-trained model you provide (#10054). 
Implemented a new ObjectDetector class which provides APIs for object and 
boundary detections on a Java BufferedImage using a pre-trained model you 
provide (#10229).
+
+### New Features - Added module to import ONNX models into MXNet
 
 Review comment:
   Added a Module to Import ONNX Models into MXNet




[GitHub] eric-haibin-lin commented on a change in pull request #10545: [WIP] Add NEWS and README

2018-04-13 Thread GitBox
eric-haibin-lin commented on a change in pull request #10545: [WIP] Add NEWS 
and README
URL: https://github.com/apache/incubator-mxnet/pull/10545#discussion_r181539255
 
 

 ##
 File path: NEWS.md
 ##
 @@ -1,5 +1,126 @@
 MXNet Change Log
 
+## 1.2.0
+### New Features - Added Scala Inference APIs
+- Implemented new [Scala Inference 
APIs](https://cwiki.apache.org/confluence/display/MXNET/MXNetScalaInferenceAPI) 
which offer an easy-to-use, Scala Idiomatic and thread-safe high level APIs for 
performing predictions with deep learning models trained with MXNet (#9678). 
Implemented a new ImageClassifier class which provides APIs for classification 
tasks on a Java BufferedImage using a pre-trained model you provide (#10054). 
Implemented a new ObjectDetector class which provides APIs for object and 
boundary detections on a Java BufferedImage using a pre-trained model you 
provide (#10229).
+
+### New Features - Added module to import ONNX models into MXNet
+- Implemented a new ONNX module in MXNet which offers an easy to use API to 
import ONNX models into MXNet's symbolic interface (#9963). Checkout the 
[example](https://github.com/apache/incubator-mxnet/blob/master/example/onnx/super_resolution.py)
 on how you could use this 
[API](https://cwiki.apache.org/confluence/display/MXNET/ONNX-MXNet+API+Design) 
to import ONNX models and perform inference on MXNet. 
+
+### New Features - Added support for Model Quantization with Calibration
+- Implemented model quantization by adopting the [TensorFlow 
approach](https://www.tensorflow.org/performance/quantization) with calibration 
by borrowing the idea from Nvidia's 
[TensorRT](http://on-demand.gputechconf.com/gtc/2017/presentation/s7310-8-bit-inference-with-tensorrt.pdf).
 The focus of this work is on keeping quantized models (ConvNets for now) 
inference accuracy loss under control when compared to their corresponding FP32 
models. Please see the 
[example](https://github.com/apache/incubator-mxnet/tree/master/example/quantization)
 on how to quantize a FP32 model with or without calibration (#9552).
+
+### New Features - MKL-DNN Integration
+- MXNet now integrates with Intel MKL-DNN to accelerate neural network 
operators: Convolution, Deconvolution, FullyConnected, Pooling, Batch 
Normalization, Activation, LRN, Softmax, as well as some common operators: sum 
and concat (#9677). This integration allows NDArray to contain data with 
MKL-DNN layouts and reduces data layout conversion to get the maximal 
performance from MKL-DNN.
+
+### New Features - Added Exception Handling Support for Operators
+- Implemented [Exception Handling Support for 
Operators](https://cwiki.apache.org/confluence/display/MXNET/Improved+exception+handling+in+MXNet)
 in MXNet. MXNet now transports backend C++ exceptions to the different 
language front-ends and prevents crashes when exceptions are thrown during 
operator execution (#9681).
+
+### New Features - Enhanced FP16 support
+- Added support for distributed mixed precision training with FP16. It 
supports storing of master copy of weights in float32 with the multi_precision 
mode of optimizers (#10183). Improved speed of float16 operations on x86 CPU by 
8 times through F16C instruction set. Added support for more operators to work 
with FP16 inputs (#10125, #10078, #10169). Added a tutorial on using mixed 
precision with FP16 (#10391).
+
+### New Features - Added Profiling Enhancements
+- Enhanced built-in profiler to support native Intel:registered: VTune:tm: 
Amplifier objects such as Task, Frame, Event, Counter and Marker from both C++ 
and Python -- which is also visible in the Chrome tracing view(#8972). Added 
Runtime tracking of symbolic and imperative operators as well as memory and API 
calls. Added Tracking and dumping of aggregate profiling data. Profiler also no 
longer affects runtime performance when not in use. 
+
+### Breaking Changes
+- Changed Namespace for MXNet scala from `ml.dmlc.mxnet` to `org.apache.mxnet` 
(#10284).
+- Changed API for the Pooling operator from `mxnet.symbol.Pooling(data=None, 
global_pool=_Null, cudnn_off=_Null, kernel=_Null, pool_type=_Null, 
pooling_convention=_Null, stride=_Null, pad=_Null, name=None, attr=None, 
out=None, **kwargs)` to  `mxnet.symbol.Pooling(data=None,  kernel=_Null, 
pool_type=_Null, global_pool=_Null, cudnn_off=_Null, pooling_convention=_Null, 
stride=_Null, pad=_Null, name=None, attr=None, out=None, **kwargs)`. This is a 
breaking change when kwargs are not provided since the new api expects the 
arguments starting from `global_pool` at the fourth position instead of the 
second position. (#1).
+
+### Bug Fixes
+- Fixed tests - Flakiness/Bugs - (#9598, #9951, #10259, #10197, #10136, 
#10422). Please see: [Tests Improvement 
Project](https://github.com/apache/incubator-mxnet/projects/9)
+- Fixed `cudnn_conv` and `cudnn_deconv` deadlock (#10392).
+- Fixed a race condition in `io.LibSVMIter` when batch size is large (#10124).
+- 

[GitHub] eric-haibin-lin commented on a change in pull request #10545: [WIP] Add NEWS and README

2018-04-13 Thread GitBox
eric-haibin-lin commented on a change in pull request #10545: [WIP] Add NEWS 
and README
URL: https://github.com/apache/incubator-mxnet/pull/10545#discussion_r181539250
 
 

 ##
 File path: NEWS.md
 ##

[GitHub] eric-haibin-lin commented on a change in pull request #10545: [WIP] Add NEWS and README

2018-04-13 Thread GitBox
eric-haibin-lin commented on a change in pull request #10545: [WIP] Add NEWS 
and README
URL: https://github.com/apache/incubator-mxnet/pull/10545#discussion_r181539216
 
 

 ##
 File path: NEWS.md
 ##
 @@ -1,5 +1,126 @@
 MXNet Change Log
 
+### New Features - Added support for Model Quantization with Calibration
 
 Review comment:
   support -> Support




[GitHub] haojin2 commented on a change in pull request #10550: [MXNET-320] Support elemwise_add/sub/max/min/hypot between dense and csr tensors

2018-04-13 Thread GitBox
haojin2 commented on a change in pull request #10550: [MXNET-320] Support 
elemwise_add/sub/max/min/hypot between dense and csr tensors
URL: https://github.com/apache/incubator-mxnet/pull/10550#discussion_r181539305
 
 

 ##
 File path: src/operator/tensor/elemwise_binary_op-inl.h
 ##
 @@ -374,6 +374,72 @@ void ElemwiseBinaryOp::CsrCsrOp(mshadow::Stream<cpu> *s,
   }
 }
 
+template<typename OP>
+struct ElemwiseDnsMapKernel {
 
 Review comment:
   I do not have a better name for this kernel; it computes the output value 
for each dense element paired with an implicit 0. Any suggestions?




[GitHub] eric-haibin-lin commented on a change in pull request #10545: [WIP] Add NEWS and README

2018-04-13 Thread GitBox
eric-haibin-lin commented on a change in pull request #10545: [WIP] Add NEWS 
and README
URL: https://github.com/apache/incubator-mxnet/pull/10545#discussion_r181539594
 
 

 ##
 File path: NEWS.md
 ##
 @@ -1,5 +1,126 @@
 MXNet Change Log
 
+### Breaking Changes
 
 Review comment:
   We have breaking changes?!




[GitHub] fevemania commented on issue #10541: Keep illegal hardware instruction. How can I build mxnet from source without avx support?

2018-04-13 Thread GitBox
fevemania commented on issue #10541: Keep illegal hardware instruction. How can 
I build mxnet from source without avx support?
URL: 
https://github.com/apache/incubator-mxnet/issues/10541#issuecomment-381295505
 
 
   @Roshrini 
   Pardon me? Could you explain what "label-Installation" means, please? I 
didn't get the idea behind that phrase.




[GitHub] haojin2 commented on a change in pull request #10550: [MXNET-320] Support elemwise_add/sub/max/min/hypot between dense and csr tensors

2018-04-13 Thread GitBox
haojin2 commented on a change in pull request #10550: [MXNET-320] Support 
elemwise_add/sub/max/min/hypot between dense and csr tensors
URL: https://github.com/apache/incubator-mxnet/pull/10550#discussion_r181539348
 
 

 ##
 File path: src/operator/tensor/elemwise_binary_op-inl.h
 ##
 @@ -374,6 +374,72 @@ void ElemwiseBinaryOp::CsrCsrOp(mshadow::Stream<cpu> *s,
   }
 }
 
+template<typename OP>
+struct ElemwiseDnsMapKernel {
+  template<typename DType>
+  static void inline Map(int i, const OpReqType req, DType* out, const DType* dns_data,
+                         const nnvm::dim_t num_rows, const nnvm::dim_t num_cols) {
+    if (i < num_rows*num_cols) {
+      KERNEL_ASSIGN(out[i], req, OP::Map(dns_data[i], DType(0.0f)));
+    }
+  }
+};
+
+template<typename OP>
+struct ElemwiseDnsCsrDnsKernel {
+  template<typename DType, typename IType, typename CType>
+  static void inline Map(int i, const OpReqType req, DType* out, DType* dns_data,
+                         const DType* csr_data, const IType* csr_indices, const CType* csr_indptr,
+                         const nnvm::dim_t num_rows, const nnvm::dim_t num_cols) {
+    if (i < num_rows) {
+      for (int j = csr_indptr[i]; j < csr_indptr[i+1]; ++j) {
+        KERNEL_ASSIGN(out[i * num_cols + csr_indices[j]], req,
+                      OP::Map(dns_data[i * num_cols + csr_indices[j]], csr_data[j]));
+      }
+    }
+  }
+};
+
+/*! \brief CSR -op- CSR binary operator for non-canonical NDArray */
 
 Review comment:
   Will change the comment soon; I forgot to change it when copying it over 
from above.
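   Taken together, the two kernels implement `dense op csr` in two phases: `ElemwiseDnsMapKernel` applies the op between every dense element and an implicit zero, and `ElemwiseDnsCsrDnsKernel` then overwrites the positions where the CSR operand actually stores a value. A rough NumPy sketch of the same two-phase scheme (reference semantics only, not the actual MXNet kernels):

```python
import numpy as np

def dense_op_csr(op, dns, csr_data, csr_indices, csr_indptr):
    """Elementwise op between a dense matrix and a CSR matrix.

    Phase 1 (cf. ElemwiseDnsMapKernel): out = op(dns, 0) everywhere,
    since absent CSR entries are implicit zeros.
    Phase 2 (cf. ElemwiseDnsCsrDnsKernel): for each stored CSR value,
    overwrite out at that position with op(dns, csr_value).
    """
    out = op(dns, 0.0)                     # phase 1: dense vs implicit zeros
    num_rows = dns.shape[0]
    for i in range(num_rows):              # phase 2: stored nonzeros only
        for j in range(csr_indptr[i], csr_indptr[i + 1]):
            c = csr_indices[j]
            out[i, c] = op(dns[i, c], csr_data[j])
    return out

# dense + csr example, where csr represents [[0, 5], [7, 0]]
dns = np.array([[1.0, 2.0], [3.0, 4.0]])
res = dense_op_csr(np.add, dns,
                   csr_data=[5.0, 7.0], csr_indices=[1, 0], csr_indptr=[0, 1, 2])
# res == [[1, 7], [10, 4]]
```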




[GitHub] anirudh2290 commented on issue #10547: Remove empty file from examples

2018-04-13 Thread GitBox
anirudh2290 commented on issue #10547: Remove empty file from examples
URL: https://github.com/apache/incubator-mxnet/pull/10547#issuecomment-381272505
 
 
   @marcoabreu do you happen to know why ci is failing here ?




[GitHub] smpawlowski opened a new issue #10549: scala-package 1.1.0 build instruction Windows VS2015

2018-04-13 Thread GitBox
smpawlowski opened a new issue #10549: scala-package 1.1.0 build instruction 
Windows VS2015
URL: https://github.com/apache/incubator-mxnet/issues/10549
 
 
   ## Description
   I'd like to build mxnet for Scala on Windows from source. I can build 
libmxnet.dll successfully, but there are no instructions on how to build the 
scala-package on Windows. 
[The Windows setup guide](https://mxnet.incubator.apache.org/install/windows_setup.html#install-the-mxnet-package-for-scala)
   refers to make only.
   
   Could you provide instructions relevant for windows or mxnet-scala.dll?
   
   ## Environment info (Required)
   
   Windows 10.
   mxnet 1.1.0
   
   
   ## Build info (Required if built from source)
   
   Compiler (gcc/clang/mingw/visual studio):
   Visual Studio 14 2015 Win64
   
   MXNet commit hash: 07a83a0325a3d782513a04f47d711710972cb144
   (Paste the output of `git rev-parse HEAD` here.)
   
   Build config:
   mxnet_option(USE_CPP_PACKAGE  "Build C++ Package" ON)
   
   ## Steps to reproduce
   1. Build libmxnet.dll
   2. Try to build scala-package??
   
   ## What have you tried to solve it?
   
   1. I tried to compile dll from files in 
   scala-package\native\src
   but failed to resolve all dependencies correctly
   




[GitHub] lanking520 commented on issue #10546: [MXNET-319] Add Autocomplete Macros in Scala

2018-04-13 Thread GitBox
lanking520 commented on issue #10546: [MXNET-319] Add Autocomplete Macros in 
Scala
URL: https://github.com/apache/incubator-mxnet/pull/10546#issuecomment-381282015
 
 
   Tested with `make docs`. The function definitions were successfully 
generated in the Scala docs.




[GitHub] marcoabreu commented on issue #10547: Remove empty file from examples

2018-04-13 Thread GitBox
marcoabreu commented on issue #10547: Remove empty file from examples
URL: https://github.com/apache/incubator-mxnet/pull/10547#issuecomment-381278663
 
 
   Yeah, sorry - there has been a small hiccup. I've sent an email to dev@
   




[GitHub] rahul003 closed pull request #10538: [MXNET-318] Allow dot for fp16 on GPU

2018-04-13 Thread GitBox
rahul003 closed pull request #10538: [MXNET-318] Allow dot for fp16 on GPU
URL: https://github.com/apache/incubator-mxnet/pull/10538
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:


diff --git a/src/operator/tensor/dot-inl.h b/src/operator/tensor/dot-inl.h
index c5f278e78a4..a6833c288cc 100644
--- a/src/operator/tensor/dot-inl.h
+++ b/src/operator/tensor/dot-inl.h
@@ -69,9 +69,10 @@ void DotForward_(const nnvm::NodeAttrs& attrs,
   << "Binary function only support input/output with the same type";
   CHECK_EQ(outputs[0].type_flag_, inputs[1].type_flag_)
   << "Binary function only support input/output with the same type";
-  CHECK(outputs[0].type_flag_ == kFloat32 || outputs[0].type_flag_ == kFloat64)
-  << "dot only supports float32 and float64";
-  MSHADOW_SGL_DBL_TYPE_SWITCH(outputs[0].type_flag_, DType, {
+  CHECK(outputs[0].type_flag_ == kFloat32 || outputs[0].type_flag_ == kFloat64 ||
+        ctx.run_ctx.ctx.dev_mask() == mshadow::gpu::kDevMask)
+  << "dot only supports float32/float64 for CPU, and float16/float32/float64 for GPU";
+  MSHADOW_REAL_TYPE_SWITCH(outputs[0].type_flag_, DType, {
 if (inputs[0].ndim() == 1 && inputs[1].ndim() == 1) {
   CHECK_NE(req[0], kAddTo) << "AddTo not yet supported";
   Tensor<xpu, 1, DType> out = outputs[0].get<xpu, 1, DType>(s);
@@ -129,7 +130,7 @@ void DotBackward_(const nnvm::NodeAttrs& attrs,
   Stream<xpu> *s = ctx.get_stream<xpu>();
   CHECK_NE(req[0], kWriteInplace);
   CHECK_NE(req[1], kWriteInplace);
-  MSHADOW_SGL_DBL_TYPE_SWITCH(outputs[0].type_flag_, DType, {
+  MSHADOW_REAL_TYPE_SWITCH(outputs[0].type_flag_, DType, {
 if (inputs[1].ndim() == 1 && inputs[2].ndim() == 1) {
   Tensor<xpu, 1, DType> mout_grad = inputs[0].get<xpu, 1, DType>(s);
   Tensor<xpu, 1, DType> mlhs_data = inputs[1].get<xpu, 1, DType>(s);
diff --git a/tests/python/unittest/test_operator.py 
b/tests/python/unittest/test_operator.py
index 5d382220a7a..78fd84145ec 100644
--- a/tests/python/unittest/test_operator.py
+++ b/tests/python/unittest/test_operator.py
@@ -2070,7 +2070,7 @@ def test_stn():
 @with_seed(1234)
 def test_dot():
 ctx=default_context()
-dtypes = ['float32', 'float64']
+dtypes = ['float16', 'float32', 'float64']
 
 # Test normal dot.
 for data_type in dtypes:
@@ -2094,10 +2094,16 @@ def test_dot():
 c = mx.sym.dot(a, b)
 exe = c.simple_bind(ctx=ctx, a=a_npy.shape, b=b_npy.shape)
 outputs = exe.forward(is_train=True, a=a_npy, b=b_npy)
-assert_almost_equal(outputs[0].asnumpy(), c_npy, rtol=1e-3)
-exe.backward(out_grads=[mx.nd.array(ograd_npy, mx.cpu())])
-assert_almost_equal(exe.grad_dict['a'].asnumpy(), agrad_npy, rtol=1e-3)
-assert_almost_equal(exe.grad_dict['b'].asnumpy(), bgrad_npy, rtol=1e-3)
+assert_almost_equal(outputs[0].asnumpy(), c_npy,
+                    rtol=1e-2 if data_type == 'float16' else 1e-3,
+                    atol=1e-2 if data_type == 'float16' else 1e-3)
+exe.backward(out_grads=[mx.nd.array(ograd_npy, mx.cpu()).astype(data_type)])
+assert_almost_equal(exe.grad_dict['a'].asnumpy(), agrad_npy,
+                    rtol=1e-2 if data_type == 'float16' else 1e-3,
+                    atol=1e-2 if data_type == 'float16' else 1e-3)
+assert_almost_equal(exe.grad_dict['b'].asnumpy(), bgrad_npy,
+                    rtol=1e-2 if data_type == 'float16' else 1e-3,
+                    atol=1e-2 if data_type == 'float16' else 1e-3)
 
 # Test dot with transpose flag using gradient checker.
 def dot_sym(data_type):
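For intuition, the looser float16 tolerances in the diff above can be reproduced outside MXNet with plain NumPy (array shapes here are arbitrary; this sketches the rounding behaviour, not the PR's actual test):

```python
import numpy as np

rng = np.random.RandomState(0)
a = rng.uniform(-1, 1, (8, 16))
b = rng.uniform(-1, 1, (16, 4))

# Reference result in float64; low-precision result via float16 inputs.
ref = np.dot(a, b)
lp = np.dot(a.astype(np.float16), b.astype(np.float16)).astype(np.float64)

# float16 accumulates far more rounding error than float32/float64,
# hence rtol/atol of 1e-2 instead of 1e-3 in the updated test.
tol = lambda dt: 1e-2 if dt == 'float16' else 1e-3
assert np.allclose(lp, ref, rtol=tol('float16'), atol=tol('float16'))

f32 = np.dot(a.astype(np.float32), b.astype(np.float32)).astype(np.float64)
assert np.allclose(f32, ref, rtol=tol('float32'), atol=tol('float32'))
```

The same check with the float32 tolerances applied to the float16 result would generally fail, which is why the test selects tolerances per dtype.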


 




[GitHub] rahul003 commented on issue #10435: [MXNET-289] Update example to resize data iterator to fix hang in dist sync for last epoch

2018-04-13 Thread GitBox
rahul003 commented on issue #10435: [MXNET-289] Update example to resize data 
iterator to fix hang in dist sync for last epoch
URL: https://github.com/apache/incubator-mxnet/pull/10435#issuecomment-381037235
 
 
   Yes, there's no need to do it in that case.
   
   On Thu, Apr 12, 2018, 8:46 PM Haibin Lin  wrote:
   
   > *@eric-haibin-lin* commented on this pull request.
   > --
   >
   > In example/image-classification/common/fit.py:
   >
   > >
   >  # data iterators
   >  (train, val) = data_loader(args, kv)
   > +if 'dist' in args.kv_store and 'sync' in args.kv_store:
   >
   > so there's no need to resize for async right?
   >
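The lockstep argument can be sketched in plain Python (illustrative names, not MXNet's API): in dist sync mode every worker must step through the same number of batches per epoch, so the iterator is resized with a ceiling-style count; async workers can just iterate their own shard.

```python
def batches_per_epoch(num_examples, batch_size, num_workers, sync=True):
    """Number of batches each worker runs per epoch.

    In sync mode, use ceiling division so no worker exhausts its shard
    early and leaves the others blocked on the synchronization barrier;
    in async mode each worker simply iterates its own shard.
    """
    per_worker = num_examples // num_workers
    if not sync:
        return per_worker // batch_size
    # ceiling division: the last, possibly partial, batch is padded
    return (per_worker + batch_size - 1) // batch_size

# 50000 examples, batch 128, 4 workers -> 12500 examples per worker
assert batches_per_epoch(50000, 128, 4, sync=True) == 98   # ceil(12500/128)
assert batches_per_epoch(50000, 128, 4, sync=False) == 97  # floor(12500/128)
```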
   




[GitHub] rahul003 commented on issue #10538: [MXNET-318] Allow dot for fp16 on GPU

2018-04-13 Thread GitBox
rahul003 commented on issue #10538: [MXNET-318] Allow dot for fp16 on GPU
URL: https://github.com/apache/incubator-mxnet/pull/10538#issuecomment-381037280
 
 
   Yes, I'll update the PR. Thanks!
   
   On Thu, Apr 12, 2018, 7:52 PM Anirudh Subramanian 
   wrote:
   
   > *@anirudh2290* commented on this pull request.
   > --
   >
   > In src/operator/tensor/dot-inl.h:
   >
   > > @@ -69,9 +69,10 @@ void DotForward_(const nnvm::NodeAttrs& attrs,
   ><< "Binary function only support input/output with the same type";
   >CHECK_EQ(outputs[0].type_flag_, inputs[1].type_flag_)
   ><< "Binary function only support input/output with the same type";
   > -  CHECK(outputs[0].type_flag_ == kFloat32 || outputs[0].type_flag_ == 
kFloat64)
   > -  << "dot only supports float32 and float64";
   > -  MSHADOW_SGL_DBL_TYPE_SWITCH(outputs[0].type_flag_, DType, {
   > +  CHECK(outputs[0].type_flag_ == kFloat32 || outputs[0].type_flag_ == 
kFloat64 ||
   >
   > don't we need to do the same for BatchDotForward?
   >
   




[GitHub] ThomasDelteil commented on issue #10528: [MXNET-316] Remove empty buckets causing index errors

2018-04-13 Thread GitBox
ThomasDelteil commented on issue #10528: [MXNET-316] Remove empty buckets 
causing index errors
URL: https://github.com/apache/incubator-mxnet/pull/10528#issuecomment-380875511
 
 
   Thanks @harusametime ! Could we get a test to make sure it works and we 
don't reintroduce this bug in the future?
   - [x] Could you please create a Jira ticket and add it to the title of your 
PR? Thanks!




[GitHub] guoswang commented on issue #10521: How can I get the output of the net's last layer- symbol.LinearRegressionOutput?

2018-04-13 Thread GitBox
guoswang commented on issue #10521: How can I get the output of the net's last 
layer- symbol.LinearRegressionOutput?
URL: 
https://github.com/apache/incubator-mxnet/issues/10521#issuecomment-381063806
 
 
   Thanks, I will try it. @Roshrini




[GitHub] daquexian opened a new pull request #10540: Fix typo in image.py

2018-04-13 Thread GitBox
daquexian opened a new pull request #10540: Fix typo in image.py
URL: https://github.com/apache/incubator-mxnet/pull/10540
 
 
   Fix typo in image.py
   
   ## Description ##
   There is a typo in `image.py` that prevents IDEs from auto-completing 
functions in `gen_image.py`.
   
   ## Checklist ##
   ### Essentials ###
   Please feel free to remove inapplicable items for your PR.
   - [x] Changes are complete (i.e. I finished coding on this PR)
   - [x] All changes have test coverage:
   - Unit tests are added for small changes to verify correctness (e.g. adding 
a new operator)
   - Nightly tests are added for complicated/long-running ones (e.g. changing 
distributed kvstore)
   - Build tests will be added for build configuration changes (e.g. adding a 
new build option with NCCL)
   - [x] To the best of my knowledge, examples are either not affected by this 
change, or have been fixed to be compatible with this change
   
   ### Changes ###
   - [x] Fix typo
   




[GitHub] ThomasDelteil commented on a change in pull request #10537: [MX-307] Add .md tutorials to .ipynb for CI integration

2018-04-13 Thread GitBox
ThomasDelteil commented on a change in pull request #10537: [MX-307] Add .md 
tutorials to .ipynb for CI integration
URL: https://github.com/apache/incubator-mxnet/pull/10537#discussion_r181295076
 
 

 ##
 File path: tests/nightly/test_tutorial_config.txt
 ##
 @@ -1,20 +1,31 @@
 basic/ndarray
+basic/ndarray_indexing
 basic/symbol
 basic/module
 basic/data
-python/linear-regression
-python/mnist
-python/predict_image
-onnx/super_resolution
-onnx/fine_tuning_gluon
-onnx/inference_on_onnx_model
-basic/ndarray_indexing
-python/matrix_factorization
+gluon/customop
+gluon/data_augmentation
+gluon/datasets
 gluon/ndarray
 gluon/mnist
 gluon/autograd
 gluon/gluon
 gluon/hybrid
+nlp/cnn
+onnx/super_resolution
+onnx/fine_tuning_gluon
+onnx/inference_on_onnx_model
+python/matrix_factorization
+python/linear-regression
+python/mnist
+python/predict_image
+python/data_augmentation
+python/data_augmentation_with_masks
+python/kvstore
+python/types_of_data_augmentation
 sparse/row_sparse
 sparse/csr
-sparse/train
 
 Review comment:
   Indeed, going forward there will be one individual test per tutorial, to 
allow the use of annotations like `@highCpu`, `@highMemory`, `@gpu`. And there 
will be an integration test that checks that each notebook has been added to 
the test suite. 
   
   This will be part of my next PR, as part of this work of integrating the 
tutorials into the CI.
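A rough sketch of what such an annotation could look like (hypothetical helper, similar in spirit to nose's `attrib` plugin; not the actual CI code):

```python
def annotate(**attrs):
    """Attach resource tags to a test function so the CI can filter on them."""
    def deco(fn):
        for key, value in attrs.items():
            setattr(fn, key, value)
        return fn
    return deco

@annotate(highCpu=True, gpu=True)
def test_tutorial_super_resolution():
    pass  # run the notebook, compare its outputs, etc.

# The runner can then select or skip tests by inspecting the attributes.
assert test_tutorial_super_resolution.highCpu
assert test_tutorial_super_resolution.gpu
```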




[GitHub] gramce opened a new issue #10539: how to save 'gluon net' after calling hybirdize?

2018-04-13 Thread GitBox
gramce opened a new issue #10539: how to save 'gluon net' after calling 
hybirdize?
URL: https://github.com/apache/incubator-mxnet/issues/10539
 
 
   The program crashed (core dumped) after calling the original net.save_params method.





[GitHub] ThomasDelteil commented on issue #10490: [MXNET-313] Python: Resolve some of the undefined names

2018-04-13 Thread GitBox
ThomasDelteil commented on issue #10490: [MXNET-313] Python: Resolve some of 
the undefined names
URL: https://github.com/apache/incubator-mxnet/pull/10490#issuecomment-380877338
 
 
   ```
   C: 28, 0: third party import "import numpy as np" should be placed before 
"import mxnet as mx" (wrong-import-order)
   ```
   
   - [x] could you please create a JIRA ticket for your PR? Thanks!




[GitHub] asitstands commented on issue #10528: [MXNET-316] Remove empty buckets causing index errors

2018-04-13 Thread GitBox
asitstands commented on issue #10528: [MXNET-316] Remove empty buckets causing 
index errors
URL: https://github.com/apache/incubator-mxnet/pull/10528#issuecomment-381066304
 
 
   
https://github.com/apache/incubator-mxnet/blob/master/tests/python/train/test_bucketing.py
 has tests for `BucketSentenceIter`. You can add your test there. To execute 
the test, install nosetests and run `nosetests -v path_to_the_test`.
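For intuition, the bucket-pruning idea behind the PR can be sketched independently of MXNet (names here are hypothetical, not `BucketSentenceIter`'s internals): empty buckets are dropped before batch indices are drawn, so no index ever points at a bucket with no data.

```python
def prune_empty_buckets(buckets, data):
    """Keep only bucket sizes that at least one sentence falls into.

    `buckets` is a sorted list of maximum lengths; `data` is a list of
    tokenized sentences. Returns (kept_buckets, bucketed_data).
    """
    bucketed = [[] for _ in buckets]
    for sent in data:
        for i, size in enumerate(buckets):
            if len(sent) <= size:
                bucketed[i].append(sent)
                break  # sentence goes into the smallest fitting bucket
    kept = [(b, rows) for b, rows in zip(buckets, bucketed) if rows]
    return [b for b, _ in kept], [rows for _, rows in kept]

buckets, rows = prune_empty_buckets([5, 10, 20], [[1, 2], [1] * 8])
assert buckets == [5, 10]                 # the size-20 bucket was empty, so it is removed
assert [len(r) for r in rows] == [1, 1]
```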





[GitHub] harusametime commented on issue #10528: [MXNET-316] Remove empty buckets causing index errors

2018-04-13 Thread GitBox
harusametime commented on issue #10528: [MXNET-316] Remove empty buckets 
causing index errors
URL: https://github.com/apache/incubator-mxnet/pull/10528#issuecomment-381074339
 
 
   @asitstands Thanks. That should be helpful, but I am not sure what exactly 
I have to do for testing. I also need to modify `test_bucketing.py` to check 
whether my code works for empty buckets, which have not been considered in 
`test_bucketing.py`. Then I think what I have to do is:
   - Modify and execute `test_bucketing.py` locally
   - Report the result of the test here?
   - Open a pull request with the new `test_bucketing.py`
   
   Is it correct?




[GitHub] asitstands commented on issue #10528: [MXNET-316] Remove empty buckets causing index errors

2018-04-13 Thread GitBox
asitstands commented on issue #10528: [MXNET-316] Remove empty buckets causing 
index errors
URL: https://github.com/apache/incubator-mxnet/pull/10528#issuecomment-381086368
 
 
   Modify `test_bucketing.py` and test it locally, then push it to your 
repository (the branch of this PR). That will update this PR and rerun all 
tests, including the one you add. To see the test results, click "show all 
checks - details" below.




[GitHub] fevemania opened a new issue #10541: Keep illegal hardware instruction. How can I build mxnet from source without avx support?

2018-04-13 Thread GitBox
fevemania opened a new issue #10541: Keep illegal hardware instruction. How can 
I build mxnet from source without avx support?
URL: https://github.com/apache/incubator-mxnet/issues/10541
 
 
   ## Description
   
   Dear Mxnet community,
   
   The computer I work with at my company has a legacy CPU:
   https://user-images.githubusercontent.com/16337667/38730723-3d830166-3f4a-11e8-9f9b-2f595b9a2b2d.png
   
   I need to study the mxnet-yolo repo, and it says that I must build from 
source to get custom op support.
   
   However, after an afternoon of trying, it still gives me the error message 
below:
   https://user-images.githubusercontent.com/16337667/38730803-8dfe94e8-3f4a-11e8-8338-ab66c199d635.png
   
   I wonder whether that is because the new mxnet-1.2.0 GPU version needs AVX?
   
   Is there any way I could build from source without losing any 3rd-party 
packages on this CPU? Maybe by checking out some stable commit?
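One way to check whether the CPU actually lacks AVX (the usual cause of `illegal hardware instruction` with default build flags) is to parse the `flags` line of `/proc/cpuinfo`; a minimal sketch:

```python
def cpu_flags(cpuinfo_text):
    """Parse the instruction-set flag list from /proc/cpuinfo-style text."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            return set(line.split(":", 1)[1].split())
    return set()

# On Linux: flags = cpu_flags(open("/proc/cpuinfo").read())
sample = "model name : Old CPU\nflags : fpu sse sse2 ssse3 sse4_1\n"
assert "avx" not in cpu_flags(sample)  # no AVX -> compile without -mavx
```

If `avx` is absent from the flag set, the compiler flags (and any prebuilt dependencies) have to stay at SSE-level instructions.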
   
   
   
   




[GitHub] sbodenstein commented on issue #10531: Float16 Support for dot

2018-04-13 Thread GitBox
sbodenstein commented on issue #10531: Float16 Support for dot
URL: 
https://github.com/apache/incubator-mxnet/issues/10531#issuecomment-381082167
 
 
   @rahul003: great, thanks for fixing it!




[GitHub] MccreeZhao opened a new issue #10543: Failed to compile from source when set USE_CPP_PACKAGE = 1, fatal error C1083: unabel to open file: “mxnet-cpp/op.h”: No such file or directory

2018-04-13 Thread GitBox
MccreeZhao opened a new issue #10543: Failed to compile from source when set 
USE_CPP_PACKAGE = 1, fatal error C1083: unabel to open file: “mxnet-cpp/op.h”: 
No such file or directory
URL: https://github.com/apache/incubator-mxnet/issues/10543
 
 
   ## Description
   
   I'm trying to compile mxnet from source code. However, when I set 
USE_CPP_PACKAGE=1, VS gives me 12 instances of the same error, shown below: 
   **\mxnet\cpp-package\example\..\include\mxnet-cpp/optimizer.hpp(37): fatal 
error C1083: unable to open file: “mxnet-cpp/op.h”: No such file or 
directory.** 
   
   According to the build tutorial, op.h should be generated dynamically while 
compiling. Is there another option required besides USE_CPP_PACKAGE?
   
   If I set USE_CPP_PACKAGE=0, the compile finishes successfully and generates 
libmxnet.dll, so I think the configuration of the other dependent libraries is 
fine.
   
   And also, there is a warning when I generate .sln by cmake:
   **CMake Warning at tests/CMakeLists.txt:61 (message):
 Google Test not found**
   
   Below are my settings in cmake:
   
![1](https://user-images.githubusercontent.com/26180075/38744130-8f9b85ee-3f73-11e8-9113-3ff8bcd12dad.PNG)
   
   
![2](https://user-images.githubusercontent.com/26180075/38744139-939c356c-3f73-11e8-8c9f-da05f9bd738c.PNG)
   
   ## Environment info (Required)
   windows 10
   VS2015 
   CMake 3.11.0 (gui)
   OpenCV 3.2
   OpenBlas
   No CUDA No CuDNN
   
   
   ## Build info (Required if built from source)
   
   Compiler : visual studio 2015
   
   ## What have you tried to solve it?
   
   1. I tried to paste an op.h file from someone else's project into my 
mxnet-cpp folder, but when I compile the code in VS2015 the error happens again 
and the build automatically deletes the op.h I put there.
   




[GitHub] sbodenstein opened a new issue #10542: Intel MKL-DNN RNN Support

2018-04-13 Thread GitBox
sbodenstein opened a new issue #10542: Intel MKL-DNN RNN Support
URL: https://github.com/apache/incubator-mxnet/issues/10542
 
 
   MKL-DNN [recently got support for 
RNNs](https://github.com/intel/mkl-dnn/commit/f35779d62a0b3a2e0f6be79a647b1e3acf02129b).
 Are there any plans for adding this as a backend to `mx.sym.RNN`, and what 
sort of timeframe? Thanks!
   
   @zheng-da 




[GitHub] TaoLv commented on issue #10542: Intel MKL-DNN RNN Support

2018-04-13 Thread GitBox
TaoLv commented on issue #10542: Intel MKL-DNN RNN Support
URL: 
https://github.com/apache/incubator-mxnet/issues/10542#issuecomment-381163353
 
 
   @sbodenstein Thanks for asking. It's already on our plan for Q2. Since it's 
still an experimental interface, we will keep tracking the development progress 
of the mkldnn team.
   FYI, we are working intensively on PR #10104 and #10311 to provide a fused 
RNN operator for mxnet before the mkldnn RNN interfaces become mature. 
Hopefully, after mkldnn RNN is integrated into mxnet, the code of #10104 and 
#10311 will still exist as a reference implementation for CPU. @pengzhao-intel




[GitHub] fevemania commented on issue #10541: Keep illegal hardware instruction. How can I build mxnet from source without avx support?

2018-04-13 Thread GitBox
fevemania commented on issue #10541: Keep illegal hardware instruction. How can 
I build mxnet from source without avx support?
URL: 
https://github.com/apache/incubator-mxnet/issues/10541#issuecomment-381161247
 
 
   Hi @marcoabreu,
   
   Here is my detail,
   
   ### Diagnosis
   https://user-images.githubusercontent.com/16337667/38741276-2bcb40f6-3f6c-11e8-8e24-3e611e2d2047.png
   
   ## Build info
   The OS: ubuntu 16.04 with kernel 4.4.0
   The compiler: gcc 5.4.0
   The commit hash of MXNet: fb50257aeb3281d7fc90abc38162edbbbe4a5cb2
   The cuda versions I have tried are cuda 8.0 with cudnn 6.0, and cuda 9.0 
with cudnn 7.0. They are separate in the /usr/local folder, with 
/usr/local/cuda pointing to /usr/local/cuda-9.0.
   https://user-images.githubusercontent.com/16337667/38742100-3377bc56-3f6e-11e8-9597-270d087b2d6a.png
   
   I didn't change any line in config.mk. The command I use to build is simply 
copied from the official docs, as below: 
   `make -j $(nproc) USE_OPENCV=1 USE_BLAS=openblas USE_CUDA=1 
USE_CUDA_PATH=/usr/local/cuda-9.0 USE_CUDNN=1`
   
   The one that passes compilation is cuda 9.0 with cudnn 7.0.
   
   After I run `python setup.py install` in the python folder inside the mxnet 
repo, I get the following message at the end.
   
   Installed 
/home/`usr`/.virtualenvs/mxnet_src/lib/python3.5/site-packages/certifi-2018.1.18-py3.5.egg
   Finished processing dependencies for mxnet==1.2.0
   




[GitHub] marcoabreu commented on issue #10541: Keep illegal hardware instruction. How can I build mxnet from source without avx support?

2018-04-13 Thread GitBox
marcoabreu commented on issue #10541: Keep illegal hardware instruction. How 
can I build mxnet from source without avx support?
URL: 
https://github.com/apache/incubator-mxnet/issues/10541#issuecomment-381137608
 
 
   Hi @fevemania,
   
   could you please provide us with your build configuration?




[GitHub] dabraude commented on issue #10525: Fix NNPACK header file position error

2018-04-13 Thread GitBox
dabraude commented on issue #10525: Fix NNPACK header file position error
URL: https://github.com/apache/incubator-mxnet/pull/10525#issuecomment-381138143
 
 
   I haven't had much spare time to work on it, but I have been looking into it
   on PR #9860




[GitHub] marcoabreu commented on a change in pull request #10537: [MX-307] Add .md tutorials to .ipynb for CI integration

2018-04-13 Thread GitBox
marcoabreu commented on a change in pull request #10537: [MX-307] Add .md 
tutorials to .ipynb for CI integration
URL: https://github.com/apache/incubator-mxnet/pull/10537#discussion_r181393037
 
 

 ##
 File path: tests/nightly/test_tutorial_config.txt
 ##
 @@ -1,20 +1,31 @@
 basic/ndarray
+basic/ndarray_indexing
 basic/symbol
 basic/module
 basic/data
-python/linear-regression
-python/mnist
-python/predict_image
-onnx/super_resolution
-onnx/fine_tuning_gluon
-onnx/inference_on_onnx_model
-basic/ndarray_indexing
-python/matrix_factorization
+gluon/customop
+gluon/data_augmentation
+gluon/datasets
 gluon/ndarray
 gluon/mnist
 gluon/autograd
 gluon/gluon
 gluon/hybrid
+nlp/cnn
+onnx/super_resolution
+onnx/fine_tuning_gluon
+onnx/inference_on_onnx_model
+python/matrix_factorization
+python/linear-regression
+python/mnist
+python/predict_image
+python/data_augmentation
+python/data_augmentation_with_masks
+python/kvstore
+python/types_of_data_augmentation
 sparse/row_sparse
 sparse/csr
-sparse/train
 
 Review comment:
   Yeah, sorry @rahul003. Thomas suggested the same as you, but we need to be 
able to annotate the tests and allow custom testing behaviour - e.g. validation 
methods that are different for each tutorial.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] marcoabreu commented on issue #10445: Make the numpy version compatible with Official Mac OS system python-2.7.10's numpy

2018-04-13 Thread GitBox
marcoabreu commented on issue #10445: Make the numpy version compatible with 
Official Mac OS system python-2.7.10's numpy
URL: https://github.com/apache/incubator-mxnet/pull/10445#issuecomment-381178198
 
 
   Hello Gautam, I checked the error about scipy and tried to produce a fix 
locally. It seems to me that 1.8.2 as the minimum boundary for NumPy was set on 
purpose, and we're not able to lower it because other dependencies rely on it. 
Otherwise, we'd have to downgrade more and more dependencies and could run into 
compatibility problems. I think we should work on other solutions and revisit 
this approach.
   
   Also, numpy's latest version is currently 1.14.2. I don't think it makes 
sense to add support for a version that was released **4 1/2 years** ago.
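   A minimum-version boundary like the 1.8.2 one under discussion can be 
sketched in a few lines of Python. This is a hypothetical helper, not MXNet's 
actual setup.py logic; it compares only the leading numeric components of a 
version string and ignores pre-release suffixes:
   
   ```python
   def parse_version(version):
       """Extract the leading numeric components of a version string.

       E.g. "1.14.2" -> (1, 14, 2); "1.11.0b3" -> (1, 11).
       """
       parts = []
       for piece in version.split("."):
           if piece.isdigit():
               parts.append(int(piece))
           else:
               break  # stop at pre-release/post-release suffixes like "0b3"
       return tuple(parts)


   def meets_minimum(installed, minimum="1.8.2"):
       """Return True if `installed` satisfies the minimum boundary."""
       return parse_version(installed) >= parse_version(minimum)


   print(meets_minimum("1.14.2"))  # -> True
   print(meets_minimum("1.7.1"))   # -> False
   ```
   
   Tuple comparison gives the usual lexicographic ordering, so the exact 
minimum ("1.8.2") also passes the check.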


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] cjolivier01 commented on issue #10445: Make the numpy version compatible with Official Mac OS system python-2.7.10's numpy

2018-04-13 Thread GitBox
cjolivier01 commented on issue #10445: Make the numpy version compatible with 
Official Mac OS system python-2.7.10's numpy
URL: https://github.com/apache/incubator-mxnet/pull/10445#issuecomment-381181179
 
 
   Have people actually complained about this? Which GitHub issues report it?
   Also, what numpy version do the Anaconda 2 distributions include?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] nswamy closed issue #10521: How can I get the output of the net's last layer- symbol.LinearRegressionOutput?

2018-04-13 Thread GitBox
nswamy closed issue #10521: How can I get the output of the net's last layer- 
symbol.LinearRegressionOutput?
URL: https://github.com/apache/incubator-mxnet/issues/10521
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] nswamy closed issue #10516: How to run model trained with python on Scala

2018-04-13 Thread GitBox
nswamy closed issue #10516: How to run model trained with python on Scala
URL: https://github.com/apache/incubator-mxnet/issues/10516
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] marcoabreu commented on issue #10445: Make the numpy version compatible with Official Mac OS system python-2.7.10's numpy

2018-04-13 Thread GitBox
marcoabreu commented on issue #10445: Make the numpy version compatible with 
Official Mac OS system python-2.7.10's numpy
URL: https://github.com/apache/incubator-mxnet/pull/10445#issuecomment-381188614
 
 
   I just tried to install numpy on my Mac. Here are the available versions:
   
   ```
   Python 2.7.14 (default, Mar 10 2018, 00:01:04)
   [GCC 4.2.1 Compatible Apple LLVM 9.0.0 (clang-900.0.39.2)] on darwin
   pip2: 1.3.0, 1.4.1, 1.5.0, 1.5.1, 1.6.0, 1.6.1, 1.6.2, 1.7.0, 1.7.1, 1.7.2, 
1.8.0, 1.8.1, 1.8.2, 1.9.0, 1.9.1, 1.9.2, 1.9.3, 1.10.0.post2, 1.10.1, 1.10.2, 
1.10.4, 1.11.0b3, 1.11.0rc1, 1.11.0rc2, 1.11.0, 1.11.1rc1, 1.11.1, 1.11.2rc1, 
1.11.2, 1.11.3, 1.12.0b1, 1.12.0rc1, 1.12.0rc2, 1.12.0, 1.12.1rc1, 1.12.1, 
1.13.0rc1, 1.13.0rc2, 1.13.0, 1.13.1, 1.13.3, 1.14.0rc1, 1.14.0, 1.14.1, 1.14.2
   ```
   
   
   ```
   Python 3.6.4 (v3.6.4:d48ecebad5, Dec 18 2017, 21:07:28)
   [GCC 4.2.1 (Apple Inc. build 5666) (dot 3)] on darwin
   pip3: 1.3.0, 1.4.1, 1.5.0, 1.5.1, 1.6.0, 1.6.1, 1.6.2, 1.7.0, 1.7.1, 1.7.2, 
1.8.0, 1.8.1, 1.8.2, 1.9.0, 1.9.1, 1.9.2, 1.9.3, 1.10.0.post2, 1.10.1, 1.10.2, 
1.10.4, 1.11.0b3, 1.11.0rc1, 1.11.0rc2, 1.11.0, 1.11.1rc1, 1.11.1, 1.11.2rc1, 
1.11.2, 1.11.3, 1.12.0b1, 1.12.0rc1, 1.12.0rc2, 1.12.0, 1.12.1rc1, 1.12.1, 
1.13.0rc1, 1.13.0rc2, 1.13.0, 1.13.1, 1.13.3, 1.14.0rc1, 1.14.0, 1.14.1, 1.14.2
   ```
   
   This means I don't have the problem of 1.8.0 being the only version I can 
install - 1.14.2 is available for both python environments.
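   
   Picking the newest stable release out of a list like the ones above can be 
done with a small filter. This is a hypothetical snippet (it simply treats any 
version component containing letters - rc, b, post - as unstable):
   
   ```python
   def latest_stable(versions):
       """Return the highest version whose components are purely numeric."""
       stable = [v for v in versions
                 if all(p.isdigit() for p in v.split("."))]
       return max(stable,
                  key=lambda v: tuple(int(p) for p in v.split(".")))


   candidates = ["1.13.3", "1.14.0rc1", "1.14.2", "1.11.0b3", "1.10.0.post2"]
   print(latest_stable(candidates))  # -> 1.14.2
   ```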


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

