[GitHub] [incubator-mxnet] apeforest commented on issue #16023: Revert "Refactor LibraryInitializer so it's thread safe. Fixes random sporadical concurrency crashes."

2019-08-27 Thread GitBox
apeforest commented on issue #16023: Revert "Refactor LibraryInitializer so 
it's thread safe. Fixes random sporadical concurrency crashes."
URL: https://github.com/apache/incubator-mxnet/pull/16023#issuecomment-525594224
 
 
   @marcoabreu Could you please help me re-trigger this PR, or let me know 
how I can do it? Thanks!


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] kshitij12345 commented on a change in pull request #15531: [MXNET-978] Higher Order Gradient Support `arctan`, `arctanh`, `radians`.

2019-08-27 Thread GitBox
kshitij12345 commented on a change in pull request #15531: [MXNET-978] Higher 
Order Gradient Support `arctan`, `arctanh`, `radians`.
URL: https://github.com/apache/incubator-mxnet/pull/15531#discussion_r318387478
 
 

 ##
 File path: src/nnvm/node_op_util.h
 ##
 @@ -0,0 +1,76 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ *  Copyright (c) 2019 by Contributors
+ * \file node_op_util.h
+ * \brief abstraction for commonly nnvm::Node operations.
 
 Review comment:
   Should be
   `\brief abstraction for commonly used nnvm::Node operations.`




[GitHub] [incubator-mxnet] phy12321 removed a comment on issue #13520: Check failed: b < len (21 vs. 21) slicing with begin[0]=21 exceends limit of 21

2019-08-27 Thread GitBox
phy12321 removed a comment on issue #13520: Check failed: b < len (21 vs. 21) 
slicing with begin[0]=21 exceends limit of 21
URL: 
https://github.com/apache/incubator-mxnet/issues/13520#issuecomment-519330892
 
 
Has the issue been resolved for you? I got the same error when I tried to 
print or slice the output of the network. Then I found that not only the 
output of the network could not be printed or sliced; I made an ndarray and it 
could not be printed, either.
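For context on the check in the title: NumPy itself treats `begin == len` as a valid, empty slice, while a strict check of the form `b < len` (as in the error message) rejects it. A minimal NumPy-only sketch of the difference (MXNet's own behavior here is version-dependent):

```python
import numpy as np

a = np.arange(21)

# NumPy permits begin == len(a): the slice is simply empty
assert a[21:].size == 0

# a strict bounds check of the form `b < len` rejects begin == 21;
# this is the condition that fails in the reported error
begin = 21
assert not (begin < len(a))
```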




[GitHub] [incubator-mxnet] YouhuiBai commented on issue #15674: Straggler in latest mxnet when training with distributed parameter server

2019-08-27 Thread GitBox
YouhuiBai commented on issue #15674: Straggler in latest mxnet when training 
with distributed parameter server
URL: 
https://github.com/apache/incubator-mxnet/issues/15674#issuecomment-525548537
 
 
   @apeforest I think it's probably not related to the DataLoader. The pinned 
memory is used at the worker to communicate with the server through the 
parameter server; all the workers use pinned memory except the worker whose 
rank id is zero.




[incubator-mxnet-site] branch asf-site updated: Bump the publish timestamp.

2019-08-27 Thread marcoabreu
This is an automated email from the ASF dual-hosted git repository.

marcoabreu pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git


The following commit(s) were added to refs/heads/asf-site by this push:
 new c378e48  Bump the publish timestamp.
c378e48 is described below

commit c378e487f138a7fdf5be0286cf92fe5a690961e6
Author: mxnet-ci 
AuthorDate: Wed Aug 28 01:29:59 2019 +

Bump the publish timestamp.
---
 date.txt | 1 +
 1 file changed, 1 insertion(+)

diff --git a/date.txt b/date.txt
new file mode 100644
index 000..959f630
--- /dev/null
+++ b/date.txt
@@ -0,0 +1 @@
+Wed Aug 28 01:29:59 UTC 2019



[GitHub] [incubator-mxnet] ZhennanQin commented on a change in pull request #15993: Add a contrib operator for Constant

2019-08-27 Thread GitBox
ZhennanQin commented on a change in pull request #15993: Add a contrib operator 
for Constant
URL: https://github.com/apache/incubator-mxnet/pull/15993#discussion_r318353485
 
 

 ##
 File path: src/operator/contrib/constant-inl.h
 ##
 @@ -0,0 +1,91 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+/*!
+ * Copyright (c) 2018 by Contributors
+ * \file constant-inl.h
+*/
+
+#ifndef MXNET_OPERATOR_CONTRIB_CONSTANT_INL_H_
+#define MXNET_OPERATOR_CONTRIB_CONSTANT_INL_H_
+
+#include 
+#include "../mxnet_op.h"
+#include "../mshadow_op.h"
+#include "../tensor/matrix_op-inl.h"
+
+namespace mxnet {
+namespace op {
+
+struct ConstantParam : public dmlc::Parameter<ConstantParam> {
+  mxnet::Tuple value;
+  int dtype;
+  DMLC_DECLARE_PARAMETER(ConstantParam) {
+DMLC_DECLARE_FIELD(value)
 
 Review comment:
   The description is
   ```
   .describe("The target shape");
   ```
   Let's change this accordingly?




[GitHub] [incubator-mxnet] fierceX opened a new issue #14979: [BUG] Using a package with MKL and GPU versions, using python to open a new process will cause an error

2019-08-27 Thread GitBox
fierceX opened a new issue #14979: [BUG] Using a package with MKL and GPU 
versions, using python to open a new process will cause an error
URL: https://github.com/apache/incubator-mxnet/issues/14979
 
 
   Hardware and version information:
   
   --Python Info--
   Version  : 3.6.8
   Compiler : GCC 7.3.0
   Build: ('default', 'Dec 30 2018 01:22:34')
   Arch : ('64bit', '')
   Pip Info---
   Version  : 19.1.1
   Directory: 
/home/bird/miniconda3/envs/test/lib/python3.6/site-packages/pip
   --MXNet Info---
   Version  : 1.4.1
   Directory: 
/home/bird/miniconda3/envs/test/lib/python3.6/site-packages/mxnet
   Hashtag not found. Not installed from pre-built package.
   --System Info--
   Platform : Linux-4.15.0-50-generic-x86_64-with-debian-buster-sid
   system   : Linux
   node : ctmp
   release  : 4.15.0-50-generic
   version  : #54-Ubuntu SMP Mon May 6 18:46:08 UTC 2019
   --Hardware Info--
   machine  : x86_64
   processor: x86_64
   Architecture:x86_64
   CPU op-mode(s):  32-bit, 64-bit
   Byte Order:  Little Endian
   CPU(s):  8
   On-line CPU(s) list: 0-7
   Thread(s) per core:  2
   Core(s) per socket:  4
   Socket(s):   1
   NUMA node(s):1
   Vendor ID:   GenuineIntel
   CPU family:  6
   Model:   94
   Model name:  Intel(R) Core(TM) i7-6700 CPU @ 3.40GHz
   Stepping:3
   CPU MHz: 800.218
   CPU max MHz: 4000.
   CPU min MHz: 800.
   BogoMIPS:6816.00
   Virtualization:  VT-x
   L1d cache:   32K
   L1i cache:   32K
   L2 cache:256K
   L3 cache:8192K
   NUMA node0 CPU(s):   0-7
   Flags:   fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge 
mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx 
pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl 
xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 
monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 
x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 
3dnowprefetch cpuid_fault epb invpcid_single pti ssbd ibrs ibpb stibp 
tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 hle avx2 smep 
bmi2 erms invpcid rtm mpx rdseed adx smap clflushopt intel_pt xsaveopt xsavec 
xgetbv1 xsaves dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp 
md_clear flush_l1d
   
   Python package version
   ```
   PackageVersion 
   -- 
   certifi2019.3.9
   chardet3.0.4   
   gluonnlp   0.6.0   
   graphviz   0.8.4   
   idna   2.8 
   mxnet-cu100mkl 1.4.1   
   numpy  1.14.6  
   pip19.1.1  
   requests   2.22.0  
   setuptools 41.0.1  
   urllib31.25.2  
   wheel  0.33.4
   ```
   
   In a GPU package with MKL, if you create a new process in Python and use 
multiple processes to load data at the same time, you will get an error.
   
   ```python
   from multiprocessing import Process
   import gluonnlp as nlp
   import numpy as np
   from gluonnlp.data import SQuAD
   from mxnet import nd, gluon
   import mxnet as mx
   from mxnet.gluon import nn

   class Transform(object):
       def __init__(self):
           pass

       def __call__(self, record_index, question_id, question, context,
                    answer_list, answer_start_list):
           return np.ones((100, 1)), np.ones((100, 3))

   def train():
       train_data = SQuAD('train')
       dataloader = gluon.data.DataLoader(train_data.transform(Transform()),
                                          batch_size=128, shuffle=True,
                                          num_workers=4)
       net = nn.HybridSequential()
       net.add(nn.Dense(10))
       net.initialize(mx.init.Xavier(), ctx=mx.gpu(0))
       print(net)

   p = Process(target=train)
   p.start()
   p.join()
   ```
   
   ```
   Segmentation fault: 11
   
   Stack trace returned 10 entries:
   [bt] (0) 
/home/bird/miniconda3/envs/test/lib/python3.6/site-packages/mxnet/libmxnet.so(+0x3f935a)
 [0x7ff39d25735a]
   [bt] (1) 
/home/bird/miniconda3/envs/test/lib/python3.6/site-packages/mxnet/libmxnet.so(+0x3513b36)
 [0x7ff3a0371b36]
   [bt] (2) /lib/x86_64-linux-gnu/libc.so.6(+0x3ef20) [0x7ff3e124ff20]
   [bt] (3) 
/home/bird/miniconda3/envs/test/lib/python3.6/site-packages/mxnet/libiomp5.so(+0xa9ea5)
 [0x7ff3dce09ea5]
   [bt] (4) 
/home/bird/miniconda3/envs/test/lib/python3.6/site-packages/mxnet/libiomp5.so(+0xa9ba4)
 [0x7ff3dce09ba4]
   [bt] (5) 
/home/bird/miniconda3/envs/test/lib/python3.6/site-packages/mxnet/libmxnet.so(+0x2da4d13)
 [0x7ff39fc02d13]
   [bt] (6) 
/home/bird/miniconda3/envs/test/lib/python3.6/site-packages/mxnet/libmxnet.so(+0x2db56c8)
 [0x7ff39fc136c8]
   [bt] (7) 

[GitHub] [incubator-mxnet] pengzhao-intel commented on issue #14979: [BUG] Using a package with MKL and GPU versions, using python to open a new process will cause an error

2019-08-27 Thread GitBox
pengzhao-intel commented on issue #14979: [BUG] Using a package with MKL and 
GPU versions, using python to open a new process will cause an error
URL: 
https://github.com/apache/incubator-mxnet/issues/14979#issuecomment-525537364
 
 
   At the request of @larroy, I am reopening this issue. 




[GitHub] [incubator-mxnet] ZhennanQin commented on a change in pull request #15886: Graph Partition API

2019-08-27 Thread GitBox
ZhennanQin commented on a change in pull request #15886: Graph Partition API
URL: https://github.com/apache/incubator-mxnet/pull/15886#discussion_r318351633
 
 

 ##
 File path: src/c_api/c_api_symbolic.cc
 ##
 @@ -1199,3 +1200,73 @@ int MXShallowCopySymbol(SymbolHandle src, SymbolHandle* 
out) {
   *out = out_sym;
   API_END_HANDLE_ERROR(delete out_sym);
 }
+
+int MXOptimizeForBackend(SymbolHandle sym_handle,
+ const char* backend_name,
+ SymbolHandle* ret_sym_handle,
+ const mx_uint len,
+ NDArrayHandle* in_args_handle,
+ const mx_uint num_options,
+ const char** keys,
+ const char** vals) {
+  nnvm::Symbol *s = new nnvm::Symbol();
+  API_BEGIN();
+  nnvm::Symbol *sym = static_cast<nnvm::Symbol*>(sym_handle);
+  *s = sym->Copy();
+  nnvm::Graph g = Symbol2Graph(*s);
+  if (len) {
+NDArray **in_args_ptr = reinterpret_cast<NDArray**>(in_args_handle);
+Context default_ctx = in_args_ptr[0]->ctx();
 
 Review comment:
   For the mkldnn backend, the correct ctx is important for getting the 
expected result from `InferStorageTypes`, so please consider accepting the 
correct ctx from the user. I expect the API would look like
   ```
   sym = sym.optimize_for('default', ctx=mx.cpu(), args)  # same context as bind
   exe = sym.bind(ctx=mx.cpu(), args=args, aux_states=aux, grad_req='null')
   ```
   




[GitHub] [incubator-mxnet] ZhennanQin commented on issue #15993: Add a contrib operator for Constant

2019-08-27 Thread GitBox
ZhennanQin commented on issue #15993: Add a contrib operator for Constant
URL: https://github.com/apache/incubator-mxnet/pull/15993#issuecomment-525535150
 
 
   I mean: if we do some constant-folding optimization in the graph, we may 
generate a new tensor that is not filled with the same value everywhere. For 
example,
   ```
   scaled_weight = weight * scale
   out = convolution(weight = scaled_weight)
   ```
   In this code, the scale multiply executes many times, once per inference 
iteration. Because weight and scale are both constant, we can fold them before 
inference and use a constant node to hold the folded result in the graph. The 
values held by such a constant node vary, so they can't be generated with a 
simple pattern. 
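The folding described above can be sketched with plain NumPy (the names `weight` and `scale` are illustrative, not MXNet APIs):

```python
import numpy as np

# constant inputs that would otherwise be multiplied on every inference pass
weight = np.arange(12, dtype=np.float32).reshape(3, 4)
scale = np.float32(0.5)

# fold once, ahead of time; the result becomes the constant node's payload
scaled_weight = weight * scale

# each inference pass then reuses the folded tensor instead of recomputing it
assert np.array_equal(scaled_weight, weight * 0.5)
```

Note the folded values are arbitrary data, which is why a fill-style constant (one repeated value) cannot represent them.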




[GitHub] [incubator-mxnet] ZhennanQin commented on a change in pull request #15993: Add a contrib operator for Constant

2019-08-27 Thread GitBox
ZhennanQin commented on a change in pull request #15993: Add a contrib operator 
for Constant
URL: https://github.com/apache/incubator-mxnet/pull/15993#discussion_r318349194
 
 

 ##
 File path: src/operator/contrib/constant.cc
 ##
 @@ -0,0 +1,65 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+/*!
+ * Copyright (c) 2018 by Contributors
+ * \file constant.cc
+*/
+
+#include "./constant-inl.h"
+#include "../tensor/elemwise_binary_op.h"
+#include "../elemwise_op_common.h"
+
+namespace mxnet {
+namespace op {
+
+inline bool ConstantType(const nnvm::NodeAttrs& attrs,
+std::vector<int> *in_attrs,
+std::vector<int> *out_attrs) {
+  CHECK_EQ(out_attrs->size(), 1U);
+  const ConstantParam& param_ = nnvm::get<ConstantParam>(attrs.parsed);
+  TYPE_ASSIGN_CHECK(*out_attrs, 0, param_.dtype);
+  return true;
+}
+
+
+
+DMLC_REGISTER_PARAMETER(ConstantParam);
+NNVM_REGISTER_OP(_contrib_constant)
+.describe(R"code(Creates a constant tensor for a value.
+Example::
+
+  v1 = (1, 2)
+  constant_op = symbol.contrib.constant(value=v1)
+  executor = constant_op.simple_bind(ctx=cpu())
+  executor.forward(is_train=True)
+  executor.outputs
+  [ -1.  2.]
 
 Review comment:
   Here's the most confusing part. I assume v1 is the shape of the constant 
output, but then where is the output value specified? In other words, why is 
the output [-1, 2]?
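For comparison, the semantics the reviewer appears to expect — `value` holding the literal output data — would look like this in NumPy (illustrative only, not the operator under review):

```python
import numpy as np

v1 = (1, 2)
# if `value` carries the data, the constant op's output is just that data
out = np.array(v1, dtype=np.float32)
assert out.tolist() == [1.0, 2.0]
```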




[GitHub] [incubator-mxnet] john- commented on issue #15672: Fix transform call in ImageFolderDataset

2019-08-27 Thread GitBox
john- commented on issue #15672: Fix transform call in ImageFolderDataset
URL: https://github.com/apache/incubator-mxnet/pull/15672#issuecomment-525532358
 
 
   I did the trigger-notification commit and pushed it to my branch. I see it 
listed as a commit in the pull request. Should I do something else to make 
this work?




[GitHub] [incubator-mxnet] apeforest commented on issue #16023: Revert "Refactor LibraryInitializer so it's thread safe. Fixes random sporadical concurrency crashes."

2019-08-27 Thread GitBox
apeforest commented on issue #16023: Revert "Refactor LibraryInitializer so 
it's thread safe. Fixes random sporadical concurrency crashes."
URL: https://github.com/apache/incubator-mxnet/pull/16023#issuecomment-525518593
 
 
   @marcoabreu could you please help merge this PR? Thanks!




[GitHub] [incubator-mxnet] haojin2 opened a new pull request #16024: NumPy-compatible infrastructure on Gluon

2019-08-27 Thread GitBox
haojin2 opened a new pull request #16024: NumPy-compatible infrastructure on 
Gluon
URL: https://github.com/apache/incubator-mxnet/pull/16024
 
 
   ## Description ##
   As title.
   
   ## Checklist ##
   ### Essentials ###
   Please feel free to remove inapplicable items for your PR.
   - [ ] The PR title starts with [MXNET-$JIRA_ID], where $JIRA_ID refers to 
the relevant [JIRA issue](https://issues.apache.org/jira/projects/MXNET/issues) 
created (except PRs with tiny changes)
   - [ ] Changes are complete (i.e. I finished coding on this PR)
   - [ ] All changes have test coverage:
   - Unit tests are added for small changes to verify correctness (e.g. adding 
a new operator)
   - Nightly tests are added for complicated/long-running ones (e.g. changing 
distributed kvstore)
   - Build tests will be added for build configuration changes (e.g. adding a 
new build option with NCCL)
   - [ ] Code is well-documented: 
   - For user-facing API changes, API doc string has been updated. 
   - For new C++ functions in header files, their functionalities and arguments 
are documented. 
   - For new examples, README.md is added to explain what the example does, 
the source of the dataset, expected performance on the test set, and a 
reference to the original paper if applicable
   - Check the API doc at 
http://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html
   - [ ] To the best of my knowledge, examples are either not affected by this 
change, or have been fixed to be compatible with this change
   
   ### Changes ###
   - [x] Changes to gluon for NumPy-compatible experience
   - [ ] Unit tests
   
   ## Comments ##
   




[GitHub] [incubator-mxnet] apeforest commented on issue #15589: [Discussion] 1.6.0 Roadmap

2019-08-27 Thread GitBox
apeforest commented on issue #15589: [Discussion] 1.6.0 Roadmap
URL: 
https://github.com/apache/incubator-mxnet/issues/15589#issuecomment-525490474
 
 
   Thanks to @ChaiBapchya we now have performance comparison data between int32 
and int64: 
https://docs.google.com/spreadsheets/d/1GpdNquQb71Is5B-li99JDuiLeEZd-eSjHIIowzGrwxc/edit#gid=843443107




[incubator-mxnet-site] branch asf-site updated: Bump the publish timestamp.

2019-08-27 Thread marcoabreu

marcoabreu pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git


The following commit(s) were added to refs/heads/asf-site by this push:
 new bbfc2f5  Bump the publish timestamp.
bbfc2f5 is described below

commit bbfc2f54b5e2bdacb11a252d60dab6b1815a699b
Author: mxnet-ci 
AuthorDate: Tue Aug 27 21:26:00 2019 +

Bump the publish timestamp.
---
 date.txt | 1 +
 1 file changed, 1 insertion(+)

diff --git a/date.txt b/date.txt
new file mode 100644
index 000..1b5f952
--- /dev/null
+++ b/date.txt
@@ -0,0 +1 @@
+Tue Aug 27 21:26:00 UTC 2019



[GitHub] [incubator-mxnet] leleamol commented on issue #16021: [CI] openblas build failed in static build

2019-08-27 Thread GitBox
leleamol commented on issue #16021: [CI] openblas build failed in static build
URL: 
https://github.com/apache/incubator-mxnet/issues/16021#issuecomment-525480159
 
 
   @mxnet-label-bot add [CI, Build]




[GitHub] [incubator-mxnet] leleamol commented on issue #16020: Is there any communication between parameter servers?

2019-08-27 Thread GitBox
leleamol commented on issue #16020: Is there any communication between 
parameter servers?
URL: 
https://github.com/apache/incubator-mxnet/issues/16020#issuecomment-525479866
 
 
   @mxnet-label-bot add [Question, Distributed]




[GitHub] [incubator-mxnet] leleamol commented on issue #15997: MxNet triggered Segmentation Fault when using together with Ray or PyTorch

2019-08-27 Thread GitBox
leleamol commented on issue #15997: MxNet triggered Segmentation Fault when 
using together with Ray or PyTorch
URL: 
https://github.com/apache/incubator-mxnet/issues/15997#issuecomment-525479325
 
 
   @mxnet-label-bot add [Bug, Memory]




[GitHub] [incubator-mxnet] leleamol commented on issue #15974: USE_NNPACK build flag not honored.

2019-08-27 Thread GitBox
leleamol commented on issue #15974: USE_NNPACK build flag not honored. 
URL: 
https://github.com/apache/incubator-mxnet/issues/15974#issuecomment-525477873
 
 
   @mxnet-label-bot add [Build]




[GitHub] [incubator-mxnet] larroy edited a comment on issue #16023: Revert "Refactor LibraryInitializer so it's thread safe. Fixes random sporadical concurrency crashes."

2019-08-27 Thread GitBox
larroy edited a comment on issue #16023: Revert "Refactor LibraryInitializer so 
it's thread safe. Fixes random sporadical concurrency crashes."
URL: https://github.com/apache/incubator-mxnet/pull/16023#issuecomment-525472670
 
 
   I think this was a mistake during rebase, sorry about that.




[GitHub] [incubator-mxnet] anirudh2290 commented on a change in pull request #15941: Add Large tensor vector test cases

2019-08-27 Thread GitBox
anirudh2290 commented on a change in pull request #15941: Add Large tensor 
vector test cases
URL: https://github.com/apache/incubator-mxnet/pull/15941#discussion_r318286043
 
 

 ##
 File path: tests/nightly/test_large_array.py
 ##
 @@ -143,27 +144,43 @@ def test_ndarray_random_normal():
 loc_array = nd.random.uniform(shape=(MEDIUM_X, SMALL_Y))
 a = nd.random.normal(loc=loc_array, scale=scale_array,
  shape=(SMALL_X, SMALL_Y))
+a.wait_to_read()
 
 Review comment:
   Won't importing teardown in test_large_array run waitall after each 
nosetest? I think that should be enough.




[GitHub] [incubator-mxnet] larroy commented on issue #16023: Revert "Refactor LibraryInitializer so it's thread safe. Fixes random sporadical concurrency crashes."

2019-08-27 Thread GitBox
larroy commented on issue #16023: Revert "Refactor LibraryInitializer so it's 
thread safe. Fixes random sporadical concurrency crashes."
URL: https://github.com/apache/incubator-mxnet/pull/16023#issuecomment-525472670
 
 
   I think this was a mistake during rebase.




[GitHub] [incubator-mxnet] apeforest opened a new pull request #16023: Revert "Refactor LibraryInitializer so it's thread safe. Fixes random sporadical concurrency crashes."

2019-08-27 Thread GitBox
apeforest opened a new pull request #16023: Revert "Refactor LibraryInitializer 
so it's thread safe. Fixes random sporadical concurrency crashes."
URL: https://github.com/apache/incubator-mxnet/pull/16023
 
 
   Reverts apache/incubator-mxnet#15762
   
   1) The change in CMakeLists.txt has nothing to do with the 
LibraryInitializer refactoring mentioned in the PR description.
   
   2) The same author filed a separate PR 
(https://github.com/apache/incubator-mxnet/pull/15808) for that change, which 
received change requests from the community.
   
   It is, in general, not good practice to sneak unrelated changes into a 
large PR.




[incubator-mxnet] 01/01: Revert "Refactor LibraryInitializer so it's thread safe. Fixes random sporadical concurrency crashes. (#15762)"

2019-08-27 Thread apeforest

apeforest pushed a commit to branch revert-15762-getenv_fixes
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git

commit 8b174e2995c11ab36a6493228ecccdfd0e378cb0
Author: Lin Yuan 
AuthorDate: Tue Aug 27 13:12:15 2019 -0700

Revert "Refactor LibraryInitializer so it's thread safe. Fixes random 
sporadical concurrency crashes. (#15762)"

This reverts commit bfd3bb8972b7e4a9cd8487c4ab6e6583202f3259.
---
 CMakeLists.txt  |  12 +-
 docs/faq/env_var.md |  10 +-
 src/c_api/c_api.cc  |   4 +-
 src/common/library.cc   | 125 
 src/common/library.h|  57 
 src/common/utils.h  |  12 --
 src/engine/threaded_engine_perdevice.cc |   4 +-
 src/initialize.cc   | 247 +++-
 src/initialize.h| 126 
 src/profiler/profiler.h |  15 +-
 10 files changed, 253 insertions(+), 359 deletions(-)

diff --git a/CMakeLists.txt b/CMakeLists.txt
index 976c736..85f302f 100644
--- a/CMakeLists.txt
+++ b/CMakeLists.txt
@@ -24,7 +24,6 @@ mxnet_option(USE_OLDCMAKECUDA "Build with old cmake cuda" 
OFF)
 mxnet_option(USE_NCCL "Use NVidia NCCL with CUDA" OFF)
 mxnet_option(USE_OPENCV   "Build with OpenCV support" ON)
 mxnet_option(USE_OPENMP   "Build with Openmp support" ON)
-mxnet_option(USE_OPENMP_BUNDLED_LLVM "Build with bundled llvm openmp from 
3rdparty" OFF)
 mxnet_option(USE_CUDNN"Build with cudnn support"  ON) # one could 
set CUDNN_ROOT for search path
 mxnet_option(USE_SSE  "Build with x86 SSE instruction support" ON 
IF NOT ARM)
 mxnet_option(USE_F16C "Build with x86 F16C instruction support" 
ON) # autodetects support if ON
@@ -434,11 +433,11 @@ if(USE_OPENMP)
   find_package(OpenMP REQUIRED)
   # This should build on Windows, but there's some problem and I don't have a 
Windows box, so
   # could a Windows user please fix?
-  if(USE_OPENMP_BUNDLED_LLVM AND EXISTS 
${CMAKE_CURRENT_SOURCE_DIR}/3rdparty/openmp/CMakeLists.txt
-  AND SYSTEM_ARCHITECTURE STREQUAL "x86_64"
-  AND NOT MSVC
-  AND NOT CMAKE_CROSSCOMPILING)
-message("Using bundlded LLVM OpenMP")
+  if(EXISTS ${CMAKE_CURRENT_SOURCE_DIR}/3rdparty/openmp/CMakeLists.txt
+ AND SYSTEM_ARCHITECTURE STREQUAL "x86_64"
+ AND NOT MSVC
+ AND NOT CMAKE_CROSSCOMPILING)
+
 # Intel/llvm OpenMP: https://github.com/llvm-mirror/openmp
 set(OPENMP_STANDALONE_BUILD TRUE)
 set(LIBOMP_ENABLE_SHARED TRUE)
@@ -452,7 +451,6 @@ if(USE_OPENMP)
 set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} ${OpenMP_CXX_FLAGS}")
 add_definitions(-DMXNET_USE_OPENMP=1)
   else()
-message("Using platform provided OpenMP")
 if(OPENMP_FOUND)
   set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} ${OpenMP_C_FLAGS}")
   set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} ${OpenMP_CXX_FLAGS}")
diff --git a/docs/faq/env_var.md b/docs/faq/env_var.md
index b33a104..24d62f3 100644
--- a/docs/faq/env_var.md
+++ b/docs/faq/env_var.md
@@ -39,9 +39,6 @@ $env:MXNET_STORAGE_FALLBACK_LOG_VERBOSE=0
 
 ## Set the Number of Threads
 
-* MXNET_OMP_MAX_THREADS
-  - Values: Int ```(default=Number of processors / Number of processors * 2 in 
X86)```
-  - Maximum number of threads to use in individual operators through OpenMP. 
If not set, OMP_NUM_THREADS is considered after.
 * MXNET_GPU_WORKER_NTHREADS
   - Values: Int ```(default=2)```
   - The maximum number of threads to use on each GPU. This parameter is used 
to parallelize the computation within a single GPU card.
@@ -50,7 +47,7 @@ $env:MXNET_STORAGE_FALLBACK_LOG_VERBOSE=0
   - The maximum number of concurrent threads that do the memory copy job on 
each GPU.
 * MXNET_CPU_WORKER_NTHREADS
   - Values: Int ```(default=1)```
-  - The maximum number of scheduling threads on CPU. It specifies how many 
operators can be run in parallel. Note that most CPU operators are parallelized 
by OpenMP. To change the number of threads used by individual operators, please 
set `MXNET_OMP_MAX_THREADS` instead.
+  - The maximum number of scheduling threads on CPU. It specifies how many 
operators can be run in parallel. Note that most CPU operators are parallelized 
by OpenMP. To change the number of threads used by individual operators, please 
set `OMP_NUM_THREADS` instead.
 * MXNET_CPU_PRIORITY_NTHREADS
   - Values: Int ```(default=4)```
   - The number of threads given to prioritized CPU jobs.
@@ -59,13 +56,10 @@ $env:MXNET_STORAGE_FALLBACK_LOG_VERBOSE=0
   - The number of threads used for NNPACK. NNPACK package aims to provide 
high-performance implementations of some layers for multi-core CPUs. Checkout 
[NNPACK](http://mxnet.io/faq/nnpack.html) to know more about it.
 * MXNET_MP_WORKER_NTHREADS
   - Values: Int ```(default=1)```
-  - The number of scheduling threads 

[incubator-mxnet] branch revert-15762-getenv_fixes created (now 8b174e2)

2019-08-27 Thread apeforest
This is an automated email from the ASF dual-hosted git repository.

apeforest pushed a change to branch revert-15762-getenv_fixes
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


  at 8b174e2  Revert "Refactor LibraryInitializer so it's thread safe. 
Fixes random sporadical concurrency crashes. (#15762)"

This branch includes the following new commits:

 new 8b174e2  Revert "Refactor LibraryInitializer so it's thread safe. 
Fixes random sporadical concurrency crashes. (#15762)"

The 1 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.




[incubator-mxnet] branch master updated (0e71fbd -> 8df9469)

2019-08-27 Thread reminisce
This is an automated email from the ASF dual-hosted git repository.

reminisce pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from 0e71fbd  Added tests to verify Large Vector Support for initial set of 
ops  (#15943)
 add 8df9469  Refines NDArray indexing and adds numpy ndarray indexing 
[READY FOR REVIEW] (#15942)

No new revisions were added by this update.

Summary of changes:
 3rdparty/mshadow/mshadow/extension/slice.h  |   4 +-
 python/mxnet/ndarray/ndarray.py | 421 +++
 python/mxnet/ndarray/numpy/_op.py   |  57 +++-
 python/mxnet/numpy/multiarray.py| 423 +++
 python/mxnet/symbol/numpy/_symbol.py|  55 
 python/mxnet/test_utils.py  |   1 -
 src/c_api/c_api.cc  |   2 -
 src/ndarray/ndarray.cc  |   1 +
 src/operator/tensor/indexing_op.cc  |   2 +
 src/operator/tensor/init_op.cc  |   1 +
 src/operator/tensor/matrix_op-inl.h |  28 +-
 src/operator/tensor/matrix_op.cc|   3 +
 tests/python/unittest/test_ndarray.py   |  44 +--
 tests/python/unittest/test_numpy_ndarray.py | 425 
 14 files changed, 1094 insertions(+), 373 deletions(-)



[GitHub] [incubator-mxnet] reminisce merged pull request #15942: Refines NDArray indexing and adds numpy ndarray indexing [READY FOR REVIEW]

2019-08-27 Thread GitBox
reminisce merged pull request #15942: Refines NDArray indexing and adds numpy 
ndarray indexing [READY FOR REVIEW]
URL: https://github.com/apache/incubator-mxnet/pull/15942
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] larroy commented on issue #14979: [BUG] Using a package with MKL and GPU versions, using python to open a new process will cause an error

2019-08-27 Thread GitBox
larroy commented on issue #14979: [BUG] Using a package with MKL and GPU 
versions, using python to open a new process will cause an error
URL: 
https://github.com/apache/incubator-mxnet/issues/14979#issuecomment-525453062
 
 
   Can a committer please reopen this? @pengzhao-intel @TaoLv 




[GitHub] [incubator-mxnet] larroy edited a comment on issue #14979: [BUG] Using a package with MKL and GPU versions, using python to open a new process will cause an error

2019-08-27 Thread GitBox
larroy edited a comment on issue #14979: [BUG] Using a package with MKL and GPU 
versions, using python to open a new process will cause an error
URL: 
https://github.com/apache/incubator-mxnet/issues/14979#issuecomment-525103793
 
 
   I think it might be a bug in the Intel OMP library or some sort of interaction; with GNU OpenMP everything works as expected (I compiled without MKL).
   
   ```
   (py3_venv) piotr@ip-172-31-22-252:0: ~/mxnet_other [master]> python test.py
   parent pid: 42977
   train pid: 43182
   10 9...
   go
   HybridSequential(
 (0): Dense(None -> 10, linear)
   )
   (py3_venv) piotr@ip-172-31-22-252:0: ~/mxnet_other [master]> 
   
   ```
   
   ```
   (py3_venv) piotr@ip-172-31-22-252:0: ~/mxnet_other [master]> ldd 
build/libmxnet.so | grep omp
   libgomp.so.1 => /usr/lib/x86_64-linux-gnu/libgomp.so.1 
(0x7ff7a8b48000)
   ```
   
   It seems mixing OpenMP implementations might not be safe after all. 




[GitHub] [incubator-mxnet] anirudh2290 commented on a change in pull request #15886: Graph Partition API

2019-08-27 Thread GitBox
anirudh2290 commented on a change in pull request #15886: Graph Partition API
URL: https://github.com/apache/incubator-mxnet/pull/15886#discussion_r318260613
 
 

 ##
 File path: python/mxnet/symbol/symbol.py
 ##
 @@ -1437,6 +1437,54 @@ def _gen_atomic_symbol(self):
 return Symbol(handle)
 
 
+def optimize_for(self, backend, args=None, **kwargs):
 
 Review comment:
   Thanks for the partial inference.
   > Would this be acceptable, or can you explain the need for it be available 
immediately?
   
   This is useful even for debugging a smaller hand-created model where you want to provide shape and type dicts.
   
   My question about the reasoning was also because this is kind of an implicit API. As a user, I would prefer an explicit API with a flag such as infer_shape_type, and then provide args or a shape_dict/type_dict.
   




[incubator-mxnet-site] branch asf-site updated: Bump the publish timestamp.

2019-08-27 Thread marcoabreu
This is an automated email from the ASF dual-hosted git repository.

marcoabreu pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git


The following commit(s) were added to refs/heads/asf-site by this push:
 new 482f997  Bump the publish timestamp.
482f997 is described below

commit 482f9977401f879288c35f5b0c1a9b0d5f12f7d8
Author: mxnet-ci 
AuthorDate: Tue Aug 27 19:30:07 2019 +

Bump the publish timestamp.
---
 date.txt | 1 +
 1 file changed, 1 insertion(+)

diff --git a/date.txt b/date.txt
new file mode 100644
index 000..777a653
--- /dev/null
+++ b/date.txt
@@ -0,0 +1 @@
+Tue Aug 27 19:30:07 UTC 2019



[GitHub] [incubator-mxnet] anirudhacharya commented on issue #15981: Disable test coverage of C++ codebase on CI

2019-08-27 Thread GitBox
anirudhacharya commented on issue #15981: Disable test coverage of C++ codebase 
on CI 
URL: https://github.com/apache/incubator-mxnet/pull/15981#issuecomment-525438311
 
 
   @marcoabreu apologies, did not see your review.
   @mxnet-label-bot add [pr-awaiting-merge] remove [pr-awaiting-review]




[GitHub] [incubator-mxnet] sxjscience commented on issue #16001: Low kernel performance

2019-08-27 Thread GitBox
sxjscience commented on issue #16001: Low kernel performance
URL: 
https://github.com/apache/incubator-mxnet/issues/16001#issuecomment-525435126
 
 
   There's no need to uninstall CUDA 10.1. You may install CUDA 9.2 and switch to that version by changing the symlink or editing your .bashrc.
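A hedged sketch of the symlink approach (paths are illustrative, and the demo runs in a temp directory so nothing system-wide is touched; on a real system the link is typically `/usr/local/cuda`):

```python
# Sketch: keep several CUDA toolkits side by side and switch the active one
# by repointing a single symlink, instead of uninstalling/reinstalling.
import os
import tempfile

tmp = tempfile.mkdtemp()
os.makedirs(os.path.join(tmp, "cuda-9.2"))
os.makedirs(os.path.join(tmp, "cuda-10.1"))
link = os.path.join(tmp, "cuda")

os.symlink(os.path.join(tmp, "cuda-10.1"), link)  # currently "on" 10.1
os.remove(link)                                   # repoint rather than uninstall
os.symlink(os.path.join(tmp, "cuda-9.2"), link)   # now "on" 9.2

print(os.readlink(link))
```

The same idea in a shell is `ln -sfn <toolkit dir> /usr/local/cuda`, paired with matching `PATH`/`LD_LIBRARY_PATH` entries in your shell profile.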




[GitHub] [incubator-mxnet] zhreshold commented on a change in pull request #15947: [WIP] add dataset filter API

2019-08-27 Thread GitBox
zhreshold commented on a change in pull request #15947: [WIP] add dataset 
filter API
URL: https://github.com/apache/incubator-mxnet/pull/15947#discussion_r318241075
 
 

 ##
 File path: python/mxnet/gluon/data/dataset.py
 ##
 @@ -40,6 +40,9 @@ def __getitem__(self, idx):
 def __len__(self):
 raise NotImplementedError
 
+def filter(self, filter_fn):
 
 Review comment:
   Many datasets are derived not from SimpleDataset but from `Dataset` itself. Any plan for supporting `filter` in this base class?




[GitHub] [incubator-mxnet] apeforest commented on issue #15674: Straggler in latest mxnet when training with distributed parameter server

2019-08-27 Thread GitBox
apeforest commented on issue #15674: Straggler in latest mxnet when training 
with distributed parameter server
URL: 
https://github.com/apache/incubator-mxnet/issues/15674#issuecomment-525432330
 
 
   I think the reason is that when using CPU pinned memory, the pinned device id defaults to 0. The caller needs to pass in the device id: 
http://mxnet.incubator.apache.org/api/python/gluon/data.html?highlight=cpu%20pinned




[GitHub] [incubator-mxnet] stu1130 commented on issue #15981: Disable test coverage of C++ codebase on CI

2019-08-27 Thread GitBox
stu1130 commented on issue #15981: Disable test coverage of C++ codebase on CI 
URL: https://github.com/apache/incubator-mxnet/pull/15981#issuecomment-525430455
 
 
   @yuxihu Sure




[GitHub] [incubator-mxnet] benhe2011 commented on a change in pull request #15811: [MXNET-891] Support tuple of scales in upsample operator

2019-08-27 Thread GitBox
benhe2011 commented on a change in pull request #15811: [MXNET-891] Support 
tuple of scales in upsample operator
URL: https://github.com/apache/incubator-mxnet/pull/15811#discussion_r318234414
 
 

 ##
 File path: src/operator/nn/upsampling-inl.h
 ##
 @@ -48,16 +48,18 @@ enum UpSamplingMultiInputMode {kConcat, kSum};
 }  // namespace up_enum
 
struct UpSamplingParam : public dmlc::Parameter<UpSamplingParam> {
-  int scale;
+  TShape scale;
   int num_filter;
   int sample_type;
   int num_args;
   int multi_input_mode;
   uint64_t workspace;
   DMLC_DECLARE_PARAMETER(UpSamplingParam) {
 DMLC_DECLARE_FIELD(scale)
-.set_range(1, 1000)
-.describe("Up sampling scale");
+.set_default(TShape())
 
 Review comment:
   Yep, made changes. Not sure if there's any historical or significant reason 
for the 1000 range limit though.




[GitHub] [incubator-mxnet] benhe2011 commented on a change in pull request #15811: [MXNET-891] Support tuple of scales in upsample operator

2019-08-27 Thread GitBox
benhe2011 commented on a change in pull request #15811: [MXNET-891] Support 
tuple of scales in upsample operator
URL: https://github.com/apache/incubator-mxnet/pull/15811#discussion_r318234175
 
 

 ##
 File path: src/operator/nn/upsampling.cc
 ##
 @@ -58,17 +61,19 @@ static bool UpSamplingShape(const nnvm::NodeAttrs& attrs,
 }
   } else {
 CHECK_EQ(in_shape->size(), 2U) << "Input:[data, weight]";
+CHECK_EQ(scale_h, scale_w) <<
+"UpSamplingBilinear: Scale should be the same along all dimensions for 
bilinear upsampling";
 CHECK_EQ(dshape.ndim(), 4U) << \
   "UpSamplingBilinear: Input data should be 4D in (batch, channel, y, x)";
 if (!shape_is_known(dshape)) return false;
-int kernel = 2 * param_.scale - param_.scale % 2;
+int kernel = static_cast<int>(2.0 * scale_h - ::fmod(scale_h, 2));
 
 Review comment:
   Didn't think of that--thanks!
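For reference, the kernel-size formula in the hunk above, `2*scale - scale % 2`, is the usual choice for a bilinear-upsampling deconvolution kernel. A quick illustrative sketch of the values it produces (the helper name is hypothetical, not part of the PR):

```python
def bilinear_kernel_size(scale: int) -> int:
    # Kernel size for bilinear upsampling via deconvolution:
    # even scales give kernel 2*s, odd scales give 2*s - 1.
    return 2 * scale - scale % 2

sizes = {s: bilinear_kernel_size(s) for s in (1, 2, 3, 4)}
# e.g. scale 2 -> kernel 4, scale 3 -> kernel 5
```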




[GitHub] [incubator-mxnet] benhe2011 commented on a change in pull request #15811: [MXNET-891] Support tuple of scales in upsample operator

2019-08-27 Thread GitBox
benhe2011 commented on a change in pull request #15811: [MXNET-891] Support 
tuple of scales in upsample operator
URL: https://github.com/apache/incubator-mxnet/pull/15811#discussion_r318234331
 
 

 ##
 File path: tests/python/unittest/test_operator.py
 ##
 @@ -1597,16 +1605,29 @@ def _init_bilinear(arr, f):
 assert out.shape == data_shape[:2] + target_shape
 
 
+"""
+The test cases include integer, tuple, 
+and empty tuple scales on up to 3 shapes 
+at once with the shapes having various sizes 
+for their heights and widths
+"""
 @with_seed()
 def test_nearest_upsampling():
-for root_scale in [1,2,3]:
-for scale in [1,2,3]:
-for num_shape in [1,2,3]:
-for base in [1,2,3]:
-shapes = 
[(1,3,base*root_scale*scale**(num_shape-1-i),base*root_scale*scale**(num_shape-1-i))
 for i in range(num_shape)]
+for root_scale in [1, 2, (3), (2,3), (3,2), (1,1), (5,1), (2,2), ()]:
 
 Review comment:
   Cool! Made changes.




[GitHub] [incubator-mxnet] benhe2011 commented on a change in pull request #15811: [MXNET-891] Support tuple of scales in upsample operator

2019-08-27 Thread GitBox
benhe2011 commented on a change in pull request #15811: [MXNET-891] Support 
tuple of scales in upsample operator
URL: https://github.com/apache/incubator-mxnet/pull/15811#discussion_r318234414
 
 

 ##
 File path: src/operator/nn/upsampling-inl.h
 ##
 @@ -48,16 +48,18 @@ enum UpSamplingMultiInputMode {kConcat, kSum};
 }  // namespace up_enum
 
struct UpSamplingParam : public dmlc::Parameter<UpSamplingParam> {
-  int scale;
+  TShape scale;
   int num_filter;
   int sample_type;
   int num_args;
   int multi_input_mode;
   uint64_t workspace;
   DMLC_DECLARE_PARAMETER(UpSamplingParam) {
 DMLC_DECLARE_FIELD(scale)
-.set_range(1, 1000)
-.describe("Up sampling scale");
+.set_default(TShape())
 
 Review comment:
   Yep, made changes. Not sure if there's any historical or significant reason 
for the range limits though.




[GitHub] [incubator-mxnet] yuxihu commented on issue #15981: Disable test coverage of C++ codebase on CI

2019-08-27 Thread GitBox
yuxihu commented on issue #15981: Disable test coverage of C++ codebase on CI 
URL: https://github.com/apache/incubator-mxnet/pull/15981#issuecomment-525425978
 
 
   @stu1130 It seems centos-gpu failed to build. I have re-triggered it multiple times. Could you please take a look?




[GitHub] [incubator-mxnet] mahmoodn commented on issue #16001: Low kernel performance

2019-08-27 Thread GitBox
mahmoodn commented on issue #16001: Low kernel performance
URL: 
https://github.com/apache/incubator-mxnet/issues/16001#issuecomment-525426338
 
 
   Do you want me to uninstall CUDA 10.1 and install 9.2 before that `pip` install?
   Or do you mean installing the 1.2.0 binary for 9.2 on my CUDA 10.1?




[GitHub] [incubator-mxnet] ArmageddonKnight opened a new pull request #16022: [MXNET-1421] Added (CuDNN)BatchNorm operator to the list of mirrored operators

2019-08-27 Thread GitBox
ArmageddonKnight opened a new pull request #16022: [MXNET-1421] Added 
(CuDNN)BatchNorm operator to the list of mirrored operators
URL: https://github.com/apache/incubator-mxnet/pull/16022
 
 
   ## Description ##
   Added the (CuDNN)BatchNorm operator to the list of mirrored operators.
   
   ## Checklist ##
   ### Essentials ###
   Please feel free to remove inapplicable items for your PR.
   - [x] The PR title starts with [MXNET-$JIRA_ID], where $JIRA_ID refers to 
the relevant [JIRA issue](https://issues.apache.org/jira/projects/MXNET/issues) 
created (except PRs with tiny changes)
   - [x] Changes are complete (i.e. I finished coding on this PR)
   - [x] Code is well-documented: 
   - For user-facing API changes, API doc string has been updated. 
   - For new C++ functions in header files, their functionalities and arguments 
are documented. 
   - For new examples, README.md is added to explain what the example does, the source of the dataset, expected performance on the test set, and a reference to the original paper if applicable
   - Check the API doc at 
http://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html
   - [x] To my best knowledge, examples are either not affected by this change, or have been fixed to be compatible with this change
   
   ### Changes ###
   - [x] In file `src/executor/graph_executor.cc` method 
`GraphExecutor::InitFullGraph`, removed the restriction on the 
`(CuDNN)BatchNorm` operator.
   - [x] In file `src/operator/cudnn_batch_norm-inl.h` class 
`CuDNNBatchNormOp`, added an internal lock on auxiliary states to prevent 
multiple consecutive updates on the moving mean and variance.
   
   ## Comments ##
   
   I have documented the changes 
[**here**](https://v2.overleaf.com/read/snhdnvhmjrcb). Any comment is welcome. 
Thanks.
   
   FYI, @antinucleon 
   
   S.A. Issue #14383 




[GitHub] [incubator-mxnet] rondogency commented on a change in pull request #15921: [WIP] dynamic custom operator support

2019-08-27 Thread GitBox
rondogency commented on a change in pull request #15921: [WIP] dynamic custom 
operator support
URL: https://github.com/apache/incubator-mxnet/pull/15921#discussion_r318216694
 
 

 ##
 File path: include/mxnet/lib_api.h
 ##
 @@ -302,6 +307,39 @@ extern "C" {
 return retval;
   }
 
+  /*!
+   * \brief returns status of calling InferType function for operator from 
library
+   */
+  int _opCallInferType(inferType_t inferType, const char* const* keys,
+const char* const* vals, int num,
+int* intypes, int num_in, int* outtypes, int num_out) {
+//create map of attributes from list
+std::map<std::string, std::string> attrs;
+for (int i = 0; i < num; i++) {
+  attrs[std::string(keys[i])] = std::string(vals[i]);
+}
+
+// create a vector of types for inputs
+std::vector<int> in_types(num_in);
+for (int i = 0; i < num_in; i++) {
+  in_types[i] = intypes[i];
+}
+
+// create a vector of types for outputs
+std::vector<int> out_types(num_out);
 
 Review comment:
   that's a good point




[GitHub] [incubator-mxnet] rondogency commented on a change in pull request #15921: [WIP] dynamic custom operator support

2019-08-27 Thread GitBox
rondogency commented on a change in pull request #15921: [WIP] dynamic custom 
operator support
URL: https://github.com/apache/incubator-mxnet/pull/15921#discussion_r318215889
 
 

 ##
 File path: src/c_api/c_api.cc
 ##
 @@ -284,6 +287,36 @@ int MXLoadLib(const char *path) {
   return true;
 };
 
+// lambda function to call infer type
+auto infer_type = [=] (const nnvm::NodeAttrs& attrs,
+std::vector<int> *in_type,
+std::vector<int> *out_type) {
+  // convert attributes to vector of char*
+  std::vector<const char*> attr_keys, attr_vals;
+  for (auto kv : attrs.dict) {
+attr_keys.push_back(kv.first.c_str());
+attr_vals.push_back(kv.second.c_str());
+  }
+
+  // copy input types from in_type
+  std::vector<int> intypes(*in_type);
 
 Review comment:
   I think we want to make sure the library author won't modify the input, so 
passing a copy is a better approach




[GitHub] [incubator-mxnet] benhe2011 commented on a change in pull request #15811: [MXNET-891] Support tuple of scales in upsample operator

2019-08-27 Thread GitBox
benhe2011 commented on a change in pull request #15811: [MXNET-891] Support 
tuple of scales in upsample operator
URL: https://github.com/apache/incubator-mxnet/pull/15811#discussion_r318215704
 
 

 ##
 File path: tests/python/unittest/test_operator.py
 ##
 @@ -1597,16 +1605,29 @@ def _init_bilinear(arr, f):
 assert out.shape == data_shape[:2] + target_shape
 
 
+"""
+The test cases include integer, tuple, 
+and empty tuple scales on up to 3 shapes 
+at once with the shapes having various sizes 
+for their heights and widths
+"""
 @with_seed()
 def test_nearest_upsampling():
-for root_scale in [1,2,3]:
-for scale in [1,2,3]:
-for num_shape in [1,2,3]:
-for base in [1,2,3]:
-shapes = 
[(1,3,base*root_scale*scale**(num_shape-1-i),base*root_scale*scale**(num_shape-1-i))
 for i in range(num_shape)]
+for root_scale in [1, 2, (3), (2,3), (3,2), (1,1), (5,1), (2,2), ()]:
+for scale in [1, 2, 3]:
+for num_shape in [1, 2, 3]:
+for base in [1, 2, 3]:
+root_h = root_w = 1
+if type(root_scale) is int:
+root_h = root_w = root_scale
+elif len(root_scale) == 1:
+root_h = root_w = root_scale[0]
+elif len(root_scale) >= 2:
+root_h = root_scale[0]
+root_w = root_scale[1]
+shapes = [(1, 3, base*root_h*scale**(num_shape-1-i), 
base*root_w*scale**(num_shape-1-i)) for i in range(num_shape)]
 check_nearest_upsampling_with_shape(shapes, scale, 
root_scale)
 
-
 
 Review comment:
   I see, changes made.
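The `root_h`/`root_w` normalization repeated in the test diffs above (int, 1-tuple, `(h, w)` tuple, or empty tuple) can be factored into a small helper. A sketch, with a hypothetical helper name that is not part of the PR:

```python
def normalize_scale(root_scale):
    # Accept an int, a 1-tuple, or an (h, w) tuple; fall back to (1, 1)
    # for an empty tuple, mirroring the branches in the test diff above.
    if isinstance(root_scale, int):
        return root_scale, root_scale
    if len(root_scale) >= 2:
        return root_scale[0], root_scale[1]
    if len(root_scale) == 1:
        return root_scale[0], root_scale[0]
    return 1, 1
```

Note that in the test's parameter list, a bare `(3)` is just the int `3` in Python, so the `isinstance` branch covers it.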




[GitHub] [incubator-mxnet] benhe2011 commented on a change in pull request #15811: [MXNET-891] Support tuple of scales in upsample operator

2019-08-27 Thread GitBox
benhe2011 commented on a change in pull request #15811: [MXNET-891] Support 
tuple of scales in upsample operator
URL: https://github.com/apache/incubator-mxnet/pull/15811#discussion_r318215183
 
 

 ##
 File path: tests/python/unittest/test_operator.py
 ##
 @@ -1559,15 +1559,23 @@ def check_deconvolution_forward_with_bias(shape=(1, 
16, 5, 5), num_filter=32, nu
 def check_nearest_upsampling_with_shape(shapes, scale, root_scale):
 arr = {'arg_%d'%i: mx.random.uniform(-10.0, 10.0, shape, 
ctx=mx.cpu()).copyto(default_context()) for i, shape in zip(range(len(shapes)), 
shapes)}
 arr_grad = {'arg_%d'%i: mx.nd.zeros(shape) for i, shape in 
zip(range(len(shapes)), shapes)}
-
 up = mx.sym.UpSampling(*[mx.sym.Variable('arg_%d'%i) for i in 
range(len(shapes))], sample_type='nearest', scale=root_scale)
 exe = up.bind(default_context(), args=arr, args_grad=arr_grad)
 exe.forward(is_train=True)
 exe.backward(exe.outputs)
 for k in range(len(shapes)):
 name = 'arg_%d'%k
-assert_allclose(arr[name].asnumpy()*root_scale**2*scale**(2*k), 
arr_grad[name].asnumpy(), rtol=1e-4)
-
+out = arr_grad[name].asnumpy()
+root_h = root_w = 1
+if type(root_scale) is int:
+root_h = root_w = root_scale
+elif len(root_scale) == 1:
+root_h = root_w = root_scale[0]
+elif len(root_scale) >= 2:
+root_h = root_scale[0]
+root_w = root_scale[1]
+exp = arr[name].asnumpy()*root_h*root_w*scale**(2*k)
 
 Review comment:
   Sure, changes made.
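The expected-gradient expression in the diff above, `arr[name] * root_h * root_w * scale**(2*k)`, follows because nearest-neighbour upsampling copies each input pixel into a `root_h x root_w` block, so the backward pass sums that many gradient contributions per input element. A pure-NumPy sketch of that behaviour (illustrative only, not MXNet's implementation):

```python
import numpy as np

def nearest_upsample(x, sh, sw):
    # Copy each pixel into an sh x sw block.
    return np.repeat(np.repeat(x, sh, axis=0), sw, axis=1)

def nearest_upsample_grad(gy, sh, sw):
    # Backward pass: each input pixel collects the gradients of its sh*sw copies.
    h, w = gy.shape[0] // sh, gy.shape[1] // sw
    return gy.reshape(h, sh, w, sw).sum(axis=(1, 3))

x = np.arange(6.0).reshape(2, 3)
y = nearest_upsample(x, 2, 2)                      # shape (4, 6)
gx = nearest_upsample_grad(np.ones_like(y), 2, 2)
# With an all-ones upstream gradient, every input element receives sh*sw = 4.
```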




[GitHub] [incubator-mxnet] mseth10 commented on issue #15886: Graph Partition API

2019-08-27 Thread GitBox
mseth10 commented on issue #15886: Graph Partition API
URL: https://github.com/apache/incubator-mxnet/pull/15886#issuecomment-525411885
 
 
   Can you please start a review and approve the PR? @ZhennanQin 




[GitHub] [incubator-mxnet] benhe2011 commented on a change in pull request #15811: [MXNET-891] Support tuple of scales in upsample operator

2019-08-27 Thread GitBox
benhe2011 commented on a change in pull request #15811: [MXNET-891] Support 
tuple of scales in upsample operator
URL: https://github.com/apache/incubator-mxnet/pull/15811#discussion_r318214363
 
 

 ##
 File path: src/operator/nn/upsampling.cc
 ##
 @@ -37,13 +37,16 @@ static bool UpSamplingShape(const nnvm::NodeAttrs& attrs,
   CHECK_GE(in_shape->size(), 1U);
  const mxnet::TShape &dshape = (*in_shape)[0];
   mxnet::TShape oshape = dshape;
+  mxnet::TShape scale_hw = scaleComp(param_);
+  int scale_h = scale_hw[0];
+  int scale_w = scale_hw[1];
   if (param_.sample_type == up_enum::kNearest) {
CHECK_EQ(in_shape->size(), static_cast<size_t>(param_.num_args));
 oshape[1] = 0;
 for (auto& shape : *in_shape) {
   CHECK_EQ(shape.ndim(), 4U) << \
 "UpSamplingNearest: Input data should be 4D in (batch, channel, y, x)";
-  int oh = dshape[2]*param_.scale, ow = dshape[3]*param_.scale;
+  int oh = dshape[2]*scale_h, ow = dshape[3]*scale_w;
 
 Review comment:
   Changes made.




[GitHub] [incubator-mxnet] benhe2011 commented on a change in pull request #15811: [MXNET-891] Support tuple of scales in upsample operator

2019-08-27 Thread GitBox
benhe2011 commented on a change in pull request #15811: [MXNET-891] Support 
tuple of scales in upsample operator
URL: https://github.com/apache/incubator-mxnet/pull/15811#discussion_r318213793
 
 

 ##
 File path: src/operator/nn/upsampling-inl.h
 ##
 @@ -84,6 +86,21 @@ struct UpSamplingParam : public 
dmlc::Parameter<UpSamplingParam> {
   }
 };  // struct UpSamplingParam
 
+inline std::vector<int> scaleComp(const UpSamplingParam& param) {
 
 Review comment:
   Changing back to vector after another discussion with @apeforest.




[GitHub] [incubator-mxnet] mseth10 commented on a change in pull request #15886: Graph Partition API

2019-08-27 Thread GitBox
mseth10 commented on a change in pull request #15886: Graph Partition API
URL: https://github.com/apache/incubator-mxnet/pull/15886#discussion_r318211857
 
 

 ##
 File path: python/mxnet/symbol/symbol.py
 ##
 @@ -1437,6 +1437,54 @@ def _gen_atomic_symbol(self):
 return Symbol(handle)
 
 
+def optimize_for(self, backend, args=None, **kwargs):
 
 Review comment:
   Enabled partial inference by removing `HandleInferShapeError` check.




[GitHub] [incubator-mxnet] cjolivier01 closed issue #9545: Profiling discussion

2019-08-27 Thread GitBox
cjolivier01 closed issue #9545: Profiling discussion
URL: https://github.com/apache/incubator-mxnet/issues/9545
 
 
   




[GitHub] [incubator-mxnet] cjolivier01 closed issue #8709: Test prints too many errors: test_kvstore.test_invalid_pull

2019-08-27 Thread GitBox
cjolivier01 closed issue #8709: Test prints too many errors: 
test_kvstore.test_invalid_pull
URL: https://github.com/apache/incubator-mxnet/issues/8709
 
 
   




[GitHub] [incubator-mxnet] vandanavk commented on issue #15993: Add a contrib operator for Constant

2019-08-27 Thread GitBox
vandanavk commented on issue #15993: Add a contrib operator for Constant
URL: https://github.com/apache/incubator-mxnet/pull/15993#issuecomment-525407311
 
 
   > For performance optimization purposes, I think it's better to have `constant` alongside `variable`. Making `constant` an operator is an option, but the question is how to set its value, especially when the value is not a broadcast value.
   
   One of the main reasons for adding this as an operator was ONNX conversion. I tried blockgrad with a constant initializer, but it didn't convert well. 
   Could you give an example for the second part of your question, the one about when the value is not broadcast?




[GitHub] [incubator-mxnet] vandanavk commented on issue #15996: ONNX import/export: Dynamic reshape

2019-08-27 Thread GitBox
vandanavk commented on issue #15996: ONNX import/export: Dynamic reshape
URL: https://github.com/apache/incubator-mxnet/pull/15996#issuecomment-525405079
 
 
   @mxnet-label-bot update [ONNX, pr-awaiting-review]




[incubator-mxnet] branch v1.5.x updated: Revert "Fix a memory misalignment in topk operator" (#15999)

2019-08-27 Thread sxjscience
This is an automated email from the ASF dual-hosted git repository.

sxjscience pushed a commit to branch v1.5.x
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/v1.5.x by this push:
 new 33f4de1  Revert "Fix a memory misalignment in topk operator" (#15999)
33f4de1 is described below

commit 33f4de13d3909fc356ace8ff7a5c9665a651fc63
Author: Lin Yuan 
AuthorDate: Tue Aug 27 10:28:05 2019 -0700

Revert "Fix a memory misalignment in topk operator" (#15999)

* Revert "Fix a memory misalignment in topk operator (#15948)"

This reverts commit 42746bc73e8bcb75bfcadd1398e6f71bc170fa10.
---
 src/operator/tensor/ordering_op-inl.h | 30 +++---
 1 file changed, 15 insertions(+), 15 deletions(-)

diff --git a/src/operator/tensor/ordering_op-inl.h 
b/src/operator/tensor/ordering_op-inl.h
index bd27441..1dda901 100644
--- a/src/operator/tensor/ordering_op-inl.h
+++ b/src/operator/tensor/ordering_op-inl.h
@@ -385,8 +385,8 @@ void TopKImpl(const RunContext &ctx,
   int axis = 0;
   bool do_transpose = false;
   bool is_ascend = false;
-  index_t k = 0;
-  size_t alignment = std::max(sizeof(DType), sizeof(index_t));
+  int k = 0;
+  size_t alignment = std::max(sizeof(DType), sizeof(int));
   mxnet::TShape target_shape;
   ParseTopKParam(src.shape_, param,
                  &target_shape, &batch_size, &element_num, &axis, &k, &do_transpose, &is_ascend);
@@ -395,31 +395,31 @@ void TopKImpl(const RunContext &ctx,
     << "The total element_num is " << element_num << ", but the selected IDType can only represent "
     << mxnet::common::MaxIntegerValue<IDType>() << " elements";
   Tensor<xpu, 3, DType> dat = src.FlatTo3D<xpu, DType>(axis, axis, s);
-  // Temp space needed by the full sorts.
-  size_t temp_size = std::max(
-      mxnet::op::SortByKeyWorkspaceSize<index_t, index_t, xpu>(src.Size()),
-      mxnet::op::SortByKeyWorkspaceSize<index_t, DType, xpu>(src.Size()));
-
+  size_t temp_size = 0;
+  // Temp space needed by the gpu-based full sorts.
+  temp_size = std::max(temp_size,
+    mxnet::op::SortByKeyWorkspaceSize<int, int, xpu>(src.Size()));
   temp_size = std::max(temp_size,
-      mxnet::op::SortByKeyWorkspaceSize<DType, index_t, xpu>(src.Size()));
+    mxnet::op::SortByKeyWorkspaceSize<int, DType, xpu>(src.Size()));
+  temp_size = std::max(temp_size,
+    mxnet::op::SortByKeyWorkspaceSize<DType, int, xpu>(src.Size()));
   // Additional temp space for gpu full sorts for batch ids.
   temp_size += PadBytes(sizeof(int) * src.Size(), alignment);
   // Temp space for cpu sorts.
-  temp_size = std::max(temp_size, sizeof(DType) * src.Size());
-
+  temp_size = std::max(temp_size, static_cast<size_t>(sizeof(DType) * src.Size()));
   size_t workspace_size = temp_size + PadBytes(sizeof(DType) * src.Size(), alignment)
                                     + PadBytes(sizeof(int) * src.Size(), alignment);
   if (param.ret_typ == topk_enum::kReturnMask) {
-    workspace_size += PadBytes(sizeof(index_t) * batch_size * k, alignment);
+    workspace_size += PadBytes(sizeof(int) * batch_size * k, alignment);
   }
   workspace = resource.get_space_typed<xpu, 1, char>(Shape1(workspace_size), s);
   char* workspace_curr_ptr = workspace.dptr_;
   sorted_dat = Tensor<xpu, 1, DType>(reinterpret_cast<DType*>(workspace_curr_ptr),
-                                     Shape1(src.Size()), s);  // contain sorted dat
+                                     Shape1(src.Size()), s);  // contain sorted dat
   workspace_curr_ptr += PadBytes(sizeof(DType) * src.Size(), alignment);
-  indices = Tensor<xpu, 1, index_t>(reinterpret_cast<index_t*>(workspace_curr_ptr),
-                                    Shape1(src.Size()), s);  // indices in the original matrix
-  workspace_curr_ptr += PadBytes(sizeof(index_t) * src.Size(), alignment);
+  indices = Tensor<xpu, 1, int>(reinterpret_cast<int*>(workspace_curr_ptr),
+                                Shape1(src.Size()), s);  // indices in the original matrix
+  workspace_curr_ptr += PadBytes(sizeof(int) * src.Size(), alignment);
 
   if (param.ret_typ == topk_enum::kReturnMask) {
     sel_indices = Tensor<xpu, 1, int>(reinterpret_cast<int*>(workspace_curr_ptr),

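The workspace layout in the hunk above pads each buffer to a shared alignment before carving it out of one allocation. A minimal Python sketch of that rounding rule, assuming `PadBytes` rounds a byte count up to the next multiple of the alignment:

```python
def pad_bytes(num_bytes, alignment):
    # Round num_bytes up to the next multiple of alignment, mirroring
    # (under that assumption) the PadBytes calls in the workspace layout.
    return (num_bytes + alignment - 1) // alignment * alignment

# Each buffer starts at an aligned offset inside one big allocation.
offset = 0
offset += pad_bytes(10, 8)   # e.g. 10 bytes of DType data, padded to 16
offset += pad_bytes(33, 8)   # e.g. 33 bytes of indices, padded to 40
assert offset == 56
```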


[GitHub] [incubator-mxnet] sxjscience merged pull request #15999: Revert "Fix a memory misalignment in topk operator"

2019-08-27 Thread GitBox
sxjscience merged pull request #15999: Revert "Fix a memory misalignment in 
topk operator"
URL: https://github.com/apache/incubator-mxnet/pull/15999
 
 
   




[GitHub] [incubator-mxnet] larroy commented on issue #15997: MxNet triggered Segmentation Fault when using together with Ray or PyTorch

2019-08-27 Thread GitBox
larroy commented on issue #15997: MxNet triggered Segmentation Fault when using 
together with Ray or PyTorch
URL: 
https://github.com/apache/incubator-mxnet/issues/15997#issuecomment-525402399
 
 
   Can you please provide a stack trace with symbols?




[GitHub] [incubator-mxnet] apeforest commented on a change in pull request #15811: [MXNET-891] Support tuple of scales in upsample operator

2019-08-27 Thread GitBox
apeforest commented on a change in pull request #15811: [MXNET-891] Support 
tuple of scales in upsample operator
URL: https://github.com/apache/incubator-mxnet/pull/15811#discussion_r318200332
 
 

 ##
 File path: tests/python/unittest/test_operator.py
 ##
 @@ -1559,15 +1559,23 @@ def check_deconvolution_forward_with_bias(shape=(1, 16, 5, 5), num_filter=32, nu
 def check_nearest_upsampling_with_shape(shapes, scale, root_scale):
     arr = {'arg_%d'%i: mx.random.uniform(-10.0, 10.0, shape, ctx=mx.cpu()).copyto(default_context()) for i, shape in zip(range(len(shapes)), shapes)}
     arr_grad = {'arg_%d'%i: mx.nd.zeros(shape) for i, shape in zip(range(len(shapes)), shapes)}
-
     up = mx.sym.UpSampling(*[mx.sym.Variable('arg_%d'%i) for i in range(len(shapes))], sample_type='nearest', scale=root_scale)
     exe = up.bind(default_context(), args=arr, args_grad=arr_grad)
     exe.forward(is_train=True)
     exe.backward(exe.outputs)
     for k in range(len(shapes)):
         name = 'arg_%d'%k
-        assert_allclose(arr[name].asnumpy()*root_scale**2*scale**(2*k), arr_grad[name].asnumpy(), rtol=1e-4)
-
+        out = arr_grad[name].asnumpy()
+        root_h = root_w = 1
+        if type(root_scale) is int:
+            root_h = root_w = root_scale
+        elif len(root_scale) == 1:
+            root_h = root_w = root_scale[0]
+        elif len(root_scale) >= 2:
+            root_h = root_scale[0]
+            root_w = root_scale[1]
+        exp = arr[name].asnumpy()*root_h*root_w*scale**(2*k)
 
 Review comment:
   nit: space between operators
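   The expected value in this test follows from nearest upsampling copying every input cell into a `root_h x root_w` block, so the backward pass sums `root_h * root_w` output gradients into each input cell. A small NumPy check of that reasoning (illustrative only, not the MXNet operator):

```python
import numpy as np

def upsample_nearest(x, scale_h, scale_w):
    # Copy every input cell into a scale_h x scale_w block.
    return np.repeat(np.repeat(x, scale_h, axis=0), scale_w, axis=1)

x = np.arange(6.0).reshape(2, 3)
y = upsample_nearest(x, 2, 3)
assert y.shape == (4, 9)

# Backward of nearest upsampling sums the output gradient over each block,
# so an all-ones output gradient gives grad(x) == scale_h * scale_w.
gy = np.ones_like(y)
gx = gy.reshape(2, 2, 3, 3).sum(axis=(1, 3))
assert (gx == 2 * 3).all()
```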




[GitHub] [incubator-mxnet] apeforest commented on a change in pull request #15811: [MXNET-891] Support tuple of scales in upsample operator

2019-08-27 Thread GitBox
apeforest commented on a change in pull request #15811: [MXNET-891] Support 
tuple of scales in upsample operator
URL: https://github.com/apache/incubator-mxnet/pull/15811#discussion_r318199733
 
 

 ##
 File path: src/operator/nn/upsampling.cc
 ##
 @@ -58,17 +61,19 @@ static bool UpSamplingShape(const nnvm::NodeAttrs& attrs,
 }
   } else {
     CHECK_EQ(in_shape->size(), 2U) << "Input:[data, weight]";
+    CHECK_EQ(scale_h, scale_w) <<
+      "UpSamplingBilinear: Scale should be the same along all dimensions for bilinear upsampling";
     CHECK_EQ(dshape.ndim(), 4U) << \
       "UpSamplingBilinear: Input data should be 4D in (batch, channel, y, x)";
     if (!shape_is_known(dshape)) return false;
-    int kernel = 2 * param_.scale - param_.scale % 2;
+    int kernel = static_cast<int>(2.0 * scale_h - ::fmod(scale_h, 2));
 
 Review comment:
   Why replace `% 2` with `::fmod`? If it's just mod 2, another trick is `(scale_h & 1)`.
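   For positive integer scales the three spellings in question agree, which can be checked directly (plain Python, purely illustrative):

```python
import math

for scale_h in range(1, 16):
    a = 2 * scale_h - scale_h % 2                    # integer modulo
    b = int(2.0 * scale_h - math.fmod(scale_h, 2))   # fmod, as in the diff
    c = 2 * scale_h - (scale_h & 1)                  # bit trick from the review
    assert a == b == c
```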




[GitHub] [incubator-mxnet] apeforest commented on a change in pull request #15811: [MXNET-891] Support tuple of scales in upsample operator

2019-08-27 Thread GitBox
apeforest commented on a change in pull request #15811: [MXNET-891] Support 
tuple of scales in upsample operator
URL: https://github.com/apache/incubator-mxnet/pull/15811#discussion_r318198645
 
 

 ##
 File path: src/operator/nn/upsampling.cc
 ##
 @@ -37,13 +37,16 @@ static bool UpSamplingShape(const nnvm::NodeAttrs& attrs,
   CHECK_GE(in_shape->size(), 1U);
   const mxnet::TShape &dshape = (*in_shape)[0];
   mxnet::TShape oshape = dshape;
+  mxnet::TShape scale_hw = scaleComp(param_);
+  int scale_h = scale_hw[0];
+  int scale_w = scale_hw[1];
   if (param_.sample_type == up_enum::kNearest) {
     CHECK_EQ(in_shape->size(), static_cast<size_t>(param_.num_args));
     oshape[1] = 0;
     for (auto& shape : *in_shape) {
       CHECK_EQ(shape.ndim(), 4U) << \
         "UpSamplingNearest: Input data should be 4D in (batch, channel, y, x)";
-      int oh = dshape[2]*param_.scale, ow = dshape[3]*param_.scale;
+      int oh = dshape[2]*scale_h, ow = dshape[3]*scale_w;
 
 Review comment:
   add space around *




[GitHub] [incubator-mxnet] apeforest commented on a change in pull request #15811: [MXNET-891] Support tuple of scales in upsample operator

2019-08-27 Thread GitBox
apeforest commented on a change in pull request #15811: [MXNET-891] Support 
tuple of scales in upsample operator
URL: https://github.com/apache/incubator-mxnet/pull/15811#discussion_r317811212
 
 

 ##
 File path: 3rdparty/mshadow/mshadow/extension/spatial_upsampling_nearest.h
 ##
 @@ -24,47 +24,55 @@ struct UpSamplingNearestExp :
   /*! \brief source oprand */
   const SrcExp _;
   /*! \brief up sampling scale */
-  index_t scale_;
+  index_t scale_h_;
 
 Review comment:
   wouldn't it be better to use a mxnet::Shape object, or mxnet::Tuple object 
to store scale_h_ and scale_w_ instead of two integers?




[GitHub] [incubator-mxnet] kshitij12345 commented on issue #15909: [numpy] random.rand

2019-08-27 Thread GitBox
kshitij12345 commented on issue #15909: [numpy] random.rand
URL: https://github.com/apache/incubator-mxnet/pull/15909#issuecomment-525390593
 
 
   @haojin2 
   Sure will do that and update. 
   
   Rebase with master or numpy branch?




[GitHub] [incubator-mxnet] TaoLv commented on issue #16021: [CI] openblas build failed in static build

2019-08-27 Thread GitBox
TaoLv commented on issue #16021: [CI] openblas build failed in static build
URL: 
https://github.com/apache/incubator-mxnet/issues/16021#issuecomment-525313200
 
 
   @zachgk @lanking520 @szha 




[GitHub] [incubator-mxnet] mxnet-label-bot commented on issue #16021: [CI] openblas build failed in static build

2019-08-27 Thread GitBox
mxnet-label-bot commented on issue #16021: [CI] openblas build failed in static 
build
URL: 
https://github.com/apache/incubator-mxnet/issues/16021#issuecomment-525312352
 
 
   Hey, this is the MXNet Label Bot. 
Thank you for submitting the issue! I will try and suggest some labels so 
that the appropriate MXNet community members can help resolve it. 
Here are my recommended label(s): CI, Build




[GitHub] [incubator-mxnet] TaoLv opened a new issue #16021: [CI] openblas build failed in static build

2019-08-27 Thread GitBox
TaoLv opened a new issue #16021: [CI] openblas build failed in static build
URL: https://github.com/apache/incubator-mxnet/issues/16021
 
 
   I noticed that the openblas 0.3.5 build failed in the static build, but the CI still passed.
   
   ```
   ../libopenblasp-r0.3.5.a(cblas_sgemm.o): In function `cblas_sgemm':
   gemm.c:(.text+0x57f): undefined reference to `sgemm_kernel_direct_performant'
   gemm.c:(.text+0x5d8): undefined reference to `sgemm_kernel_direct'
   collect2: error: ld returned 1 exit status
   make[1]: *** [xscblat3] Error 1
   make[1]: *** Waiting for unfinished jobs
   make: *** [tests] Error 2
    make --quiet -j 72 PREFIX=/work/mxnet/staticdeps install
   make[1]: warning: -jN forced in submake: disabling jobserver mode.
   Generating openblas_config.h in /work/mxnet/staticdeps/include
   Generating f77blas.h in /work/mxnet/staticdeps/include
   Generating cblas.h in /work/mxnet/staticdeps/include
   Copying LAPACKE header files to /work/mxnet/staticdeps/include
   Copying the static library to /work/mxnet/staticdeps/lib
   Copying the shared library to /work/mxnet/staticdeps/lib
   install: cannot stat 'libopenblasp-r0.3.5.so': No such file or directory
   make[1]: *** [install] Error 1
   make: *** [install] Error 2
   ```
   For example:
   
http://jenkins.mxnet-ci.amazon-ml.com/blue/organizations/jenkins/mxnet-validation%2Funix-cpu/detail/PR-15983/3/pipeline/297
   




[GitHub] [incubator-mxnet] TaoLv commented on issue #16019: d

2019-08-27 Thread GitBox
TaoLv commented on issue #16019: d
URL: 
https://github.com/apache/incubator-mxnet/issues/16019#issuecomment-525310622
 
 
   Closing since no information was provided.




[GitHub] [incubator-mxnet] TaoLv closed issue #16019: d

2019-08-27 Thread GitBox
TaoLv closed issue #16019: d
URL: https://github.com/apache/incubator-mxnet/issues/16019
 
 
   




[incubator-mxnet-site] branch asf-site updated: Bump the publish timestamp.

2019-08-27 Thread marcoabreu
This is an automated email from the ASF dual-hosted git repository.

marcoabreu pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git


The following commit(s) were added to refs/heads/asf-site by this push:
 new fa3b823  Bump the publish timestamp.
fa3b823 is described below

commit fa3b823c0ba3a87e10569c057306843a44dd2bf4
Author: mxnet-ci 
AuthorDate: Tue Aug 27 13:29:53 2019 +

Bump the publish timestamp.
---
 date.txt | 1 +
 1 file changed, 1 insertion(+)

diff --git a/date.txt b/date.txt
new file mode 100644
index 000..180967e
--- /dev/null
+++ b/date.txt
@@ -0,0 +1 @@
+Tue Aug 27 13:29:53 UTC 2019



[GitHub] [incubator-mxnet] yangwenhuan opened a new issue #16020: Is there any communication between parameter servers?

2019-08-27 Thread GitBox
yangwenhuan opened a new issue #16020: Is there any communication between 
parameter servers?
URL: https://github.com/apache/incubator-mxnet/issues/16020
 
 
   




[GitHub] [incubator-mxnet] fsaina edited a comment on issue #15265: Run pretrained model on android

2019-08-27 Thread GitBox
fsaina edited a comment on issue #15265: Run pretrained model on android
URL: 
https://github.com/apache/incubator-mxnet/issues/15265#issuecomment-525214184
 
 
   @sheep94lion would you please provide some help on how you managed to do that? I would be very grateful if you could share the path you took to get mxnet working on android.




[GitHub] [incubator-mxnet] fsaina commented on issue #15265: Run pretrained model on android

2019-08-27 Thread GitBox
fsaina commented on issue #15265: Run pretrained model on android
URL: 
https://github.com/apache/incubator-mxnet/issues/15265#issuecomment-525214184
 
 
   @sheep94lion would you please provide some help on how you managed to do that? I would be very grateful if you could share the path you took to get mxnet working on android.




[GitHub] [incubator-mxnet] mxnet-label-bot commented on issue #16019: d

2019-08-27 Thread GitBox
mxnet-label-bot commented on issue #16019: d
URL: 
https://github.com/apache/incubator-mxnet/issues/16019#issuecomment-525213776
 
 
   Hey, this is the MXNet Label Bot. 
Thank you for submitting the issue! I will try and suggest some labels so 
that the appropriate MXNet community members can help resolve it. 
Here are my recommended label(s): Test




[GitHub] [incubator-mxnet] zfatgxu opened a new issue #16019: d

2019-08-27 Thread GitBox
zfatgxu opened a new issue #16019: d
URL: https://github.com/apache/incubator-mxnet/issues/16019
 
 
   Note: Providing complete information in the most concise form is the best 
way to get help. This issue template serves as the checklist for essential 
information to most of the technical issues and bug reports. For non-technical 
issues and feature requests, feel free to present the information in what you 
believe is the best form.
   
   For Q & A and discussion, please start a discussion thread at 
https://discuss.mxnet.io 
   
   ## Description
   (Brief description of the problem in no more than 2 sentences.)
   
   ## Environment info (Required)
   
   ```
   What to do:
   1. Download the diagnosis script from 
https://raw.githubusercontent.com/apache/incubator-mxnet/master/tools/diagnose.py
   2. Run the script using `python diagnose.py` and paste its output here.
   
   ```
   
   Package used (Python/R/Scala/Julia):
   (I'm using ...)
   
   For Scala user, please provide:
   1. Java version: (`java -version`)
   2. Maven version: (`mvn -version`)
   3. Scala runtime if applicable: (`scala -version`)
   
   For R user, please provide R `sessionInfo()`:
   
   ## Build info (Required if built from source)
   
   Compiler (gcc/clang/mingw/visual studio):
   
   MXNet commit hash:
   (Paste the output of `git rev-parse HEAD` here.)
   
   Build config:
   (Paste the content of config.mk, or the build command.)
   
   ## Error Message:
   (Paste the complete error message, including stack trace.)
   
   ## Minimum reproducible example
   (If you are using your own code, please provide a short script that 
reproduces the error. Otherwise, please provide link to the existing example.)
   
   ## Steps to reproduce
   (Paste the commands you ran that produced the error.)
   
   1.
   2.
   
   ## What have you tried to solve it?
   
   1.
   2.
   




[GitHub] [incubator-mxnet] zoeygxy commented on a change in pull request #15942: Refines NDArray indexing and adds numpy ndarray indexing [READY FOR REVIEW]

2019-08-27 Thread GitBox
zoeygxy commented on a change in pull request #15942: Refines NDArray indexing 
and adds numpy ndarray indexing [READY FOR REVIEW]
URL: https://github.com/apache/incubator-mxnet/pull/15942#discussion_r317962419
 
 

 ##
 File path: src/operator/tensor/matrix_op-inl.h
 ##
 @@ -973,7 +981,7 @@ inline bool SliceAssignOpShape(const nnvm::NodeAttrs& attrs,
   CHECK_EQ(in_attrs->size(), 2U);
   CHECK_EQ(out_attrs->size(), 1U);
   const mxnet::TShape& dshape = (*in_attrs)[0];
-  if (dshape.ndim() == 0U || dshape.Size() == 0U) return false;
+  if (dshape.ndim() == 0U) return false;
 
 Review comment:
   Fixed. Thx!
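   Dropping the `dshape.Size() == 0U` early-out lets shape inference proceed for zero-size inputs whose rank is known. NumPy accepts the analogous case, which is the behavior the change enables:

```python
import numpy as np

x = np.zeros((0, 3))      # zero-size array, but the rank (ndim) is known
x[0:0, :] = 1.0           # slice-assign over an empty range is a valid no-op
assert x.shape == (0, 3)  # shape information survives the assignment
```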




[GitHub] [incubator-mxnet] zoeygxy commented on a change in pull request #15942: Refines NDArray indexing and adds numpy ndarray indexing [READY FOR REVIEW]

2019-08-27 Thread GitBox
zoeygxy commented on a change in pull request #15942: Refines NDArray indexing 
and adds numpy ndarray indexing [READY FOR REVIEW]
URL: https://github.com/apache/incubator-mxnet/pull/15942#discussion_r317962319
 
 

 ##
 File path: src/operator/tensor/matrix_op-inl.h
 ##
 @@ -1016,7 +1024,11 @@ void SliceAssignOpForward(const nnvm::NodeAttrs& attrs,
   const SliceParam& param = nnvm::get<SliceParam>(attrs.parsed);
   MXNET_NDIM_SWITCH(data.ndim(), ndim, {
     common::StaticArray<index_t, ndim> begin, end, step;
-    GetIndexRange(data.shape_, param.begin, param.end, param.step, &begin, &end, &step);
+    bool non_zero_shape = GetIndexRange(data.shape_, param.begin, param.end, param.step,
 
 Review comment:
   Fixed. Thx!
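   The `non_zero_shape` flag reports whether the requested slice is empty instead of assuming it is not. The same check can be sketched in Python with `slice.indices` (`non_zero_slice` is a hypothetical illustration, not MXNet code):

```python
def non_zero_slice(shape, begin, end, step):
    # Hypothetical illustration: True iff slicing every axis of `shape`
    # with the matching (begin, end, step) yields a non-empty result.
    for dim, b, e, s in zip(shape, begin, end, step):
        start, stop, stride = slice(b, e, s).indices(dim)
        if len(range(start, stop, stride)) == 0:
            return False
    return True
```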




[GitHub] [incubator-mxnet] zoeygxy commented on a change in pull request #15942: Refines NDArray indexing and adds numpy ndarray indexing [READY FOR REVIEW]

2019-08-27 Thread GitBox
zoeygxy commented on a change in pull request #15942: Refines NDArray indexing 
and adds numpy ndarray indexing [READY FOR REVIEW]
URL: https://github.com/apache/incubator-mxnet/pull/15942#discussion_r317962266
 
 

 ##
 File path: tests/python/unittest/test_numpy_ndarray.py
 ##
 @@ -421,208 +468,217 @@ def assert_same(np_array, np_index, mx_array, 
mx_index, mx_value, np_value=None)
 np_index.append(idx)
 np_index = tuple(np_index)
 
-mx_array = np.array(np_array, dtype=np_array.dtype)
-np_array = mx_array.asnumpy()
-indexed_array_shape = np_array[np_index].shape
-np_indexed_array = _np.random.randint(low=-1, high=0, 
size=indexed_array_shape)
-# test value is a numpy array without broadcast
-assert_same(np_array, np_index, mx_array, index, np_indexed_array)
-# test value is an numeric_type
-assert_same(np_array, np_index, mx_array, index, 
_np.random.randint(low=-1, high=0))
-if len(indexed_array_shape) > 1:
-# test ndarray with broadcast
-assert_same(np_array, np_index, mx_array, index,
-_np.random.uniform(low=-1, high=0, 
size=(indexed_array_shape[-1],)))
-# test numpy array with broadcast
-assert_same(np_array, np_index, mx_array, index,
-_np.random.randint(low=-1, high=0, 
size=(indexed_array_shape[-1],)))
-# test list with broadcast
-assert_same(np_array, np_index, mx_array, index,
-[_np.random.randint(low=-1, high=0)] * 
indexed_array_shape[-1])
+mx_array = np.array(np_array, dtype=np_array.dtype)  # mxnet.np.ndarray
+np_array = mx_array.asnumpy()  # native numpy array
+if is_scalar:
+# test value is a numeric type
+assert_same(np_array, np_index, mx_array, index, 
_np.random.randint(low=-1, high=0))
+value_nd = [_np.random.randint(low=-1, high=0)]
+assert_same(np_array, np_index, mx_array, index, value_nd, 
value_nd[0])
+else:
+indexed_array_shape = np_array[np_index].shape
+np_indexed_array = _np.random.randint(low=-1, high=0, 
size=indexed_array_shape)
+# test value is a native numpy array without broadcast
+assert_same(np_array, np_index, mx_array, index, np_indexed_array)
+# test value is a mxnet numpy array without broadcast
+assert_same(np_array, np_index, mx_array, index, 
np.array(np_indexed_array))
+# test value is an numeric_type
+assert_same(np_array, np_index, mx_array, index, 
_np.random.randint(low=-1, high=0))
+if len(indexed_array_shape) > 1:
+np_value = _np.random.randint(low=-1, high=0, 
size=(indexed_array_shape[-1],))
+# test mxnet ndarray with broadcast
+assert_same(np_array, np_index, mx_array, index, 
np.array(np_value))
+# test native numpy array with broadcast
+assert_same(np_array, np_index, mx_array, index, np_value)
+# test list with broadcast
+assert_same(np_array, np_index, mx_array, index,
+[_np.random.randint(low=-1, high=0)] * 
indexed_array_shape[-1])
 
 def test_getitem_autograd(np_array, index):
+"""
+np_array: native numpy array.
+"""
 x = np.array(np_array, dtype=np_array.dtype)
 x.attach_grad()
-with autograd.record():
+with mx.autograd.record():
 y = x[index]
+if not isinstance(y, np.ndarray):
+return
 y.backward()
 value = np.ones_like(y)
 x_grad = np.zeros_like(x)
 x_grad[index] = value
 assert same(x_grad.asnumpy(), x.grad.asnumpy())
 
 def test_setitem_autograd(np_array, index):
+"""
+np_array: native numpy array.
+"""
 x = np.array(np_array, dtype=np_array.dtype)
+if not isinstance(x[index], np.ndarray):
+return  # x[index] is scalar
 out_shape = x[index].shape
 y = np.array(_np.random.uniform(size=out_shape))
 y.attach_grad()
 try:
-with autograd.record():
+with mx.autograd.record():
 x[index] = y
-assert False  # should not reach here
+x.backward()
+y_grad = np.ones_like(y)
+assert same(y_grad.asnumpy(), y.grad.asnumpy())
 except mx.base.MXNetError as err:
 assert str(err).find('Inplace operations (+=, -=, x[:]=, etc) are 
not supported when recording with') != -1
 
-def np_int(index, int_type=_np.int32):
-def convert(num):
-if num is None:
-return num
-else:
-return int_type(num)
-
-if isinstance(index, slice):
-  

[incubator-mxnet-site] branch asf-site updated: Bump the publish timestamp.

2019-08-27 Thread marcoabreu
This is an automated email from the ASF dual-hosted git repository.

marcoabreu pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git


The following commit(s) were added to refs/heads/asf-site by this push:
 new b49d400  Bump the publish timestamp.
b49d400 is described below

commit b49d40025d7e1222645fccd741b3ec6d8d34aa4a
Author: mxnet-ci 
AuthorDate: Tue Aug 27 08:12:26 2019 +

Bump the publish timestamp.
---
 date.txt | 1 +
 1 file changed, 1 insertion(+)

diff --git a/date.txt b/date.txt
new file mode 100644
index 000..1523978
--- /dev/null
+++ b/date.txt
@@ -0,0 +1 @@
+Tue Aug 27 08:12:26 UTC 2019



[GitHub] [incubator-mxnet] pengzhao-intel commented on issue #16017: Add RROIAlign

2019-08-27 Thread GitBox
pengzhao-intel commented on issue #16017: Add RROIAlign
URL: https://github.com/apache/incubator-mxnet/pull/16017#issuecomment-525176792
 
 
   btw, I think you still need to register the CUDA version and error out with a hint like "No implementation" 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] pengzhao-intel commented on issue #16017: Add RROIAlign

2019-08-27 Thread GitBox
pengzhao-intel commented on issue #16017: Add RROIAlign
URL: https://github.com/apache/incubator-mxnet/pull/16017#issuecomment-525175003
 
 
   > > Could you provide some performance data?
   > 
   > Could you give a hint on what kind of performance data can be provided?
   
   A simple way is to compare the performance with and without your omp pragma and see how much speedup your parallelization gives.
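One way to produce such numbers is a minimal timing harness (hypothetical; the two lambdas are stand-ins for running the op built without and with the omp pragma):

```python
import timeit

# Hypothetical benchmarking harness for the comparison suggested above:
# time the same workload in both configurations and report the speedup.
def bench(fn, repeat=5, number=50):
    # min over repeats filters out scheduler noise
    return min(timeit.repeat(fn, repeat=repeat, number=number))

baseline = lambda: sum(i * i for i in range(20_000))   # stand-in: no pragma
optimized = lambda: sum(i * i for i in range(10_000))  # stand-in: with pragma
t_base, t_opt = bench(baseline), bench(optimized)
print(f"speedup: {t_base / t_opt:.2f}x")
```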
   
   




[GitHub] [incubator-mxnet] ElaineBao commented on issue #16017: Add RROIAlign

2019-08-27 Thread GitBox
ElaineBao commented on issue #16017: Add RROIAlign
URL: https://github.com/apache/incubator-mxnet/pull/16017#issuecomment-525171865
 
 
   > Could you provide some performance data?
   
   Could you give a hint on what kind of performance data can be provided?




[GitHub] [incubator-mxnet] pengzhao-intel commented on a change in pull request #16017: Add RROIAlign

2019-08-27 Thread GitBox
pengzhao-intel commented on a change in pull request #16017: Add RROIAlign
URL: https://github.com/apache/incubator-mxnet/pull/16017#discussion_r317919578
 
 

 ##
 File path: src/operator/rroi_align.cc
 ##
 @@ -0,0 +1,313 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * Copyright (c) 2015 by Contributors
+ * \file rroi_align.cc
+ * \brief rroi align operator
+ * \author Yixin Bao
+ * Adapted from Caffe2
+*/
+#include "./rroi_align-inl.h"
+#include 
+#include "math.h"
+
+using std::max;
+using std::min;
+using std::floor;
+using std::ceil;
+
+namespace mxnet {
+namespace op {
+
+template <typename DType>
+struct position_for_bilinear_interpolate {
+  // 4 positions and corresponding weights for
+  // computing bilinear interpolation
+  int pos1, pos2, pos3, pos4;
+  DType w1, w2, w3, w4;
+};
+
+template <typename DType>
+void pre_calc_for_bilinear_interpolate(
+const int height, const int width, const int pooled_height, const int 
pooled_width,
+const int iy_upper, const int ix_upper, DType roi_start_h, DType 
roi_start_w,
+DType bin_size_h, DType bin_size_w, int roi_bin_grid_h, int roi_bin_grid_w,
+DType roi_center_h, DType roi_center_w, DType theta,
+std::vector<position_for_bilinear_interpolate<DType>> *pre_calc) {
+  int pre_calc_index = 0;
+  DType cosTheta = cos(theta);
+  DType sinTheta = sin(theta);
+  for (int ph = 0; ph < pooled_height; ph++) {
 
 Review comment:
   Add parallelization for the loop?




[GitHub] [incubator-mxnet] pengzhao-intel commented on a change in pull request #16017: Add RROIAlign

2019-08-27 Thread GitBox
pengzhao-intel commented on a change in pull request #16017: Add RROIAlign
URL: https://github.com/apache/incubator-mxnet/pull/16017#discussion_r317920058
 
 

 ##
 File path: src/operator/rroi_align.cc
 ##
 @@ -0,0 +1,313 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * Copyright (c) 2015 by Contributors
+ * \file rroi_align.cc
+ * \brief rroi align operator
+ * \author Yixin Bao
+ * Adapted from Caffe2
+*/
+#include "./rroi_align-inl.h"
+#include 
+#include "math.h"
+
+using std::max;
+using std::min;
+using std::floor;
+using std::ceil;
+
+namespace mxnet {
+namespace op {
+
+template <typename DType>
+struct position_for_bilinear_interpolate {
+  // 4 positions and corresponding weights for
+  // computing bilinear interpolation
+  int pos1, pos2, pos3, pos4;
+  DType w1, w2, w3, w4;
+};
+
+template <typename DType>
+void pre_calc_for_bilinear_interpolate(
+const int height, const int width, const int pooled_height, const int 
pooled_width,
+const int iy_upper, const int ix_upper, DType roi_start_h, DType 
roi_start_w,
+DType bin_size_h, DType bin_size_w, int roi_bin_grid_h, int roi_bin_grid_w,
+DType roi_center_h, DType roi_center_w, DType theta,
+std::vector<position_for_bilinear_interpolate<DType>> *pre_calc) {
+  int pre_calc_index = 0;
+  DType cosTheta = cos(theta);
+  DType sinTheta = sin(theta);
+  for (int ph = 0; ph < pooled_height; ph++) {
+for (int pw = 0; pw < pooled_width; pw++) {
+  // calc bin grid position (xx,yy)
+  for (int iy = 0; iy < iy_upper; iy++) {
+const DType yy = roi_start_h + ph * bin_size_h +
+static_cast<DType>(iy + .5f) * bin_size_h /
+static_cast<DType>(roi_bin_grid_h);  // e.g., 0.5, 1.5
+for (int ix = 0; ix < ix_upper; ix++) {
+  const DType xx = roi_start_w + pw * bin_size_w +
+static_cast<DType>(ix + .5f) * bin_size_w /
+static_cast<DType>(roi_bin_grid_w);
+
+  // Rotate by theta around the center and translate
+  DType x = xx * cosTheta + yy * sinTheta + roi_center_w;
+  DType y = yy * cosTheta - xx * sinTheta + roi_center_h;
+
+  // deal with: inverse elements are out of feature map boundary
+  if (y < -1.0 || y > height || x < -1.0 || x > width) {
+// empty
+position_for_bilinear_interpolate<DType> pc;
+pc.pos1 = 0;
+pc.pos2 = 0;
+pc.pos3 = 0;
+pc.pos4 = 0;
+pc.w1 = 0;
+pc.w2 = 0;
+pc.w3 = 0;
+pc.w4 = 0;
+pre_calc->at(pre_calc_index) = pc;
+pre_calc_index += 1;
+continue;
+  }
+  if (y <= 0) {
+y = 0;
+  }
+  if (x <= 0) {
+x = 0;
+  }
+
+  // calc 4 points for interpolation
+  int y_low = static_cast<int>(y);
+  int x_low = static_cast<int>(x);
+  int y_high;
+  int x_high;
+  if (y_low >= height - 1) {
+y_high = y_low = height - 1;
+y = (DType)y_low;
+  } else {
+y_high = y_low + 1;
+  }
+  if (x_low >= width - 1) {
+x_high = x_low = width - 1;
+x = (DType)x_low;
+  } else {
+x_high = x_low + 1;
+  }
+  DType ly = y - y_low;
+  DType lx = x - x_low;
+  DType hy = 1. - ly, hx = 1. - lx;
+  DType w1 = hy * hx, w2 = hy * lx, w3 = ly * hx, w4 = ly * lx;
+
+  // Save weights and indices
+  position_for_bilinear_interpolate<DType> pc;
+  pc.pos1 = y_low * width + x_low;
+  pc.pos2 = y_low * width + x_high;
+  pc.pos3 = y_high * width + x_low;
+  pc.pos4 = y_high * width + x_high;
+  pc.w1 = w1;
+  pc.w2 = w2;
+  pc.w3 = w3;
+  pc.w4 = w4;
+  pre_calc->at(pre_calc_index) = pc;
+
+  pre_calc_index += 1;
+}
+  }
+}
+  }
+}
+
+template <typename DType>
+inline void RROIAlignForward(const OpContext &ctx, const RROIAlignParam &param,
+ const std::vector<TBlob> &in_data, const std::vector<OpReqType> &req,
+ const std::vector<TBlob> &out_data) {
+  // data: [batch_size, c, h, w]
+  const TBlob &data = in_data[rroialign::kData];
+  const TBlob  = 
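The geometry in `pre_calc_for_bilinear_interpolate` above can be sketched in plain Python (a hedged re-derivation of the quoted diff, not the shipped operator): rotate the sampling point (xx, yy) by theta around the RoI center, clamp to the feature map, and compute the 4 bilinear positions and weights.

```python
import math

# Sketch of the per-sample work in pre_calc_for_bilinear_interpolate,
# following the conventions of the quoted C++ diff.
def bilinear_pre_calc(xx, yy, theta, center_w, center_h, height, width):
    # Rotate by theta around the center and translate
    x = xx * math.cos(theta) + yy * math.sin(theta) + center_w
    y = yy * math.cos(theta) - xx * math.sin(theta) + center_h
    if y < -1.0 or y > height or x < -1.0 or x > width:
        return (0, 0, 0, 0), (0.0, 0.0, 0.0, 0.0)  # outside: empty bin
    x, y = max(x, 0.0), max(y, 0.0)
    y_low, x_low = int(y), int(x)
    if y_low >= height - 1:
        y_high = y_low = height - 1
        y = float(y_low)
    else:
        y_high = y_low + 1
    if x_low >= width - 1:
        x_high = x_low = width - 1
        x = float(x_low)
    else:
        x_high = x_low + 1
    ly, lx = y - y_low, x - x_low
    hy, hx = 1.0 - ly, 1.0 - lx
    pos = (y_low * width + x_low, y_low * width + x_high,
           y_high * width + x_low, y_high * width + x_high)
    return pos, (hy * hx, hy * lx, ly * hx, ly * lx)

pos, w = bilinear_pre_calc(1.3, 2.7, 0.1, 0.0, 0.0, 8, 8)
assert abs(sum(w) - 1.0) < 1e-9   # in-bounds weights always sum to 1
```

The all-zero return mirrors the "empty" branch the diff takes for out-of-bounds samples.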

[GitHub] [incubator-mxnet] xiezhq-hermann commented on a change in pull request #15906: [Numpy] Numpy operator diff

2019-08-27 Thread GitBox
xiezhq-hermann commented on a change in pull request #15906: [Numpy] Numpy 
operator diff
URL: https://github.com/apache/incubator-mxnet/pull/15906#discussion_r317920191
 
 

 ##
 File path: src/operator/numpy/np_diff-inl.h
 ##
 @@ -0,0 +1,215 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * \file np_diff-inl.h
+ * \brief Function definition of numpy-compatible diff operator
+ */
+
+#ifndef MXNET_OPERATOR_NUMPY_NP_DIFF_INL_H_
+#define MXNET_OPERATOR_NUMPY_NP_DIFF_INL_H_
+
+#include 
+#include 
+#include 
+#include "../mxnet_op.h"
+#include "../operator_common.h"
+#include "../tensor/broadcast_reduce_op.h"
+
+namespace mxnet {
+namespace op {
+
+struct DiffParam : public dmlc::Parameter<DiffParam> {
+  int n, axis;
+  dmlc::optional<int> prepend;
+  dmlc::optional<int> append;
+  DMLC_DECLARE_PARAMETER(DiffParam) {
+DMLC_DECLARE_FIELD(n).set_default(1).describe(
+"The number of times values are differenced."
+" If zero, the input is returned as-is.");
+DMLC_DECLARE_FIELD(axis).set_default(-1).describe(
+"Axis along which the difference is computed."
+" The default is the last axis.");
+  }
+};
+
+inline void YanghuiTri(std::vector<int>* buffer, int n) {
 
 Review comment:
   The `inline` is finally kept since the function here is really concise and 
no need to be put in .cc




[incubator-mxnet] branch master updated (79ed678 -> 0e71fbd)

2019-08-27 Thread apeforest
This is an automated email from the ASF dual-hosted git repository.

apeforest pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from 79ed678  [Numpy] random.randint() implemented (#15956)
 add 0e71fbd  Added tests to verify Large Vector Support for initial set of 
ops  (#15943)

No new revisions were added by this update.

Summary of changes:
 python/mxnet/test_utils.py  |  11 +++
 src/operator/softmax_output-inl.h   |  18 ++---
 src/operator/tensor/matrix_op-inl.h |   6 +-
 tests/nightly/test_large_array.py   |   8 +--
 tests/nightly/test_large_vector.py  | 139 +++-
 5 files changed, 162 insertions(+), 20 deletions(-)



[GitHub] [incubator-mxnet] zoeygxy commented on a change in pull request #15942: Refines NDArray indexing and adds numpy ndarray indexing [READY FOR REVIEW]

2019-08-27 Thread GitBox
zoeygxy commented on a change in pull request #15942: Refines NDArray indexing 
and adds numpy ndarray indexing [READY FOR REVIEW]
URL: https://github.com/apache/incubator-mxnet/pull/15942#discussion_r317918976
 
 

 ##
 File path: src/operator/tensor/matrix_op-inl.h
 ##
 @@ -1016,7 +1024,11 @@ void SliceAssignOpForward(const nnvm::NodeAttrs& attrs,
   const SliceParam& param = nnvm::get<SliceParam>(attrs.parsed);
   MXNET_NDIM_SWITCH(data.ndim(), ndim, {
common::StaticArray<index_t, ndim> begin, end, step;
-GetIndexRange(data.shape_, param.begin, param.end, param.step, &begin, &end, &step);
+bool non_zero_shape = GetIndexRange(data.shape_, param.begin, param.end, param.step,
 
 Review comment:
   Fixed. Thx!
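The zero-size-slice handling under discussion can be mimicked with Python's own slice semantics (a stand-in sketch; `resolve_index_range` is not MXNet's `GetIndexRange`):

```python
# Resolve (begin, end, step) per axis and report whether the resulting
# slice is empty, mirroring the bool the patched GetIndexRange returns
# for zero-size slices.
def resolve_index_range(shape, slices):
    ranges, non_zero = [], True
    for dim, s in zip(shape, slices):
        start, stop, step = s.indices(dim)   # Python's slice resolution
        ranges.append((start, stop, step))
        non_zero = non_zero and len(range(start, stop, step)) > 0
    return ranges, non_zero

_, nz = resolve_index_range((3, 4), (slice(None), slice(2, 2)))
assert nz is False          # x[:, 2:2] selects zero elements
_, nz = resolve_index_range((3, 4), (slice(None), slice(1, 3)))
assert nz is True
```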




[GitHub] [incubator-mxnet] apeforest merged pull request #15943: Added tests to verify Large Vector Support for initial set of ops

2019-08-27 Thread GitBox
apeforest merged pull request #15943: Added tests to verify Large Vector 
Support for initial set of ops 
URL: https://github.com/apache/incubator-mxnet/pull/15943
 
 
   




[GitHub] [incubator-mxnet] pengzhao-intel commented on issue #16017: Add RROIAlign

2019-08-27 Thread GitBox
pengzhao-intel commented on issue #16017: Add RROIAlign
URL: https://github.com/apache/incubator-mxnet/pull/16017#issuecomment-525164865
 
 
   @wkcn could you help to take a review?
   FYI, the backward will be submitted later.
   




[GitHub] [incubator-mxnet] pengzhao-intel commented on a change in pull request #16017: Add RROIAlign

2019-08-27 Thread GitBox
pengzhao-intel commented on a change in pull request #16017: Add RROIAlign
URL: https://github.com/apache/incubator-mxnet/pull/16017#discussion_r317918182
 
 

 ##
 File path: src/operator/rroi_align-inl.h
 ##
 @@ -0,0 +1,68 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * Copyright (c) 2015 by Contributors
+ * \file rroi_align-inl.h
+ * \brief rroi align operator and symbol
+ * \author Yixin Bao
+ * Adapted from Caffe2
 
 Review comment:
   Add a link to the Caffe2 implementation






[GitHub] [incubator-mxnet] reminisce opened a new pull request #16018: Port ops from np branch

2019-08-27 Thread GitBox
reminisce opened a new pull request #16018: Port ops from np branch
URL: https://github.com/apache/incubator-mxnet/pull/16018
 
 
   ## Description ##
   Ported several ops from numpy branch:
   1. maximum
   2. minimum
   3. clip
   4. argmax
   5. swapaxes
   
   ## Checklist ##
   ### Essentials ###
   Please feel free to remove inapplicable items for your PR.
   - [ ] The PR title starts with [MXNET-$JIRA_ID], where $JIRA_ID refers to 
the relevant [JIRA issue](https://issues.apache.org/jira/projects/MXNET/issues) 
created (except PRs with tiny changes)
   - [x] Changes are complete (i.e. I finished coding on this PR)
   - [x] All changes have test coverage:
   - Unit tests are added for small changes to verify correctness (e.g. adding 
a new operator)
   - Nightly tests are added for complicated/long-running ones (e.g. changing 
distributed kvstore)
   - Build tests will be added for build configuration changes (e.g. adding a 
new build option with NCCL)
   - [x] Code is well-documented: 
   - For user-facing API changes, API doc string has been updated. 
   - For new C++ functions in header files, their functionalities and arguments 
are documented. 
   - For new examples, README.md is added to explain what the example does, 
the source of the dataset, expected performance on test set and reference to 
the original paper if applicable
   - Check the API doc at 
http://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html
   - [x] To the best of my knowledge, examples are either not affected by this change, or have been fixed to be compatible with this change




[GitHub] [incubator-mxnet] ElaineBao opened a new pull request #16017: Add RROIAlign

2019-08-27 Thread GitBox
ElaineBao opened a new pull request #16017: Add RROIAlign
URL: https://github.com/apache/incubator-mxnet/pull/16017
 
 
   ## Description ##
   Add RROIAlign operator and test case.
   Different from ROI Align, RROI Align uses rotated rois, which is suitable 
for text detection.
   Reference paper: Ma, Jianqi, et al. "Arbitrary-Oriented Scene Text Detection 
via Rotation Proposals."
   IEEE Transactions on Multimedia, 2018.
   
   ## Checklist ##
   ### Essentials ###
   Please feel free to remove inapplicable items for your PR.
   - [ ] Changes are complete (i.e. I finished coding on this PR)
   - [ ] All changes have test coverage:
   - Unit tests are added for small changes to verify correctness (e.g. adding 
a new operator)
   - [ ] To the best of my knowledge, examples are either not affected by this change, or have been fixed to be compatible with this change
   
   ### Changes ###
   - [ ] RROIAlign
   - [ ] test case for RROIAlign
   
   ## Comments ##
   - Only forward pass of RROIAlign is implemented.
   




[GitHub] [incubator-mxnet] access2rohit commented on issue #15943: Added tests to verify Large Vector Support for initial set of ops

2019-08-27 Thread GitBox
access2rohit commented on issue #15943: Added tests to verify Large Vector 
Support for initial set of ops 
URL: https://github.com/apache/incubator-mxnet/pull/15943#issuecomment-525160157
 
 
   @mxnet-label-bot add [pr-awaiting-merge]




[GitHub] [incubator-mxnet] tingying2020 opened a new pull request #16016: [numpy] operator ravel, derive from reshape

2019-08-27 Thread GitBox
tingying2020 opened a new pull request #16016: [numpy] operator ravel, derive 
from reshape
URL: https://github.com/apache/incubator-mxnet/pull/16016
 
 
   Numpy operator ravel which is the same as reshape(x, -1).
   
   @haojin2 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] ZHAIXINGZHAIYUE commented on issue #14434: could i set multi kv_store in distribute training program?

2019-08-27 Thread GitBox
ZHAIXINGZHAIYUE commented on issue #14434: could  i set multi kv_store in 
distribute training program?
URL: 
https://github.com/apache/incubator-mxnet/issues/14434#issuecomment-525156954
 
 
   @eric-haibin-lin @zachgk  When update_on_kvstore=False is used, how does MXNet guarantee that the initialized weights are identical across different nodes?
   ```
   def _initialize_kvstore(kvstore, param_arrays, arg_params, param_names, update_on_kvstore):
       """Initialize kvstore"""
       for idx, param_on_devs in enumerate(param_arrays):
           name = param_names[idx]
           kvstore.init(name, arg_params[name])

           if update_on_kvstore:
               kvstore.pull(name, param_on_devs, priority=-idx)
   ```
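The init/pull pattern in the snippet can be illustrated with a toy in-process parameter server — a pure-Python model, not MXNet's actual kvstore — where `init` writes a key once and every `pull` returns a copy of that same stored value:

```python
import copy

# Toy parameter-server model of the init/pull pattern: `init` stores a
# key once on the "server"; each worker's `pull` receives a copy of that
# same value, so all pullers observe identical initial weights.
class ToyKVStore:
    def __init__(self):
        self._store = {}

    def init(self, key, value):
        self._store.setdefault(key, copy.deepcopy(value))  # first write wins

    def pull(self, key):
        return copy.deepcopy(self._store[key])

kv = ToyKVStore()
kv.init('fc_weight', [0.1, 0.2, 0.3])
worker_views = [kv.pull('fc_weight') for _ in range(4)]
assert all(v == [0.1, 0.2, 0.3] for v in worker_views)
```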




[GitHub] [incubator-mxnet] ZHAIXINGZHAIYUE removed a comment on issue #14434: could i set multi kv_store in distribute training program?

2019-08-27 Thread GitBox
ZHAIXINGZHAIYUE removed a comment on issue #14434: could  i set multi kv_store 
in distribute training program?
URL: 
https://github.com/apache/incubator-mxnet/issues/14434#issuecomment-517586293
 
 
   @eric-haibin-lin Also, why can't multiple kvstores of type dist be set in multi-machine training?




[GitHub] [incubator-mxnet] ZHAIXINGZHAIYUE removed a comment on issue #14434: could i set multi kv_store in distribute training program?

2019-08-27 Thread GitBox
ZHAIXINGZHAIYUE removed a comment on issue #14434: could  i set multi kv_store 
in distribute training program?
URL: 
https://github.com/apache/incubator-mxnet/issues/14434#issuecomment-517569552
 
 
   @eric-haibin-lin In theory, does using update_on_kvstore versus not using it have a big impact on distributed training speed? Thanks.




[incubator-mxnet] branch master updated (ab60214 -> 79ed678)

2019-08-27 Thread haoj
This is an automated email from the ASF dual-hosted git repository.

haoj pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from ab60214  Add Median,p50,p99 to python profiler (#15953)
 add 79ed678  [Numpy] random.randint() implemented (#15956)

No new revisions were added by this update.

Summary of changes:
 python/mxnet/ndarray/numpy/random.py   | 66 +-
 python/mxnet/numpy/random.py   | 55 +++-
 python/mxnet/symbol/numpy/random.py| 66 +-
 src/operator/random/sample_op.cc   |  1 +
 tests/python/unittest/test_numpy_op.py | 46 
 5 files changed, 231 insertions(+), 3 deletions(-)



[GitHub] [incubator-mxnet] haojin2 merged pull request #15956: [Numpy] random.randint() implemented

2019-08-27 Thread GitBox
haojin2 merged pull request #15956: [Numpy] random.randint() implemented
URL: https://github.com/apache/incubator-mxnet/pull/15956
 
 
   



