[GitHub] [incubator-mxnet] hgt312 commented on issue #16921: [NumPy][Typo][Doc] Fix typos and docs in MXNet NumPy symbolic interface
hgt312 commented on issue #16921: [NumPy][Typo][Doc] Fix typos and docs in MXNet NumPy symbolic interface
URL: https://github.com/apache/incubator-mxnet/pull/16921#issuecomment-558971853

@mxnet-label-bot add [Numpy]

This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org

With regards,
Apache Git Services
[incubator-mxnet-site] branch asf-site updated: Bump the publish timestamp.
This is an automated email from the ASF dual-hosted git repository.

aaronmarkham pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git

The following commit(s) were added to refs/heads/asf-site by this push:
     new 109ed84  Bump the publish timestamp.
109ed84 is described below

commit 109ed84b937d391b3126e7301909b7dd0740b549
Author: mxnet-ci
AuthorDate: Wed Nov 27 07:33:56 2019 +

    Bump the publish timestamp.
---
 date.txt | 1 +
 1 file changed, 1 insertion(+)

diff --git a/date.txt b/date.txt
new file mode 100644
index 000..3e35fbb
--- /dev/null
+++ b/date.txt
@@ -0,0 +1 @@
+Wed Nov 27 07:33:55 UTC 2019
[incubator-mxnet] branch master updated (8f10d55 -> a98cefc)
This is an automated email from the ASF dual-hosted git repository.

sxjscience pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.

    from 8f10d55  [Numpy] Fix imperative basic indexing in numpy (#16902)
     add a98cefc  [Numpy] Basic indexing in symbolic interface of DeepNumpy (#16621)

No new revisions were added by this update.

Summary of changes:
 python/mxnet/__init__.py                  |   1 +
 python/mxnet/_ctypes/ndarray.py           |   4 +-
 python/mxnet/_ctypes/symbol.py            |   8 +-
 python/mxnet/base.py                      |  18 +++
 python/mxnet/cython/base.pyi              |   2 +
 python/mxnet/cython/ndarray.pyx           |   4 +-
 python/mxnet/cython/symbol.pyx            |  12 +-
 python/mxnet/gluon/block.py               |  21 ++-
 python/mxnet/ndarray/numpy/_op.py         |  18 +--
 python/mxnet/ndarray/register.py          |   7 +-
 python/mxnet/symbol/numpy/_symbol.py      | 212 +++---
 python/mxnet/symbol/register.py           |  11 +-
 python/mxnet/symbol/symbol.py             |   2 +-
 python/mxnet/test_utils.py                |  78 +++
 src/operator/numpy/np_matrix_op.cc        |   4 +-
 tests/python/unittest/test_numpy_gluon.py | 207 -
 16 files changed, 553 insertions(+), 56 deletions(-)
[GitHub] [incubator-mxnet] sxjscience merged pull request #16621: [Numpy] Basic indexing in symbolic interface of DeepNumpy
sxjscience merged pull request #16621: [Numpy] Basic indexing in symbolic interface of DeepNumpy
URL: https://github.com/apache/incubator-mxnet/pull/16621
[GitHub] [incubator-mxnet] sxjscience closed issue #16279: [Bug] Inconsistency between HybridBlock and Block
sxjscience closed issue #16279: [Bug] Inconsistency between HybridBlock and Block
URL: https://github.com/apache/incubator-mxnet/issues/16279
[GitHub] [incubator-mxnet] liuzh91 commented on issue #16918: Configurable log interval for estimator
liuzh91 commented on issue #16918: Configurable log interval for estimator
URL: https://github.com/apache/incubator-mxnet/issues/16918#issuecomment-558956935

> @liuzh91 would you be interested to fix this given you're currently working on the Estimator API?

I can do this.
[GitHub] [incubator-mxnet] hgt312 opened a new pull request #16921: [NumPy][Typo][Doc] Fix typos and docs in MXNet NumPy symbolic interface
hgt312 opened a new pull request #16921: [NumPy][Typo][Doc] Fix typos and docs in MXNet NumPy symbolic interface
URL: https://github.com/apache/incubator-mxnet/pull/16921
[GitHub] [incubator-mxnet] leezu edited a comment on issue #16893: Multi-tensor LAMB
leezu edited a comment on issue #16893: Multi-tensor LAMB
URL: https://github.com/apache/incubator-mxnet/pull/16893#issuecomment-558954600

What's the relation with https://github.com/apache/incubator-mxnet/pull/16715/? #16715 recently added the 'LAMB' optimizer to `python/mxnet/optimizer/optimizer.py`. Your code is currently in conflict. Please resolve the conflict by merging or rebasing on master.
[GitHub] [incubator-mxnet] leezu edited a comment on issue #16893: Multi-tensor LAMB
leezu edited a comment on issue #16893: Multi-tensor LAMB
URL: https://github.com/apache/incubator-mxnet/pull/16893#issuecomment-558954600

What's the relation with https://github.com/apache/incubator-mxnet/pull/16715/? #16715 recently added the 'LAMB' optimizer to `python/mxnet/optimizer/optimizer.py`. Your code is currently in conflict. Please resolve the conflict.
[GitHub] [incubator-mxnet] leezu commented on issue #16893: Multi-tensor LAMB
leezu commented on issue #16893: Multi-tensor LAMB
URL: https://github.com/apache/incubator-mxnet/pull/16893#issuecomment-558954600

What's the relation with https://github.com/apache/incubator-mxnet/pull/16715/? #16715 recently added the 'LAMB' optimizer to `python/mxnet/optimizer/optimizer.py`. Your code is currently in conflict.
[GitHub] [incubator-mxnet] leezu commented on issue #16918: Configurable log interval for estimator
leezu commented on issue #16918: Configurable log interval for estimator
URL: https://github.com/apache/incubator-mxnet/issues/16918#issuecomment-558952259

@liuzh91 would you be interested to fix this given you're currently working on the Estimator API?
[GitHub] [incubator-mxnet] zburning commented on issue #16880: Better to flatten the label array in metric.F1()
zburning commented on issue #16880: Better to flatten the label array in metric.F1()
URL: https://github.com/apache/incubator-mxnet/issues/16880#issuecomment-558943470

@sxjscience Yes, I would like to work on it.
[GitHub] [incubator-mxnet] TaoLv commented on issue #16899: [WIP] Enable MKL-DNN in pip packages
TaoLv commented on issue #16899: [WIP] Enable MKL-DNN in pip packages
URL: https://github.com/apache/incubator-mxnet/pull/16899#issuecomment-558939291

I'm not sure whether backward compatibility matters for convenience binary releases or not. What if downstream projects explicitly call `pip install mxnet-cu90mkl` in their scripts [1]? @pengzhao-intel suggested keeping these *mkl variants till the 2.0 major release [2].

[1] https://github.com/dmlc/gluon-cv/blob/master/tests/py3.yml#L19
[2] https://lists.apache.org/thread.html/87282a60cd73257d3dd4b57401ffe2eabd293d77456d9966b1f91336@%3Cdev.mxnet.apache.org%3E
[GitHub] [incubator-mxnet] sxjscience commented on issue #16880: Better to flatten the label array in metric.F1()
sxjscience commented on issue #16880: Better to flatten the label array in metric.F1()
URL: https://github.com/apache/incubator-mxnet/issues/16880#issuecomment-558930931

@zburning Are you willing to fix this problem?
[GitHub] [incubator-mxnet] cjolivier01 edited a comment on issue #16916: Support for XLA devices
cjolivier01 edited a comment on issue #16916: Support for XLA devices
URL: https://github.com/apache/incubator-mxnet/issues/16916#issuecomment-558918339

As a start, it would be good to simply be able to generate an HloModuleProto protobuf file.
[GitHub] [incubator-mxnet] cjolivier01 commented on issue #16916: Support for XLA devices
cjolivier01 commented on issue #16916: Support for XLA devices
URL: https://github.com/apache/incubator-mxnet/issues/16916#issuecomment-558918339

As a start, it would be good to simply be able to generate an HloModuleProto protobuf file. This in itself would create an object that can be used to execute. Either via plugin (I would be willing to help with such a plugin if a generalized HloModuleProto could be generated) or that could be plugged back into the mxnet engine.
[GitHub] [incubator-mxnet] cjolivier01 edited a comment on issue #16916: Support for XLA devices
cjolivier01 edited a comment on issue #16916: Support for XLA devices
URL: https://github.com/apache/incubator-mxnet/issues/16916#issuecomment-558918339

As a start, it would be good to simply be able to generate an HloModuleProto protobuf file. This in itself would create an object that can be used to execute. Either via plugin (I would be willing to help with such a plugin if a generalized HloModuleProto could be generated) or that could be plugged back into the mxnet engine.
[GitHub] [incubator-mxnet] liuzh91 commented on a change in pull request #16900: introduce gradient update handler to the base estimator
liuzh91 commented on a change in pull request #16900: introduce gradient update handler to the base estimator
URL: https://github.com/apache/incubator-mxnet/pull/16900#discussion_r351081843

## File path: python/mxnet/gluon/contrib/estimator/event_handler.py
##
@@ -130,13 +130,15 @@ class MetricHandler(EpochBegin, BatchEnd):
     ----------
     train_metrics : List of EvalMetrics
         Training metrics to be updated at batch end.
+    priority : scalar
+        Priority level of the MetricHandler

Review comment:
    I have fixed these documentation issues.
[GitHub] [incubator-mxnet] wkcn commented on a change in pull request #16750: [Numpy] add custom op sort
wkcn commented on a change in pull request #16750: [Numpy] add custom op sort
URL: https://github.com/apache/incubator-mxnet/pull/16750#discussion_r351080020

## File path: python/mxnet/numpy_op_fallback.py
##
@@ -49,6 +49,49 @@ def _register_helper(prop_cls):
     return _register_helper
 
+@use_np  # enforce np shape and array semantics for all the methods in this class
+class Sort(operator.CustomOp):
+    """Fallback to NumPy sort operator."""
+    def __init__(self, axis, kind):
+        super(Sort, self).__init__()
+        self._axis = axis
+        self._kind = kind
+
+    def forward(self, is_train, req, in_data, out_data, aux):
+        out = np.sort(in_data[0].asnumpy(), self._axis, self._kind)
+        self.assign(out_data[0], req[0], _mx_np.array(out, dtype=out.dtype, ctx=out_data[0].ctx))
+
+    def backward(self, req, out_grad, in_data, out_data, in_grad, aux):
+        raise NotImplementedError('Operator sort does not support gradient computation')

Review comment:
    I think it is suitable to use numpy.argsort to get the indices in forward.
[GitHub] [incubator-mxnet] wkcn commented on a change in pull request #16750: [Numpy] add custom op sort
wkcn commented on a change in pull request #16750: [Numpy] add custom op sort
URL: https://github.com/apache/incubator-mxnet/pull/16750#discussion_r351080020

## File path: python/mxnet/numpy_op_fallback.py
##
@@ -49,6 +49,49 @@ def _register_helper(prop_cls):
     return _register_helper
 
+@use_np  # enforce np shape and array semantics for all the methods in this class
+class Sort(operator.CustomOp):
+    """Fallback to NumPy sort operator."""
+    def __init__(self, axis, kind):
+        super(Sort, self).__init__()
+        self._axis = axis
+        self._kind = kind
+
+    def forward(self, is_train, req, in_data, out_data, aux):
+        out = np.sort(in_data[0].asnumpy(), self._axis, self._kind)
+        self.assign(out_data[0], req[0], _mx_np.array(out, dtype=out.dtype, ctx=out_data[0].ctx))
+
+    def backward(self, req, out_grad, in_data, out_data, in_grad, aux):
+        raise NotImplementedError('Operator sort does not support gradient computation')

Review comment:
    I think it is suitable to use numpy.argsort to get the index in forward.
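For context, here is a runnable sketch of what the reviewer's suggestion could look like, in plain NumPy and outside mxnet (the class and method names below are illustrative stand-ins for the PR's CustomOp, not the actual PR code): cache the `numpy.argsort` indices in `forward`, then use them in `backward` to route each output gradient back to the input position it came from.

```python
import numpy as np

class SortWithGrad:
    """Sketch of the review suggestion: cache argsort indices in forward
    so backward can scatter gradients back to the original positions.
    Plain-NumPy stand-in for an mxnet CustomOp; names are illustrative."""

    def __init__(self, axis=-1, kind='quicksort'):
        self._axis = axis
        self._kind = kind
        self._idx = None  # filled by forward, consumed by backward

    def forward(self, x):
        # argsort gives, for each output slot, the input index it came from.
        self._idx = np.argsort(x, axis=self._axis, kind=self._kind)
        return np.take_along_axis(x, self._idx, axis=self._axis)

    def backward(self, out_grad):
        # Scatter each output gradient back to the input position it came from.
        in_grad = np.empty_like(out_grad)
        np.put_along_axis(in_grad, self._idx, out_grad, axis=self._axis)
        return in_grad
```

Since sort is a pure permutation, its gradient is just the inverse permutation applied to the output gradient, which is exactly what the cached indices provide.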
[GitHub] [incubator-mxnet] leezu commented on a change in pull request #16900: introduce gradient update handler to the base estimator
leezu commented on a change in pull request #16900: introduce gradient update handler to the base estimator
URL: https://github.com/apache/incubator-mxnet/pull/16900#discussion_r351075164

## File path: python/mxnet/gluon/contrib/estimator/event_handler.py
##
@@ -706,3 +714,29 @@ def train_end(self, estimator, *args, **kwargs):
             estimator.logger.info('[Epoch %d] EarlyStoppingHanlder: '
                                   'early stopping due to %s not improving',
                                   self.stopped_epoch, self.monitor.get()[0])
+
+class GradientUpdateHandler(BatchEnd):
+    """Gradient Update Handler that apply gradients on network weights
+
+    :py:class:`GradientUpdateHandler` takes the priority level. It updates weight parameters
+    at the end of each batch
+
+    Parameters
+    ----------
+    priority : scalar, default -np.Inf

Review comment:
    Should an integer be used instead of `-np.Inf`? Maybe there are some use-cases where a handler should be run prior to the gradient handler?
[GitHub] [incubator-mxnet] leezu commented on a change in pull request #16900: introduce gradient update handler to the base estimator
leezu commented on a change in pull request #16900: introduce gradient update handler to the base estimator
URL: https://github.com/apache/incubator-mxnet/pull/16900#discussion_r351074857

## File path: python/mxnet/gluon/contrib/estimator/event_handler.py
##
@@ -130,13 +130,15 @@ class MetricHandler(EpochBegin, BatchEnd):
     ----------
     train_metrics : List of EvalMetrics
         Training metrics to be updated at batch end.
+    priority : scalar
+        Priority level of the MetricHandler

Review comment:
    Can you explain that the priority is used for ascending sorting of the handlers?
[GitHub] [incubator-mxnet] leezu commented on a change in pull request #16900: introduce gradient update handler to the base estimator
leezu commented on a change in pull request #16900: introduce gradient update handler to the base estimator
URL: https://github.com/apache/incubator-mxnet/pull/16900#discussion_r351074958

## File path: python/mxnet/gluon/contrib/estimator/event_handler.py
##
@@ -235,14 +240,17 @@ class LoggingHandler(TrainBegin, TrainEnd, EpochBegin, EpochEnd, BatchBegin, Bat
         Training metrics to be logged, logged at batch end, epoch end, train end.
     val_metrics : list of EvalMetrics
         Validation metrics to be logged, logged at epoch end, train end.
+    priority : scalar, default np.Inf
+        Priority level of the LoggingHandler

Review comment:
    Explanation of the priority level would be helpful for users.
[GitHub] [incubator-mxnet] leezu commented on a change in pull request #16900: introduce gradient update handler to the base estimator
leezu commented on a change in pull request #16900: introduce gradient update handler to the base estimator
URL: https://github.com/apache/incubator-mxnet/pull/16900#discussion_r351074874

## File path: python/mxnet/gluon/contrib/estimator/event_handler.py
##
@@ -176,14 +178,17 @@ class ValidationHandler(TrainBegin, BatchEnd, EpochEnd):
     batch_period : int, default None
         How often to run validation at batch end, by default
         :py:class:`ValidationHandler` does not validate at batch end.
+    priority: scalar, default -1000
+        Priority level of the ValidataionHandler

Review comment:
    Typo
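As context for these review comments, a minimal sketch of how priority-based handler ordering could work, assuming the handlers are sorted ascending by a `priority` attribute (the class and variable names below are illustrative, not the actual estimator implementation): a handler with priority `-np.inf` runs first at a given event, one with `np.inf` runs last, and the defaults quoted above would place gradient updates before validation and logging.

```python
import numpy as np

# Illustrative sketch: order event handlers for one event (e.g. batch end)
# by an ascending `priority` attribute. Names/values mirror the defaults
# quoted in the review, but this is not the real estimator code.

class Handler:
    def __init__(self, name, priority):
        self.name = name
        self.priority = priority

handlers = [
    Handler('LoggingHandler', np.inf),          # log after everything else
    Handler('ValidationHandler', -1000),
    Handler('GradientUpdateHandler', -np.inf),  # apply gradients first
]

# Ascending sort: the most negative priority runs first.
ordered = sorted(handlers, key=lambda h: h.priority)
print([h.name for h in ordered])
# ['GradientUpdateHandler', 'ValidationHandler', 'LoggingHandler']
```

This also illustrates leezu's concern: with `-np.inf` as the gradient handler's priority, no other handler can be scheduled before it, whereas a finite integer would leave room.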
[incubator-mxnet] 01/01: wip
This is an automated email from the ASF dual-hosted git repository.

iblis pushed a commit to branch ib/autograd-custom-func
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git

commit 9a9d5663d2c586df6d4a9190cbee668813e46b9d
Author: Iblis Lin
AuthorDate: Wed Nov 27 10:34:45 2019 +0800

    wip
---
 julia/src/MXNet.jl    |  14 +++
 julia/src/autograd.jl | 236 --
 julia/src/base.jl     |   1 +
 3 files changed, 243 insertions(+), 8 deletions(-)

diff --git a/julia/src/MXNet.jl b/julia/src/MXNet.jl
index 89ec88b..6a01fb0 100644
--- a/julia/src/MXNet.jl
+++ b/julia/src/MXNet.jl
@@ -64,6 +64,20 @@ export NDArray,
        broadcast_axis,
        broadcast_axes
 
+# autograd.jl
+export attach_grad!,
+       backward!,
+       getgrad,
+       is_recording,
+       is_training,
+       mark_variables,
+       pause,
+       predict_mode,
+       record,
+       symbol,
+       train_mode,
+       @custom
+
 # executor.jl
 export Executor,
        bind,

diff --git a/julia/src/autograd.jl b/julia/src/autograd.jl
index 8b5edae..3a32c08 100644
--- a/julia/src/autograd.jl
+++ b/julia/src/autograd.jl
@@ -19,6 +19,9 @@
 # this is a port of Python's autograd module
 # https://github.com/apache/incubator-mxnet/blob/master/python/mxnet/autograd.py
 
+using Base.Meta: isexpr
+using Base.GC  # FIXME
+
 ###
 # Private util functions
 ###
@@ -211,7 +214,7 @@
 Compute the gradients of heads w.r.t previously marked variables.
 
 - `head::NDArray`: output NDArray
 
-- `head_grad::NDArray` or `Cvoid`: gradient coefficient with respect to head.
+- `head_grad::NDArray` or `Nothing`: gradient coefficient with respect to head.
 
 - `heads::Vector{NDArray}`: a list of output NDArray
@@ -227,11 +230,14 @@
 backward!(head::NDArray, head_grad::NDArray; kws...) =
   backward!([head], [head_grad]; kws...)
 
-backward!(head::NDArray, head_grad::Cvoid = nothing; kws...) =
+backward!(head::NDArray, head_grad::Nothing = nothing; kws...) =
   backward!([head], head_grad; kws...)
 
-function backward!(heads::VecOfNDArray, head_grad::Cvoid;
+function backward!(heads::VecOfNDArray, ::Nothing;
                    retain_graph::Bool = false, train_mode::Bool = true)
+  cblist_ref = first(keys(_cblists))
+
+  # TODO check MXAutogradBackwardEx usage
   @mxcall(
     :MXAutogradBackwardEx,
     (MX_uint,
@@ -242,8 +248,8 @@ function backward!(heads::VecOfNDArray, head_grad::Cvoid;
      Cint,
      Cint,
      Cint,
-     Ptr{MX_handle},
-     Ptr{MX_handle}),
+     Ptr{Ptr{MX_handle}},
+     Ptr{Ptr{Cint}}),
     length(heads),
     map(x -> x.handle, heads),
    C_NULL,
@@ -279,8 +285,8 @@ function backward!(heads::VecOfNDArray, head_grads::Vector;
      Cint,
      Cint,
      Cint,
-     Ptr{MX_handle},
-     Ptr{MX_handle}),
+     Ptr{Ptr{MX_handle}},
+     Ptr{Ptr{Cint}}),
     length(output_handles),
     output_handles,
     ograd_handles,
@@ -400,5 +406,219 @@ function symbol(x::NDArray)
 end
 
 ###
-# TODO: User-defined differentiable function
+# User-defined differentiable function
 ###
+
+
+# gc-free holder
+const _cbs_r  = [Ref{Ptr{Cvoid}}(C_NULL), Ref{Ptr{Cvoid}}(C_NULL)]
+const _cbs    = [Ptr{Cvoid}(C_NULL), Ptr{Cvoid}(C_NULL)]
+const _cbsref = Ref{Ptr{Ptr{Cvoid}}}(C_NULL)
+const _frefs  = Dict()  # hold custom function instance and its args
+const _conds  = []
+
+function _back_wrapper(num_ograds, num_igrads, ptrs, reqs, is_train, fptr::Ptr{Cvoid})
+  # @info "_back_wrapper"
+  # hdls = unsafe_wrap(Array, ptrs, num_ograds + num_igrads)
+  # @info "_back_wrapper" hdls
+  # ograds = map(x -> NDArray(MX_NDArrayHandle(x), false), hdls[1:num_ograds])
+  # @info "_back_wrapper" ograds
+  # igrads = map(NDArray ∘ MX_NDArrayHandle, hdls[num_ograds+1:num_ograds+num_igrads])
+  # @info "_back_wrapper" igrads
+  # reqs = unsafe_wrap(Array, reqs, num_igrads)
+  # @info "_back_wrapper" reqs
+  #
+  # # passing closure via raw pointer
+  # f = unsafe_pointer_to_objref(fptr)
+  #
+  # Δs = backward!(f, ograds...)
+  # Δs = Δs isa NDArray ? [Δs] : Δs
+  #
+  # # update gradient
+  # for (i, Δ, req) ∈ zip(igrads, Δs, reqs)
+  #   req = GRAD_REQ(req)
+  #   if req == GRAD_NOP
+  #     continue
+  #   elseif req ∈ (GRAD_WRITE, GRAD_INPLACE)
+  #     i[:] = Δ
+  #   elseif req == GRAD_ADD
+  #     i[:] += Δ
+  #   end
+  # end
+  #
+  # # release ref for gc
+  # delete!(_frefs, f)
+
+  Cint(true)
+end
+
+function _back_wrapper(num_ograds, num_igrads, ptrs, reqs, is_train, handle)
+  ccall(:uv_async_send, Cint, (Ptr{Cvoid},), handle)
+end
+
+function _del_wrapper(handle)
+  ccall(:uv_async_send, Cint, (Ptr{Cvoid},), han
[incubator-mxnet] branch ib/autograd-custom-func created (now 9a9d566)
This is an automated email from the ASF dual-hosted git repository.

iblis pushed a change to branch ib/autograd-custom-func
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.

      at 9a9d566  wip

This branch includes the following new commits:

     new 9a9d566  wip

The 1 revisions listed above as "new" are entirely new to this repository and will be described in separate emails. The revisions listed as "add" were already present in the repository and have only been added to this reference.
[GitHub] [incubator-mxnet] szha edited a comment on issue #14883: [Discussion] Overhead in MXNet Execution
szha edited a comment on issue #14883: [Discussion] Overhead in MXNet Execution
URL: https://github.com/apache/incubator-mxnet/issues/14883#issuecomment-558896605

One idea that gained some popularity after discussion is to introduce an engine-less mode to MXNet, in which the operators are exposed in API and dispatched in a similar way as pytorch. Given that Naive Engine option should be quite close to this already, we should verify the overhead in the Naive Engine mode and judge the necessity based on that result. Since the target is to measure the overhead, we will want to control the performance difference in operators.
[GitHub] [incubator-mxnet] szha commented on issue #14883: [Discussion] Overhead in MXNet Execution
szha commented on issue #14883: [Discussion] Overhead in MXNet Execution
URL: https://github.com/apache/incubator-mxnet/issues/14883#issuecomment-558896605

One idea that gained some popularity after discussion is to introduce an engine-less mode to MXNet, in which the operators are exposed in API and dispatched in a similar way as pytorch. Given that Naive Engine option should be quite close to this already, we should verify the overhead in the Naive Engine mode and judge the necessity based on that result.
[GitHub] [incubator-mxnet] pengzhao-intel commented on issue #16916: Support for XLA devices
pengzhao-intel commented on issue #16916: Support for XLA devices
URL: https://github.com/apache/incubator-mxnet/issues/16916#issuecomment-558889568

Good idea! We've been evaluating the possibility of XLA recently :)
[GitHub] [incubator-mxnet] rongzha1 edited a comment on issue #16891: Upgrading MKLDNN to 1.0 causes performance regression.
rongzha1 edited a comment on issue #16891: Upgrading MKLDNN to 1.0 causes performance regression. URL: https://github.com/apache/incubator-mxnet/issues/16891#issuecomment-558656729 I ran the CPU test on both v1.5.x and v1.6.x with mkldnn + openblas, but no regression issue was found. So can you try to use USE_BLAS=mkl as Taolv said above and test again? I tried to use build.sh but it failed with:

CMake Error at simd/CMakeLists.txt:41 (enable_language): No CMAKE_ASM_NASM_COMPILER could be found.

So for v1.5 and v1.6 I built with the command `make -j USE_MKLDNN=1 USE_BLAS=openblas USE_GPERFTOOLS=0`, setting the openblas include and lib directories. Platform: skx-8180

1.5:
[rongzha1@mlt-ace ds2_training_inference]$ cd mxnet_1.5/
[rongzha1@mlt-ace mxnet_1.5]$ ldd lib/libmxnet.so | grep open
libopenblas.so.0 => /lib64/libopenblas.so.0 (0x7f8db5ff9000)
libopencv_highgui.so.2.4 => /lib64/libopencv_highgui.so.2.4 (0x7f8dacdaf000)
libopencv_imgproc.so.2.4 => /lib64/libopencv_imgproc.so.2.4 (0x7f8dac931000)
libopencv_core.so.2.4 => /lib64/libopencv_core.so.2.4 (0x7f8dac4f7000)
[rongzha1@mlt-ace mxnet_1.5]$ ldd lib/libmxnet.so | grep mkl
libmklml_intel.so => /home/rongzha1/project/mxnet/ds2_training_inference/mxnet_1.5/lib/libmklml_intel.so (0x7f9707c8d000)
libmkldnn.so.0 => /home/rongzha1/project/mxnet/ds2_training_inference/mxnet_1.5/lib/libmkldnn.so.0 (0x7f970671d000)
(mxnet) [rongzha1@mlt-ace mxnet_1.5]$ ldd lib/libmxnet.so | grep omp
libiomp5.so => /home/rongzha1/project/mxnet/ds2_training_inference/mxnet_1.5/lib/libiomp5.so (0x7f75cbc42000)
libXcomposite.so.1 => /lib64/libXcomposite.so.1 (0x7f75c2647000)

1.6.x:
[rongzha1@mlt-skx141 perf_regression]$ ldd lib/libmxnet.so | grep open
libopenblas.so.0 => /usr/lib64/libopenblas.so.0 (0x7fc101c03000)
libopencv_highgui.so.2.4 => /usr/lib64/libopencv_highgui.so.2.4 (0x7fc1004cf000)
libopencv_imgproc.so.2.4 => /usr/lib64/libopencv_imgproc.so.2.4 (0x7fc100051000)
libopencv_core.so.2.4 => /usr/lib64/libopencv_core.so.2.4 (0x7fc0ffc18000)
[rongzha1@mlt-skx141 perf_regression]$ ldd lib/libmxnet.so | grep mkl
libmkldnn.so.1 => /home/rongzha1/project/mxnet/ds2_training_inference/perf_regression/lib/libmkldnn.so.1 (0x7f837824)
[rongzha1@mlt-skx141 perf_regression]$ ldd lib/libmxnet.so | grep omp
libgomp.so.1 => /usr/lib64/libgomp.so.1 (0x7f1357b17000)
libXcomposite.so.1 => /usr/lib64/libXcomposite.so.1 (0x7f13509a1000)

v1.5.x: OMP=56
[21:43:26] src/io/iter_image_recordio_2.cc:172: ImageRecordIOParser2: data/cifar/train.rec, use 4 threads for decoding..
[21:43:26] src/io/iter_image_recordio_2.cc:172: ImageRecordIOParser2: data/cifar/test.rec, use 4 threads for decoding..
INFO:root:Epoch[0] Batch [0-50] Speed: 1668.60 samples/sec accuracy=0.273897
INFO:root:Epoch[0] Batch [50-100] Speed: 1699.64 samples/sec accuracy=0.380312
INFO:root:Epoch[0] Batch [100-150] Speed: 1692.57 samples/sec accuracy=0.425000
INFO:root:Epoch[0] Batch [150-200] Speed: 1696.67 samples/sec accuracy=0.444063
INFO:root:Epoch[0] Batch [200-250] Speed: 1698.27 samples/sec accuracy=0.465000
INFO:root:Epoch[0] Batch [250-300] Speed: 1693.87 samples/sec accuracy=0.497812
INFO:root:Epoch[0] Batch [300-350] Speed: 1698.26 samples/sec accuracy=0.505625
INFO:root:Epoch[0] Batch [350-400] Speed: 1691.21 samples/sec accuracy=0.52
INFO:root:Epoch[0] Batch [400-450] Speed: 1694.42 samples/sec accuracy=0.538750
INFO:root:Epoch[0] Batch [450-500] Speed: 1693.73 samples/sec accuracy=0.576875
INFO:root:Epoch[0] Batch [500-550] Speed: 1688.67 samples/sec accuracy=0.579063
INFO:root:Epoch[0] Batch [550-600] Speed: 1686.91 samples/sec accuracy=0.585313
INFO:root:Epoch[0] Batch [600-650] Speed: 1691.39 samples/sec accuracy=0.605313
INFO:root:Epoch[0] Batch [650-700] Speed: 1693.22 samples/sec accuracy=0.612812
INFO:root:Epoch[0] Batch [700-750] Speed: 1692.32 samples/sec accuracy=0.603750
INFO:root:Epoch[0] Train-accuracy=0.511549
INFO:root:Epoch[0] Time cost=29.955
INFO:root:Epoch[0] Validation-accuracy=0.642317

OMP=36
[22:10:31] src/io/iter_image_recordio_2.cc:172: ImageRecordIOParser2: data/cifar/train.rec, use 4 threads for decoding..
[22:10:31] src/io/iter_image_recordio_2.cc:172: ImageRecordIOParser2: data/cifar/test.rec, use 4 threads for decoding..
INFO:root:Epoch[0] Batch [0-50] Speed: 1969.98 samples/sec accuracy=0.279412
INFO:root:Epoch[0] Batch [50-100] Speed: 2014.50 samples/sec accuracy=0.380937
INFO:root:Epoch[0] Batch [100-150] Speed: 2009.43 samples/sec accuracy=0.428125
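The `ldd | grep` checks above can be automated with a small helper that reports which BLAS/MKL/OpenMP libraries a `libmxnet.so` build links against. This is an illustrative sketch, not part of MXNet; `linked_libs` and the abridged sample output are hypothetical.

```python
# Hypothetical helper: parse `ldd` output and map each keyword to the
# library names that match it, to confirm which BLAS/OpenMP runtime a
# given libmxnet.so build actually links against.
def linked_libs(ldd_output, keywords=("openblas", "mkl", "omp")):
    """Return {keyword: sorted list of matching library names}."""
    found = {k: set() for k in keywords}
    for line in ldd_output.splitlines():
        # ldd lines look like: "libfoo.so => /path/libfoo.so (0x...)"
        name = line.strip().split(" => ")[0].strip()
        for k in keywords:
            if k in name.lower():
                found[k].add(name)
    return {k: sorted(v) for k, v in found.items()}

# Abridged from the ldd output quoted above.
sample = """\
    libopenblas.so.0 => /lib64/libopenblas.so.0 (0x7f8db5ff9000)
    libmkldnn.so.0 => /home/user/lib/libmkldnn.so.0 (0x7f970671d000)
    libiomp5.so => /home/user/lib/libiomp5.so (0x7f75cbc42000)
"""
print(linked_libs(sample))
```

In practice the input would come from `subprocess.run(["ldd", "lib/libmxnet.so"], ...)`; parsing a captured string keeps the sketch self-contained.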
[GitHub] [incubator-mxnet] larroy commented on issue #16796: Add support for boolean inputs to FusedOp
larroy commented on issue #16796: Add support for boolean inputs to FusedOp URL: https://github.com/apache/incubator-mxnet/pull/16796#issuecomment-558881215 @ptrendx it is in the container in the ci/ folder. You can reproduce the exact same run by using ci/build.py; the same instruction is in the output of the CI run, either at the top or the bottom. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [incubator-mxnet] xinyu-intel commented on issue #16866: [WIP]Enhance quantized_conv case
xinyu-intel commented on issue #16866: [WIP]Enhance quantized_conv case URL: https://github.com/apache/incubator-mxnet/pull/16866#issuecomment-558881282 Some flaky tests always fail in CI but cannot be reproduced easily locally, even with the docker env on EC2. So debugging in the CI environment can be more efficient for finding the root cause.
[GitHub] [incubator-mxnet] ChaiBapchya commented on issue #16866: [WIP]Enhance quantized_conv case
ChaiBapchya commented on issue #16866: [WIP]Enhance quantized_conv case URL: https://github.com/apache/incubator-mxnet/pull/16866#issuecomment-558879178 Are you trying to use this PR for testing the issue? If so, it doesn't make sense, because it's a costly experiment.
[GitHub] [incubator-mxnet] Kh4L commented on issue #16881: Add TypeFlag=>string macro
Kh4L commented on issue #16881: Add TypeFlag=>string macro URL: https://github.com/apache/incubator-mxnet/pull/16881#issuecomment-558878148 @wkcn @haojin2 understood, I will change it to use this
[incubator-mxnet-site] branch asf-site updated: Bump the publish timestamp.
This is an automated email from the ASF dual-hosted git repository. aaronmarkham pushed a commit to branch asf-site in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git The following commit(s) were added to refs/heads/asf-site by this push: new 4245d3b Bump the publish timestamp. 4245d3b is described below commit 4245d3bdb551a0f095941aafce6ad00852a98c5f Author: mxnet-ci AuthorDate: Wed Nov 27 00:36:26 2019 + Bump the publish timestamp. --- date.txt | 1 + 1 file changed, 1 insertion(+) diff --git a/date.txt b/date.txt new file mode 100644 index 000..73a8b54 --- /dev/null +++ b/date.txt @@ -0,0 +1 @@ +Wed Nov 27 00:36:26 UTC 2019
[GitHub] [incubator-mxnet] zhreshold commented on issue #16708: Training an FPN model using grad_req="add" causes rapid divergence, while manually implemented gradient accumulation works fine
zhreshold commented on issue #16708: Training an FPN model using grad_req="add" causes rapid divergence, while manually implemented gradient accumulation works fine URL: https://github.com/apache/incubator-mxnet/issues/16708#issuecomment-558876214 After digging for a while, I found several confusing facts about this bug. 1. @nickguletskii is correct, it's not about `ElementWiseSum`. 2. Duplicating the same node (1, 2, 3, 4, 8, 9, 10...) times, the loss and gradients are always GOOD; however, with (5, 6, 7), the gradients diverge at the first iteration.
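For context on why the issue title treats `grad_req="add"` and manual accumulation as equivalent: mathematically, summing per-micro-batch gradients must equal the gradient of the full batch, which is exactly what accumulating backward passes should produce. A minimal plain-Python illustration (no MXNet, model and data invented for the sketch):

```python
# Illustration: with grad_req="add", backward passes accumulate
# gradients instead of overwriting them, so k micro-batches followed
# by one optimizer step should match one big batch exactly.
# Toy model: y = w * x, loss = (y - t)^2, d(loss)/dw = 2*(w*x - t)*x.

def grad(w, batch):
    """Gradient of the summed squared loss over a batch of (x, t) pairs."""
    return sum(2.0 * (w * x - t) * x for x, t in batch)

data = [(1.0, 2.0), (2.0, 3.0), (3.0, 5.0), (4.0, 9.0)]
w = 0.5

big = grad(w, data)                 # one pass over the full batch
accum = 0.0
for micro in (data[:2], data[2:]):  # two micro-batches, gradients added
    accum += grad(w, micro)

print(big, accum)  # the two strategies agree; the bug report is about
                   # MXNet violating this equivalence for some fan-outs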
[GitHub] [incubator-mxnet] yzhliu opened a new pull request #16920: [DO NOT MERGE] test fp16 cpu runtime
yzhliu opened a new pull request #16920: [DO NOT MERGE] test fp16 cpu runtime URL: https://github.com/apache/incubator-mxnet/pull/16920

## Description ##
(Brief description on what this PR is about)

## Checklist ##
### Essentials ###
Please feel free to remove inapplicable items for your PR.
- [ ] The PR title starts with [MXNET-$JIRA_ID], where $JIRA_ID refers to the relevant [JIRA issue](https://issues.apache.org/jira/projects/MXNET/issues) created (except PRs with tiny changes)
- [ ] Changes are complete (i.e. I finished coding on this PR)
- [ ] All changes have test coverage:
  - Unit tests are added for small changes to verify correctness (e.g. adding a new operator)
  - Nightly tests are added for complicated/long-running ones (e.g. changing distributed kvstore)
  - Build tests will be added for build configuration changes (e.g. adding a new build option with NCCL)
- [ ] Code is well-documented:
  - For user-facing API changes, API doc string has been updated.
  - For new C++ functions in header files, their functionalities and arguments are documented.
  - For new examples, README.md is added to explain what the example does, the source of the dataset, expected performance on the test set, and a reference to the original paper if applicable
  - Check the API doc at https://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html
- [ ] To the best of my knowledge, examples are either not affected by this change, or have been fixed to be compatible with this change

### Changes ###
- [ ] Feature1, tests, (and when applicable, API doc)
- [ ] Feature2, tests, (and when applicable, API doc)

## Comments ##
- If this change is a backward incompatible change, why must this change be made.
- Interesting edge cases to note here
[GitHub] [incubator-mxnet] ptrendx opened a new pull request #16919: Backport #16902 to 1.6
ptrendx opened a new pull request #16919: Backport #16902 to 1.6 URL: https://github.com/apache/incubator-mxnet/pull/16919 Backport #16902 to 1.6 @sxjscience FYI
[GitHub] [incubator-mxnet] szha commented on issue #16754: Is mirroring working with MXNet 1.5.1 Gluon ?
szha commented on issue #16754: Is mirroring working with MXNet 1.5.1 Gluon ? URL: https://github.com/apache/incubator-mxnet/issues/16754#issuecomment-558869103 The mirror option isn't added to Gluon yet.
[incubator-mxnet] branch master updated (d2d4876 -> 8f10d55)
This is an automated email from the ASF dual-hosted git repository. haoj pushed a change to branch master in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git. from d2d4876 Fix memory leak reported by ASAN in NNVM to ONNX conversion (#15516) add 8f10d55 [Numpy] Fix imperative basic indexing in numpy (#16902) No new revisions were added by this update. Summary of changes: python/mxnet/ndarray/ndarray.py | 81 ++--- src/ndarray/ndarray.cc | 5 +- src/operator/nn/mkldnn/mkldnn_base-inl.h| 4 ++ tests/python/unittest/test_numpy_ndarray.py | 20 --- 4 files changed, 71 insertions(+), 39 deletions(-)
[GitHub] [incubator-mxnet] haojin2 merged pull request #16902: [Numpy] Fix imperative basic indexing in numpy
haojin2 merged pull request #16902: [Numpy] Fix imperative basic indexing in numpy URL: https://github.com/apache/incubator-mxnet/pull/16902
[GitHub] [incubator-mxnet] haojin2 closed issue #16887: [Numpy] Bug of basic indexing
haojin2 closed issue #16887: [Numpy] Bug of basic indexing URL: https://github.com/apache/incubator-mxnet/issues/16887
[GitHub] [incubator-mxnet] wkcn edited a comment on issue #15516: Fix memory leak reported by ASAN in NNVM to ONNX conversion
wkcn edited a comment on issue #15516: Fix memory leak reported by ASAN in NNVM to ONNX conversion URL: https://github.com/apache/incubator-mxnet/pull/15516#issuecomment-558862408 Merged. It is a tiny change to modify the template type while keeping the original variable name. We can modify the semantic name in another PR : ) std::vector is not suitable in this PR, since it would allocate extra capacity. Thank you!
[GitHub] [incubator-mxnet] pengzhao-intel commented on issue #16830: CI error in unix gpu test_quantization_gpu.test_quantized_conv
pengzhao-intel commented on issue #16830: CI error in unix gpu test_quantization_gpu.test_quantized_conv URL: https://github.com/apache/incubator-mxnet/issues/16830#issuecomment-558862368 https://github.com/apache/incubator-mxnet/pull/16866
[GitHub] [incubator-mxnet] wkcn commented on issue #15516: Fix memory leak reported by ASAN in NNVM to ONNX conversion
wkcn commented on issue #15516: Fix memory leak reported by ASAN in NNVM to ONNX conversion URL: https://github.com/apache/incubator-mxnet/pull/15516#issuecomment-558862408 Merged. It is a tiny change to the template type, keeping the original variable name. We can modify the semantic name in another PR. std::vector is not suitable in this PR, since it would allocate extra capacity. Thank you!
[GitHub] [incubator-mxnet] pengzhao-intel commented on issue #16830: CI error in unix gpu test_quantization_gpu.test_quantized_conv
pengzhao-intel commented on issue #16830: CI error in unix gpu test_quantization_gpu.test_quantized_conv URL: https://github.com/apache/incubator-mxnet/issues/16830#issuecomment-558862104

> Happening again: http://jenkins.mxnet-ci.amazon-ml.com/blue/organizations/jenkins/mxnet-validation%2Funix-cpu/detail/master/1325/pipeline
>
> ```
> ==
> FAIL: test_quantization_mkldnn.test_quantized_conv
> --
> Traceback (most recent call last):
>   File "/usr/local/lib/python3.5/dist-packages/nose/case.py", line 198, in runTest
>     self.test(*self.arg)
>   File "/usr/local/lib/python3.5/dist-packages/nose/util.py", line 620, in newfunc
>     return func(*arg, **kw)
>   File "/work/mxnet/tests/python/mkl/../unittest/common.py", line 177, in test_new
>     orig_test(*args, **kwargs)
>   File "/work/mxnet/tests/python/mkl/../quantization/test_quantization.py", line 277, in test_quantized_conv
>     check_quantized_conv((3, 4, 28, 28), (3, 3), 128, (1, 1), (1, 1), False, qdtype)
>   File "/work/mxnet/tests/python/mkl/../quantization/test_quantization.py", line 273, in check_quantized_conv
>     assert cond == 0
> AssertionError:
> >> begin captured stdout << -
> skipped testing quantized_conv for mkldnn cpu int8 since it is not supported yet
> skipped testing quantized_conv for mkldnn cpu int8 since it is not supported yet
> ```
>
> @xinyu-intel @PatricZhao Anyone could take a look?

Yes, @xinyu-intel is looking into this and the PR is under testing.
[GitHub] [incubator-mxnet] pengzhao-intel edited a comment on issue #16830: CI error in unix gpu test_quantization_gpu.test_quantized_conv
pengzhao-intel edited a comment on issue #16830: CI error in unix gpu test_quantization_gpu.test_quantized_conv URL: https://github.com/apache/incubator-mxnet/issues/16830#issuecomment-558862104 > @xinyu-intel @PatricZhao Anyone could take a look? Yes, @xinyu-intel is looking into this and the PR is under testing.
[incubator-mxnet] branch master updated (6b00b2c -> d2d4876)
This is an automated email from the ASF dual-hosted git repository. wkcn pushed a change to branch master in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git. from 6b00b2c Allow loading from model files with empty weights. (#16061) add d2d4876 Fix memory leak reported by ASAN in NNVM to ONNX conversion (#15516) No new revisions were added by this update. Summary of changes: src/operator/subgraph/tensorrt/nnvm_to_onnx.cc | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-)
[GitHub] [incubator-mxnet] wkcn merged pull request #15516: Fix memory leak reported by ASAN in NNVM to ONNX conversion
wkcn merged pull request #15516: Fix memory leak reported by ASAN in NNVM to ONNX conversion URL: https://github.com/apache/incubator-mxnet/pull/15516
[incubator-mxnet] branch master updated (a11b7ea -> 6b00b2c)
This is an automated email from the ASF dual-hosted git repository. wkcn pushed a change to branch master in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git. from a11b7ea Try to fix CI (#16908) add 6b00b2c Allow loading from model files with empty weights. (#16061) No new revisions were added by this update. Summary of changes: python/mxnet/model.py | 1 + 1 file changed, 1 insertion(+)
[GitHub] [incubator-mxnet] wkcn commented on issue #16061: Allow loading from model files with empty weights.
wkcn commented on issue #16061: Allow loading from model files with empty weights. URL: https://github.com/apache/incubator-mxnet/pull/16061#issuecomment-558860811 Merged. Thank you for the fix : )
[GitHub] [incubator-mxnet] wkcn merged pull request #16061: Allow loading from model files with empty weights.
wkcn merged pull request #16061: Allow loading from model files with empty weights. URL: https://github.com/apache/incubator-mxnet/pull/16061
[GitHub] [incubator-mxnet] sxjscience commented on issue #16917: Revert "Try to fix CI (#16908)"
sxjscience commented on issue #16917: Revert "Try to fix CI (#16908)" URL: https://github.com/apache/incubator-mxnet/pull/16917#issuecomment-558860463 @larroy It seems that the additional dependencies have partially fixed the issue.
[GitHub] [incubator-mxnet] sxjscience closed pull request #16917: Revert "Try to fix CI (#16908)"
sxjscience closed pull request #16917: Revert "Try to fix CI (#16908)" URL: https://github.com/apache/incubator-mxnet/pull/16917
[GitHub] [incubator-mxnet] eric-haibin-lin opened a new issue #16918: Configurable log interval for estimator
eric-haibin-lin opened a new issue #16918: Configurable log interval for estimator URL: https://github.com/apache/incubator-mxnet/issues/16918 Currently the gluon Estimator has a LoggingHandler, which supports logging per batch and per epoch. It would be good to have a configurable logging interval (e.g. logging every 50 batches). Logging per epoch does not provide enough information, while logging per batch is too verbose.
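The request above amounts to a handler that fires every N batches rather than every batch or every epoch. A minimal plain-Python sketch of the idea; the names (`BatchIntervalLogger`, `on_batch_end`) are illustrative and not the actual gluon Estimator event-handler API.

```python
# Sketch of a configurable-interval logging handler: emit a message
# only every `log_interval` batches, a middle ground between per-batch
# and per-epoch logging.
class BatchIntervalLogger:
    def __init__(self, log_interval=50, sink=print):
        self.log_interval = log_interval
        self.sink = sink      # where messages go (print, logger, list...)
        self.batch = 0

    def on_batch_end(self, metric_value):
        self.batch += 1
        if self.batch % self.log_interval == 0:
            self.sink("batch %d: metric=%.4f" % (self.batch, metric_value))

# Collect messages in a list to show the interval behavior.
messages = []
logger = BatchIntervalLogger(log_interval=50, sink=messages.append)
for _ in range(120):
    logger.on_batch_end(metric_value=0.5)
print(messages)  # only batches 50 and 100 are logged
```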
[GitHub] [incubator-mxnet] sxjscience commented on issue #16902: [Numpy] Fix imperative basic indexing in numpy
sxjscience commented on issue #16902: [Numpy] Fix imperative basic indexing in numpy URL: https://github.com/apache/incubator-mxnet/pull/16902#issuecomment-558847792 @ptrendx Sorry for the confusion. This one actually fixes #16887; I had mistakenly pointed to the other one.
[GitHub] [incubator-mxnet] ptrendx commented on issue #16902: [Numpy] Fix imperative basic indexing in numpy
ptrendx commented on issue #16902: [Numpy] Fix imperative basic indexing in numpy URL: https://github.com/apache/incubator-mxnet/pull/16902#issuecomment-558847500 @sxjscience I'm confused. The issue tagged 1.6.0 is #16887, and I thought you would make a separate fix for that issue, not #16279...
[GitHub] [incubator-mxnet] ptrendx commented on issue #16895: Fix ndarray indexing bug
ptrendx commented on issue #16895: Fix ndarray indexing bug URL: https://github.com/apache/incubator-mxnet/pull/16895#issuecomment-558846371 @reminisce could you merge with the latest master to pick up this fix for the cpp test?
[GitHub] [incubator-mxnet] sxjscience commented on issue #16917: Revert "Try to fix CI (#16908)"
sxjscience commented on issue #16917: Revert "Try to fix CI (#16908)" URL: https://github.com/apache/incubator-mxnet/pull/16917#issuecomment-558841809 @larroy
[GitHub] [incubator-mxnet] sxjscience opened a new pull request #16917: Revert "Try to fix CI (#16908)"
sxjscience opened a new pull request #16917: Revert "Try to fix CI (#16908)" URL: https://github.com/apache/incubator-mxnet/pull/16917 This reverts commit a11b7eaca6cda5f560c315a30093256f9a455cc2. ## Description ## Unfortunately the previous attempt did not fix the CI problem and it should be reverted.
[GitHub] [incubator-mxnet] sxjscience commented on issue #16908: Try to fix CI
sxjscience commented on issue #16908: Try to fix CI URL: https://github.com/apache/incubator-mxnet/pull/16908#issuecomment-558840874 @larroy Yes, we should revert it
[GitHub] [incubator-mxnet] larroy commented on issue #16908: Try to fix CI
larroy commented on issue #16908: Try to fix CI URL: https://github.com/apache/incubator-mxnet/pull/16908#issuecomment-558839335 https://github.com/apache/incubator-mxnet/issues/16478#issuecomment-558771696
[GitHub] [incubator-mxnet] larroy commented on issue #16908: Try to fix CI
larroy commented on issue #16908: Try to fix CI URL: https://github.com/apache/incubator-mxnet/pull/16908#issuecomment-558837490 If this change is not fixing the failure, could we revert it if the additional deps are not needed?
[GitHub] [incubator-mxnet] larroy commented on issue #16908: Try to fix CI
larroy commented on issue #16908: Try to fix CI URL: https://github.com/apache/incubator-mxnet/pull/16908#issuecomment-558837202 This could be a Docker cache issue. I can't reproduce locally. The TVM lib seems corrupted.
[GitHub] [incubator-mxnet] larroy edited a comment on issue #16908: Try to fix CI
larroy edited a comment on issue #16908: Try to fix CI URL: https://github.com/apache/incubator-mxnet/pull/16908#issuecomment-558837202 This could be a ccache cache issue. I can't reproduce locally. The TVM lib seems corrupted.
[GitHub] [incubator-mxnet] haojin2 commented on issue #16830: CI error in unix gpu test_quantization_gpu.test_quantized_conv
haojin2 commented on issue #16830: CI error in unix gpu test_quantization_gpu.test_quantized_conv URL: https://github.com/apache/incubator-mxnet/issues/16830#issuecomment-558832803 Happening again: http://jenkins.mxnet-ci.amazon-ml.com/blue/organizations/jenkins/mxnet-validation%2Funix-cpu/detail/master/1325/pipeline

```
==
FAIL: test_quantization_mkldnn.test_quantized_conv
--
Traceback (most recent call last):
  File "/usr/local/lib/python3.5/dist-packages/nose/case.py", line 198, in runTest
    self.test(*self.arg)
  File "/usr/local/lib/python3.5/dist-packages/nose/util.py", line 620, in newfunc
    return func(*arg, **kw)
  File "/work/mxnet/tests/python/mkl/../unittest/common.py", line 177, in test_new
    orig_test(*args, **kwargs)
  File "/work/mxnet/tests/python/mkl/../quantization/test_quantization.py", line 277, in test_quantized_conv
    check_quantized_conv((3, 4, 28, 28), (3, 3), 128, (1, 1), (1, 1), False, qdtype)
  File "/work/mxnet/tests/python/mkl/../quantization/test_quantization.py", line 273, in check_quantized_conv
    assert cond == 0
AssertionError:
 >> begin captured stdout << -
skipped testing quantized_conv for mkldnn cpu int8 since it is not supported yet
skipped testing quantized_conv for mkldnn cpu int8 since it is not supported yet
```

@xinyu-intel @PatricZhao Anyone could take a look?
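The failing assertion `assert cond == 0` suggests a mismatch-count style of check: count elements where the quantized output deviates from the reference by more than a tolerance, and require zero mismatches. A simplified plain-Python stand-in for that pattern (hypothetical helper and data, not the actual test code):

```python
# Sketch of a mismatch-count check of the kind check_quantized_conv's
# `cond` appears to represent: the number of elements where quantized
# and reference outputs disagree beyond a tolerance.
def mismatch_count(quantized, reference, atol=1e-1):
    return sum(1 for q, r in zip(quantized, reference) if abs(q - r) > atol)

ref = [0.10, 0.52, 0.98, 0.33]
good = [0.12, 0.50, 1.00, 0.30]  # within tolerance everywhere -> cond == 0
bad = [0.12, 0.50, 1.30, 0.30]   # one element off by 0.32 -> cond == 1

print(mismatch_count(good, ref), mismatch_count(bad, ref))
```

A flaky test of this shape typically means `cond` is occasionally nonzero due to quantization rounding near the tolerance boundary, which matches the intermittent CI failures described above.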
[GitHub] [incubator-mxnet] guoquan commented on issue #16864: [Discussion] 1.7.0 Roadmap
guoquan commented on issue #16864: [Discussion] 1.7.0 Roadmap URL: https://github.com/apache/incubator-mxnet/issues/16864#issuecomment-558830286 Let's have it then. #16916 I would (personally) focus it on requesting support for XLA devices. It would be helpful in that it enables access to the ~evil~ TPU.
[GitHub] [incubator-mxnet] guoquan opened a new issue #16916: Support for XLA devices
guoquan opened a new issue #16916: Support for XLA devices URL: https://github.com/apache/incubator-mxnet/issues/16916

## Description
(A clear and concise description of what the feature is.)
- If the proposal is about a new model, provide description of what the model is.
- If the proposal is about an API, provide mock examples if possible.

## References
- list reference and related literature
- list known implementations
[GitHub] [incubator-mxnet] haojin2 opened a new pull request #16915: Add NumPy-compatible left_shift and right_shift
haojin2 opened a new pull request #16915: Add NumPy-compatible left_shift and right_shift URL: https://github.com/apache/incubator-mxnet/pull/16915

## Description ##
As title.

## Checklist ##
### Essentials ###
Please feel free to remove inapplicable items for your PR.
- [ ] The PR title starts with [MXNET-$JIRA_ID], where $JIRA_ID refers to the relevant [JIRA issue](https://issues.apache.org/jira/projects/MXNET/issues) created (except PRs with tiny changes)
- [ ] Changes are complete (i.e. I finished coding on this PR)
- [ ] All changes have test coverage:
  - Unit tests are added for small changes to verify correctness (e.g. adding a new operator)
  - Nightly tests are added for complicated/long-running ones (e.g. changing distributed kvstore)
  - Build tests will be added for build configuration changes (e.g. adding a new build option with NCCL)
- [ ] Code is well-documented:
  - For user-facing API changes, API doc string has been updated.
  - For new C++ functions in header files, their functionalities and arguments are documented.
  - For new examples, README.md is added to explain what the example does, the source of the dataset, expected performance on the test set, and a reference to the original paper if applicable
  - Check the API doc at https://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html
- [ ] To the best of my knowledge, examples are either not affected by this change, or have been fixed to be compatible with this change

### Changes ###
- [ ] Feature1, tests, (and when applicable, API doc)
- [ ] Feature2, tests, (and when applicable, API doc)

## Comments ##
Kernels contributed by @gyshi in #16025
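For reference on the semantics this PR targets: NumPy's `left_shift`/`right_shift` on integers behave like Python's `<<` and `>>` operators, with `right_shift` being an arithmetic (floor) shift for negative values. A small sketch of the expected behavior (plain Python stand-ins, not the new MXNet operators themselves):

```python
# Expected semantics of NumPy-compatible shifts: left_shift(x, n) is
# x * 2**n, and right_shift is an arithmetic shift that floors toward
# negative infinity for negative inputs.
def left_shift(x, n):
    return x << n

def right_shift(x, n):
    return x >> n

print(left_shift(5, 2))     # 20, i.e. 5 * 2**2
print(right_shift(20, 2))   # 5
print(right_shift(-20, 2))  # -5, arithmetic shift preserves the sign
```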
[incubator-mxnet] branch v1.6.x updated: updating MXNet version to 1.6.0 in base.h for C APIs (#16906)
This is an automated email from the ASF dual-hosted git repository. lanking pushed a commit to branch v1.6.x in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git The following commit(s) were added to refs/heads/v1.6.x by this push: new 121739a updating MXNet version to 1.6.0 in base.h for C APIs (#16906) 121739a is described below commit 121739a1656d42970c747956f995b52f9f6d31c2 Author: Rohit Kumar Srivastava AuthorDate: Tue Nov 26 13:04:00 2019 -0800 updating MXNet version to 1.6.0 in base.h for C APIs (#16906) --- include/mxnet/base.h | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/include/mxnet/base.h b/include/mxnet/base.h index 02dd204..90b36ab 100644 --- a/include/mxnet/base.h +++ b/include/mxnet/base.h @@ -73,7 +73,7 @@ /*! \brief major version */ #define MXNET_MAJOR 1 /*! \brief minor version */ -#define MXNET_MINOR 5 +#define MXNET_MINOR 6 /*! \brief patch version */ #define MXNET_PATCH 0 /*! \brief mxnet version */
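The macros touched by this commit feed into a single comparable version integer. The sketch below assumes the usual `MAJOR*10000 + MINOR*100 + PATCH` scheme used by `MXNET_VERSION` in `include/mxnet/base.h`; verify against the header before relying on it:

```python
# How the version macros in include/mxnet/base.h combine into one integer
# (assumed scheme: MAJOR*10000 + MINOR*100 + PATCH).
def mxnet_version(major: int, minor: int, patch: int) -> int:
    return major * 10000 + minor * 100 + patch

# After this commit the v1.6.x branch reports 1.6.0:
assert mxnet_version(1, 6, 0) == 10600
# C-API consumers can gate features on the integer ordering:
assert mxnet_version(1, 6, 0) > mxnet_version(1, 5, 0)
```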
[GitHub] [incubator-mxnet] lanking520 merged pull request #16906: Bumping MXNet version to 1.6.0 in base.h for C APIs
lanking520 merged pull request #16906: Bumping MXNet version to 1.6.0 in base.h for C APIs URL: https://github.com/apache/incubator-mxnet/pull/16906
[GitHub] [incubator-mxnet] haidark edited a comment on issue #16701: Hybridize and recursive conditional operator gradient/trainer bug
haidark edited a comment on issue #16701: Hybridize and recursive conditional operator gradient/trainer bug URL: https://github.com/apache/incubator-mxnet/issues/16701#issuecomment-558737775 bump for update. EDIT: We found out it is related to passing a variable through `contrib.cond` multiple times.
[GitHub] [incubator-mxnet] access2rohit commented on issue #16906: Bumping MXNet version to 1.6.0 in base.h for C APIs
access2rohit commented on issue #16906: Bumping MXNet version to 1.6.0 in base.h for C APIs URL: https://github.com/apache/incubator-mxnet/pull/16906#issuecomment-558803632 @mxnet-label-bot add [pr-awaiting-merge]
[GitHub] [incubator-mxnet] artor1os opened a new pull request #16914: [Numpy] Implement atleast_1d, atleast_2d, atleast_3d
artor1os opened a new pull request #16914: [Numpy] Implement atleast_1d, atleast_2d, atleast_3d URL: https://github.com/apache/incubator-mxnet/pull/16914 ## Description ## Implement atleast_1d, atleast_2d, atleast_3d ## Checklist ## ### Essentials ### Please feel free to remove inapplicable items for your PR. - [ ] Changes are complete (i.e. I finished coding on this PR) - [ ] All changes have test coverage: - Unit tests are added for small changes to verify correctness (e.g. adding a new operator) - Nightly tests are added for complicated/long-running ones (e.g. changing distributed kvstore) - Build tests will be added for build configuration changes (e.g. adding a new build option with NCCL) - [ ] Code is well-documented: - For user-facing API changes, API doc string has been updated. - For new C++ functions in header files, their functionalities and arguments are documented. - For new examples, README.md is added to explain what the example does, the source of the dataset, expected performance on the test set, and a reference to the original paper if applicable - Check the API doc at https://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html - [ ] To the best of my knowledge, examples are either not affected by this change, or have been fixed to be compatible with this change ### Changes ### - [ ] atleast_1d, tests - [ ] atleast_2d, tests - [ ] atleast_3d, tests
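The reference semantics these operators target can be checked against plain NumPy: each function promotes its inputs to at least N dimensions by prepending (or, for `atleast_3d`, also appending) axes of length 1. Illustrative sketch with NumPy itself, which the MXNet versions are expected to follow:

```python
import numpy as np

# Scalars and low-dimensional arrays are promoted to the minimum rank.
assert np.atleast_1d(5).shape == (1,)
assert np.atleast_2d(np.array([1, 2, 3])).shape == (1, 3)
# atleast_3d appends a trailing axis to 2-D input: (2, 2) -> (2, 2, 1).
assert np.atleast_3d(np.array([[1, 2], [3, 4]])).shape == (2, 2, 1)

# Multiple inputs return a list of promoted arrays.
a, b = np.atleast_2d(1, [1, 2])
print(a.shape, b.shape)  # (1, 1) (1, 2)
```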
[GitHub] [incubator-mxnet] eric-haibin-lin commented on a change in pull request #16903: changing data type of 't' to int in lamb_update_phase1
eric-haibin-lin commented on a change in pull request #16903: changing data type of 't' to int in lamb_update_phase1 URL: https://github.com/apache/incubator-mxnet/pull/16903#discussion_r350943950 ## File path: src/operator/optimizer_op-inl.h ## @@ -1639,8 +1639,8 @@ struct LambUpdatePhaseOneKernel { DType g = mean_data[i] / (square_root::Map(var_data[i]) + epsilon) + wd * weight_data[i]; if (bias_correction) { - DType mean_hat = mean_data[i] / (1. - power::Map(beta1, t)); - DType var_hat = var_data[i] / (1 - power::Map(beta2, t)); + DType mean_hat = mean_data[i] / (1. - std::pow(beta1, t)); Review comment: @apeforest i'm not sure if mshadow allows mixed data type as inputs
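The change under review concerns the Adam-style bias correction inside the LAMB phase-1 kernel, where the exponent `t` is the integer step count. A Python sketch of the computation (function and variable names are illustrative, not the kernel's):

```python
# Bias correction as in the LAMB phase-1 kernel: with an integer step count t,
# beta ** t is an exact integer-exponent power, avoiding the precision loss a
# low-precision floating-point t could introduce at large step counts.
def bias_corrected(m: float, v: float, beta1: float, beta2: float, t: int):
    mean_hat = m / (1.0 - beta1 ** t)
    var_hat = v / (1.0 - beta2 ** t)
    return mean_hat, var_hat

m_hat, v_hat = bias_corrected(0.09, 0.0099, beta1=0.9, beta2=0.99, t=1)
# After one step, the correction recovers the raw gradient statistics:
print(round(m_hat, 6), round(v_hat, 6))  # 0.9 0.99
```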
[incubator-mxnet] branch master updated (c9585bd -> a11b7ea)
This is an automated email from the ASF dual-hosted git repository. sxjscience pushed a change to branch master in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git. from c9585bd Fix the problem in printing feature in c++ API examples : feature_extract (#15686) add a11b7ea Try to fix CI (#16908) No new revisions were added by this update. Summary of changes: ci/docker/install/ubuntu_core.sh | 4 1 file changed, 4 insertions(+)
[GitHub] [incubator-mxnet] sxjscience merged pull request #16908: Try to fix CI
sxjscience merged pull request #16908: Try to fix CI URL: https://github.com/apache/incubator-mxnet/pull/16908
[GitHub] [incubator-mxnet] ChaiBapchya commented on issue #16906: Bumping MXNet version to 1.6.0 in base.h for C APIs
ChaiBapchya commented on issue #16906: Bumping MXNet version to 1.6.0 in base.h for C APIs URL: https://github.com/apache/incubator-mxnet/pull/16906#issuecomment-558772944 @ptrendx prima facie, I could see lots of pipelines timing out. Need to dig deep to find out where exactly it was consuming the most time. In December, we will be revisiting this in greater detail. For now CI is to be kept running...
[GitHub] [incubator-mxnet] access2rohit commented on issue #16905: updating MXNet version to 1.6.0 in base.h for C APIs
access2rohit commented on issue #16905: updating MXNet version to 1.6.0 in base.h for C APIs URL: https://github.com/apache/incubator-mxnet/pull/16905#issuecomment-558772297 > @access2rohit Since the 1.6.x branch is already diverged from master, there is no need to do this 1.6 step I think - we should just go directly to 1.7 (which I believe requires changes in more places). Yes it requires changes in more places. IMO we should follow proper progression by moving from 1.5->1.6->1.7 to keep the change history in master clean and consistent. @ptrendx do you agree ?
[GitHub] [incubator-mxnet] ChaiBapchya commented on issue #16478: CI unix-gpu GPU:CMake build failures
ChaiBapchya commented on issue #16478: CI unix-gpu GPU:CMake build failures URL: https://github.com/apache/incubator-mxnet/issues/16478#issuecomment-558771696 Another one here (unrelated PR #16894 ) http://jenkins.mxnet-ci.amazon-ml.com/blue/organizations/jenkins/mxnet-validation%2Funix-gpu/detail/PR-16894/9/pipeline
[GitHub] [incubator-mxnet] ptrendx commented on issue #16905: updating MXNet version to 1.6.0 in base.h for C APIs
ptrendx commented on issue #16905: updating MXNet version to 1.6.0 in base.h for C APIs URL: https://github.com/apache/incubator-mxnet/pull/16905#issuecomment-558770227 @access2rohit Since the 1.6.x branch is already diverged from master, there is no need to do this 1.6 step I think - we should just go directly to 1.7 (which I believe requires changes in more places).
[GitHub] [incubator-mxnet] access2rohit commented on issue #16905: updating MXNet version to 1.6.0 in base.h for C APIs
access2rohit commented on issue #16905: updating MXNet version to 1.6.0 in base.h for C APIs URL: https://github.com/apache/incubator-mxnet/pull/16905#issuecomment-558768357 @samskalicky Sure! I think we should first bump master to 1.6 and then again to 1.7. Does that make sense?
[incubator-mxnet-site] branch asf-site updated: Bump the publish timestamp.
This is an automated email from the ASF dual-hosted git repository. aaronmarkham pushed a commit to branch asf-site in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git The following commit(s) were added to refs/heads/asf-site by this push: new 7061f4a Bump the publish timestamp. 7061f4a is described below commit 7061f4ad1e1eef43c3e8c2936155585add9871a7 Author: mxnet-ci AuthorDate: Tue Nov 26 18:41:29 2019 + Bump the publish timestamp. --- date.txt | 1 + 1 file changed, 1 insertion(+) diff --git a/date.txt b/date.txt new file mode 100644 index 000..7c08641 --- /dev/null +++ b/date.txt @@ -0,0 +1 @@ +Tue Nov 26 18:41:29 UTC 2019
[GitHub] [incubator-mxnet] ChaiBapchya commented on issue #16830: CI error in unix gpu test_quantization_gpu.test_quantized_conv
ChaiBapchya commented on issue #16830: CI error in unix gpu test_quantization_gpu.test_quantized_conv URL: https://github.com/apache/incubator-mxnet/issues/16830#issuecomment-558745418 Happening again http://jenkins.mxnet-ci.amazon-ml.com/blue/organizations/jenkins/mxnet-validation%2Funix-cpu/detail/PR-16894/9/pipeline #16894
[GitHub] [incubator-mxnet] haidark commented on issue #16701: Hybridize, conditional operator, and loop gradient/trainer bug
haidark commented on issue #16701: Hybridize, conditional operator, and loop gradient/trainer bug URL: https://github.com/apache/incubator-mxnet/issues/16701#issuecomment-558737775 bump for update
[GitHub] [incubator-mxnet] lanking520 commented on issue #16906: Bumping MXNet version to 1.6.0 in base.h for C APIs
lanking520 commented on issue #16906: Bumping MXNet version to 1.6.0 in base.h for C APIs URL: https://github.com/apache/incubator-mxnet/pull/16906#issuecomment-558735807 restarted all the tests
[GitHub] [incubator-mxnet] samskalicky commented on issue #16905: updating MXNet version to 1.6.0 in base.h for C APIs
samskalicky commented on issue #16905: updating MXNet version to 1.6.0 in base.h for C APIs URL: https://github.com/apache/incubator-mxnet/pull/16905#issuecomment-558731525 @access2rohit shouldn't we be changing master to 1.7?
[GitHub] [incubator-mxnet] ptrendx commented on issue #16906: Bumping MXNet version to 1.6.0 in base.h for C APIs
ptrendx commented on issue #16906: Bumping MXNet version to 1.6.0 in base.h for C APIs URL: https://github.com/apache/incubator-mxnet/pull/16906#issuecomment-558727945 @ChaiBapchya @larroy all those failed builds are because of a hang during compilation, what can we do to diagnose and fix it?
[incubator-mxnet] branch v1.6.x updated: port shape op to 1.6.x (#16912)
This is an automated email from the ASF dual-hosted git repository. ptrendx pushed a commit to branch v1.6.x in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git The following commit(s) were added to refs/heads/v1.6.x by this push: new 00f169f port shape op to 1.6.x (#16912) 00f169f is described below commit 00f169f7519e1e416592b68c3dc8ac92362d6d07 Author: Hao Jin AuthorDate: Tue Nov 26 09:07:49 2019 -0800 port shape op to 1.6.x (#16912) --- python/mxnet/ndarray/numpy/_op.py | 40 +++-- python/mxnet/numpy/multiarray.py | 68 +- python/mxnet/numpy_dispatch_protocol.py| 1 + .../python/unittest/test_numpy_interoperability.py | 15 +++-- tests/python/unittest/test_numpy_op.py | 19 ++ 5 files changed, 119 insertions(+), 24 deletions(-) diff --git a/python/mxnet/ndarray/numpy/_op.py b/python/mxnet/ndarray/numpy/_op.py index ed3d9d8..4ce675a 100644 --- a/python/mxnet/ndarray/numpy/_op.py +++ b/python/mxnet/ndarray/numpy/_op.py @@ -28,7 +28,7 @@ from ...context import current_context from . import _internal as _npi from ..ndarray import NDArray -__all__ = ['zeros', 'ones', 'full', 'add', 'subtract', 'multiply', 'divide', 'mod', 'remainder', 'power', +__all__ = ['shape', 'zeros', 'ones', 'full', 'add', 'subtract', 'multiply', 'divide', 'mod', 'remainder', 'power', 'arctan2', 'sin', 'cos', 'tan', 'sinh', 'cosh', 'tanh', 'log10', 'sqrt', 'cbrt', 'abs', 'absolute', 'exp', 'expm1', 'arcsin', 'arccos', 'arctan', 'sign', 'log', 'degrees', 'log2', 'log1p', 'rint', 'radians', 'reciprocal', 'square', 'negative', 'fix', 'ceil', 'floor', @@ -41,7 +41,37 @@ __all__ = ['zeros', 'ones', 'full', 'add', 'subtract', 'multiply', 'divide', 'mo 'hsplit', 'rot90', 'einsum', 'true_divide', 'nonzero', 'shares_memory', 'may_share_memory', 'diff'] @set_module('mxnet.ndarray.numpy') -def zeros(shape, dtype=_np.float32, order='C', ctx=None): +def shape(a): +""" +Return the shape of an array. +Parameters +-- +a : array_like +Input array. 
+Returns +--- +shape : tuple of ints +The elements of the shape tuple give the lengths of the +corresponding array dimensions. +See Also + +ndarray.shape : Equivalent array method. +Examples + +>>> np.shape(np.eye(3)) +(3, 3) +>>> np.shape([[1, 2]]) +(1, 2) +>>> np.shape([0]) +(1,) +>>> np.shape(0) +() +""" +return a.shape + + +@set_module('mxnet.ndarray.numpy') +def zeros(shape, dtype=_np.float32, order='C', ctx=None): # pylint: disable=redefined-outer-name """Return a new array of given shape and type, filled with zeros. This function currently only supports storing multi-dimensional data in row-major (C-style). @@ -75,7 +105,7 @@ def zeros(shape, dtype=_np.float32, order='C', ctx=None): @set_module('mxnet.ndarray.numpy') -def ones(shape, dtype=_np.float32, order='C', ctx=None): +def ones(shape, dtype=_np.float32, order='C', ctx=None): # pylint: disable=redefined-outer-name """Return a new array of given shape and type, filled with ones. This function currently only supports storing multi-dimensional data in row-major (C-style). @@ -108,8 +138,9 @@ def ones(shape, dtype=_np.float32, order='C', ctx=None): return _npi.ones(shape=shape, ctx=ctx, dtype=dtype) +# pylint: disable=too-many-arguments, redefined-outer-name @set_module('mxnet.ndarray.numpy') -def full(shape, fill_value, dtype=None, order='C', ctx=None, out=None): # pylint: disable=too-many-arguments +def full(shape, fill_value, dtype=None, order='C', ctx=None, out=None): """ Return a new array of given shape and type, filled with `fill_value`. 
Parameters @@ -161,6 +192,7 @@ def full(shape, fill_value, dtype=None, order='C', ctx=None, out=None): # pylin ctx = current_context() dtype = _np.float32 if dtype is None else dtype return _npi.full(shape=shape, value=fill_value, ctx=ctx, dtype=dtype, out=out) +# pylint: enable=too-many-arguments, redefined-outer-name @set_module('mxnet.ndarray.numpy') diff --git a/python/mxnet/numpy/multiarray.py b/python/mxnet/numpy/multiarray.py index ad5fb54..35a549e 100644 --- a/python/mxnet/numpy/multiarray.py +++ b/python/mxnet/numpy/multiarray.py @@ -45,7 +45,7 @@ from ..context import current_context from ..ndarray import numpy as _mx_nd_np from ..ndarray.numpy import _internal as _npi -__all__ = ['ndarray', 'empty', 'array', 'zeros', 'ones', 'full', 'add', 'subtract', 'multiply', 'divide', +__all__ = ['ndarray', 'empty', 'array', 'shape', 'zeros', 'ones', 'full', 'add', 'subtract', 'multiply', 'divide', 'mod', 'remainder', 'power', 'arctan2', 'sin', 'cos', 'tan', 'sinh', 'cosh', 'tanh', 'log10', 'sqrt', 'cbrt', 'abs', 'absolute', 'exp', 'expm1', 'arcsin', 'arccos',
[GitHub] [incubator-mxnet] ptrendx merged pull request #16912: Port shape op to 1.6.x
ptrendx merged pull request #16912: Port shape op to 1.6.x URL: https://github.com/apache/incubator-mxnet/pull/16912
[incubator-mxnet] branch v1.6.x updated: Backport of #16827, #16791 and #16888 to 1.6 branch (#16901)
This is an automated email from the ASF dual-hosted git repository. ptrendx pushed a commit to branch v1.6.x in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git The following commit(s) were added to refs/heads/v1.6.x by this push: new 4c41afd Backport of #16827, #16791 and #16888 to 1.6 branch (#16901) 4c41afd is described below commit 4c41afd1e6014c5cde00f4d253474ffa1e141cac Author: Przemyslaw Tredak AuthorDate: Tue Nov 26 09:03:15 2019 -0800 Backport of #16827, #16791 and #16888 to 1.6 branch (#16901) * refactor and reduce float types for some functions, also add bitwise_xor (#16827) * Mixed precison binary op backward (use in) for numpy (#16791) * mixed precison binary op backward * reduce unix cpu runtime * Add evaluation_loss to the estimator base class. (#16888) * Add evaluation_loss to the estimator base class. * Update the base estimator class to support the separate evaluation loss. * Add evaluation loss to the base estimator class. * Add unittest for evaluation loss in the test_evaluation function * Update estimator.py * Update estimator.py --- python/mxnet/gluon/contrib/estimator/estimator.py | 11 +- python/mxnet/ndarray/numpy/_op.py | 40 +++- python/mxnet/numpy/multiarray.py | 42 +++- python/mxnet/numpy_dispatch_protocol.py| 1 + python/mxnet/symbol/numpy/_symbol.py | 35 ++- src/operator/elemwise_op_common.h | 3 +- src/operator/numpy/np_elemwise_broadcast_op.cc | 243 + src/operator/numpy/np_elemwise_broadcast_op.cu | 75 +-- src/operator/numpy/np_elemwise_broadcast_op.h | 114 +- ..._op.cc => np_elemwise_broadcast_op_extended.cc} | 193 ..._op.cu => np_elemwise_broadcast_op_extended.cu} | 81 +-- src/operator/operator_tune.cc | 4 +- src/operator/tensor/elemwise_binary_broadcast_op.h | 136 src/operator/tensor/elemwise_binary_op.h | 148 +++-- src/operator/tensor/elemwise_binary_scalar_op.h| 20 ++ src/operator/tensor/elemwise_unary_op.h| 4 +- tests/python/unittest/test_gluon_estimator.py | 4 +- 
.../python/unittest/test_numpy_interoperability.py | 13 ++ tests/python/unittest/test_numpy_op.py | 23 +- 19 files changed, 528 insertions(+), 662 deletions(-) diff --git a/python/mxnet/gluon/contrib/estimator/estimator.py b/python/mxnet/gluon/contrib/estimator/estimator.py index 83b954d..54a0b16 100644 --- a/python/mxnet/gluon/contrib/estimator/estimator.py +++ b/python/mxnet/gluon/contrib/estimator/estimator.py @@ -59,6 +59,9 @@ class Estimator(object): Trainer to apply optimizer on network parameters. context : Context or list of Context Device(s) to run the training on. +evaluation_loss: gluon.loss.loss +Loss (objective) function to calculate during evaluation. If set evaluation_loss +None, it will use the same loss function as self.loss """ @@ -85,12 +88,16 @@ class Estimator(object): metrics=None, initializer=None, trainer=None, - context=None): + context=None, + evaluation_loss=None): self.net = net self.loss = self._check_loss(loss) self._train_metrics = _check_metrics(metrics) self._add_default_training_metrics() self._add_validation_metrics() +self.evaluation_loss = self.loss +if evaluation_loss is not None: +self.evaluation_loss = self._check_loss(evaluation_loss) self.logger = logging.Logger(name='Estimator', level=logging.INFO) self.logger.addHandler(logging.StreamHandler(sys.stdout)) @@ -228,7 +235,7 @@ class Estimator(object): """ data, label = self._get_data_and_label(val_batch, self.context, batch_axis) pred = [self.net(x) for x in data] -loss = [self.loss(y_hat, y) for y_hat, y in zip(pred, label)] +loss = [self.evaluation_loss(y_hat, y) for y_hat, y in zip(pred, label)] # update metrics for metric in val_metrics: if isinstance(metric, metric_loss): diff --git a/python/mxnet/ndarray/numpy/_op.py b/python/mxnet/ndarray/numpy/_op.py index ff404a7..ed3d9d8 100644 --- a/python/mxnet/ndarray/numpy/_op.py +++ b/python/mxnet/ndarray/numpy/_op.py @@ -36,7 +36,7 @@ __all__ = ['zeros', 'ones', 'full', 'add', 'subtract', 'multiply', 'divide', 'mo 'linspace', 
'logspace', 'expand_dims', 'tile', 'arange', 'split', 'vsplit', 'concatenate', 'append', 'stack', 'vstack', 'column_stack', 'dstack', 'mean', 'maximum', 'minimum', 'swapaxes', 'clip', 'argmax', 'argmin', 'std', 'var', 'indices', 'copysign', 'ravel', 'hanning', 'hamming', 'blackman', 'flip', -
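The estimator portion of this backport adds an `evaluation_loss` that falls back to the training loss when not supplied. A minimal sketch of that fallback pattern, using a simplified stand-in class rather than the real `mxnet.gluon.contrib.estimator` API:

```python
# Simplified illustration of the evaluation_loss fallback: validation reuses
# the training loss unless a separate evaluation loss is given.
class EstimatorSketch:
    def __init__(self, loss, evaluation_loss=None):
        self.loss = loss
        # Same fallback as the backported Estimator change.
        self.evaluation_loss = loss if evaluation_loss is None else evaluation_loss

def l2(pred, label):
    return (pred - label) ** 2

def l1(pred, label):
    return abs(pred - label)

est = EstimatorSketch(loss=l2)
assert est.evaluation_loss is est.loss  # falls back to the training loss

est = EstimatorSketch(loss=l2, evaluation_loss=l1)
print(est.evaluation_loss(3.0, 1.0))  # 2.0
```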
[GitHub] [incubator-mxnet] ptrendx merged pull request #16901: Backport of #16827, #16791 and #16888 to 1.6 branch
ptrendx merged pull request #16901: Backport of #16827, #16791 and #16888 to 1.6 branch URL: https://github.com/apache/incubator-mxnet/pull/16901
[GitHub] [incubator-mxnet] wkcn commented on issue #16909: Build with cv 4.0
wkcn commented on issue #16909: Build with cv 4.0 URL: https://github.com/apache/incubator-mxnet/pull/16909#issuecomment-558701043 @ewail Welcome to submit the PR : )
[incubator-mxnet] branch master updated (6c7ce24 -> c9585bd)
This is an automated email from the ASF dual-hosted git repository. wkcn pushed a change to branch master in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git. from 6c7ce24 Revert "Mkldnn fullyConnect bwd bug fix (#16890)" (#16907) add c9585bd Fix the problem in printing feature in c++ API examples : feature_extract (#15686) No new revisions were added by this update. Summary of changes: cpp-package/example/feature_extract/feature_extract.cpp | 1 + 1 file changed, 1 insertion(+)
[GitHub] [incubator-mxnet] wkcn merged pull request #15686: Fix the problem in printing feature in c++ API examples : feature_extract
wkcn merged pull request #15686: Fix the problem in printing feature in c++ API examples : feature_extract URL: https://github.com/apache/incubator-mxnet/pull/15686