[GitHub] [incubator-mxnet] hzfan commented on a change in pull request #18017: [Numpy] New FFIs for Operator: tile, trace, transpose
hzfan commented on a change in pull request #18017: [Numpy] New FFIs for Operator: tile, trace, transpose
URL: https://github.com/apache/incubator-mxnet/pull/18017#discussion_r409995466

## File path: src/api/operator/tensor/matrix_op.cc

## @@ -68,4 +68,25 @@ MXNET_REGISTER_API("_npi.clip")
   }
 });
+MXNET_REGISTER_API("_npi.tile")
+.set_body([](runtime::MXNetArgs args, runtime::MXNetRetValue* ret) {
+  using namespace runtime;
+  const nnvm::Op* op = Op::Get("_npi_tile");
+  nnvm::NodeAttrs attrs;
+  op::TileParam param;
+  if (args[1].type_code() == kDLInt) {
+param.reps = Tuple(1, args[1].operator int64_t());
+  } else {
+    param.reps = Tuple(args[1].operator ObjectRef());

Review comment: Indent

This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org

With regards,
Apache Git Services
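The FFI body above branches on the type code of `args[1]`: a bare integer becomes a one-element tuple of repetitions, while anything else is treated as a tuple object. The same normalization can be sketched in pure Python (the helper name is illustrative, not part of MXNet):

```python
def normalize_reps(reps):
    """Normalize a tile-style `reps` argument: a bare int becomes a
    one-element tuple (the kDLInt branch above); any other iterable
    is passed through as a tuple (the ObjectRef branch)."""
    if isinstance(reps, int):
        return (reps,)
    return tuple(reps)
```

This mirrors why the C++ side needs two constructor calls: the scalar case builds a length-1 tuple, the sequence case converts element by element.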
[GitHub] [incubator-mxnet] hzfan commented on a change in pull request #18053: [Numpy] New FFIs for Operator: pad, prod
hzfan commented on a change in pull request #18053: [Numpy] New FFIs for Operator: pad, prod
URL: https://github.com/apache/incubator-mxnet/pull/18053#discussion_r409992007

## File path: src/api/operator/numpy/np_pad_op.cc

## @@ -0,0 +1,85 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * \file np_pad_op.cc
+ * \brief Implementation of the API of functions in src/operator/numpy/np_pad_op.cc
+ */
+#include
+#include
+#include
+#include "../utils.h"
+#include "../../../operator/numpy/np_pad_op-inl.h"
+
+namespace mxnet {
+
+inline int String2MXNetPadType(const std::string& s) {
+  using namespace op;
+  if (s == "constant") {
+    return pad_enum::kConstant;
+  } else if (s == "edge") {
+    return pad_enum::kEdge;
+  } else if (s == "reflect") {
+    return pad_enum::kReflect;
+  } else if (s == "symmetric") {
+    return pad_enum::kSymmetric;
+  } else if (s == "maximum") {
+    return pad_enum::kMaximum;
+  } else if (s == "minimum") {
+    return pad_enum::kMinimum;
+  } else {
+    LOG(FATAL) << "unknown type " << s;
+  }
+  LOG(FATAL) << "should not reach here ";
+  return 0;
+}
+
+MXNET_REGISTER_API("_npi.pad")
+.set_body([](runtime::MXNetArgs args, runtime::MXNetRetValue* ret) {
+  using namespace runtime;
+  const nnvm::Op* op = Op::Get("_npi_pad");
+  nnvm::NodeAttrs attrs;
+  op::NumpyPadParam param;
+  ADT adt = Downcast<ADT>(args[1].operator ObjectRef());
+  int ndim = adt.size();
+  std::vector<Tuple<int>> temp;
+  int counter = 0;
+  for (counter = 0; counter < ndim; counter++) {
+    temp.emplace_back(mxnet::Tuple<int>(adt[counter]));
+  }
+  param.pad_width = Tuple<Tuple<int>>(temp.begin(), temp.end());
+  param.mode = String2MXNetPadType(args[2].operator std::string());
+  if (args[3].type_code() != kNull) {
+    param.constant_values = args[3].operator double();
+  }
+  if (args[4].type_code() != kNull) {
+    param.reflect_type = args[4].operator std::string();
+  }
+  attrs.op = op;
+  attrs.parsed = std::move(param);
+  SetAttrDict<op::NumpyPadParam>(&attrs);
+  int num_inputs = 1;
+  int num_outputs = 1;

Review comment: `num_outputs` should be 0. It is the number of outputs provided by the user, not the number of outputs yielded by the op.
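`String2MXNetPadType` above is a plain string-to-enum dispatch, with `LOG(FATAL)` on unknown modes. A hedged Python sketch of the same dispatch (the integer values are assumed to mirror the `pad_enum` constants in `np_pad_op-inl.h`, and the names here are illustrative):

```python
# Assumed ordering of the pad_enum constants; only the dispatch
# structure, not the exact integer values, is taken from the diff.
PAD_MODES = {
    "constant": 0,   # pad_enum::kConstant
    "edge": 1,       # pad_enum::kEdge
    "reflect": 2,    # pad_enum::kReflect
    "symmetric": 3,  # pad_enum::kSymmetric
    "maximum": 4,    # pad_enum::kMaximum
    "minimum": 5,    # pad_enum::kMinimum
}

def string_to_pad_type(s):
    """Map a user-facing mode string to its enum value, rejecting
    unknown modes (LOG(FATAL) in the C++ version)."""
    try:
        return PAD_MODES[s]
    except KeyError:
        raise ValueError("unknown type " + s)
```

The review note about `num_outputs` reflects the FFI convention used here: it counts user-supplied `out` arrays, not the outputs the op produces, which is why an op with no `out=` argument passes 0.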
[GitHub] [incubator-mxnet] hzfan commented on a change in pull request #18053: [Numpy] New FFIs for Operator: pad, prod
hzfan commented on a change in pull request #18053: [Numpy] New FFIs for Operator: pad, prod URL: https://github.com/apache/incubator-mxnet/pull/18053#discussion_r409988276 ## File path: python/mxnet/numpy/multiarray.py ## @@ -10363,7 +10364,154 @@ def pad(x, pad_width=None, mode="constant", **kwargs): # pylint: disable=too-man [10, 10, 10, 10, 10, 10, 10], [10, 10, 10, 10, 10, 10, 10]]) """ -return _mx_nd_np.pad(x, pad_width, mode, **kwargs) +if not _np.asarray(pad_width).dtype.kind == 'i': Review comment: Seems that we have checks in `_mx_nd_np.pad`? Why not simply invoke it?
[GitHub] [incubator-mxnet] hzfan commented on a change in pull request #18053: [Numpy] New FFIs for Operator: pad, prod
hzfan commented on a change in pull request #18053: [Numpy] New FFIs for Operator: pad, prod URL: https://github.com/apache/incubator-mxnet/pull/18053#discussion_r409992467 ## File path: src/operator/numpy/np_pad_op-inl.h ## @@ -122,6 +143,18 @@ struct NumpyPadParam : public dmlc::Parameter<NumpyPadParam> { "the extended part of the array is created by subtracting the " "reflected values from two times the edge value."); } + // Added SetAttrDict function here + void SetAttrDict(std::unordered_map<std::string, std::string>* dict) { +std::ostringstream pad_width_s, mode_s, constant_values_s, reflect_type_s; +pad_width_s << pad_width; +// mode_s << mode; Review comment: Remove dead code.
[GitHub] [incubator-mxnet] hzfan commented on a change in pull request #18053: [Numpy] New FFIs for Operator: pad, prod
hzfan commented on a change in pull request #18053: [Numpy] New FFIs for Operator: pad, prod
URL: https://github.com/apache/incubator-mxnet/pull/18053#discussion_r409993623

## File path: src/api/operator/numpy/np_broadcast_reduce_op_value.cc

## @@ -89,4 +89,50 @@ MXNET_REGISTER_API("_npi.mean")
 }
});
+MXNET_REGISTER_API("_npi.prod")
+.set_body([](runtime::MXNetArgs args, runtime::MXNetRetValue* ret) {
+  using namespace runtime;
+  const nnvm::Op* op = Op::Get("_npi_prod");
+  nnvm::NodeAttrs attrs;
+  op::NumpyReduceAxesParam param;
+  if (args[1].type_code() == kNull) {
+    param.axis = dmlc::optional<Tuple<int>>();
+  } else if (args[1].type_code() == kDLInt) {
+    param.axis = Tuple<int>(1, args[1].operator int64_t());
+  } else {
+    param.axis = Tuple<int>(args[1].operator ObjectRef());
+  }
+  if (args[2].type_code() == kNull) {
+    param.dtype = dmlc::optional<int>();
+  } else {
+    param.dtype = String2MXNetTypeWithBool(args[2].operator std::string());
+  }
+  if (args[3].type_code() == kNull) {
+    param.keepdims = false;
+  } else {
+    param.keepdims = args[3].operator bool();
+  }
+  if (args[4].type_code() == kNull) {
+    param.initial = dmlc::optional<double>();
+  } else {
+    param.initial = args[4].operator double();
+  }
+  attrs.op = op;
+  attrs.parsed = std::move(param);
+  SetAttrDict<op::NumpyReduceAxesParam>(&attrs);
+  // inputs
+  NDArray* inputs[] = {args[0].operator mxnet::NDArray*()};
+  int num_inputs = 1;
+  // outputs
+  NDArray* out = args[5].operator mxnet::NDArray*();
+  NDArray** outputs = out == nullptr ? nullptr : &out;
+  int num_outputs = out != nullptr;
+  auto ndoutputs = Invoke(op, &attrs, num_inputs, inputs, &num_outputs, outputs);
+  if (out) {
+    *ret = PythonArg(3);

Review comment: `PythonArg(5)`. Here we specify the argument index in `_api_internal.prod`, not in `_op.prod`.
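The registration above fills in a default whenever an argument slot is `kNull` (Python `None`): `keepdims` defaults to `False`, `initial` to empty, `dtype` passes through only if given. A pure-Python sketch of that default handling, simplified to a flat list of values (the function name and flat-list restriction are illustrative, not MXNet code):

```python
from functools import reduce
import operator

def prod(data, dtype=None, initial=None):
    """Mimic the kNull default handling of the prod FFI:
    None -> default (initial=1, dtype left untouched)."""
    start = 1 if initial is None else initial
    result = reduce(operator.mul, data, start)
    # dtype slot given -> convert; otherwise return as-is.
    return dtype(result) if dtype is not None else result
```

The same pattern (one `type_code() == kNull` check per optional argument) repeats across the new FFI registrations in these PRs.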
[GitHub] [incubator-mxnet] hzfan commented on a change in pull request #18049: [numpy] add numpy op fill_diagonal
hzfan commented on a change in pull request #18049: [numpy] add numpy op fill_diagonal URL: https://github.com/apache/incubator-mxnet/pull/18049#discussion_r409986072 ## File path: python/mxnet/ndarray/numpy/_op.py ## @@ -7477,6 +7477,94 @@ def resize(a, new_shape): return _npi.resize_fallback(a, new_shape=new_shape) +@set_module('mxnet.ndarray.numpy') +def fill_diagonal(a, val, wrap=False): +""" +Fill the main diagonal of the given array of any dimensionality. +For an array `a` with ``a.ndim >= 2``, the diagonal is the list of +locations with indices ``a[i, ..., i]`` all identical. This function +modifies the input array in-place, it does not return a value. +Parameters +-- +a : array, at least 2-D. + Array whose diagonal is to be filled, it gets modified in-place. +val : scalar + Value to be written on the diagonal, its type must be compatible with + that of the array a. +wrap : bool + For tall matrices in NumPy version up to 1.6.2, the + diagonal "wrapped" after N columns. You can have this behavior + with this option. This affects only tall matrices. 
+ +Examples + +>>> a = np.zeros((3, 3), int) +>>> np.fill_diagonal(a, 5) +>>> a +array([[5, 0, 0], + [0, 5, 0], + [0, 0, 5]]) +The same function can operate on a 4-D array: +>>> a = np.zeros((3, 3, 3, 3), int) +>>> np.fill_diagonal(a, 4) +We only show a few blocks for clarity: +>>> a[0, 0] +array([[4, 0, 0], + [0, 0, 0], + [0, 0, 0]]) +>>> a[1, 1] +array([[0, 0, 0], + [0, 4, 0], + [0, 0, 0]]) +>>> a[2, 2] +array([[0, 0, 0], + [0, 0, 0], + [0, 0, 4]]) +The wrap option affects only tall matrices: +>>> # tall matrices no wrap +>>> a = np.zeros((5, 3), int) +>>> np.fill_diagonal(a, 4) +>>> a +array([[4, 0, 0], + [0, 4, 0], + [0, 0, 4], + [0, 0, 0], + [0, 0, 0]]) +>>> # tall matrices wrap +>>> a = np.zeros((5, 3), int) +>>> np.fill_diagonal(a, 4, wrap=True) +>>> a +array([[4, 0, 0], + [0, 4, 0], + [0, 0, 4], + [0, 0, 0], + [4, 0, 0]]) +>>> # wide matrices +>>> a = np.zeros((3, 5), int) +>>> np.fill_diagonal(a, 4, wrap=True) +>>> a +array([[4, 0, 0, 0, 0], + [0, 4, 0, 0, 0], + [0, 0, 4, 0, 0]]) +The anti-diagonal can be filled by reversing the order of elements +using either `numpy.flipud` or `numpy.fliplr`. +>>> a = np.zeros((3, 3), int); +>>> np.fill_diagonal(np.fliplr(a), [1,2,3]) # Horizontal flip +>>> a +array([[0, 0, 1], + [0, 2, 0], + [3, 0, 0]]) +>>> np.fill_diagonal(np.flipud(a), [1,2,3]) # Vertical flip +>>> a +array([[0, 0, 3], + [0, 2, 0], + [1, 0, 0]]) +Note that the order in which the diagonal is filled varies depending +on the flip function. +""" +return _npi.fill_diagonal(a, val=val, wrap=wrap, out=a) Review comment: Use `_api_internal` This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
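For the 2-D case shown in the docstring, `fill_diagonal` amounts to striding through the flattened array with step `ncols + 1`; without `wrap`, the fill stops after the top square block, and with `wrap` it continues to the end of the array (that is what produces the extra `4` in row 4 of the tall-matrix example). A pure-Python sketch over nested lists (illustrative; the MXNet kernel is implemented in C++):

```python
def fill_diagonal_2d(a, val, wrap=False):
    """Fill the diagonal of a 2-D list of lists in place, mimicking
    numpy.fill_diagonal's flat-index stride trick for 2-D inputs."""
    nrows, ncols = len(a), len(a[0])
    step = ncols + 1
    # Without wrap, tall matrices stop filling after the top ncols x ncols block.
    end = nrows * ncols if wrap else min(nrows * ncols, ncols * ncols)
    for flat in range(0, end, step):
        a[flat // ncols][flat % ncols] = val
```

Running this on the docstring's 5x3 examples reproduces both the no-wrap and wrap outputs shown above.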
[GitHub] [incubator-mxnet] hzfan commented on a change in pull request #18049: [numpy] add numpy op fill_diagonal
hzfan commented on a change in pull request #18049: [numpy] add numpy op fill_diagonal URL: https://github.com/apache/incubator-mxnet/pull/18049#discussion_r409985694 ## File path: src/api/operator/numpy/np_fill_diagonal_op.cc ## @@ -0,0 +1,58 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +/*! + * \file np_fill_diagonal_op.cc + * \brief Implementation of the API of functions in src/operator/numpy/np_fill_diagonal.cc */ +#include +#include "../utils.h" +#include "../../../operator/numpy/np_fill_diagonal_op-inl.h" + +namespace mxnet { + +MXNET_REGISTER_API("_npi.fill_diagonal") +.set_body([](runtime::MXNetArgs args, runtime::MXNetRetValue* ret) { + using namespace runtime; + const nnvm::Op* op = Op::Get("_npi_fill_diagonal"); + nnvm::NodeAttrs attrs; + + op::NumpyFillDiagonalParam param; + int num_inputs = 1; + NDArray* inputs[] = {args[0].operator mxnet::NDArray*()}; + + param.val = Tuple(1, args[1].operator double()); Review comment: @BenjaminCHEN2016 #17866 has been merged. Let's use `Obj2Tuple` like https://github.com/apache/incubator-mxnet/pull/17866/files#diff-4c8cf94cd8bf4368b07f43c165402239R49. This is an automated message from the Apache Git Service. 
[GitHub] [incubator-mxnet] mxnet-bot commented on issue #18064: [v1.x] Backport #17689 and #17884 to v1.x branch
mxnet-bot commented on issue #18064: [v1.x] Backport #17689 and #17884 to v1.x branch URL: https://github.com/apache/incubator-mxnet/pull/18064#issuecomment-615029889 Jenkins CI successfully triggered : [unix-gpu]
[GitHub] [incubator-mxnet] wuxun-zhang commented on issue #18064: [v1.x] Backport #17689 and #17884 to v1.x branch
wuxun-zhang commented on issue #18064: [v1.x] Backport #17689 and #17884 to v1.x branch URL: https://github.com/apache/incubator-mxnet/pull/18064#issuecomment-615029861 @mxnet-bot run ci [unix-gpu]
[GitHub] [incubator-mxnet-ci] leezu commented on issue #20: Update config files for g4
leezu commented on issue #20: Update config files for g4 URL: https://github.com/apache/incubator-mxnet-ci/pull/20#issuecomment-615024833 g4 needs new driver. Therefore AMI requires update. Yes, there should be only a single AMI.
[GitHub] [incubator-mxnet-ci] leezu edited a comment on issue #20: Update config files for g4
leezu edited a comment on issue #20: Update config files for g4 URL: https://github.com/apache/incubator-mxnet-ci/pull/20#issuecomment-615024833 g4 needs a more recent driver. Therefore AMI requires update. Yes, there should be only a single AMI.
[GitHub] [incubator-mxnet] haojin2 commented on issue #18004: Wrong result when using new numpy ffi in deferred compute
haojin2 commented on issue #18004: Wrong result when using new numpy ffi in deferred compute URL: https://github.com/apache/incubator-mxnet/issues/18004#issuecomment-615024481 @leezu Have you been able to find the root cause for this bug?
[GitHub] [incubator-mxnet] mxnet-bot commented on issue #18082: Add gelu fuse ops
mxnet-bot commented on issue #18082: Add gelu fuse ops URL: https://github.com/apache/incubator-mxnet/pull/18082#issuecomment-615020905 Jenkins CI successfully triggered : [unix-gpu, centos-gpu]
[GitHub] [incubator-mxnet] MoisesHer commented on issue #18082: Add gelu fuse ops
MoisesHer commented on issue #18082: Add gelu fuse ops URL: https://github.com/apache/incubator-mxnet/pull/18082#issuecomment-615020849 @mxnet-bot run ci [centos-gpu, unix-gpu]
[GitHub] [incubator-mxnet-ci] marcoabreu commented on issue #20: Update config files for g4
marcoabreu commented on issue #20: Update config files for g4 URL: https://github.com/apache/incubator-mxnet-ci/pull/20#issuecomment-615019588 Why is it necessary at all? You can create an image on one machine type and just run it on all.
[GitHub] [incubator-mxnet] mxnet-bot commented on issue #18087: CI/CD: Remove cuda 9.0, 9.1
mxnet-bot commented on issue #18087: CI/CD: Remove cuda 9.0, 9.1 URL: https://github.com/apache/incubator-mxnet/pull/18087#issuecomment-615019436 Jenkins CI successfully triggered : [unix-gpu]
[GitHub] [incubator-mxnet] leezu commented on issue #18087: CI/CD: Remove cuda 9.0, 9.1
leezu commented on issue #18087: CI/CD: Remove cuda 9.0, 9.1 URL: https://github.com/apache/incubator-mxnet/pull/18087#issuecomment-615019382 @mxnet-bot run ci [unix-gpu] stuck after np empty
[GitHub] [incubator-mxnet] leezu commented on issue #17808: [WIP] Windows dev environment configuration, update install instructions from source in the docs
leezu commented on issue #17808: [WIP] Windows dev environment configuration, update install instructions from source in the docs URL: https://github.com/apache/incubator-mxnet/pull/17808#issuecomment-615013613 @vexilligera it's not fixed. We need a script to generate the working AMI. https://github.com/apache/incubator-mxnet/pull/17962 is only a hotfix. Will you be working on the automated setup or is someone else taking over the work? Please clarify. Thank you
[GitHub] [incubator-mxnet] mxnet-bot commented on issue #18039: [MKLDNN] Backport #17707 "Remove overhead of sg_mkldnn_fullyconnected op" to v1.x
mxnet-bot commented on issue #18039: [MKLDNN] Backport #17707 "Remove overhead of sg_mkldnn_fullyconnected op" to v1.x URL: https://github.com/apache/incubator-mxnet/pull/18039#issuecomment-615012292 Jenkins CI successfully triggered : [unix-gpu, windows-gpu]
[GitHub] [incubator-mxnet] ciyongch commented on issue #18039: [MKLDNN] Backport #17707 "Remove overhead of sg_mkldnn_fullyconnected op" to v1.x
ciyongch commented on issue #18039: [MKLDNN] Backport #17707 "Remove overhead of sg_mkldnn_fullyconnected op" to v1.x URL: https://github.com/apache/incubator-mxnet/pull/18039#issuecomment-615012255 @mxnet-bot run ci [unix-gpu, windows-gpu]
[GitHub] [incubator-mxnet] haojin2 commented on a change in pull request #18049: [numpy] add numpy op fill_diagonal
haojin2 commented on a change in pull request #18049: [numpy] add numpy op fill_diagonal URL: https://github.com/apache/incubator-mxnet/pull/18049#discussion_r409961176 ## File path: src/operator/numpy/np_fill_diagonal_op-inl.h ## @@ -0,0 +1,174 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +/*! + * Copyright (c) 2020 by Contributors + * \file np_tril_op-inl.h + * \brief Function definition of the tril (lower triangle of an array) op + */ + +#ifndef MXNET_OPERATOR_NUMPY_NP_FILL_DIAGONAL_OP_INL_H_ +#define MXNET_OPERATOR_NUMPY_NP_FILL_DIAGONAL_OP_INL_H_ + +#include +#include +#include +#include +#include "../mxnet_op.h" +#include "../operator_common.h" +#include "../elemwise_op_common.h" + +namespace mxnet { +namespace op { + +struct NumpyFillDiagonalParam : public dmlc::Parameter { + Tuple val; + bool wrap; + DMLC_DECLARE_PARAMETER(NumpyFillDiagonalParam) { +DMLC_DECLARE_FIELD(val) + .describe("Value to be written on the diagonal, " +"its type must be compatible with that of the array a."); +DMLC_DECLARE_FIELD(wrap) +.set_default(false) +.describe("The diagonal “wrapped” after N columns." + "You can have this behavior with this option. 
" + "This affects only tall matrices."); + } + void SetAttrDict(std::unordered_map* dict) { Review comment: one more blank line above This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [incubator-mxnet] haojin2 commented on a change in pull request #18049: [numpy] add numpy op fill_diagonal
haojin2 commented on a change in pull request #18049: [numpy] add numpy op fill_diagonal URL: https://github.com/apache/incubator-mxnet/pull/18049#discussion_r409960810 ## File path: src/api/operator/numpy/np_fill_diagonal_op.cc ## @@ -0,0 +1,58 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +/*! + * \file np_fill_diagonal_op.cc + * \brief Implementation of the API of functions in src/operator/numpy/np_fill_diagonal.cc */ +#include +#include "../utils.h" +#include "../../../operator/numpy/np_fill_diagonal_op-inl.h" + +namespace mxnet { + +MXNET_REGISTER_API("_npi.fill_diagonal") +.set_body([](runtime::MXNetArgs args, runtime::MXNetRetValue* ret) { + using namespace runtime; + const nnvm::Op* op = Op::Get("_npi_fill_diagonal"); + nnvm::NodeAttrs attrs; + + op::NumpyFillDiagonalParam param; + int num_inputs = 1; + NDArray* inputs[] = {args[0].operator mxnet::NDArray*()}; + + param.val = Tuple(1, args[1].operator double()); Review comment: #17866 merged This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. 
[GitHub] [incubator-mxnet] haojin2 commented on a change in pull request #18049: [numpy] add numpy op fill_diagonal
haojin2 commented on a change in pull request #18049: [numpy] add numpy op fill_diagonal URL: https://github.com/apache/incubator-mxnet/pull/18049#discussion_r409958567 ## File path: python/mxnet/ndarray/numpy/_op.py ## @@ -7477,6 +7477,94 @@ def resize(a, new_shape): return _npi.resize_fallback(a, new_shape=new_shape) +@set_module('mxnet.ndarray.numpy') +def fill_diagonal(a, val, wrap=False): +""" +Fill the main diagonal of the given array of any dimensionality. +For an array `a` with ``a.ndim >= 2``, the diagonal is the list of +locations with indices ``a[i, ..., i]`` all identical. This function +modifies the input array in-place, it does not return a value. +Parameters Review comment: one more blank line above
[GitHub] [incubator-mxnet] ciyongch commented on issue #17177: Fix incorrect calculation results when the C locale is set to a locale that uses commas as the decimal separator
ciyongch commented on issue #17177: Fix incorrect calculation results when the C locale is set to a locale that uses commas as the decimal separator URL: https://github.com/apache/incubator-mxnet/pull/17177#issuecomment-615005075 @nickguletskii, as we're doing 1.7 release recently, can you also help to backport this PR to v1.x branch as @stu1130 mentioned, which will be included in MXNet 1.7 as well? Thanks!
[GitHub] [incubator-mxnet] haojin2 commented on a change in pull request #18049: [numpy] add numpy op fill_diagonal
haojin2 commented on a change in pull request #18049: [numpy] add numpy op fill_diagonal URL: https://github.com/apache/incubator-mxnet/pull/18049#discussion_r409958205 ## File path: python/mxnet/symbol/numpy/_symbol.py ## @@ -6582,6 +6582,102 @@ def resize(a, new_shape): return _npi.resize_fallback(a, new_shape=new_shape) +@set_module('mxnet.symbol.numpy') +def fill_diagonal(a, val, wrap=False): +""" +Fill the main diagonal of the given array of any dimensionality. +For an array `a` with ``a.ndim >= 2``, the diagonal is the list of +locations with indices ``a[i, ..., i]`` all identical. This function +modifies the input array in-place, it does not return a value. +Parameters +-- +a : array, at least 2-D. + Array whose diagonal is to be filled, it gets modified in-place. +val : scalar + Value to be written on the diagonal, its type must be compatible with + that of the array a. +wrap : bool + For tall matrices in NumPy version up to 1.6.2, the + diagonal "wrapped" after N columns. You can have this behavior + with this option. This affects only tall matrices. +See also + +diag_indices, diag_indices_from +Notes +- +.. versionadded:: 1.4.0 +This functionality can be obtained via `diag_indices`, but internally +this version uses a much faster implementation that never constructs the +indices and uses simple slicing. +Examples Review comment: actually no need for `examples` for symbol
[GitHub] [incubator-mxnet] haojin2 commented on a change in pull request #18049: [numpy] add numpy op fill_diagonal
haojin2 commented on a change in pull request #18049: [numpy] add numpy op fill_diagonal URL: https://github.com/apache/incubator-mxnet/pull/18049#discussion_r409956899 ## File path: python/mxnet/symbol/numpy/_symbol.py ## @@ -6582,6 +6582,102 @@ def resize(a, new_shape): return _npi.resize_fallback(a, new_shape=new_shape) +@set_module('mxnet.symbol.numpy') +def fill_diagonal(a, val, wrap=False): +""" +Fill the main diagonal of the given array of any dimensionality. +For an array `a` with ``a.ndim >= 2``, the diagonal is the list of +locations with indices ``a[i, ..., i]`` all identical. This function +modifies the input array in-place, it does not return a value. +Parameters Review comment: one more blank line above
[GitHub] [incubator-mxnet] haojin2 commented on a change in pull request #18049: [numpy] add numpy op fill_diagonal
haojin2 commented on a change in pull request #18049: [numpy] add numpy op fill_diagonal URL: https://github.com/apache/incubator-mxnet/pull/18049#discussion_r409956925 ## File path: python/mxnet/symbol/numpy/_symbol.py ## @@ -6582,6 +6582,102 @@ def resize(a, new_shape): return _npi.resize_fallback(a, new_shape=new_shape) +@set_module('mxnet.symbol.numpy') +def fill_diagonal(a, val, wrap=False): +""" +Fill the main diagonal of the given array of any dimensionality. +For an array `a` with ``a.ndim >= 2``, the diagonal is the list of +locations with indices ``a[i, ..., i]`` all identical. This function +modifies the input array in-place, it does not return a value. +Parameters +-- +a : array, at least 2-D. + Array whose diagonal is to be filled, it gets modified in-place. +val : scalar + Value to be written on the diagonal, its type must be compatible with + that of the array a. +wrap : bool + For tall matrices in NumPy version up to 1.6.2, the + diagonal "wrapped" after N columns. You can have this behavior + with this option. This affects only tall matrices. +See also + +diag_indices, diag_indices_from +Notes +- +.. versionadded:: 1.4.0 +This functionality can be obtained via `diag_indices`, but internally +this version uses a much faster implementation that never constructs the +indices and uses simple slicing. +Examples Review comment: one more blank line above
[GitHub] [incubator-mxnet] haojin2 commented on a change in pull request #18049: [numpy] add numpy op fill_diagonal
haojin2 commented on a change in pull request #18049: [numpy] add numpy op fill_diagonal URL: https://github.com/apache/incubator-mxnet/pull/18049#discussion_r409957004 ## File path: python/mxnet/symbol/numpy/_symbol.py ## @@ -6582,6 +6582,102 @@ def resize(a, new_shape): return _npi.resize_fallback(a, new_shape=new_shape) +@set_module('mxnet.symbol.numpy') +def fill_diagonal(a, val, wrap=False): +""" +Fill the main diagonal of the given array of any dimensionality. +For an array `a` with ``a.ndim >= 2``, the diagonal is the list of +locations with indices ``a[i, ..., i]`` all identical. This function +modifies the input array in-place, it does not return a value. +Parameters +---------- +a : array, at least 2-D. + Array whose diagonal is to be filled, it gets modified in-place. +val : scalar + Value to be written on the diagonal, its type must be compatible with + that of the array a. +wrap : bool + For tall matrices in NumPy version up to 1.6.2, the + diagonal "wrapped" after N columns. You can have this behavior + with this option. This affects only tall matrices. +See also +-------- +diag_indices, diag_indices_from +Notes Review comment: no need for `notes` and `see also`
[GitHub] [incubator-mxnet] haojin2 commented on a change in pull request #18049: [numpy] add numpy op fill_diagonal
haojin2 commented on a change in pull request #18049: [numpy] add numpy op fill_diagonal URL: https://github.com/apache/incubator-mxnet/pull/18049#discussion_r409956742 ## File path: tests/python/unittest/test_numpy_op.py ## @@ -6900,7 +6900,51 @@ def hybrid_forward(self, F, x, *args, **kwargs): np_data[np_out] = -10 mx_data[mx_out] = -10 assert same(np_data, mx_data.asnumpy()) - Review comment: one more blank line above
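The `fill_diagonal` operator under review mirrors NumPy's own `fill_diagonal`. A minimal sketch of the documented semantics in plain NumPy (not the MXNet implementation from this PR), including the tall-matrix `wrap` behavior:

```python
import numpy as np

# fill_diagonal modifies the array in place and returns None.
a = np.zeros((3, 3), dtype=int)
np.fill_diagonal(a, 5)
print(a)  # diagonal entries are now 5

# For tall matrices, wrap=True continues the diagonal after N columns,
# leaving one untouched row between each wrapped block.
tall = np.zeros((6, 3), dtype=int)
np.fill_diagonal(tall, 1, wrap=True)
print(tall[3].sum())  # row 3 is skipped
print(tall[4, 0])     # the diagonal resumes at (4, 0)
```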
[GitHub] [incubator-mxnet] haojin2 commented on issue #18017: [Numpy] New FFIs for Operator: tile, trace, transpose
haojin2 commented on issue #18017: [Numpy] New FFIs for Operator: tile, trace, transpose URL: https://github.com/apache/incubator-mxnet/pull/18017#issuecomment-615002756 @mxnet-bot run ci [unix-gpu]
[GitHub] [incubator-mxnet] mxnet-bot commented on issue #18017: [Numpy] New FFIs for Operator: tile, trace, transpose
mxnet-bot commented on issue #18017: [Numpy] New FFIs for Operator: tile, trace, transpose URL: https://github.com/apache/incubator-mxnet/pull/18017#issuecomment-615002784 Jenkins CI successfully triggered : [unix-gpu]
[GitHub] [incubator-mxnet] vexilligera commented on issue #17808: [WIP] Windows dev environment configuration, update install instructions from source in the docs
vexilligera commented on issue #17808: [WIP] Windows dev environment configuration, update install instructions from source in the docs URL: https://github.com/apache/incubator-mxnet/pull/17808#issuecomment-615001520 Closing since this is now fixed in #17962
[GitHub] [incubator-mxnet] vexilligera closed pull request #17808: [WIP] Windows dev environment configuration, update install instructions from source in the docs
vexilligera closed pull request #17808: [WIP] Windows dev environment configuration, update install instructions from source in the docs URL: https://github.com/apache/incubator-mxnet/pull/17808
[GitHub] [incubator-mxnet] haojin2 commented on issue #17824: [Numpy] FFI: max/min/amax/amin
haojin2 commented on issue #17824: [Numpy] FFI: max/min/amax/amin URL: https://github.com/apache/incubator-mxnet/pull/17824#issuecomment-614999648 @mxnet-bot run ci [unix-gpu]
[GitHub] [incubator-mxnet] mxnet-bot commented on issue #17824: [Numpy] FFI: max/min/amax/amin
mxnet-bot commented on issue #17824: [Numpy] FFI: max/min/amax/amin URL: https://github.com/apache/incubator-mxnet/pull/17824#issuecomment-614999679 Jenkins CI successfully triggered : [unix-gpu]
[GitHub] [incubator-mxnet] haojin2 commented on issue #17990: Boolean indexing accesses out of bound elements
haojin2 commented on issue #17990: Boolean indexing accesses out of bound elements URL: https://github.com/apache/incubator-mxnet/issues/17990#issuecomment-614999441 Fixed in #17796, test re-enabled
[GitHub] [incubator-mxnet] mxnet-bot commented on issue #17534: [numpy] add logical op
mxnet-bot commented on issue #17534: [numpy] add logical op URL: https://github.com/apache/incubator-mxnet/pull/17534#issuecomment-614998225 Jenkins CI successfully triggered : [edge, unix-gpu]
[GitHub] [incubator-mxnet] haojin2 commented on issue #17534: [numpy] add logical op
haojin2 commented on issue #17534: [numpy] add logical op URL: https://github.com/apache/incubator-mxnet/pull/17534#issuecomment-614998161 @mxnet-bot run ci [edge, unix-gpu]
[incubator-mxnet] branch master updated (5155095 -> b7d1c69)
This is an automated email from the ASF dual-hosted git repository. haoj pushed a change to branch master in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git. from 5155095 add zero grad for npi_unique (#18080) add b7d1c69 [Numpy] add new ffi for np.linalg.norm (#18066) No new revisions were added by this update. Summary of changes: benchmark/python/ffi/benchmark_ffi.py | 1 + python/mxnet/ndarray/numpy/linalg.py | 17 +++--- .../numpy/linalg/{np_gesvd.cc => np_norm.cc} | 26 -- src/operator/numpy/linalg/np_norm-inl.h| 18 +++ 4 files changed, 46 insertions(+), 16 deletions(-) copy src/api/operator/numpy/linalg/{np_gesvd.cc => np_norm.cc} (68%)
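The newly FFI'd `np.linalg.norm` follows NumPy's semantics. A quick sketch of the behaviors involved, using plain NumPy rather than the MXNet code path added in #18066:

```python
import numpy as np

x = np.array([[3.0, 4.0],
              [0.0, 0.0]])

print(np.linalg.norm(x))             # Frobenius norm of the matrix: 5.0
print(np.linalg.norm(x, axis=1))     # per-row L2 norms: [5. 0.]
print(np.linalg.norm(x[0], ord=1))   # L1 norm of a vector: 7.0
```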
[incubator-mxnet] branch master updated (bd0816e -> 5155095)
This is an automated email from the ASF dual-hosted git repository. haoj pushed a change to branch master in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git. from bd0816e Add np.linalg.qr backward (#18050) add 5155095 add zero grad for npi_unique (#18080) No new revisions were added by this update. Summary of changes: src/operator/numpy/np_unique_op.cc | 1 + tests/python/unittest/test_numpy_op.py | 2 +- 2 files changed, 2 insertions(+), 1 deletion(-)
[GitHub] [incubator-mxnet] haojin2 merged pull request #18066: [Numpy] add new ffi for np.linalg.norm
haojin2 merged pull request #18066: [Numpy] add new ffi for np.linalg.norm URL: https://github.com/apache/incubator-mxnet/pull/18066
[GitHub] [incubator-mxnet] mxnet-bot commented on issue #17637: [Numpy] Add cross product backward
mxnet-bot commented on issue #17637: [Numpy] Add cross product backward URL: https://github.com/apache/incubator-mxnet/pull/17637#issuecomment-614990715 Jenkins CI successfully triggered : [windows-cpu, unix-gpu, windows-gpu]
[GitHub] [incubator-mxnet] haojin2 merged pull request #18080: Add zero grad for npi_unique
haojin2 merged pull request #18080: Add zero grad for npi_unique URL: https://github.com/apache/incubator-mxnet/pull/18080
[GitHub] [incubator-mxnet] haojin2 commented on issue #17637: [Numpy] Add cross product backward
haojin2 commented on issue #17637: [Numpy] Add cross product backward URL: https://github.com/apache/incubator-mxnet/pull/17637#issuecomment-614990654 @mxnet-bot run ci [unix-gpu, windows-cpu, windows-gpu]
[GitHub] [incubator-mxnet] stu1130 edited a comment on issue #16864: [Discussion] 1.7.0 Roadmap
stu1130 edited a comment on issue #16864: [Discussion] 1.7.0 Roadmap URL: https://github.com/apache/incubator-mxnet/issues/16864#issuecomment-614966759 https://github.com/apache/incubator-mxnet/pull/17177 solves the locale issue for not only JVM languages but also Python, see https://github.com/apache/incubator-mxnet/issues/18079. So I want to include this one on 1.7
[GitHub] [incubator-mxnet] leezu commented on issue #18087: CI/CD: Remove cuda 9.0, 9.1
leezu commented on issue #18087: CI/CD: Remove cuda 9.0, 9.1 URL: https://github.com/apache/incubator-mxnet/pull/18087#issuecomment-614979006 @mxnet-bot run ci [unix-gpu] stuck after np true_divide
[GitHub] [incubator-mxnet] mxnet-bot commented on issue #18087: CI/CD: Remove cuda 9.0, 9.1
mxnet-bot commented on issue #18087: CI/CD: Remove cuda 9.0, 9.1 URL: https://github.com/apache/incubator-mxnet/pull/18087#issuecomment-614979049 Jenkins CI successfully triggered : [unix-gpu]
[GitHub] [incubator-mxnet-ci] ChaiBapchya commented on issue #20: Update config files for g4
ChaiBapchya commented on issue #20: Update config files for g4 URL: https://github.com/apache/incubator-mxnet-ci/pull/20#issuecomment-614978298 Sure makes sense!
[GitHub] [incubator-mxnet-ci] leezu commented on issue #20: Update config files for g4
leezu commented on issue #20: Update config files for g4 URL: https://github.com/apache/incubator-mxnet-ci/pull/20#issuecomment-614978202 Suggest to have a single folder for all GPU instances. Not introduce a new folder for G4, but rather consolidate existing 2 folders into a single one.
[GitHub] [incubator-mxnet-ci] ChaiBapchya opened a new pull request #20: Update config files for g4
ChaiBapchya opened a new pull request #20: Update config files for g4 URL: https://github.com/apache/incubator-mxnet-ci/pull/20 Config files for G4 instance on MXNet CI [unix-gpu slaves] G4 instances have Tesla T4 drivers Also uses updated Nvidia driver [v440 instead of 418]
[incubator-mxnet-site] branch asf-site updated: Bump the publish timestamp.
This is an automated email from the ASF dual-hosted git repository. aaronmarkham pushed a commit to branch asf-site in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git The following commit(s) were added to refs/heads/asf-site by this push: new ed5ec5a Bump the publish timestamp. ed5ec5a is described below commit ed5ec5a2a971f4be887e1a4a0fdf1f8474b9d96d Author: mxnet-ci AuthorDate: Fri Apr 17 00:47:25 2020 + Bump the publish timestamp. --- date.txt | 1 + 1 file changed, 1 insertion(+) diff --git a/date.txt b/date.txt new file mode 100644 index 000..cb16b7c --- /dev/null +++ b/date.txt @@ -0,0 +1 @@ +Fri Apr 17 00:47:25 UTC 2020
[GitHub] [incubator-mxnet] stu1130 commented on issue #16864: [Discussion] 1.7.0 Roadmap
stu1130 commented on issue #16864: [Discussion] 1.7.0 Roadmap URL: https://github.com/apache/incubator-mxnet/issues/16864#issuecomment-614966759 https://github.com/apache/incubator-mxnet/pull/17177 solves the locale issue for not only JVM but also Python, see https://github.com/apache/incubator-mxnet/issues/18079. So I want to include this one on 1.7
[GitHub] [incubator-mxnet] stu1130 commented on issue #17177: Fix incorrect calculation results when the C locale is set to a locale that uses commas as the decimal separator
stu1130 commented on issue #17177: Fix incorrect calculation results when the C locale is set to a locale that uses commas as the decimal separator URL: https://github.com/apache/incubator-mxnet/pull/17177#issuecomment-614966397 +1 need this fix on MXNet 1.7
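The bug #17177 addresses stems from numbers being serialized as strings between the Python frontend and the C++ backend: under a locale whose decimal separator is a comma, C string-to-float conversion misparses values such as "3.14". A minimal illustration of the locale machinery involved (a sketch using only the always-available "C" locale; comma-decimal locales like de_DE may not be installed everywhere):

```python
import locale

# C's strtod/printf honor LC_NUMERIC. Under the "C" locale the decimal
# separator is a period, so "3.14" round-trips correctly:
locale.setlocale(locale.LC_NUMERIC, "C")
print(locale.localeconv()["decimal_point"])  # "."

# Under a comma-decimal locale, the same round-trip would render 3.14 as
# "3,14" and truncate to 3 on parse -- the class of silent miscalculation
# the PR guards against.
```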
[GitHub] [incubator-mxnet] mxnet-bot commented on issue #18064: [v1.x] Backport #17689 and #17884 to v1.x branch
mxnet-bot commented on issue #18064: [v1.x] Backport #17689 and #17884 to v1.x branch URL: https://github.com/apache/incubator-mxnet/pull/18064#issuecomment-614962552 Jenkins CI successfully triggered : [edge, website, unix-gpu]
[GitHub] [incubator-mxnet] wuxun-zhang commented on issue #18064: [v1.x] Backport #17689 and #17884 to v1.x branch
wuxun-zhang commented on issue #18064: [v1.x] Backport #17689 and #17884 to v1.x branch URL: https://github.com/apache/incubator-mxnet/pull/18064#issuecomment-614962496 @mxnet-bot run ci [unix-gpu, website, edge]
[incubator-mxnet] branch v1.x updated (b56571d -> 8cfc64a)
This is an automated email from the ASF dual-hosted git repository. lausen pushed a change to branch v1.x in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git. from b56571d [v1.x] backport #17900 "[MKLDNN] support using any format in pooling backward" (#18067) add 8cfc64a No tensor cores for fp32 interleaved attention, remove div by 8 restriction (#17994) (#18085) No new revisions were added by this update. Summary of changes: src/operator/contrib/transformer.cu | 53 ++--- 1 file changed, 37 insertions(+), 16 deletions(-)
[GitHub] [incubator-mxnet] leezu opened a new issue #18088: CI Bot missing comments
leezu opened a new issue #18088: CI Bot missing comments URL: https://github.com/apache/incubator-mxnet/issues/18088 ## Description CI bot didn't comment at https://github.com/apache/incubator-mxnet/pull/18087 Suspected root-cause: Reused a branchname that was previously used for a different PR. Later the other PR got merged and the branch deleted. Recently recreated the branch and opened #18087
[GitHub] [incubator-mxnet] leezu merged pull request #18085: No tensor cores for fp32 interleaved attention, remove div by 8 restiction (#17994)
leezu merged pull request #18085: No tensor cores for fp32 interleaved attention, remove div by 8 restiction (#17994) URL: https://github.com/apache/incubator-mxnet/pull/18085
[GitHub] [incubator-mxnet] mxnet-bot commented on issue #18087: CI/CD: Remove cuda 9.0, 9.1
mxnet-bot commented on issue #18087: CI/CD: Remove cuda 9.0, 9.1 URL: https://github.com/apache/incubator-mxnet/pull/18087#issuecomment-614947016 Jenkins CI successfully triggered : [unix-gpu]
[GitHub] [incubator-mxnet] leezu commented on issue #18087: CI/CD: Remove cuda 9.0, 9.1
leezu commented on issue #18087: CI/CD: Remove cuda 9.0, 9.1 URL: https://github.com/apache/incubator-mxnet/pull/18087#issuecomment-614946981 @mxnet-bot run ci [unix-gpu]
[GitHub] [incubator-mxnet] leezu edited a comment on issue #17988: Row-sparse constant initializer accesses out of bound elements
leezu edited a comment on issue #17988: Row-sparse constant initializer accesses out of bound elements URL: https://github.com/apache/incubator-mxnet/issues/17988#issuecomment-614944263 #17762 fixed an unrelated issue. You can build latest master from source and run `test_init.test_rsp_const_init` to verify the still unresolved bug in rsp code. Though `test_init.test_rsp_const_init` is currently disabled, so you need to enable it locally again first.
[GitHub] [incubator-mxnet] leezu commented on issue #17988: Row-sparse constant initializer accesses out of bound elements
leezu commented on issue #17988: Row-sparse constant initializer accesses out of bound elements URL: https://github.com/apache/incubator-mxnet/issues/17988#issuecomment-614944263 #17762 fixed an unrelated issue. You can build latest master from source and run `test_init.test_rsp_const_init` to verify the still unresolved bug in rsp code.
[incubator-mxnet] branch master updated (7bef85e -> bd0816e)
This is an automated email from the ASF dual-hosted git repository. haoj pushed a change to branch master in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git. from 7bef85e [Numpy] Add ffi for np.sum, np.std, np.var, np.average and np.histogram (#17866) add bd0816e Add np.linalg.qr backward (#18050) No new revisions were added by this update. Summary of changes: src/operator/numpy/linalg/np_qr-inl.h | 132 + src/operator/numpy/linalg/np_qr.cc | 10 ++- src/operator/numpy/linalg/np_qr.cu | 3 + tests/python/unittest/test_numpy_op.py | 69 +++-- 4 files changed, 208 insertions(+), 6 deletions(-)
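The merged change (#18050) adds a backward pass for the QR factorization. The forward contract it builds on, sketched with plain NumPy rather than the MXNet kernels, is the usual decomposition into an orthonormal Q and an upper-triangular R:

```python
import numpy as np

a = np.array([[1.0, 2.0],
              [3.0, 4.0]])
q, r = np.linalg.qr(a)

assert np.allclose(q @ r, a)             # reconstruction: A = QR
assert np.allclose(q.T @ q, np.eye(2))   # Q has orthonormal columns
assert np.allclose(r, np.triu(r))        # R is upper triangular
```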
[GitHub] [incubator-mxnet] haojin2 commented on issue #18050: [Numpy] Add linalg.qr backward
haojin2 commented on issue #18050: [Numpy] Add linalg.qr backward URL: https://github.com/apache/incubator-mxnet/pull/18050#issuecomment-614940143 @D-Roberts merged, thanks for your contribution!
[GitHub] [incubator-mxnet] haojin2 merged pull request #18050: [Numpy] Add linalg.qr backward
haojin2 merged pull request #18050: [Numpy] Add linalg.qr backward URL: https://github.com/apache/incubator-mxnet/pull/18050
[GitHub] [incubator-mxnet] leezu edited a comment on issue #17177: Fix incorrect calculation results when the C locale is set to a locale that uses commas as the decimal separator
leezu edited a comment on issue #17177: Fix incorrect calculation results when the C locale is set to a locale that uses commas as the decimal separator URL: https://github.com/apache/incubator-mxnet/pull/17177#issuecomment-614937590 @nickguletskii can you resolve the conflicts so this PR may be merged? Are you interested in working on the "more principled approach to serialization in MXNet 2.0, e.g. using a binary format for communication between the frontend and the backend. In addition to solving locale-related issues, this would probably result in a smaller invocation overhead."? Part of this may (or may not) be done already via the FFI work led by @hzfan?
[GitHub] [incubator-mxnet] eric-haibin-lin commented on issue #17988: Row-sparse constant initializer accesses out of bound elements
eric-haibin-lin commented on issue #17988: Row-sparse constant initializer accesses out of bound elements URL: https://github.com/apache/incubator-mxnet/issues/17988#issuecomment-614937598 Did the fix in https://github.com/apache/incubator-mxnet/pull/17762 not help?
[GitHub] [incubator-mxnet] leezu commented on issue #18080: Add zero grad for npi_unique
leezu commented on issue #18080: Add zero grad for npi_unique URL: https://github.com/apache/incubator-mxnet/pull/18080#issuecomment-614936908 @mxnet-bot run ci [unix-gpu]
[GitHub] [incubator-mxnet] mxnet-bot commented on issue #18080: Add zero grad for npi_unique
mxnet-bot commented on issue #18080: Add zero grad for npi_unique URL: https://github.com/apache/incubator-mxnet/pull/18080#issuecomment-614936948 Jenkins CI successfully triggered : [unix-gpu]
[GitHub] [incubator-mxnet] sxjscience commented on issue #18078: A possible bug when computing a gradient vector
sxjscience commented on issue #18078: A possible bug when computing a gradient vector URL: https://github.com/apache/incubator-mxnet/issues/18078#issuecomment-614921875 @zleyk22 I can think of two possible ways to solve this problem:
1) Use two cumsums:
- `[\log a_0, \log a_0 + \log a_1, ..., \log a_0 + \log a_1 + ... + \log a_{n-3}]`
- `[\log a_2 + \log a_3 + ... + \log a_{n-1}, \log a_3 + ... + \log a_{n-1}, ..., \log a_{n-1}]`
Then sum the two cumsums elementwise. Here, I think we should take the log-sum approach to avoid the overflow/underflow problem of multiplying lots of numbers. (This is also the algorithm used to solve https://leetcode.com/problems/product-of-array-except-self/.)
2) Detect 0s and give them special treatment. We may detect the positions of the zeros and update the gradient at these positions with the correct value.
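The prefix/suffix idea behind approach (1) can be sketched in plain NumPy. This is a minimal illustration, not MXNet's actual gradient kernel; it uses running products directly rather than log-cumsums (the log variant avoids overflow/underflow but needs extra sign handling for negative entries), and it handles zeros naturally because no division is involved:

```python
import numpy as np

def prod_grad(a):
    """Gradient of np.prod(a) w.r.t. each element a[i]: the product of
    all other elements, computed without division so zeros are handled."""
    n = len(a)
    prefix = np.ones(n)  # prefix[i] = a[0] * ... * a[i-1]
    suffix = np.ones(n)  # suffix[i] = a[i+1] * ... * a[n-1]
    prefix[1:] = np.cumprod(a[:-1])
    suffix[:-1] = np.cumprod(a[::-1][:-1])[::-1]
    return prefix * suffix

grad = prod_grad(np.array([1., 2., 3., 4.]))  # equals [24., 12., 8., 6.]
```

A zero in the input simply zeroes out every component except the one at the zero's own position, which is the special case approach (2) addresses.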
[GitHub] [incubator-mxnet] leezu commented on issue #7899: Need help: numpy array to mxnet ndarray is too slow.
leezu commented on issue #7899: Need help: numpy array to mxnet ndarray is too slow. URL: https://github.com/apache/incubator-mxnet/issues/7899#issuecomment-614917501 You can use `mx.nd.from_numpy(ndarray, zero_copy=True)` if you don't mind that the numpy array is invalid afterwards.
[GitHub] [incubator-mxnet] leezu closed issue #17976: No recent nightly builds for ubuntu-latest
leezu closed issue #17976: No recent nightly builds for ubuntu-latest URL: https://github.com/apache/incubator-mxnet/issues/17976
[GitHub] [incubator-mxnet] leezu edited a comment on issue #17976: No recent nightly builds for ubuntu-latest
leezu edited a comment on issue #17976: No recent nightly builds for ubuntu-latest URL: https://github.com/apache/incubator-mxnet/issues/17976#issuecomment-614916321 Works again
```
% pip install --pre --upgrade --user "mxnet<2" -f https://dist.mxnet.io/python
Looking in links: https://dist.mxnet.io/python
Collecting mxnet<2
  Downloading https://repo.mxnet.io/dist/python/cpu/mxnet-1.7.0b20200415-py2.py3-none-manylinux1_x86_64.whl (48.4 MB)
     |▌ | 12.8 MB 3.5 MB/s eta 0:00:11^C
```
[GitHub] [incubator-mxnet] leezu opened a new pull request #18087: CI/CD: Remove cuda 9.0, 9.1
leezu opened a new pull request #18087: CI/CD: Remove cuda 9.0, 9.1 URL: https://github.com/apache/incubator-mxnet/pull/18087
## Description ##
Cuda 9.0 and 9.1 lack support for the gcc7 host compiler, which is now required by MXNet. Thus retain only support for Cuda 9.2. An alternative is to build cuda-related code with gcc6 and the rest of MXNet with gcc7 or newer, but the added complexity may not be justified.
[GitHub] [incubator-mxnet] leezu opened a new issue #18086: CD pipeline tests segfaults
leezu opened a new issue #18086: CD pipeline tests segfaults URL: https://github.com/apache/incubator-mxnet/issues/18086 CD tests for cuda builds fail with
```
[2020-04-16T20:36:39.175Z] test of the deformable convolution layer with possible combinations of arguments, ...
[2020-04-16T20:36:39.175Z] Segmentation fault: 11
[2020-04-16T20:36:39.175Z]
[2020-04-16T20:36:39.175Z] terminate called without an active exception
[2020-04-16T20:36:40.100Z] /work/runtime_functions.sh: line 986: 8 Aborted (core dumped) $nose_cmd $NOSE_TIMER_ARGUMENTS --verbose tests/python/unittest
```
http://jenkins.mxnet-ci.amazon-ml.com/blue/organizations/jenkins/restricted-mxnet-cd%2Fmxnet-cd-release-job/detail/mxnet-cd-release-job/977/pipeline/354
[GitHub] [incubator-mxnet] haojin2 commented on a change in pull request #17283: [NumPy]Set numpy default dtype
haojin2 commented on a change in pull request #17283: [NumPy]Set numpy default dtype URL: https://github.com/apache/incubator-mxnet/pull/17283#discussion_r409856213 ## File path: src/operator/tensor/init_op.h ## @@ -288,12 +289,25 @@ struct InitOpWithScalarParam : dmlc::Parameter { .set_default("") .describe("Context of output, in format [cpu|gpu|cpu_pinned](n)." "Only used for imperative calls."); -DMLC_DECLARE_FIELD(dtype).set_default(mshadow::kFloat32) +DMLC_DECLARE_FIELD(dtype) + .set_default(-1) + .add_enum("None", -1) MXNET_ADD_ALL_TYPES_WITH_BOOL .describe("Target data type."); DMLC_DECLARE_FIELD(value) .describe("Value with which to fill newly created tensor"); } + void SetAttrDict(std::unordered_map* dict) { Review comment: one more blank line above.
[GitHub] [incubator-mxnet] haojin2 commented on a change in pull request #17283: [NumPy]Set numpy default dtype
haojin2 commented on a change in pull request #17283: [NumPy]Set numpy default dtype URL: https://github.com/apache/incubator-mxnet/pull/17283#discussion_r409855167 ## File path: src/operator/tensor/init_op.cu ## @@ -70,6 +70,9 @@ NNVM_REGISTER_OP(_contrib_arange_like) NNVM_REGISTER_OP(_linspace) .set_attr("FCompute", LinspaceCompute); +NNVM_REGISTER_OP(_npi_linspace) Review comment: move this to `src/operator/numpy/np_init_op.cu`
[GitHub] [incubator-mxnet] haojin2 commented on a change in pull request #17283: [NumPy]Set numpy default dtype
haojin2 commented on a change in pull request #17283: [NumPy]Set numpy default dtype URL: https://github.com/apache/incubator-mxnet/pull/17283#discussion_r409855003 ## File path: src/operator/tensor/init_op.cc ## @@ -150,6 +148,16 @@ NNVM_REGISTER_OP(_linspace) .set_attr("FCompute", LinspaceCompute) .add_arguments(RangeParam::__FIELDS__()); +NNVM_REGISTER_OP(_npi_linspace) Review comment: also move this to `src/operator/numpy/np_init_op.cc`
[GitHub] [incubator-mxnet] mxnet-bot commented on issue #17102: Support mixed precision backward for NumPy-compatible true_divide
mxnet-bot commented on issue #17102: Support mixed precision backward for NumPy-compatible true_divide URL: https://github.com/apache/incubator-mxnet/pull/17102#issuecomment-614896203 Jenkins CI successfully triggered : [unix-gpu]
[GitHub] [incubator-mxnet] haojin2 commented on issue #17102: Support mixed precision backward for NumPy-compatible true_divide
haojin2 commented on issue #17102: Support mixed precision backward for NumPy-compatible true_divide URL: https://github.com/apache/incubator-mxnet/pull/17102#issuecomment-614896117 @mxnet-bot run ci [unix-gpu]
[incubator-mxnet] branch master updated (9337137 -> 7bef85e)
This is an automated email from the ASF dual-hosted git repository. haoj pushed a change to branch master in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git. from 9337137 For mxnet-validation pipeline, require sanity build to complete successfully before running other build pipelines. (#17999) add 7bef85e [Numpy] Add ffi for np.sum, np.std, np.var, np.average and np.histogram (#17866) No new revisions were added by this update. Summary of changes: benchmark/python/ffi/benchmark_ffi.py | 5 + include/mxnet/runtime/ffi_helper.h | 18 ++ include/mxnet/runtime/object.h | 1 + python/mxnet/_ffi/_cython/convert.pxi | 6 + python/mxnet/_ffi/node_generic.py | 2 + python/mxnet/_numpy_op_doc.py | 92 - python/mxnet/ndarray/numpy/_op.py | 114 ++- python/mxnet/numpy/multiarray.py | 104 +- python/mxnet/symbol/numpy/_symbol.py | 51 - python/mxnet/symbol/numpy/linalg.py| 8 +- src/api/_api_internal/_api_internal.cc | 10 + src/api/operator/numpy/np_bincount_op.cc | 4 +- .../operator/numpy/np_broadcast_reduce_op_value.cc | 67 ++- src/api/operator/numpy/np_cumsum.cc| 4 +- .../{random/np_pareto_op.cc => np_histogram_op.cc} | 79 src/api/operator/numpy/np_moments_op.cc| 209 + src/api/operator/numpy/np_tensordot_op.cc | 4 +- src/api/operator/utils.h | 10 + src/operator/numpy/np_broadcast_reduce_op.h| 32 +++- src/operator/numpy/np_broadcast_reduce_op_value.cc | 22 +-- src/operator/numpy/np_broadcast_reduce_op_value.cu | 4 +- src/operator/tensor/histogram-inl.h| 42 +++-- 22 files changed, 702 insertions(+), 186 deletions(-) copy src/api/operator/numpy/{random/np_pareto_op.cc => np_histogram_op.cc} (50%) create mode 100644 src/api/operator/numpy/np_moments_op.cc
[GitHub] [incubator-mxnet] haojin2 merged pull request #17866: [Numpy] Add ffi for np.sum, np.std, np.var and np.average
haojin2 merged pull request #17866: [Numpy] Add ffi for np.sum, np.std, np.var and np.average URL: https://github.com/apache/incubator-mxnet/pull/17866
[incubator-mxnet-site] branch asf-site updated: Bump the publish timestamp.
This is an automated email from the ASF dual-hosted git repository. aaronmarkham pushed a commit to branch asf-site in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git The following commit(s) were added to refs/heads/asf-site by this push: new 151198b Bump the publish timestamp. 151198b is described below commit 151198b2b0a007ba56faa1e09d7ad5a794707027 Author: mxnet-ci AuthorDate: Thu Apr 16 18:46:59 2020 + Bump the publish timestamp. --- date.txt | 1 + 1 file changed, 1 insertion(+) diff --git a/date.txt b/date.txt new file mode 100644 index 000..8c49410 --- /dev/null +++ b/date.txt @@ -0,0 +1 @@ +Thu Apr 16 18:46:59 UTC 2020
[GitHub] [incubator-mxnet] MoisesHer commented on a change in pull request #18082: Add gelu fuse ops
MoisesHer commented on a change in pull request #18082: Add gelu fuse ops URL: https://github.com/apache/incubator-mxnet/pull/18082#discussion_r409751099 ## File path: src/operator/fusion/fused_op-inl.h ## @@ -543,6 +552,13 @@ __device__ inline DType relu(const DType val) { return val > 0 ? val : 0; } +__constant__ const float SQRT_2 = 1.4142135623730950488016887242096; Review comment: Thank you, I just removed "__constant__"
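For context on why a `SQRT_2` constant appears in the GELU fusion code: the exact (erf-based) GELU is `0.5 * x * (1 + erf(x / sqrt(2)))`. A minimal Python reference of that formula, purely illustrative and not the fused CUDA kernel from the PR:

```python
import math

def gelu(x):
    # Exact GELU: 0.5 * x * (1 + erf(x / sqrt(2)))
    SQRT_2 = 1.4142135623730950488016887242096
    return 0.5 * x * (1.0 + math.erf(x / SQRT_2))

print(gelu(1.0))  # ~0.8413, i.e. 1.0 scaled by the standard normal CDF at 1
```

For large positive inputs GELU approaches the identity, and at 0 it is exactly 0, which is a quick sanity check for any fused implementation.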
[GitHub] [incubator-mxnet] mxnet-bot commented on issue #18085: No tensor cores for fp32 interleaved attention, remove div by 8 restiction (#17994)
mxnet-bot commented on issue #18085: No tensor cores for fp32 interleaved attention, remove div by 8 restiction (#17994) URL: https://github.com/apache/incubator-mxnet/pull/18085#issuecomment-614797040 Hey @blchu , Thanks for submitting the PR All tests are already queued to run once. If tests fail, you can trigger one or more tests again with the following commands: - To trigger all jobs: @mxnet-bot run ci [all] - To trigger specific jobs: @mxnet-bot run ci [job1, job2] *** **CI supported jobs**: [sanity, website, centos-gpu, edge, miscellaneous, unix-gpu, windows-gpu, clang, centos-cpu, unix-cpu, windows-cpu] *** _Note_: Only following 3 categories can trigger CI :PR Author, MXNet Committer, Jenkins Admin. All CI tests must pass before the PR can be merged.
[GitHub] [incubator-mxnet] blchu opened a new pull request #18085: No tensor cores for fp32 interleaved attention, remove div by 8 restiction (#17994)
blchu opened a new pull request #18085: No tensor cores for fp32 interleaved attention, remove div by 8 restiction (#17994) URL: https://github.com/apache/incubator-mxnet/pull/18085 (cherry picked from commit afae030beb168f09cf08be101714e059157a9507) ## Description ## Fixed issue where fp32 inputs use tensor cores for the interleaved multihead attention operators, resulting in lower precision calculations and potential reduction in accuracy. ## Checklist ## ### Essentials ### - [ ] Changes are complete (i.e. I finished coding on this PR) - [ ] All changes have test coverage: - [ ] To the best of my knowledge, examples are either not affected by this change, or have been fixed to be compatible with this change ### Changes ### - [ ] Set interleaved multihead attention GEMM default to not use tensor cores, and only use if input data type is fp16 - [ ] No longer checks for tensor input shape divisibility by 8 ## Comments ##