Awesome news Sam, this should make maintaining and integrating custom ops a lot
easier. Thanks for the effort, everyone.
On Mon, Dec 9, 2019 at 5:55 AM Skalicky, Sam
wrote:
> Thanks Ciyong,
>
> Absolutely! Here's how a backward function is registered [1] and here’s an
> example backward function for
Quite interested in BytePS. Looking forward to seeing how integration
could evolve.
On Wed, Nov 6, 2019, 8:14 AM Yimin Jiang wrote:
> Hi Zhennan,
>
> Thanks for your interest. To be honest, our team does not currently have a
> plan for CPU training. That said, the notion of BytePS is not
LGTM, Tao.
On Thu, Oct 10, 2019, 7:23 AM Tao Lv wrote:
> Okay, it looks like there are no objections. I will send the announcement to
> announce@ and general@ soon.
>
> Thanks,
> -tao
>
> On Wed, Oct 9, 2019 at 10:35 AM Tao Lv wrote:
>
> > Dear community,
> >
> > This is to review the announcement
New site looks good. I do notice that a few tutorials from the old site
are missing (for example the TensorRT tutorial). Any plans to bring them
back?
On Sun, Sep 22, 2019 at 10:04 AM Haibin Lin
wrote:
> Another issue I found with the current website: the Sphinx object inventory
>
Thanks for organizing the release Tao.
On Sun, Sep 1, 2019, 5:53 PM Tao Lv wrote:
> Hi Community,
>
> Code freeze for 1.5.1 patch release will be 9/3 6pm PST (9/4 9am CST). If
> you have any additional fix in progress and would like to include it in
> this release, please ensure they have been
Having runtime loadable / pluggable operators might help with this.
On Thu, Jul 11, 2019 at 10:20 AM kellen sunderland <
kellen.sunderl...@gmail.com> wrote:
> Once it's compiled the forward / backward, etc kernel implementations are
> hard coded to use cuDNN. In theory we could supp
Once it's compiled the forward / backward, etc kernel implementations are
hard coded to use cuDNN. In theory we could support raw CUDA in addition
to cuDNN but the additional CUDA kernel code would bloat the binary (it
targets several GPU types).
On Thu, Jul 11, 2019 at 9:36 AM Chris Olivier
I remember at the time we also had a read-through of this blog post, and to
me the code looked like it was following the advice:
https://devblogs.nvidia.com/cuda-pro-tip-always-set-current-device-avoid-multithreading-bugs/
On Mon, Jun 24, 2019 at 6:39 PM kellen sunderland <
kellen.sund
> > > > > > > samples/sec
> > > > > > > accuracy=0.999844
> > > > > > > INFO:root:Epoch[19] Batch [200-300] Speed: 45146.84
> > > > > > > samples/sec
> > > > > > > accuracy=0.999687
> > > > > > > INFO:root:Epoch[19] Batch [300
a side note, I mentioned a couple of things in my email yesterday that
> > still are not being responded to (they were also ignored in the last
> > incarnation of this “discussion” — I have much experience in this matter
> to
> > assume “discussion” is a waste of my time, seeing
I've also quite often seen two versions of OpenMP linked. I think we can
all agree we probably want to avoid linking in two libraries that do
effectively the same thing.
The performance questions should be fairly straightforward to demonstrate,
right? Could we just collaborate on a few minimal
Just double checked CUDA 9, 10 and 10.1 all support SM3, so actually I
don't believe there's any need to drop SMs.
On Wed, Jun 19, 2019 at 9:56 AM kellen sunderland <
kellen.sunderl...@gmail.com> wrote:
> I think where we're all going to have agreement is that we shouldn't have
> co
I think where we're all going to have agreement is that we shouldn't have
code targeting CUDA versions earlier than CUDA 9, or cuDNN versions earlier
than 6. We can go ahead and remove any code that targets those old
versions, and drop any SMs that are not supported by CUDA 9 / cuDNN 6. I'd
L: lib
> /home/lvtao/Workspace/mxnet-official/build/mklml/mklml_lnx_2019.0.1.20180928/lib/libmklml_intel.so
>
> Thank you Junru for managing this release. We also verified MKL-DNN
> related tests, convergence, quantization and FP32/INT8 performance. They
> all look good to me.
>
> 3rdparty/dmlc-core/test/unittest/CMakeFiles/dmlc_unit_tests.dir/unittest_parser.cc.o
out issues and concerns,
> rather than broken links to nonsense.
>
> Looking forward to your reply!
>
> Thanks,
> Junru
>
> On Fri, May 3, 2019 at 08:05 kellen sunderland <
> kellen.sunderl...@gmail.com>
> wrote:
>
> > Hey Konstantin. Thanks for startin
Hey Konstantin. Thanks for starting an email thread and sorry for the
confusion. I think the idea is that we should discuss and agree on
Conan.io adoption first on the dev list, then start merging PRs. Release
1.4.1 is already in testing and the 1.5 code freeze deadline is also near
so I think
non-binding distinction. I am not a PPMC
> member, so my vote is non-binding.
>
> Best,
> Damien
>
> On Fri, May 3, 2019 at 3:19 AM kellen sunderland <
> kellen.sunderl...@gmail.com> wrote:
>
> > Hi Junru could you give a quick summary of the binding / non-bi
riate for me to vote so
> I refrained from voting till now.
>
> +1
>
> -sz
>
> > On May 3, 2019, at 12:19 AM, kellen sunderland <
> kellen.sunderl...@gmail.com> wrote:
> >
> > Hi Junru could you give a quick summary of the binding / non-binding
>
passed
> > > - GluonCV unittest scripts passed
> > > - GluonCV training scripts passed
> > > - No issue with python multiprocessing
> > >
> > > Best,
> > > Zhi
> > > > On May 2, 2019, at 11:34 AM, kellen sunderland <
> > > kellen.sunderl
+1 (non-binding)
I checked TRT integration builds and tests pass.
MD5s
Sigs look good.
-Kellen
On Thu, May 2, 2019 at 10:51 AM Damien Stanton
wrote:
> +1 (binding)
>
> Built from source / Scala / Clojure. All tests pass. The only issue of
> minor note: The macOS build guide indicates a
Welcome! Very impressed with the work fixing memory leaks so far.
On Tue, Apr 16, 2019 at 9:14 AM Carin Meier wrote:
> Congrats!
>
> On Tue, Apr 16, 2019 at 11:58 AM Anirudh Subramanian <
> anirudh2...@gmail.com>
> wrote:
>
> > Hi,
> >
> > Please join me to welcome Wang Jiajun
Hey Per, just wanted to drop a line and say thanks for supporting the
community on this one.
On Tue, Apr 9, 2019 at 4:20 AM Per da Silva wrote:
> I've created an issue to track this problem:
> https://github.com/apache/incubator-mxnet/issues/14652
>
> On Tue, Apr 9, 2019 at 9:07 AM Per da Silva
burden and technical debt without significant
> benefit. I would suggest starting by supporting something simple like a
> plugin module, before moving toward the general direction.
>
> Tianqi
>
> On Sun, Apr 7, 2019 at 1:31 PM kellen sunderland <
> kellen.sunderl...@gmail.co
Strongly support the idea of runtime loadable components in MXNet. There's
no reason (other than perhaps engineering effort) we can't have a single
compilation of MXNet that finds dependencies and chooses execution paths
intelligently (or based on configuration) at runtime.
On Thu, Apr 4, 2019
"Does merging mshadow into mxnet bring any actual benefit for customers in
sense of performance, portability, or anything else?"
It would improve the contributor experience in that if we find a bug which
requires fixes in both repos, we won't have to coordinate 2 PRs. It would
also make
Hello MXNet devs,
I'd like to start a thread discussing what our build system should look
like in MXNet 2.0. I'd propose that although the current make system has
served us well in the past, we remove it along with the bump to 2.0. The
end goal I'd like to see is that we have a clean build
Release breakdown makes sense to me Hagay. Thanks for initiating a
discussion.
Some features that I'm personally looking forward to that I hope can make
it into 1.5 (schedule permitting):
* TensorRT being integrated with the subgraph API
* VNNI MKLDNN support
* AMP training in MXNet
I like
Is this the error?
"test_model.R:129: error: Fine-tune
cannot open URL
'http://data.dmlc.ml/models/imagenet/inception-bn/Inception-BN-0126.params'
1: GetInception() at R-package/tests/testthat/test_model.R:129
2:
Congrats Patric!
On Sun, Mar 17, 2019 at 10:34 PM Hagay Lupesko wrote:
> Congrats Patric!
>
> On Fri, Mar 15, 2019 at 7:49 AM Joshua Z. Zhang
> wrote:
>
> >
> >
> >
> > Congrats Patrick!
> >
> >
> >
> >
> >
> > Zhi
> >
> > >
> > > On Mar 15, 2019 at 10:46 PM, > marco.g.ab...@gmail.com)>
Great news. Congrats Steffen.
On Mon, Feb 4, 2019, 5:29 PM Thomas DELTEIL wrote:
> Welcome Steffen!
>
> On Mon, Feb 4, 2019, 15:55 Marco de Abreu
> > Welcome!
> >
> > On Tue., Feb. 5, 2019, 00:45, Chris Olivier
> > wrote:
> >
> > > Dear Community:
> > >
> > > Please join me to welcome Steffen
Congrats Lin! Well deserved.
On Sat, Feb 2, 2019 at 11:05 PM Marco de Abreu
wrote:
> Congratulations, welcome!
>
> On Sun., Feb. 3, 2019, 04:04, Chaitanya Bapat
> wrote:
>
> > Congratulations Lin! Way to go!
> >
> > On Sat, 2 Feb 2019 at 19:39, sandeep krishnamurthy <
> >
> gpg: There is no indication that the signature belongs to the
> owner.
>
> Primary key fingerprint: BD52 136E 76B7 BD68 E784 3B0B 591C 0666 9F74 0FD7
>
>
> Best,
> Steffen
>
> On Wed, Jan 30, 2019 at 10:39 PM kellen sunderland <
> kellen.sunderl...@gmail.c
+0
Overall the release looks good. Probably something I'm doing wrong, but so
far I've not been able to validate the .asc. I'm getting "Can't check signature: No
public key". I've added the keys from GitHub and the release folder, and
also added your public key "40C9346904DFCE37" from the MIT key server
Great response Carin.
Just wanted to chime in and say, while the amount of work shouldn't be
underestimated to maintain a new language binding, I'd love to see some
Rust support. The interop patterns between Rust and C/C++ in particular
could make propagating errors a little nicer of an
Hey Qing, thanks for the summary and to everyone for automating the
deployment process. I've left a few comments on the doc.
On Wed, Jan 23, 2019 at 11:46 AM Qing Lan wrote:
> Hi all,
>
> Recently Zach announced the availability for MXNet Maven publishing
> pipeline and general static-build
> merge today.
> >
> > Yuxi asked offline to merge
> > https://github.com/apache/incubator-mxnet/pull/13922 to complete Horovod
> > integration. PR will be merged today.
> >
> > After above PR are merge and CI passed successfully 1.4.0.rc1 will be
> >
sky PR and can
> get to a stable and tested build by Friday.
>
> Best,
> Steffen
>
> On Tue, Jan 15, 2019 at 9:48 PM kellen sunderland <
> kellen.sunderl...@gmail.com> wrote:
>
> > Many thanks for the license fixes and allowing some other PRs to come
> into
>
Regards,
> > > Steffen
> > >
> > > On Tue, Jan 8, 2019 at 11:28 AM Qing Lan wrote:
> > >
> > > > Hi all,
> > > >
> > > > I added a section F in the document that explained the current
> > > > static-linked dependencie
We may want to consider having a new code freeze deadline for RC1. We
could allow users to open PRs against the 1.4.x branch up until this
deadline.
One advantage is we can have a second look at some API changes which we may
not have got 100% right before we push them out and have to support
Congrats Roshani. Well deserved.
On Tue, Jan 8, 2019, 8:29 AM Marco de Abreu wrote:
> Great to have you on board, Roshani!
>
> -Marco
>
> On Tue., Jan. 8, 2019, 15:18, Carin Meier
> wrote:
>
> > Please join me in welcoming Roshani Nagmote as a new committer.
> >
> > She has been active in the
> Steffen
>
> On Mon, Jan 7, 2019 at 6:39 PM kellen sunderland <
> kellen.sunderl...@gmail.com> wrote:
>
> > Sorry to hear about the licensing issues. I was following the general
> > vote but I'm still lacking some clarity around what licenses in the
> > onnx-trt r
Sorry to hear about the licensing issues. I was following the general vote
but I'm still lacking some clarity around what licenses in the onnx-trt
repo need to be surfaced. I believe onnx-trt is MIT licensed, but it
includes Onnx as a third party repo which then brings in dependencies with
a
ted from
> internet). It would bring us the best security we have.
>
> Thanks,
> Qing
>
> On 12/17/18, 2:06 PM, "kellen sunderland"
> wrote:
>
> I'm not in favour of publishing artifacts from any Jenkins based
> systems.
> There are many ways to bu
I'm not in favour of publishing artifacts from any Jenkins based systems.
There are many ways to bundle artifacts and publish them from an automated
system. Why would we use a CI system like Jenkins for this task? Jenkins
frequently has security vulnerabilities and is designed to run arbitrary
If it's hanging consistently would you be able to dump a native stack trace
and see what call specifically is hanging?
On Fri, Dec 14, 2018 at 11:38 AM Alex Zai wrote:
> Is anyone familiar with the Julia build and can help debug an issue where
> the Julia stage in the CI just hangs? I have made
Congrats Aaron. Really appreciate all the effort spent improving the
documentation.
On Mon, Dec 3, 2018 at 6:30 PM Hagay Lupesko wrote:
> Congrats Aaron!
> Your work on the docs definitely set a new standard and helps the community
> tremendously - well deserved!
>
>
> On Mon, Dec 3, 2018 at
Congrats Rahul, well deserved.
On Mon, Dec 3, 2018 at 6:24 PM Tianqi Chen wrote:
> Let us welcome Rahul Huilgol as a new Committer of MXNet. He has
> contributed to many fronts, including the FP16 support, distributed
> training and mixed precision support of MXNet. He has a breadth of
>
e on a different instance type. In
> >> that
> >>>> Case, it should not be a big deal.
> >>>>
> >>>> If there are big differences, that's already a yellow flag for
> >>>> compatibility, but that's unlikely. But in that case, we wou
least we can contribute to reducing the carbon footprint and slow
> down
> global warming :)
>
> Tianqi
>
> On Fri, Nov 30, 2018 at 9:38 AM kellen sunderland <
> kellen.sunderl...@gmail.com> wrote:
>
> > Regarding cost, yes we could ru
Just looked at the mf16c work and wanted to mention Rahul clearly _was_
thinking about AMD users in that PR.
On Thu, Nov 29, 2018 at 3:46 PM kellen sunderland <
kellen.sunderl...@gmail.com> wrote:
> From my perspective we're developing a few features like mf16c and MKLDNN
>
From my perspective we're developing a few features like mf16c and MKLDNN
integration specifically for Intel CPUs. It wouldn't hurt to make sure
those changes also run properly on AMD cpus.
On Thu, Nov 29, 2018, 3:38 PM Hao Jin wrote:
> I'm a bit confused about why we need extra functionality tests
+1
On Thu, Nov 29, 2018 at 2:50 PM Seth, Manu
wrote:
> +1
>
> On 11/29/18, 2:39 PM, "Alex Zai" wrote:
>
> What are people's thoughts on having AMD machines tested on the CI? AMD
> machines are now available on AWS.
>
> Best,
> Alex
>
>
>
wrote:
> Kellen - please merge your PR before v1.4.x branch is created or integrate
> afterwards.
> Steffen
>
> On Tue, Nov 20, 2018 at 7:01 PM kellen sunderland <
> kellen.sunderl...@gmail.com> wrote:
>
> > Hey Steffen, I'd like to be able to merge this PR for ve
Welcome Tao!
On Mon, Nov 26, 2018 at 7:13 PM Sheng Zha wrote:
> We are pleased to announce Tao Lv as a new committer of Apache
> MXNet. Tao's sustained contribution to the project has been greatly helping
> the CPU performance of MXNet.
>
> Please join me to welcome Tao to the team!
>
> -sz
>
Sorry, [1] meant to reference
https://issues.jenkins-ci.org/browse/JENKINS-37984 .
On Sun, Nov 25, 2018 at 5:41 PM kellen sunderland <
kellen.sunderl...@gmail.com> wrote:
> Marco and I ran into another urgent issue over the weekend that was
> causing builds to fail. This issue w
Marco and I ran into another urgent issue over the weekend that was causing
builds to fail. This issue was unrelated to any feature development work,
or other CI fixes applied recently, but it did require quite a bit of work
from Marco (and a little from me) to fix.
We spent enough time on the
Hey Marco, I'm still having quite a few issues passing PRs. Would you be
able to at least test a handful of PRs and make sure they pass/fail tests
as you expect?
On Sat, Nov 24, 2018, 7:01 PM Marco de Abreu
Hello Steffen,
>
> thank you for bringing up these PRs.
>
> I had to abort the builds
Agree with your point about other repos also not being based on versioning,
Tao. I would point out that I've given similar feedback to some repos I've
worked with: https://github.com/onnx/onnx-tensorrt/issues/68
On Wed, Nov 21, 2018 at 6:48 PM Naveen Swamy wrote:
> Tao,
>
> You are right there are
I've spent the last few days testing MXNet w/ MKLDNN and quantized models
and it's a beast. Really good speed improvements on my models, no bugs
that I've noticed.
I'm in general supportive but I'm still wondering what the story is like
when there's no AVX instructions present on CPUs. Do we
Hey Carin, I don't think there are any issues merging this PR. The vetoed
aspect was around _requiring_ modern loop usage, and failing the build if
clang-tidy detected modern loops could be used but weren't. The original
PR included a check for this and would fail any builds not using modern
Just tested with 1.3.0 and those tests were failing for that release as
well. Given it's not a regression I'm +1 (non-binding).
On Thu, Nov 15, 2018 at 11:52 PM kellen sunderland <
kellen.sunderl...@gmail.com> wrote:
> Thanks for organizing the release Anton and for testing Carin and
Thanks for organizing the release Anton and for testing Carin and Steffen.
Lots of great fixes in this release. As we don't have the required 3
committers I'd suggest extending the vote for a few days.
I tested the following on macOS 10.13 (High Sierra):
INCUBATING IN RELEASE FILE: check.
I think we should bias towards static linking. It should make using mxnet
easier in a lot of cases for users. As long as the license permits static
linking (i.e. is non-gpl) I'd +1 static linking for portability and ease of
use. The only caveat would be in cases where the package size would
+1 (non-binding)
On Thu, Nov 8, 2018 at 10:37 AM Thomas DELTEIL
wrote:
> +1 (non-binding)
>
> On Thu., Nov. 8, 2018 at 10:04, Carin Meier wrote:
>
> > Reminder - Vote ends tomorrow- Friday Nov 9th at 6:00 am EST
> >
> > On Mon, Nov 5, 2018 at 11:29 AM Carin Meier
> wrote:
> >
> > > This is a
> > >> discussed. I summarized discussion here
> > >> <
> >
> https://cwiki.apache.org/confluence/display/MXNET/Hangout+October+24th+2018+8am+and+5pm+PDT
> >
> > and
> > >> updated the release proposal page
> > >> <
> >
>
> > Best,
> >>>> > > > Sandeep
> >>>> > > >
> >>>> > > >
> >>>> > > > On Tue, Sep 18, 2018 at 9:51 AM Marco de Abreu
> >>>> > > > wrote:
> >>>> > > >
> >>>> > >
Hey Tao, thanks for letting the community know. It's completely
understandable if you want to dig deep on the failure. Don't worry about
taking a little extra time to get to the bottom of test failures, that's
exactly the reason we have the CI setup. Let us know if there's anything
you think we
ing for code quality. As a developer I wonder: do we have actionable
> items for looking at / fixing these issues, or is this done right now on an
> informational / good-will basis?
>
> Is there a way to colorize this output?
>
> Pedro.
>
> On Fri, Nov 2, 2018 at 5:10 PM kellen su
Reference scan here (I believe I also count 5 memory violations):
http://jenkins.mxnet-ci.amazon-ml.com/blue/rest/organizations/jenkins/pipelines/incubator-mxnet/branches/master/runs/1856/nodes/104/log/?start=0
-Kellen
On Fri, Nov 2, 2018 at 9:07 AM kellen sunderland <
kellen.sund
Hey Anton, can you provide a sample scan? I'm interested to see if it
catches different memory access violations, or if it gets the same ones
we've already seen reported by clang-tidy. For example are these
violations in the reports:
--
+1 non-binding. As mentioned in various threads, this model should be much
more scalable. I like the idea of hierarchies of contributors on the
project.
On Mon, Oct 29, 2018 at 3:47 PM Carin Meier wrote:
> This vote is to adopt the document
>
>
I believe the wording _must_ comes from the fact that the PMC (as a body)
must have a formal vote for a release, otherwise the release will not
happen. I don't believe it means every PMC member is required to vote on
the release. I can see where the confusion comes from, but also feel the
Hey Sergio, I think it's mostly to keep the Dockerfile size down by
matching the system python package. Of course people can extend the image
and use python 3.6 / 3.7. I think we should follow this up with an update
to the new Ubuntu LTS version as a base docker image at which point it
would use
First of all thanks to Intel for these improvements, really a great effort.
What would the compatibility story look like for users that don't have
these AVX instructions? Would there be any negative effect for AMD users?
Regarding TensorRT: It's a possibility but not planned in the short term.
This feels like something we should get a little data on before making a
decision, but I also don't have a strong opinion. I would bias towards
pushing something that might be imperfect and moving on to develop other
improvements for users rather than determining a 'perfect' solution.
The
production use-case with only
> necessary runtime packages.
>
> -1
>
> On Wed, Oct 17, 2018 at 11:48 AM kellen sunderland <
> kellen.sunderl...@gmail.com> wrote:
>
> > Hey Pedro, sorry I still don't see a good reason to justify changing the
> > filenames. Renam
> suggested by several MXNet contributors during review.
>
> Pedro.
>
> On Wed, Oct 17, 2018 at 11:21 AM kellen sunderland <
> kellen.sunderl...@gmail.com> wrote:
>
> > -1. (non-binding)
> >
> > These Dockerfiles are very bloated and imo only useful
-1. (non-binding)
These Dockerfiles are very bloated and imo only useful for creating a build
environment or running tests. Just as you wouldn't setup a server for a
service and then install 200 packages that may or may not be used for the
service I wouldn't recommend using these Dockerfiles at
Awesome work! Many thanks.
On Fri, Oct 12, 2018, 1:19 AM Harsh Patel
wrote:
> Hey,
> I am looking to contribute to MXNet. I have a working implementation based
> on my proposed design structure according to this wiki page (
>
>
Hello MXNet Community,
Some community members recently had an offline brainstorming focused on how
to speed up CI builds and test runs. I've summarized some of that offline
discussion, but we'd like to call out that we're also open to new ideas
from the community. If others have speedup
I think it makes a lot of sense to separate these roles Haibin. My
impression is there's a high degree of knowledge and experience required to
make strategic design decisions on the project. There's a bunch of core
members of the team that have that knowledge, and I feel there's a bit of
an
> > >
> > https://cwiki.apache.org/confluence/display/MXNET/MXNet+Graph+Optimization+and+Quantization+based+on+subgraph+and+MKL-DNN
> > > Lead Contributor: Patric Zhao,
> https://github.com/pengzhao-intel/
> > >
> > > Regarding
"I ran a similar test(test_slice_batchnorm) for 5K times and I couldn't
reproduce the issue."
One thing to keep in mind is that the SelectAlgo call will cache results in
a registry that is in static scope. To repro you'd likely have to create a
new process each time you run the test. (Apologies
Abreu"
wrote:
I think the timeout and other limitations have been employed by Apache
Infra and not by Travis. They didn't say that specifically, but they
already made me aware that we might get further restrictions if we consume
too many resources.
kellen sunderland wrote on Tue., 2 Oct.
2
> from Infra when I had a chat with them a few days ago. But from that
> conversation it was made pretty clear that we cannot increase the limits.
>
> -Marco
>
> kellen sunderland wrote on Tue., 2 Oct.
> 2018, 03:25:
>
> > Interesting, this page seems to indicate that p
Interesting, this page seems to indicate that private projects do have a
longer timeout. I'll drop Travis a quick email and see what the deal
would be for our project.
https://docs.travis-ci.com/user/customizing-the-build/#build-timeouts.
On Tue, Oct 2, 2018, 3:15 AM kellen sunderland
wrote
> Thanks,
> Qing
>
> On 10/1/18, 6:08 PM, "kellen sunderland"
> wrote:
>
> Does the global time out change for paid plans? I looked into it
> briefly
> but didn't see anything that would indicate it does.
>
> On Tue, Oct 2, 2018, 2:25 AM Pedro
te:
>
> > This makes sense. Thanks
> >
> > On Sat, Sep 29, 2018 at 6:36 PM kellen sunderland <
> > kellen.sunderl...@gmail.com> wrote:
> >
> > > Hey Zhennan, yes this is the exact problem, and I agree with your
> points
> > > completely. This is why wh
> > > > // since C++11
> > > > struct cow_string { /* ... */ };
> > > > // a copy-on-write string
> > > > cow_string str = /* ... */;
> > > > // for (auto x : str) { /* ... */ }  // may cause a deep copy
> > > > for (auto x : std::as_const(str)) {
ly I don't know yet. I can help to investigate. Just given the
> > evidence that Travis times out every time it gets re-triggered - 2
> > times at least. Correct me if I'm wrong, @Zhennan. On Sat, Sep 29, 2018
> > at 1:54 PM kellen sunderland wrote:
> > >
> > > Read
. Do you have a time plan to solve the
> timeout issue? Rebasing can't work for my case. Or shall we run it silently
> to disallow it voting X for overall CI result? Because most developers are
> used to ignore the PRs with 'X'.
> >
> > Thanks,
> > Zhennan
> >
> &g
> > friction for developers imho.
> >
> > On Fri, Sep 28, 2018 at 7:42 AM kellen sunderland <
> > kellen.sunderl...@gmail.com> wrote:
> >
> > > "Range loops aren’t always the most performant way" Do you have an
> > example
> &g
Sorry, I meant to say 'Regarding the *minor* release'.
On Sat, Sep 29, 2018 at 5:27 AM kellen sunderland <
kellen.sunderl...@gmail.com> wrote:
> Thanks for transparently setting a rough timeline Steffen. I think this
> will go a long way in helping the community plan thei
Thanks for transparently setting a rough timeline Steffen. I think this
will go a long way in helping the community plan their work, even if the
details change somewhat on the road to the release.
Regarding the major release: I would propose we unify TensorRT with the
subgraph operator work.
In addition, sometimes
> you want the index. Or maybe you want to iterate backwards, or not start
> from the first, etc. Maybe you want the iterator because you remove it from
> the list at the bottom of the loop. Seems like a rule for the sake of
> having a rule.
>
> On
Hey Zhennan, you're safe to ignore Travis failures for now. They're just
informational.
The reason you sometimes see quick builds and sometimes see slow builds is
that we're making use of ccache in between builds. If your PR is similar
to what's in master you should build very quickly, if not
Hey Jim, welcome to the community.
To the best of my knowledge we have not yet discussed/run a Maturity
Model. My gut feel is that MXNet would come away with a fairly bi-modal
result. My view of the project is that it's getting the Apache Way right
in terms of Code, Releases, and Quality. I think
Hello MXNet devs,
I'd like to discuss uniformly adopting C++11 range loops in the MXNet
project. The benefits I see are:
* Improved C++ readability (examples below).
* Consistency with other languages. The range-loops are quite similar to
loops almost all other programming languages. Given
My gut feel would be just to squash and merge; it usually works quite well.
Is there any chance that someone might want to cherry-pick, revert or
rebase any portions of the PR?
If so, what I try to do is provide atomic commits that bring small
individual pieces of value to the codebase. This