Thank you, Chris, for driving the adoption of JIRA. Adding Keras component.
I will add all tasks required for Keras2-MXNet project to enable community
to have visibility and contribute to the tasks.
Best,
Sandeep
On Tue, Mar 6, 2018 at 8:11 AM, Chris Olivier wrote:
>
Could we actually just define a mechanism so the libs could register their
ops at runtime? No linking required?
On Tue, Mar 6, 2018, 8:36 PM Pedro Larroy
wrote:
> This is a good point. What additional blockers would there be for linking
> against a user provided
This is a good point. What additional blockers would there be for linking
against a user provided library with custom operators?
On Tue, Mar 6, 2018 at 5:16 PM, Barber, Christopher <
christopher.bar...@analog.com> wrote:
> To avoid this kind of problem, you really need to support features that
This was discussed in the past and so far, it seems possible for Unix
builds, although it’s going to be messy since the compile would generally
need access to a large portion of the headers in the source tree, since
the “things needed to create your own op” aren’t necessarily all contained in
I also don't see any reason it wouldn't work.
I wonder if we could do this and then offer a convenient pip package with
operators in separate .so files, each linked against various libs, for example
all versions of BLAS libs and all versions of CUDA. We could then detect
the user's environment at
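The environment-detection idea could be sketched like this in Python. The probed library names are assumptions for illustration, not an actual MXNet package layout:

```python
import ctypes.util

def detect_backends():
    """Probe the user's environment for optional native libraries.

    A pip package shipping several prebuilt operator .so variants
    could use a check like this to decide which variant to load.
    The library names below are illustrative assumptions.
    """
    candidates = {
        "cuda": "cudart",        # CUDA runtime
        "openblas": "openblas",  # OpenBLAS
        "mkl": "mkl_rt",         # Intel MKL
    }
    return {name: ctypes.util.find_library(lib) is not None
            for name, lib in candidates.items()}

print(detect_backends())  # output depends on the machine
```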
There are two separate issues: how to compile against the MXNet source and how
to dynamically load external dynamic libraries. While it would be nice to have
all necessary headers in the same place, it doesn't really matter that much if
people building extensions have to have access to the
The static init of your module should register your operator just as it
does for the operators in mxnet (NNVM_REGISTER_OP). While I haven't done
it personally, I see no reason why it wouldn't work like any other operator
at that point.
On Tue, Mar 6, 2018 at 1:28 PM, Barber, Christopher <
We want as few dependencies as possible.
CMake alone is enough trouble for our users. We don't want to burden them with
other stuff.
On 2018/03/06 17:21:15, kellen sunderland wrote:
> Short term solution sounds good to me Chris. Converting the CI should be
>
I think the right approach here is to start another vote on terminating the
process of adopting JIRA,
since we have already passed this vote
On Tue, Mar 6, 2018 at 9:13 PM, Eric Xie wrote:
> -1
>
> JIRA is ancient and arcane. This adds unnecessary overhead.
>
> On 2018/03/03
Hi, all
Based on the results in
https://lists.apache.org/thread.html/b54c168add0dc623a5356eb878e785886ecb8a4b08049c1ed0a63899@%3Cdev.mxnet.apache.org%3E,
our community has agreed that
we should track code changes with JIRA.
I have updated guidelines for contributors in
it seems strange that s3 would make such a major restriction. there’s
literally no way to incrementally write a file without knowing the size
beforehand? some sort of separate append calls, maybe?
On Tue, Mar 6, 2018 at 8:53 PM Rahul Huilgol wrote:
> Hi everyone,
>
> I
One potential problem is with libstdc++, specifically with vectors. Our
operator interface uses vectors, which breaks when the core lib and extension
are compiled with different compiler versions.
On 2018/03/06 22:45:16, Chris Olivier wrote:
> The static init of your module
-1
JIRA is ancient and arcane. This adds unnecessary overhead.
On 2018/03/03 06:11:12, CodingCat wrote:
> This vote passes with 6 +1 votes (6 bindings) and no 0 or -1 votes.
>
> Binding +1:
> Chris Olivier
> Indhu Bharathi
> Suneel Marthi
> Yuan Tang
> Marco de Abreu
>
Eric,
while you may not be, most people are using some sort of
crappy-JIRA-like-tool (such as SIM) which is both way behind JIRA in
utility and usability as well as not public, so the rest of the world can’t
see the backlog or what work orders are or whatever. the development
process does not
Hi Chris,
S3 doesn't support append calls. They promote the use of multipart uploads
to upload large files in parallel, or when network reliability is an issue.
Writing like a stream does not seem to be the purpose of multipart uploads.
I looked into what the AWS SDK does (in Java). It buffers
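The buffering approach mentioned (as in the AWS Java SDK) can be sketched roughly as follows. This is an illustrative Python mock, not the SDK's code: real S3 parts must be at least 5 MB except the last, and "uploading" a part here just collects it in a list instead of calling the S3 API.

```python
class MultipartBuffer:
    """Sketch of buffering stream-style writes on top of S3 multipart
    uploads (S3 has no append call). A tiny part_size is used for
    illustration; real S3 requires >= 5 MB per part except the last.
    """
    def __init__(self, part_size=8):
        self.part_size = part_size
        self.buf = b""
        self.parts = []          # stands in for uploaded parts

    def write(self, data: bytes):
        self.buf += data
        while len(self.buf) >= self.part_size:
            # "Upload" one full-size part as soon as the buffer fills.
            self.parts.append(self.buf[:self.part_size])
            self.buf = self.buf[self.part_size:]

    def close(self):
        if self.buf:             # the last part may be short
            self.parts.append(self.buf)
            self.buf = b""
        return self.parts

w = MultipartBuffer(part_size=8)
w.write(b"hello ")
w.write(b"streaming world")
print(w.close())  # → [b'hello st', b'reaming ', b'world']
```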
This was agreed upon some time ago in a github issue thread, unless there
are new objections to it.
As far as I know, it's just a matter of someone putting in the work to add
more functionality to cmake or to fuse the two builds.
One solution for the short term might include having the Makefile
Here is discussion:
https://github.com/apache/incubator-mxnet/issues/8702
On Tue, Mar 6, 2018 at 9:14 AM, Chris Olivier wrote:
> This was agreed upon some time ago in a github issue thread, unless there
> are new objections to it.
>
> As far as I know, it's just a matter
Short term solution sounds good to me Chris. Converting the CI should be
pretty easy. One thing we should keep in mind is that there's going to be
a bunch of docs we'll have to update.
Warning, slight thread hijack ahead:
As a more long-term change I was wondering if we had considered using
To avoid this kind of problem, you really need to support features that allow
MXNet to be extended without having to resort to forking. There is currently no
way to add C++ custom operators without forking, and no way to share such
operators across projects. This creates a perverse incentive to
Hi Yuji,
We will not depend on Pytorch. There is a convenience function available
(by the author of tensorboard-pytorch) for transforming models in onnx
format to the graph protobuf in TensorFlow. If we can export MXNet symbols
and params to the onnx format, we can just use that function directly