I do expect the API to change in the future. Currently @szhengac @zhongyuchen
and I are exploring APIs for gradient compression with a few algorithms, and we
may bring the best practices back to MXNet.
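The kind of algorithm under exploration can be illustrated with a generic top-k sparsification sketch in plain NumPy. This is not MXNet's API and the thread does not say which algorithms are being evaluated; the function names here are hypothetical.

```python
import numpy as np

def topk_compress(grad, ratio=0.01):
    """Keep only the fraction of gradient entries with largest magnitude."""
    flat = grad.ravel()
    k = max(1, int(flat.size * ratio))
    # indices of the k entries with the largest absolute value
    idx = np.argpartition(np.abs(flat), -k)[-k:]
    return idx, flat[idx]

def topk_decompress(idx, values, shape):
    """Scatter the kept entries back into a dense zero gradient."""
    flat = np.zeros(int(np.prod(shape)), dtype=values.dtype)
    flat[idx] = values
    return flat.reshape(shape)
```

The worker would transmit only `(idx, values)` and the receiver rebuilds a dense gradient, trading accuracy (often recovered via error feedback) for bandwidth.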
--
You are receiving this because you are subscribed to this thread.
Reply to this email directly or view it on GitHub.
How's this project going?
https://github.com/apache/incubator-mxnet/issues/16376#issuecomment-562906794
> Here's a set of links for today's builds
>
> (Plain mxnet, no mkl no cuda)
> https://apache-mxnet.s3-us-west-2.amazonaws.com/dist/2019-12-07/dist/mxnet-1.6.0b20191207-py2.py3-none-manylinux1_x86_64.whl
> (mxnet-mkl)
>
Hi @samskalicky, thank you for the contribution!
I have several suggestions.
- custom GPU operators
  1. Provide a CUDA stream in `OpResource`.
  2. Share the same function on CPU and GPU;
     users can discriminate the context via `MXTensor::dltensor::ctx`.
- call framework-specific math helpers
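Suggestion 2 can be sketched framework-free in Python: a single registered function serves both backends by branching on the tensor's context, in the spirit of `MXTensor::dltensor::ctx`. The `Tensor` class and its fields are hypothetical stand-ins, not MXNet's actual custom-op C API.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class Tensor:
    # Hypothetical stand-in for MXTensor; device_type mimics the
    # context field reachable as MXTensor::dltensor::ctx in C++.
    data: np.ndarray
    device_type: str  # "cpu" or "gpu"

def relu_forward(inp: Tensor, out: Tensor) -> None:
    # One registered function serves both backends: the body
    # discriminates on the tensor's context rather than registering
    # separate CPU and GPU kernels.
    if inp.device_type == "cpu":
        np.maximum(inp.data, 0, out=out.data)
    elif inp.device_type == "gpu":
        # A real GPU branch would launch a CUDA kernel on the stream
        # provided via OpResource; simulated on the host here.
        np.maximum(inp.data, 0, out=out.data)
    else:
        raise ValueError(f"unknown device type: {inp.device_type}")
```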
Stop disseminating false information:
https://github.com/apache/incubator-mxnet/issues/14979
On Sat, Dec 7, 2019 at 7:04 AM Chris Olivier wrote:
> -1
>
> mkldnn removed iomp5 for licensing issues.
> No bugs have actually been traced to the use of LLVM OpenMP, only an assert
> caused by an actual bug in MXNet code.
Awesome project, love it! It really seems easy to use, great job!
-Marco
Skalicky, Sam wrote on Sat., Dec. 7, 2019 at 19:50:
> Hi MXNet Community,
>
> We have been working on adding support for custom C++ operators for a
> while and are happy to announce that the initial functionality is now
> available for you to try out in the master branch!
Hi MXNet Community,
We have been working on adding support for custom C++ operators for a while and
are happy to announce that the initial functionality is now available for you
to try out in the master branch!
CustomOp support in MXNet began with allowing users to write custom operators in
## Description
Request for comments on the next PR for enhancing custom operator support
Here are some suggestions from the initial PR (Part 1):
- custom GPU operators
- Random number generator resource request
- sparse data types
- migrate lambda functions in MXLoadLib in src/c_api/c_api.cc to
Could you elaborate on how a non-Amazonian is able to access, maintain, and
review the CodeBuild pipeline? How come we've diverted from the
community-agreed standard where the public Jenkins serves the purpose of
testing and releasing MXNet? I'd be curious about the issues you're
encountering.
Hi MXNet Community,
We have been working on getting nightly builds fixed and made available again.
We’ve made another system using AWS CodeBuild & S3 to work around the problems
with Jenkins CI, PyPI, etc. It is currently building all the flavors and
publishing to an S3 bucket here:
Chris, I'm trying to understand the situation better exactly because I think
this bug is important and I would like to address it. Therefore I asked you a
question, expecting your answer would be helpful to solve this problem.
Unfortunately it seems to me that your answer misses the point of my question.
If it is really a problem, then it would be prioritized. All the necessary
info is in that issue (and I already mentioned just yesterday or today on
that ticket what it was again), and it's like I was talking to no one; as it
has been, simply an immediate revert to "remove the library". In the
Chris, if you can fix this in a small fraction of the time, please go ahead and
do so. Could you clarify why you think Intel's statement is nonsense or not
applicable? "Because different OpenMP runtimes may not be binary-compatible,
it's important to ensure that only one OpenMP runtime is used…"
-1
mkldnn removed iomp5 for licensing issues.
No bugs have actually been traced to the use of LLVM OpenMP, only an assert
caused by an actual bug in MXNet code. There are suitable workarounds.
Over time, LLVM OpenMP has simply been used as a "catch-all" for random
problems that aren't related to it at all.
## Description
In `tf.keras`, users can call the `add_loss` method to create non-standard
loss functions (by non-standard I mean a loss function that takes parameters
other than `y_true` and `y_pred`), e.g. a loss function that involves the input.
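To make the distinction concrete, here is a framework-free sketch of the `add_loss` pattern: auxiliary loss terms that depend on something other than `(y_true, y_pred)`, such as the input, are registered on the model and summed into the total loss. The class only loosely mirrors `tf.keras`; all names are illustrative.

```python
class Model:
    def __init__(self, main_loss_fn):
        self.main_loss_fn = main_loss_fn  # standard (y_true, y_pred) loss
        self._extra_losses = []

    def add_loss(self, loss_fn):
        # loss_fn sees the raw input x, not (y_true, y_pred)
        self._extra_losses.append(loss_fn)

    def total_loss(self, x, y_true, y_pred):
        loss = self.main_loss_fn(y_true, y_pred)
        for fn in self._extra_losses:
            loss += fn(x)
        return loss

# usage: an input-dependent, activity-regularization-style term
mse = lambda y_true, y_pred: (y_true - y_pred) ** 2
model = Model(mse)
model.add_loss(lambda x: 0.1 * abs(x))
```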
I tested 3.12.2, 3.13.3, 3.14.2, and 3.15.5.
shiwen hu wrote on Sat, Dec 7, 2019 at 7:28 PM:
> yes.
yes.
Lausen, Leonard wrote on Sat, Dec 7, 2019 at 7:20 PM:
> Do you mean starting from 3.15.5 it works fine?
> The image you attached doesn't display on my end.
Do you mean that starting from 3.15.5 it works fine?
The image you attached doesn't display on my end.
On Dec 7, 2019 19:12, shiwen hu wrote:
[image.png]
I tested these versions; starting from 3.15.5 it works fine.
shiwen hu wrote on Sat, Dec 7, 2019 at 1:24 PM:
Now, other problems are solved by modifying CMakeLists.txt.
[image: image.png]
I tested these versions; starting from 3.15.5 it works fine.
shiwen hu wrote on Sat, Dec 7, 2019 at 1:24 PM:
> Now, other problems are solved by modifying CMakeLists.txt. But the "command
> line is too long" problem requires updating cmake. However, I don't know which
> minimum version fixed the