Re: soft relu gradient, is it correct?

2019-01-06 Thread Matthias Seeger
Shouldn't the gradient be the logistic function?

> It is my understanding that the gradient of the softrelu should go down
> to zero as lim x -> -Inf, which is not the case with the above
> definition, which goes to -Inf as lim x -> -Inf.
>
> https://en.wikipedia.org/wiki/Rectifier_(neural_networks)
>
> Pedro.

-- Matthias Seeger
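For concreteness, here is a minimal NumPy sketch (not MXNet's actual implementation; the helper names are illustrative) checking the claim above: the derivative of softrelu (softplus), f(x) = log(1 + exp(x)), is the logistic function sigma(x) = 1 / (1 + exp(-x)), which tends to 0, not -Inf, as x -> -Inf.

import numpy as np

def softplus(x):
    # Numerically stable log(1 + exp(x)): max(x, 0) + log1p(exp(-|x|))
    return np.maximum(x, 0.0) + np.log1p(np.exp(-np.abs(x)))

def softplus_grad(x):
    # Analytic derivative of softplus: the logistic (sigmoid) function
    return 1.0 / (1.0 + np.exp(-x))

x = np.array([-30.0, -5.0, 0.0, 5.0, 30.0])
eps = 1e-6
fd_grad = (softplus(x + eps) - softplus(x - eps)) / (2.0 * eps)
print(softplus_grad(x))  # tends to 0 for large negative x, to 1 for large positive x
print(fd_grad)           # central finite differences agree with the analytic gradient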

Proposal: New namespace 'math' for (specific) mathematical functions

2018-03-08 Thread Matthias Seeger
...some guidance on how to do that. Bye, Matthias -- Matthias Seeger
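The body of this proposal is truncated in the archive, so the following is a purely hypothetical sketch of the idea in the subject line: grouping specific (special) mathematical functions under a dedicated 'math' namespace. Here scipy.special supplies the reference implementations; the namespace layout and function names are assumptions, not the proposal's actual design.

# Hypothetical sketch only; the original proposal's details are not preserved.
import types
from scipy import special

math_ns = types.SimpleNamespace(
    gammaln=special.gammaln,   # log |Gamma(x)|
    digamma=special.digamma,   # derivative of log Gamma(x)
    erf=special.erf,           # Gauss error function
)

print(math_ns.digamma(1.0))  # -0.5772..., the negative Euler-Mascheroni constant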

Grant access to slack

2017-11-01 Thread Matthias Seeger
Hello, I have already had a few PRs merged into MXNet (GitHub: mseeger). Could you grant me access to Slack and share details on the channel? -- Matthias Seeger