@dabrowski: In my experience, J beats Java in server mode. J gets C-level
performance; I've never gotten C-level performance out of Java.

@bill: I'm happy to use "J restricted to single-precision floats" -- I have
no attachment to doubles.
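
To make the conversion overhead concrete, here is a rough sketch in CUDA C
(names are made up, nothing J-specific) of the narrowing pass a double-only
host side would pay before every device copy -- the point bill raises in
the quote below:

    #include <cuda_runtime.h>
    #include <stdlib.h>

    /* Hypothetical helper: the host language holds n doubles, but the
       GPU kernel wants float32, so narrow on the host before copying. */
    void upload_as_float(const double *host_f64, float *dev_f32, size_t n)
    {
        float *tmp = (float *)malloc(n * sizeof(float));
        for (size_t i = 0; i < n; ++i)
            tmp[i] = (float)host_f64[i];   /* extra O(n) narrowing pass */
        cudaMemcpy(dev_f32, tmp, n * sizeof(float),
                   cudaMemcpyHostToDevice);
        free(tmp);
    }

The extra pass is linear in the data size, so whether it matters really
does depend on the application, as bill says.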

I guess no one has an argument for J+GPU beating wrappers around CUDA --
which makes sense and is consistent with everything I know.
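
To illustrate why, here is a minimal sketch (CUDA C + cuBLAS; the wrapper
name is hypothetical) of the call a dense layer's matrix multiply bottoms
out in, whether it is reached from Tensorflow or from a GPU-backed J:

    #include <cublas_v2.h>

    /* C = A * B for m x m float matrices already resident on the device,
       column-major (the cuBLAS convention).                             */
    void dense_matmul(cublasHandle_t h, const float *A, const float *B,
                      float *C, int m)
    {
        const float alpha = 1.0f, beta = 0.0f;
        cublasSgemm(h, CUBLAS_OP_N, CUBLAS_OP_N,
                    m, m, m,
                    &alpha, A, m,
                            B, m,
                    &beta,  C, m);
    }

A more concise front end changes how you write the call, not which library
routine ends up doing the work.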


On Tue, Dec 19, 2017 at 7:32 PM, bill lam <[email protected]> wrote:

> One weakness of J for GPU work is that J doesn't support single- or
> half-precision floats, which are what GPUs are commonly used for. The
> overhead of converting from/to double precision may or may not be
> significant; it depends on the application, YMMV.
>
> On Dec 20, 2017 4:39 AM, "TongKe Xue" <[email protected]> wrote:
>
> > Hi,
> >
> >   In my experience, on the CPU, J beats Java. I suspect this is due to
> > Java's GC and to J's ability, via its higher-level representation of
> > ranks/loops, to run highly optimized code.
> >
> >   Is there any reason to believe that GPU-backed J would beat Tensorflow
> > on Tensor / Deep Learning work?
> >
> >   Given that much of said work reduces to cuBLAS + cuDNN, it seems like a
> > GPU-backed J, although more concise, would end up calling the same
> > functions.
> >
> > Thanks,
> > --TongKe
----------------------------------------------------------------------
For information about J forums see http://www.jsoftware.com/forums.htm
