What would be really nice would be for the matrix multiplication phrase in J (+/ . *), the matrix inverse phrase (%.), and other J matrix operations to detect the presence of a GPU and use "special code" to accelerate those operations when a GPU is present. J code running with a GPU shouldn't look any different from J code running on a scalar machine. It should just run a LOT faster.
Skip

Skip Cave
Cave Consulting LLC

On Sat, Aug 30, 2014 at 12:14 AM, Raul Miller <[email protected]> wrote:
> This isn't really worth a pull request, but double->float is 1&(3!:5)
> and float->double is _1&(3!:5)
>
> This is documented at http://www.jsoftware.com/help/dictionary/dx003.htm.
> (Note that floats are represented as a sequence of literals, because J
> can't work with them.)
>
>    $1&(3!:5) o.1
> 4
>    $1&(3!:5) o.1 2
> 8
>    a.i.1&(3!:5) o.1
> 219 15 73 64
>
> Thanks,
>
> --
> Raul
>
> On Sat, Aug 30, 2014 at 1:04 AM, Scott Locklin <[email protected]> wrote:
> > So, it took me all of 20 minutes to pull dgemm into J for a matrix
> > multiplication speedup. I stuck it here, along with an org-emacs TODO
> > list for making this actually happen. It's all "busy work" as far as I
> > can tell, though it would be my first time writing code that links to
> > CUDA.
> >
> > Either way, the dgemm wrapper should eventually make its way into the
> > API stuff, as it's a pretty good speedup over +/ .* for bigger array
> > problems.
> >
> > https://github.com/locklin/jCUDA
> >
> > Feel free to pitch in on the busy work if anyone has problems that
> > would benefit from this.
> >
> > -Scott
----------------------------------------------------------------------
For information about J forums see http://www.jsoftware.com/forums.htm
