>Perhaps of more immediate use and interest is the present-day availability of
>the Nvidia K1 processor in a 64-bit SoC in a Chromebook ~nvidia-cb-vmware. This
>should be very nice for actual development, as one can easily (even my klutzy
>self did it in half an hour) install several different flavours of Linux and run J
>in them right out of the box. Likely the X1 and future generations of Nvidia
>SoCs will be put in Chromebooks too. Tablets are nice to play with at the
>beach, but Chromebooks are much easier for work and development.

---
~nvidia-cb-vmware: http://u.tgu.ca/nvidia-cb-vmware

greg
~krsnadas.org

--

from: Skip Cave <[email protected]>
to: Chat forum <[email protected]>
date: 21 January 2015 at 07:01
subject: Re: [Jchat] Deep Learning With Google

Marc,

>Interesting video. Thanks for pointing it out. This is the direction J should 
>be moving in, since a true interpretive parallel language would be a powerful 
>tool. I also watched Lindsey Kuper's video on LVars 
><https://www.youtube.com/watch?v=8dFO5Ir0xqY> which Aaron had talked about. 
>Nice solution to some difficult issues in parallel processing.

--

from: Marc Simpson <[email protected]>
to: [email protected]
date: 20 January 2015 at 23:34
subject: Re: [Jchat] Deep Learning With Google

Skip,

>Have you seen Aaron Hsu's work on co-dfns in Dyalog? He provides a flag for 
>leveraging CUDA and running parallelised expressions on the GPU.

- Talk at Dyalog 14: http://video.dyalog.com/Dyalog14/?v=8VPQmaJquB0
- Repository: https://github.com/arcfide/Co-dfns

Best,
Marc

--

from: Skip Cave <[email protected]>
to: Chat forum <[email protected]>
date: 20 January 2015 at 22:58
subject: Re: [Jchat] Deep Learning With Google

>When I took Andrew Ng's Machine Learning course from Stanford (on Coursera),
>all the homework was in Octave (an open-source Matlab work-alike). I actually
>did some of my ML homework in J, but most of the homework problems required
>submitting the answers in Octave code. Octave is a nice matrix-handling
>language, but it lacks many of the useful primitives of J. We only touched on
>the then-brand-new Deep Learning algorithms in that class.
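
>To give a feel for what those primitives buy you, a least-squares fit (the
>core of several of those homework problems) is a single primitive in J. A toy
>sketch with made-up data:

>   X =. 1 ,. i. 5     NB. design matrix: a column of 1s stitched to 0 1 2 3 4
>   y =. 3 + 2 * i. 5  NB. y = 3 + 2x, exactly linear
>   y %. X             NB. matrix divide: least-squares coefficients, here 3 2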

>The Deep Learning library Theano <http://deeplearning.net/tutorial/> is
>written in Python and has a backend for running its computations on Nvidia
>GPUs <http://bit.ly/1JbQ1eA>. Most of the serious deep learning research runs
>on GPUs, using large arrays of homogeneous parallel graphics processors. The
>huge number-crunching task needed to train a multi-layered neural network was
>nearly impossible until the advent of these large GPU clusters.
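
>The bulk of that number crunching is plain matrix arithmetic: repeated matrix
>products followed by element-wise nonlinearities. A toy J sketch of one
>dense-layer forward pass (sizes and weights made up at random):

>   mp =. +/ . *                  NB. matrix product
>   sigmoid =. 3 : '% 1 + ^ - y'  NB. 1 % (1 + e^-y), applied element-wise
>   X =. ? 4 3 $ 0                NB. 4 examples, 3 features each
>   W =. ? 3 2 $ 0                NB. weights mapping 3 inputs to 2 units
>   sigmoid X mp W                NB. 4x2 matrix of unit activations

>It is this pattern, scaled to millions of weights and repeated over every
>training example, that the GPUs are being used for.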

>It is becoming clear that advances in CPU power in the near future will not
>come from faster clock speeds, because of power and other limitations. The
>major advances in processing power will come from adding more parallel
>processors on a chip. The need for ultra-high-resolution (4K) video processing
>is driving chip vendors to put massive parallel processing power into all
>their mid- and higher-end chips.

>Dual- and quad-core CPUs are becoming commonplace in desktops, laptops, and
>even smart phones. Even more importantly, massively multi-processing GPUs are
>being integrated right alongside these multiple CPUs on a single
>System-on-Chip (SoC). The Nvidia Tegra X1 chip
><http://www.nvidia.com/object/tegra-x1-processor.html> has eight 64-bit ARM
>cores and 256 GPU cores on *a single chip intended for mobile devices*. Truly
>a supercomputer on a chip. And it is likely to be coming to you in a tablet
>priced under $500 in the near future.

>So it is clear that your everyday processor will soon have multiple parallel
>CPUs and hundreds of parallel GPU cores (if yours doesn't already). What is
>needed now is a programming language to deal with all this parallelism.

>I have always felt that APL and J are perfect languages for expressing
>parallel operations. APL, and subsequently J, have evolved over 50 years,
>developing and polishing a set of primitives that arguably covers the most
>commonly used and useful array operations of any language. I believe that if
>J's primitives could run on a modern multi-CPU and GPU architecture and take
>advantage of all that parallelism, it would give J a unique position among
>programming languages as a true "native" parallel language. This could
>significantly raise the visibility of J in the programming world.
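
>The point is that a J expression already states the whole-array operation,
>with no explicit loop for an implementation to untangle. A small made-up
>example of the kind of expression that could, in principle, be mapped straight
>onto parallel hardware:

>   x =. ? 1e6 $ 0     NB. a million uniform random numbers
>   mean =. +/ % #     NB. tacit definition: sum divided by count
>   mean x             NB. one reduction over the whole array
>   +/ x * x           NB. sum of squares: map (*) then reduce (+/)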

>However, we must keep in mind the fate of Analogic's APL Machine
><http://bit.ly/157Lhtd>, one of the first computers to implement APL using a
>vector-processor architecture. I believe the APL Machine story points out the
>risk of tying a language to what was, at the time, rather exotic hardware. I
>believe the J language needs to run on commodity hardware, taking advantage of
>the parallel processing that is now showing up in most common stationary and
>mobile devices.

>As a test case, I would recommend porting the J kernel to the Nvidia K1
>processor, which is found in the Nvidia Shield tablet and also in the Lenovo
>IdeaPad K1. The K1 has the same basic CPU and GPU architecture as the X1, but
>not quite so many cores. When the X1 hits volume production later this year,
>moving to it should be fairly straightforward. Unfortunately, my coding skills
>fall well short of those required to perform this task, so I can only point
>out the opportunity.

>I realize that some of J's primitives do not fit well with massively parallel
>processors. However, that is the whole idea behind a high-level language: the
>language takes advantage of the underlying parallel hardware when it can, and
>falls back to traditional scalar processing when it can't.
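
>One rough way to see the distinction, with a pair of made-up expressions: the
>first does independent work per element, while in the second each step depends
>on the result of the one before it, so it has to run sequentially:

>   *: i. 10                  NB. square each element; order doesn't matter
>   f =. 3 : '4 * y * 1 - y'  NB. one logistic-map step
>   f^:(i. 10) 0.2            NB. iterate f; each step needs the previous value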

Skip
----------------------------------------------------------------------
For information about J forums see http://www.jsoftware.com/forums.htm
