Marc,

Interesting video. Thanks for pointing it out. This is the direction J
should be moving in, since a truly interpreted, parallel language would be
a powerful tool. I also watched Lindsey Kuper's video on LVars
<https://www.youtube.com/watch?v=8dFO5Ir0xqY>, which Aaron had talked about.
It's a nice solution to some difficult issues in parallel processing.

Skip

Skip Cave
Cave Consulting LLC

On Wed, Jan 21, 2015 at 1:34 AM, Marc Simpson <[email protected]> wrote:

> Skip,
>
> Have you seen Aaron Hsu's work on co-dfns in Dyalog? He provides a
> flag for leveraging CUDA and running parallelised expressions on the
> GPU.
>
> - Talk at Dyalog 14: http://video.dyalog.com/Dyalog14/?v=8VPQmaJquB0
> - Repository: https://github.com/arcfide/Co-dfns
>
> Best,
> Marc
>
> On Wed, Jan 21, 2015 at 6:58 AM, Skip Cave <[email protected]>
> wrote:
> > When I took Andrew Ng's Machine Learning course at Stanford (Coursera),
> > all the homework was in Octave (the open-source Matlab). I actually did
> > some of my ML homework in J, but most of the homework problems required
> > submitting the answers in Octave code. Octave is a nice matrix-handling
> > language, but it lacks many of the useful primitives of J. We only
> > touched on the then-brand-new Deep Learning algorithms in that class.
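> >
> > As one small illustration of what I mean by primitives, sorting and
> > grouped counting are each a single primitive or adverb application in J
> > (a hypothetical session fragment, not something from the course):
> >
> >    /:~ 3 1 4 1 5 9        NB. sort ascending: 1 1 3 4 5 9
> >    #/.~ 'mississippi'     NB. key adverb: tally each distinct letter: 1 4 4 2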
> >
> > The Deep Learning library Theano <http://deeplearning.net/tutorial/> is
> > written in Python, which has a library to run computations on the Nvidia
> > CPU/GPU <http://bit.ly/1JbQ1eA>. Most of the serious deep learning
> > research runs on GPUs using large arrays of homogeneous parallel graphics
> > processors. The huge number-crunching task needed to train a
> > multi-layered neural network was nearly impossible until the advent of
> > these large GPU clusters.
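> >
> > The inner loop of that training is essentially big matrix products plus
> > elementwise nonlinearities, which J already expresses directly. A rough
> > sketch of a single layer's forward pass (the verb names and array sizes
> > here are made up for illustration):
> >
> >    mp =: +/ . *                     NB. matrix product
> >    sigmoid =: 3 : '% 1 + ^ - y'     NB. 1 % (1 + e^-y), applied elementwise
> >    forward =: 4 : 'sigmoid x mp y'  NB. x: weight matrix, y: input columns
> >    W =: 0.01 * _5 + ? 4 3 $ 10      NB. hypothetical 4x3 weight matrix
> >    X =: ? 3 2 $ 0                   NB. two random 3-element inputs as columns
> >    W forward X                      NB. 4x2 array of activations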
> >
> > It is becoming clear that advances in CPU power in the near future will
> > not come from faster clock speeds, because of power and other
> > limitations. The major advances in processing power will come from
> > adding more parallel processors on a chip. The need for ultra-high
> > resolution (4K) video processing is driving chip vendors to put massive
> > parallel processing power in all their mid- and higher-end chips.
> >
> > Dual- and quad-core CPUs are becoming commonplace in desktops, laptops,
> > and even smart phones. Even more importantly, massive multiprocessing
> > GPUs are getting integrated right along with these multiple CPUs on a
> > single System-on-Chip (SoC). The NVidia Tegra X1 chip
> > <http://www.nvidia.com/object/tegra-x1-processor.html> has eight 64-bit
> > ARM cores and 256 GPU cores on *a single chip intended for mobile
> > devices*. Truly a supercomputer on a chip. And it is likely to be coming
> > to you in a tablet priced under $500 in the near future.
> >
> > So it is clear that your everyday processor will soon have multiple
> > parallel CPU cores and hundreds of parallel GPU cores (if yours doesn't
> > already). What is needed now is a programming language to deal with all
> > this parallelism.
> >
> > I have always felt that APL and J are perfect languages to express
> > parallel operations. APL, and subsequently J, have evolved over 50 years
> > to develop and polish a set of primitives that arguably covers the most
> > commonly-used and useful array operations of any language. I believe
> > that if J's primitives could run on a modern multi-CPU and GPU
> > architecture and take advantage of all that parallelism, this would give
> > J a unique position among programming languages as a true "native"
> > parallel language. This could significantly raise the visibility of J in
> > the programming world.
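> >
> > To make the point concrete, a J expression says what result is wanted,
> > not in what order the cells are visited, so an implementation is free to
> > spread the work across cores (the array sizes here are just made up):
> >
> >    a =: ? 1000 1000 $ 0    NB. 1000x1000 array of random floats
> >    b =: ? 1000 1000 $ 0
> >    +/ , a * b              NB. elementwise product, then a total sum
> >    (+/ % #)"1 a            NB. mean of each row; rows are independent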
> >
> > However, we must keep in mind the fate of Analogic's APL Machine
> > <http://bit.ly/157Lhtd>, one of the first computers to implement APL
> > using a vector-processor architecture. I believe that the APL Machine
> > story points out the risk of tying a language to what was then rather
> > exotic hardware. I believe the J language needs to run on commodity
> > hardware, taking advantage of the parallel processing that is now
> > showing up in most common stationary and mobile devices.
> >
> > For a test case, I would recommend porting the J kernel to the NVidia K1
> > processor, which is in the NVidia Shield tablet and the Lenovo IdeaPad
> > K1. The K1 has the same basic CPU and GPU architecture as the X1, but
> > not quite so many cores. When the X1 hits volume production later this
> > year, moving to it should be fairly straightforward. Unfortunately, my
> > coding skills fall way short of those required to perform this task, so
> > I can only point out the opportunity.
> >
> > I realize that some of J's primitives do not fit well with massively
> > parallel processors. However, that is the whole idea behind a high-level
> > language - the language takes advantage of the underlying parallel
> > hardware when it can, and falls back to traditional scalar processing
> > when it can't.
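> >
> > For example, an elementwise primitive touches every atom independently,
> > while an iterated verb has a data dependency from one step to the next,
> > so only the former is an obvious candidate for the GPU (a contrived
> > sketch):
> >
> >    *: i. 1000000               NB. square a million atoms: order doesn't matter
> >    f =: 3 : '3.7 * y * 1 - y'  NB. logistic map step
> >    f^:5 ] 0.5                  NB. five dependent steps; inherently sequential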
> >
> > Skip
> >
> > Skip Cave
> > Cave Consulting LLC
> >
> > On Tue, Jan 20, 2015 at 12:31 PM, greg heil <[email protected]> wrote:
> >
> >> Indeed, i was sort of wondering what the best way to make the tool
> >> that Facebook is donating be more immediately available to J users.
> >> Natural ports for Java and Lua users.
> >>
> >> ---~
> >>
> >> http://gigaom.com/2015/01/16/facebook-open-sources-tools-for-bigger-faster-deep-learning-models
> >>
> >> greg
> >> ~krsnadas.org
> >>
> >> --
> >>
> >> from: Jon Hough <[email protected]>
> >> to: "[email protected]" <[email protected]>
> >> date: 20 January 2015 at 05:59
> >> subject: [Jchat] Deep Learning With Google
> >>
> >> >I found this article about Google's deep learning very interesting.
> >>
> >>
> >> https://medium.com/backchannel/google-search-will-be-your-next-brain-5207c26e4523
> >>
> >> >Just thought I'd throw it out there, to anyone who might be
> >> >interested. I know there are J'ers who do data analysis and possibly
> >> >machine learning stuff. This could be interesting for them.
> >>
> >> >As a machine learning layman, the above article was pretty useful to
> >> >help understand how companies like Google leverage all the data they
> >> >have.
> >>
>
----------------------------------------------------------------------
For information about J forums see http://www.jsoftware.com/forums.htm
