Thank you, Ben, your answer helps me start my investigation from the right perspective. More comments inline below.
On Tuesday, 2 August 2016 at 15:19:49 UTC+2, Ben Goertzel wrote:
>
> Guys,
>
> The big problem with using GPUs for OpenCog is that most OpenCog
> cognitive algorithms would be better suited for MIMD parallelism than
> for SIMD parallelism.

O.K.

> To put it simply, GPUs are SIMD parallel, which means they are suited
> for cases where one needs to repetitively do the same thing over and
> over to multiple data items... Neural net algorithms tend to be like
> this. In OpenCog, ECAN is also like this (as it's basically a
> special variant of an attractor neural net). But the other OpenCog
> algorithms are generally not like this. They are tractably
> parallelizable, but only on a MIMD parallel substrate...

O.K.

> Another issue is RAM access -- for OpenCog (or any system centered on
> manipulation of large graphs) the biggest cost in terms of processing
> time is RAM access for small, hard-to-predict RAM reads/writes... So
> if the bulk of RAM is not on the GPU, then all the savings realized by
> the GPU will be eaten up by GPU-CPU messaging.

O.K.

> What you really want for OpenCog is a MIMD parallel chip, with a lot
> of RAM, and special interconnects between the processors' caches...
> This would let you put OpenCog on embedded devices in a useful way, and
> also build OpenCog-tailored supercomputers... These would be
> customized for OpenCog in the same sense that the current crop of
> "deep learning chips" are customized for hierarchical NNs.

This sounds like something a SPARC architecture could do: SPARC T5
<http://www.oracle.com/us/corporate/innovation/sparc-t5-deep-dive/index.html>

> Mandeep Bhatia and I have sketched some ideas about an "OpenCog chip"
> along these lines but have been too busy with other stuff to refine
> these ideas into a detailed design that can be given to an FPGA
> programmer for prototyping... it will happen eventually ;)

Maybe a partner like Oracle/Sun could be of value. They are very good at doing things like this.
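To check my own understanding of the ECAN case: if the Atomspace's link structure is frozen, one step of importance spreading is just a matrix-vector product, i.e. exactly the uniform, repetitive workload a GPU handles well. A toy sketch in plain NumPy (not OpenCog's real ECAN code; the spreading matrix and the 3-atom cycle are made-up illustration values):

```python
import numpy as np

# Toy analogy only, not OpenCog's actual ECAN implementation.
# W[i, j] is the (made-up) share of atom j's importance that
# spreads to atom i per step; the graph here is a 3-atom cycle.
W = np.array([[0.0, 0.0, 0.5],
              [0.5, 0.0, 0.0],
              [0.0, 0.5, 0.0]])
sti = np.array([1.0, 0.0, 0.0])   # short-term importance, all on atom 0

for _ in range(3):                # three spreading steps around the cycle
    sti = W @ sti                 # one SIMD-friendly matvec per step

print(sti)                        # importance has cycled back to atom 0
```

With the structure fixed, every step is the same dense (or sparse) multiply over all atoms at once, which is why this special case could be GPU-friendly even though general Atomspace manipulation is not.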
I am very sure that "artificial real-world cognition" will be the next big business, and that it cannot be done with NNs alone. It will happen here first. To me it seems possible to get a partner like Oracle/Sun - they need the business and are always looking for new opportunities. For them it would be easy to contribute some servers and help port at least the core functions of OpenCog, making them accessible via the WWW. This would end up as a kind of cog cloud computing - why not?

> For the present, GPUs could be used for certain special purposes
> within OpenCog -- e.g.
>
> -- ECAN importance spreading across an Atomspace whose structure does
> not frequently change
>
> -- maybe, with a lot of work, some sort of limited (but could still be
> very useful) pattern matching against an Atomspace whose structure
> does not frequently change

O.K.

> These could be quite valuable but wouldn't constitute "porting the
> whole OpenCog to GPU"

Of course... sorry for not being precise. I was just too enthusiastic :)

> -- Ben
>
> On Tue, Aug 2, 2016 at 3:43 AM, Andi <[email protected]> wrote:
> > Hi Gaurav,
> >
> > I think that it should be possible to take advantage of the massive
> > parallel computing power of a GPU for the OpenCog system too, as is
> > done for NNs.
> >
> > So, are there any NP-hard problems inside the box?
> >
> > --Andi
> >
> > On Tuesday, 2 August 2016 at 03:22:22 UTC+2, Gaurav Gautam wrote:
> >>
> >> I may be wrong, but as far as I understand, one problem may be that
> >> neural networks are not really graphs or hypergraphs. Books show them
> >> as a set of layers and some connecting edges, which looks a lot like
> >> a graph, but when they are implemented in code they are mostly matrix
> >> operations. So, as far as I understand, a program implementing a
> >> neural network will be doing matrix operations. If I am right about
> >> this, then I don't see how seeing the atomspace as a neural network
> >> will help.
> >> What I am saying is that I don't think the atoms and links can be
> >> connected to make a neural network straightforwardly. Of course, one
> >> could make atoms that represent the coefficients of the model that
> >> the CNN represents, then connect those with links that have weights,
> >> and then make a function that can take such a hypergraph and tune the
> >> weights. But wouldn't that be very inefficient? Wouldn't you want to
> >> just represent a feature vector in Atomese, run the CNN on it
> >> (through an external library, perhaps), and get results in Atomese
> >> that the other algorithms can pick up? But then again, I have very
> >> little idea what I am talking about, so I may be way off.
>
> --
> Ben Goertzel, PhD
> http://goertzel.org
>
> Super-benevolent super-intelligence is the thought the Global Brain is
> currently struggling to form...

--
You received this message because you are subscribed to the Google Groups "opencog" group.
To unsubscribe from this group and stop receiving emails from it, send an email to [email protected].
To post to this group, send email to [email protected].
Visit this group at https://groups.google.com/group/opencog.
To view this discussion on the web visit https://groups.google.com/d/msgid/opencog/a2969484-3280-429f-9927-c8b8ce772f16%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.
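PS: Gaurav's point that a neural-net "graph" is really matrix arithmetic can be seen in a few lines. A minimal NumPy sketch (made-up weights and layer sizes; this is not OpenCog or Atomese code):

```python
import numpy as np

# What a diagram draws as "3 input neurons, 2 output neurons, 6 edges"
# is implemented as one matrix-vector multiply, not as graph traversal.
W = np.array([[0.1, 0.2, 0.3],    # edge weights, one row per output neuron
              [0.4, 0.5, 0.6]])
b = np.array([0.5, -0.5])         # output biases
x = np.array([1.0, 2.0, 3.0])     # input feature vector

y = np.maximum(W @ x + b, 0.0)    # ReLU(W x + b): the whole layer at once
print(y)                          # approximately [1.9, 2.7]
```

So representing each coefficient as an atom with weighted links would recreate, inefficiently, what one `W @ x` call already does, which supports the idea of running the CNN externally and exchanging only feature vectors and results with the Atomspace.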
