Conclusion to my original question of whether it makes sense to think about 
parallel hardware architectures to enhance the performance of an OpenCog 
system:

1. There are no fundamental objections.

2. OpenCog's cognitive algorithms would be better suited to MIMD parallelism 
than to SIMD parallelism. 
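
As a toy illustration of the distinction (plain Python, not OpenCog code; the example tasks are made up): SIMD applies one instruction to many data elements in lockstep, whereas MIMD lets each worker run a different procedure on different data, which is closer to how independent cognitive processes (parsing, inference, attention allocation) would behave.

```python
from concurrent.futures import ThreadPoolExecutor

# SIMD-style: the *same* operation applied uniformly to every element.
def simd_style(data):
    return [x * 2 for x in data]

# MIMD-style: independent workers each run *different* code on different
# data, the way separate cognitive processes would.
def mimd_style(tasks):
    with ThreadPoolExecutor() as pool:
        futures = [pool.submit(fn, arg) for fn, arg in tasks]
        return [f.result() for f in futures]

print(simd_style([1, 2, 3]))                                        # [2, 4, 6]
print(mimd_style([(len, "abc"), (sum, [1, 2, 3]), (max, [7, 1])]))  # [3, 6, 7]
```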

3. RAM access and caching are the bottlenecks.

4. What you really want for OpenCog is a MIMD parallel chip with a lot 
of RAM and special interconnects between the processors' caches. This 
would let you put OpenCog on embedded devices in a useful way, and 
also build OpenCog-tailored supercomputers. These would be 
customized for OpenCog in the same sense that the current crop of 
"deep learning chips" are customized for hierarchical NNs. 

5. Today the data structure of the AtomSpace is like a rhizome: 
the atoms are scattered randomly through RAM. This can lead to a very high 
cache-miss rate.
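
To make the cache-miss argument concrete, here is a minimal sketch (a toy direct-mapped cache model, not a real memory simulator, and the sizes are arbitrary): traversing atoms laid out contiguously misses only once per cache line, while traversing the same atoms scattered "rhizome-style" through memory misses on nearly every access.

```python
import random

def cache_misses(addresses, num_lines=64, line_size=8):
    # Toy direct-mapped cache: num_lines lines of line_size words each.
    cache = [None] * num_lines
    miss = 0
    for addr in addresses:
        tag, idx = divmod(addr // line_size, num_lines)
        if cache[idx] != tag:   # line not resident: fetch it, count a miss
            cache[idx] = tag
            miss += 1
    return miss

random.seed(0)
sequential = list(range(4096))                # atoms packed contiguously
scattered = random.sample(range(4096), 4096)  # same atoms, random order

print(cache_misses(sequential))  # 512: exactly one miss per cache line
print(cache_misses(scattered))   # several thousand: most accesses miss
```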

6. It could be advantageous to add some associativity to the AtomSpace, with 
the aim of building clusters within which any given search is guaranteed 
to stay. This might even allow such a cluster to be loaded into a GPU 
without causing too many cache misses.
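
The simplest notion of such a cluster is a connected component of the link graph: if two atoms share no link path, no link-following search can cross between them, so each component could be loaded as one locality unit. A minimal sketch (plain Python, not the actual AtomSpace API; a real design would also need heuristics for cutting densely connected regions):

```python
from collections import defaultdict

def find_clusters(links):
    """Group atoms into connected components; a search that only follows
    links is guaranteed to stay inside a single component."""
    graph = defaultdict(set)
    for a, b in links:
        graph[a].add(b)
        graph[b].add(a)
    seen, components = set(), []
    for start in graph:
        if start in seen:
            continue
        comp, stack = set(), [start]
        while stack:                       # iterative depth-first traversal
            node = stack.pop()
            if node in comp:
                continue
            comp.add(node)
            stack.extend(graph[node] - comp)
        seen |= comp
        components.append(comp)
    return components

# Two disjoint clusters: a search starting at "cat" can never reach "car".
print(find_clusters([("cat", "animal"), ("dog", "animal"), ("car", "vehicle")]))
```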


So at the moment, for myself, I guess that among existing designs a SPARC 
T5 could maybe do it better than a Xeon Phi. Maybe even a chip with many 
original 8086 cores without any caching, running good old Borland Turbo 
Prolog on them, could be the winner.

--Andi





On Monday, August 1, 2016 at 22:26:06 UTC+2, Andi wrote:
>
> Hello All,
> I do not want to disturb the ongoing work, so an answer to this question is 
> not urgent, but it will help me during my investigations over the next 
> month.
>
> *What in the hell could prevent me from looking at the Atomspace as a 
> certain kind of neural network?*
>
> Please don't tell me:"Because it is a hypergraph" haha....
>
> One of my aims is to try to port the whole thing, or some parts of it, to 
> hardware. Maybe a bunch of GPUs, PLDs, and a CPU could do it. It seems 
> that some interesting machines are being designed for deep learning, so 
> maybe nothing new even has to be invented.
>
> For a first step, I think it should at least be possible to use some 
> GPU power to do some of the work in parallel, or is there really a 
> theoretical barrier to parallelizing this work that I cannot see at the 
> moment?
>
> Please don't worry, I know what a challenging task this is and would 
> carry it on my own back. But maybe it is not so much work if the 
> right approach is found... 
> At least I want to investigate this, so: any red lights blinking?
>
> --Andi
>
>
> On Sunday, April 24, 2016 at 07:07:25 UTC+2, Ben Goertzel wrote:
>>
>> Indeed this is not an OpenCoggy question, but some of us may be able 
>> to help... is this dynamic data or instantaneous data you're trying to 
>> classify? 
>>
>>
>>
>> On Sat, Apr 23, 2016 at 1:46 PM,  <[email protected]> wrote: 
>> > Hi 
>> > 
>> > I have a dataset of mocap (motion capture) positions as vectors, on 
>> > which I am going to train a DNN. 
>> > Each sample would be something like a 140-dimensional vector. 
>> > Is it possible to train a CNN on this kind of data? I am not sure 
>> > how to use convolution layers for this kind of data, as kernels are 
>> > e.g. 5x5 while the data is a vector. 
>> > 
>> > 
>> > If I make the data into the form of a matrix, is it possible to train 
>> > a pretrained CNN, e.g. AlexNet, on this dataset? 
>> > 
>> > Best 
>> > Majid 
>> > 
>>
>>
>>
>> -- 
>> Ben Goertzel, PhD 
>> http://goertzel.org 
>>
>> "I am Ubik. Before the universe was, I am. I made the suns. I made the 
>> worlds. I created the lives and the places they inhabit; I move them 
>> here, I put them there. They go as I say, then do as I tell them. I am 
>> the word and my name is never spoken, the name which no one knows. I 
>> am called Ubik, but that is not my name. I am. I shall always be." -- 
>> Ubik 
>>
>

-- 
You received this message because you are subscribed to the Google Groups 
"opencog" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
To post to this group, send email to [email protected].
Visit this group at https://groups.google.com/group/opencog.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/opencog/8e97e75d-4b1c-496f-a08f-a11232489a68%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.
