O.K.!

On Friday, 5 August 2016 at 01:05:48 UTC+2, linas wrote:
>
> Hey,
>
> If you want to port any algo to a GPU, or even make it run efficiently on 
> a CPU, you had better understand what caches are, and how they work.  This 
> is not a factoid about CPU design that you can just ignore: it's fairly 
> central, as it is a bottleneck for many classes of computational problems. 
>  (It is also the reason why most other algos run really really 
> really well on modern CPUs, such as your cellphone.  Without caches, your 
> cellphone would be about as fast as a PC from about 20 years ago.)
>
> --linas
>
> On Thu, Aug 4, 2016 at 5:57 PM, Andi <[email protected]> 
> wrote:
>
>> Ty Linas!
>>
>> On Tuesday, 2 August 2016 at 18:05:28 UTC+2, linas wrote:
>>>
>>> Hi Andi,
>>>
>>> Ben has a good answer, and to emphasize, let me add this:   Think of the 
>>> atomspace as being a collection of trees.  The atoms are the nodes in the 
>>> tree.  Any one atom can appear in many trees, and so the whole thing is in 
>>> fact tangled into a big mat, like a rhizome 
>>> https://www.google.com/search?q=rhizome&tbm=isch 
>>>
>> What I already understood is that it is related to Prolog-style trees, but 
>> with real-valued truth values and attention values.
>>  
>>
>>> The pattern matcher starts at one atom, and walks the rhizome, exploring 
>>> nearest neighbors, until the entire neighborhood is explored (and a 
>>> match is found, or some other (local) computation is performed). 
>>>
>>> The problem is that the atoms are scattered randomly through RAM, so 
>>> when the nearest-neighbor walk happens, random locations in RAM get 
>>> visited.  I'm guessing that there is a lot of cache-missing going on too:  if 
>>> you have, say, a CPU cache that is 8-way associative with 4 sets, then you could 
>>> have maybe 32 atoms in the cache, but the chance that the 33rd atom will 
>>> accidentally be in one of the existing cache lines is just about zero, and 
>>> so the graph walk will have a 99.9% cache-miss rate.  (Most graphs that 
>>> get searched have more than 32 atoms in them.)
>>>
>> Thanks for this precise technical information; it will help me a lot when 
>> thinking about performance.  
>>
>>>
>>> Hmm, I have an idea -- I guess the atomspace *could* keep track of 
>>> individual connected components  (create a bag of trees, which are 
>>> connected by one or more atoms) -- any given search is guaranteed to stay 
>>> in just one bag, and so maybe one could download the entire bag to the gpu 
>>> before starting a search.   Could work if the bags are small enough to fit 
>>> in GPU ram.
>>>
>>
>> I had similar visions. It's a mapping problem; this will be solved 
>> more in practice than in theory. We will see what can be done...
>>  
>>
>>>
>>> Maybe allocation could be changed to improve cache locality: allocate 
>>> atoms so that they are more likely to be on the same cache line if they are 
>>> also connected.  But this becomes a hard, fiddly computer-science problem...
>>>
>>  
>> At the moment I am not very familiar with caching techniques. In my 
>> answer to Ben I wrote that maybe a SPARK architecture could do it. As far 
>> as I know and understand, there is some room for cache control...
>>
>> I think that after the current goals are achieved (robots are doing well 
>> and good demos are there), the project focus will come back to 
>> performance again. Maybe this will happen within about one year. If I get 
>> some help from you and Ben, like I got here, from time to time, I will 
>> hopefully be there just in time!
>> --Andi 
>>  
>>
>>>
>>> --linas
>>>
>>>
>>> On Mon, Aug 1, 2016 at 3:26 PM, Andi <[email protected]> wrote:
>>>
>>>> Hello All,
>>>> I do not want to disturb the ongoing work so an answer to this question 
>>>> is not urgent,
>>>> but it will help me during my investigations within the next month.
>>>>
>>>> *What in the hell could prevent me from looking at the Atomspace as a 
>>>> certain kind of neural network?*
>>>>
>>>> Please don't tell me: "Because it is a hypergraph," haha...
>>>>
>>>> One of my aims is to try to port the whole thing, or some parts of it, to 
>>>> hardware. Maybe a bunch of GPUs, PLDs and a CPU can do it. It seems 
>>>> that some interesting machines are being designed for Deep Learning, so 
>>>> maybe nothing new even has to be invented...
>>>>
>>>> As a first step, I think it should at least be possible to use some 
>>>> GPU power to do some work in parallel, or is there really a theoretical 
>>>> barrier to parallelizing some of the work that I cannot see at the moment?
>>>>
>>>> Please don't be afraid, I know what kind of challenging task this is 
>>>> and would carry it on my own back. But maybe it is not so much work if the 
>>>> right approach is found...
>>>> At least I want to investigate this, so: any red lights blinking?
>>>>
>>>> --Andi
>>>>
>>>>
>>>> On Sunday, 24 April 2016 at 07:07:25 UTC+2, Ben Goertzel wrote:
>>>>>
>>>>> Indeed this is not an OpenCoggy question, but some of us may be able 
>>>>> to help... is this dynamic data or instantaneous data you're trying to 
>>>>> classify? 
>>>>>
>>>>>
>>>>>
>>>>> On Sat, Apr 23, 2016 at 1:46 PM,  <[email protected]> wrote: 
>>>>> > Hi 
>>>>> > 
>>>>> > I have a dataset of mocap (motion capture) positions as vectors, on 
>>>>> > which I am going to train a DNN. 
>>>>> > Each sample would be a 140-dimensional vector. 
>>>>> > Is it possible to train a CNN for this kind of data? I am not sure 
>>>>> > how to use convolution layers for this kind of data, as the kernels 
>>>>> > are e.g. 5x5 while the data is a vector. 
>>>>> > 
>>>>> > 
>>>>> > If I put the data in the form of a matrix, is it possible to train a 
>>>>> > pretrained CNN, e.g. AlexNet, on this dataset? 
>>>>> > 
>>>>> > Best 
>>>>> > Majid 
>>>>> > 
>>>>> > -- 
>>>>> > You received this message because you are subscribed to the Google 
>>>>> Groups 
>>>>> > "opencog" group. 
>>>>> > To unsubscribe from this group and stop receiving emails from it, 
>>>>> send an 
>>>>> > email to [email protected]. 
>>>>> > To post to this group, send email to [email protected]. 
>>>>> > Visit this group at https://groups.google.com/group/opencog. 
>>>>> > To view this discussion on the web visit 
>>>>> > 
>>>>> https://groups.google.com/d/msgid/opencog/1a04d763-3dfa-473f-a240-a0e452f6faba%40googlegroups.com.
>>>>>  
>>>>>
>>>>> > For more options, visit https://groups.google.com/d/optout. 
>>>>>
>>>>>
>>>>>
>>>>> -- 
>>>>> Ben Goertzel, PhD 
>>>>> http://goertzel.org 
>>>>>
>>>>> "I am Ubik. Before the universe was, I am. I made the suns. I made the 
>>>>> worlds. I created the lives and the places they inhabit; I move them 
>>>>> here, I put them there. They go as I say, then do as I tell them. I am 
>>>>> the word and my name is never spoken, the name which no one knows. I 
>>>>> am called Ubik, but that is not my name. I am. I shall always be." -- 
>>>>> Ubik 
>>>>>
>>>>
>>>
>>>
>

