Hi Linas, I believe we are basically in agreement, but I will comment anyway. I agree that inference and evoking an image give the same functional result. Inference, traversing many serial nodes step by step, is slow. Evoking an image works by broadcasting "cat" to the whole memory system and potentiating the cat-related entries, then broadcasting "mice" and super-potentiating (or triggering) the relevant entries. The first can take thousands of time steps; the second takes about two time steps, plus a few additional steps to deal with the several memories thrown up.
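To make the contrast concrete, here is a toy sketch of the two retrieval styles. The memory layout, tags, and function names are my own illustrative assumptions, not anything from OpenCog: each memory entry is just tagged with the concepts it involves, a "broadcast" is modeled as matching a cue against every entry at once, and serial inference is modeled as visiting entries one at a time.

```python
# Toy sketch: broadcast/potentiation retrieval vs. serial traversal.
# All names and the memory layout are illustrative assumptions.

# Each memory entry is tagged with the concepts it involves.
MEMORY = [
    {"tags": {"cat", "box"},      "fact": "cats like to sit in boxes"},
    {"tags": {"cat", "mouse"},    "fact": "cats hunt mice"},
    {"tags": {"mouse", "cheese"}, "fact": "mice eat cheese"},
    {"tags": {"dog", "bone"},     "fact": "dogs chew bones"},
]

def broadcast_retrieve(memory, cue1, cue2):
    """Broadcast cue1 to all entries in parallel (time step 1),
    then cue2 to the potentiated set (time step 2).
    Entries matching both cues are 'triggered'."""
    potentiated = [e for e in memory if cue1 in e["tags"]]      # step 1
    triggered = [e for e in potentiated if cue2 in e["tags"]]   # step 2
    return [e["fact"] for e in triggered]

def serial_retrieve(memory, cue1, cue2):
    """Serial inference analogue: examine entries one node at a time,
    so the step count grows with memory size instead of staying at two."""
    hits, steps = [], 0
    for e in memory:
        steps += 1
        if cue1 in e["tags"] and cue2 in e["tags"]:
            hits.append(e["fact"])
    return hits, steps

print(broadcast_retrieve(MEMORY, "cat", "mouse"))
# -> ['cats hunt mice'] in two broadcast steps, regardless of memory size
```

In hardware terms this is why "compute close to memory" matters: the broadcast version assumes every entry can test the cue locally and in parallel, which a von Neumann machine can only emulate by the serial loop.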
I like "compute close to memory" solutions.

Ed

On Thursday, August 25, 2016 at 1:08:35 AM UTC-4, linas wrote:
>
> On Tue, Aug 23, 2016 at 7:05 PM, Ed Pell <[email protected]> wrote:
>
>> All three of my answers did not come from inference. They came from a
>> lifetime of experience with cats, boxes, and mice. It was 90% memory based
>> with maybe 10% logic/inference to glue the pieces together.
>
> I suppose. A lot of inference/deduction is subconscious. The word "cat"
> evokes a huge number of mental images, in a human. I sketched the computer
> equivalent of this in the other email -- "evoking an image" is actually an
> exploration of a connected graph of factoids. We can quibble about whether
> this is knowledge or memory or inference -- there are algorithmic
> trade-offs.
>
> --linas
