Hi Mark,

On Tue, May 24, 2022 at 5:00 PM Mark Wigzell <[email protected]> wrote:
>> Cognition has been redefined already
>
> Hi Linas, so I get it, that various parts of nature have been classified
> according to an updated theory of cognition.

Please understand that biologists and neuroscientists are a diverse bunch.
They argue amongst one another. There is no one single "theory of
cognition". Instead, there is a large body of facts known about how
bacteria and slime molds communicate with one another, and different
researchers connect the dots differently, and are interested in different
things.

For example, COVID-19 and long covid: did you know that covid encodes for
vesicles that are very similar to the vesicles your neurons use to
transport neurotransmitters? This might be why some long-covid sufferers
lose their sense of smell -- the neuronal transport of signalling molecules
is wrecked by the covid-encoded vesicles. Great! Interesting idea! Back to
slime molds: are they also using vesicles similar to those encoded by
covid-19? If so, can covid disrupt slime-mold cognition? Why or why not?

Details, details, details ... lose track of the details, and the result is
a brutish, naive, simplistic understanding of extremely complex topics. But
if you know the details, then you can make deep, sharp, precise statements.

> What I'm wondering about is: how does an AGI, or more specifically, your
> "learn" research involving pure symbolic induction, work?

That has a simple answer, presented in this PDF:

https://github.com/opencog/learn/blob/master/learn-lang-diary/agi-2022/grammar-induction.pdf

It's backed by many hundreds of pages of other papers and diary notes. All
in github.

> Is it that we run the pattern recognition part through the maze to create
> a grammar, then hand it off to a formal symbolic AI system that induces
> how to "find food"?

You've used my favorite buzzwords, but I don't understand the question.

> So, no need to create a system in which such an algorithm "emerges"?
The opencog "MOSES" subsystem is able to automatically discover algorithms
that fit training data. The concepts that MOSES uses are widespread in the
industry; there must be hundreds of papers describing similar ideas, and
similar systems are the bread-and-butter of the multi-billion-dollar
machine learning industry. A shorter answer: yes, absolutely, one must
"create a system in which such an algorithm emerges"!

https://wiki.opencog.org/w/MOSES

> I guess I mean: does AGI not involve an analog to the "protoplasmic
> compute" that the single slime mold does, which also involves an external
> chemical memory?

Details, details, details. What are the details of how "protoplasmic
compute" actually works? Lord help you if you invoke "tubulin", for then
you are wrecked in the depths of the Hameroff-Penrose hypothesis.
Mainstream bio rejects Hameroff-Penrose, but ... who tf knows.

Mainstream biologists will happily point out that biology is a lot more
complicated than just gradient descent. It doesn't matter if we're talking
about the gradient descent of quorum sensing in bacteria, or the gradient
descent of the conditional log-likelihood of an RNN encoder/decoder or a
multi-head attention transformer network. It's "obvious" that one wants to
create a system that can automatically learn algorithms such as transformer
networks. It's just not obvious how to do this. I too am interested in the
automated discovery of algorithms; but I'm interested in an explicitly
symbolic approach.

--linas

> Regards,
> Mark
>
> On Sun, May 22, 2022 at 2:31 PM Linas Vepstas <[email protected]>
> wrote:
>
>> I didn't watch the PBS special, but from the synopsis, it seems like
>> they're a decade or two or four behind the times.
>> Cognition has been redefined already: not just slime molds that use
>> bacterial signalling methods, but also plants. There are youtube videos
>> of plant leaves getting munched by insects; the munched part emits these
>> polypeptides/neurotransmitters, which diffuse to other parts of the leaf
>> in about 5-10 minutes, causing the entire leaf to emit bug repellant.
>> It's sped up 100x so you can see it happening, and the chemical reaction
>> is tagged with phosphorescent tags, to make it visible.
>>
>> There are also results on the computational abilities and problem
>> solving of tree roots -- these also communicate, often using mycelial
>> mats from mold to do so -- so, like nerves, in a way. Biologists are all
>> over this kind of stuff.
>>
>> Here's one: search for the TED talk on "quorum sensing" -- I get Bonnie
>> Bassler -- I think that's the right talk. Go for the full-length talk.
>>
>> On a related note, check out "Algorithmic Botany" --
>> http://algorithmicbotany.org/papers/ -- it spells out in detail exactly
>> how Turing machines, algorithms, grammars, syntax, Lindenmayer systems,
>> bacterial quorum sensing and plant development work -- complete with
>> math. Prusinkiewicz has been working on this since the 1980's. You might
>> learn the most by reading the oldest papers first, and only then moving
>> to the newer stuff.
>>
>> --linas
>>
>> On Sun, May 22, 2022 at 1:10 PM Mark Wigzell <[email protected]>
>> wrote:
>>
>>> https://groups.google.com/g/opencog/c/Bfjvh_WFVq0
>>>
>>> I understand that from a formal AI perspective it's not a challenge,
>>> maybe; I was enamoured with the basic sentience following along causal
>>> chains.
>>> --Mark
>>>
>>> On Sun, May 22, 2022 at 10:41 AM Linas Vepstas <[email protected]>
>>> wrote:
>>>
>>>> Hi Mark, my email inbox is slammed, I missed it. -- resend?
>>>> -- linas
>>>>
>>>> On Sat, May 21, 2022 at 11:07 PM Mark Wigzell <[email protected]>
>>>> wrote:
>>>>
>>>>> Hey Linas, I wrote you a while back about slime molds, did you see
>>>>> that? I was hoping to hear your opinion on the subject.
>>>>> Cheers,
>>>>> Mark

--
Patrick: Are they laughing at us?
Sponge Bob: No, Patrick, they are laughing next to us.
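P.S. Since the thread keeps circling back to grammars and Lindenmayer
systems, here is a minimal sketch of what an L-system actually computes.
This is a toy illustration written for this email, not code from the
algorithmicbotany.org papers or from any opencog project:

```python
# A toy Lindenmayer-system rewriter. An L-system is a grammar whose
# rewrite rules are applied to every symbol of the string in parallel --
# which is what makes it a decent model of cell division and plant growth.

def lsystem(axiom, rules, steps):
    """Apply the rewrite rules to every symbol in parallel, `steps` times."""
    s = axiom
    for _ in range(steps):
        s = "".join(rules.get(c, c) for c in s)
    return s

# Lindenmayer's original algae model: "A" is a mature cell that divides
# into a mature and a young cell; "B" is a young cell that matures.
algae = {"A": "AB", "B": "A"}

for n in range(5):
    print(n, lsystem("A", algae, n))
# 0 A
# 1 AB
# 2 ABA
# 3 ABAAB
# 4 ABAABABA
```

Note that the string lengths grow as the Fibonacci numbers (1, 2, 3, 5,
8, ...) -- exponential growth out of a two-rule grammar, which is the
whole point: very small symbolic systems generating very rich structure.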
