Ben,

Thanks for sending a copy of your interesting article.  I have read it
twice and am still thinking about it. Here are a few brief thoughts.

I don't think variable binding is that big a problem for neural nets, for at
least the following reasons.

1--Shastri's Shruti, as you acknowledge, shows how variable binding can be
represented by synchronies in neural nets for logic-like functions.

2--I agree with you that we currently don't understand how a Shruti-like
mechanism could allow a neural net operating at the frequency of the
brain's gamma waves to provide all the detail of binding we sense in
conscious experience.  Perhaps substantially more complex forms of
synchronous timing could be used.  Electronic brains could run at
frequencies hundreds of times higher than biological brains, meaning they
could support hundreds of times more Shruti-like bindings (see the sketch
below).
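
For concreteness, here is a minimal Python sketch of synchrony-based
binding -- my own toy illustration, not Shastri's actual model; the slot
count and node names are assumptions:

    # Toy model: each gamma cycle is divided into discrete phase slots,
    # and a role is bound to a filler by firing in the same slot.
    GAMMA_SLOTS = 5  # slots per cycle; caps how many bindings fit at once

    def bind(bindings):
        """Assign each (role, filler) pair a shared phase slot."""
        if len(bindings) > GAMMA_SLOTS:
            raise ValueError("binding capacity exceeded")
        schedule = {}
        for slot, (role, filler) in enumerate(bindings):
            schedule[role] = slot
            schedule[filler] = slot  # firing in synchrony IS the binding
        return schedule

    # give(giver=John, recipient=Mary) as two role-filler bindings:
    print(bind([("giver", "John"), ("recipient", "Mary")]))
    # {'giver': 0, 'John': 0, 'recipient': 1, 'Mary': 1}

A faster electronic substrate would, in effect, raise GAMMA_SLOTS, which
is the point about supporting many more simultaneous bindings.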

3--It is believed by many that the hippocampus can store variable
sequences of the activation states of mid- and high-level representational
concepts, which map onto recent hierarchical activation patterns in the
cortex's generalization and composition hierarchies.  These activation
patterns stored in the hippocampus contain bindings of low-level
instantiations to the higher-level concepts they activate.  This functions
like binding because it stores temporal relationships between concepts and
their instantiations, and their associations within an episodic perceptual
state.  This is a very complex and powerful form of binding (a toy sketch
follows below).
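
A toy data-structure sketch of such episodic binding -- my own
illustration, not a neuroscience model, and the records are hypothetical:

    from collections import defaultdict

    # time-stamped (concept, instantiation) activations in one episode
    episode = [
        (0.0, "dog", "neighbor's terrier"),
        (0.1, "sound", "barking"),
        (2.5, "dog", "guide dog on a bus"),
    ]

    def bindings_near(events, t, window=0.5):
        """Concept->instance bindings active within `window` of time t."""
        bound = defaultdict(list)
        for when, concept, instance in events:
            if abs(when - t) <= window:
                bound[concept].append(instance)
        return dict(bound)

    print(bindings_near(episode, 0.0))
    # {'dog': ["neighbor's terrier"], 'sound': ['barking']}

Storing the temporal co-occurrences is what lets the concept-instance
bindings be recovered later.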

4--In electronic neural nets it's easy to use variables to set weights in
the net, change the net's architecture, or feed variables into pipes --
such as an LSTM's memory or a Neural Turing Machine's read/write heads
(see the sketch below).
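
A minimal sketch of "feeding a variable into a pipe" -- a bare gated
memory cell in the spirit of, but far simpler than, an LSTM cell or an NTM
write head:

    class GatedCell:
        def __init__(self):
            self.state = 0.0  # persistent memory: the pipe's contents

        def step(self, value, write_gate):
            # write_gate in [0, 1]: 1.0 overwrites the state with `value`;
            # 0.0 leaves the stored binding untouched.
            self.state = (1.0 - write_gate) * self.state + write_gate * value
            return self.state

    cell = GatedCell()
    cell.step(value=3.14, write_gate=1.0)         # bind a variable into memory
    print(cell.step(value=9.99, write_gate=0.0))  # gate closed: prints 3.14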

5--Lastly, neural net artificial brains will be able to interface with a
lot of traditional computing hardware and software, and to take advantage
of the powerful forms of binding that traditional computing executes
efficiently.


I don't understand combinatory logic well enough to be certain I
understood your point about it.  To me it seems like you were talking
about something similar to functional programming (about which I know not
much, but more).  As I understand your argument, there ARE things acting
somewhat like variables in the combinatory logic you describe.  They are
what is passed along the one or more pipes connected into each of a
network of transformations (e.g., functions).  The signal passing along
each pipe would be a variable bound to the function it is piped into (see
the sketch below).  This is somewhat similar to the binding of low-level
instantiations to high-level concepts I mentioned above under "3".  Of
course, in the brain the neural net would be recurrent, making things more
complex, but presumably functional programming can have recurrences too.
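
A minimal functional-programming sketch of that "pipes" picture -- my own
construction, not anything from your paper: the value flowing through a
chain of composed functions behaves like a variable bound at each stage.

    from functools import reduce

    def pipe(*funcs):
        """Compose functions left-to-right; the flowing value is, at each
        stage, the 'variable' bound to that stage's function."""
        return lambda x: reduce(lambda acc, f: f(acc), funcs, x)

    double = lambda x: 2 * x
    inc = lambda x: x + 1

    process = pipe(double, inc)  # x is implicitly bound at each stage
    print(process(10))           # (10 * 2) + 1 = 21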

Net-net, your article doesn't make me think any less of Hassabis's paper --
or any less of the probability that we are close to near-human-level AGI.
He provides promising approaches for solving every problem he cites in the
paper.  Even I have ideas, based on brain science, of how to address each
of them.  My optimism is increased by the understanding that most of the
problems he identifies are interrelated, and progress on almost any one
will help progress on the others.  This means that their collective rate
of progress will tend to grow exponentially relative to the rate each
would show alone, separated from the benefit of advances on the others.
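
A crude way to formalize that intuition -- my own toy model, not anything
in the paper: let x_i(t) be progress on problem i of n, and suppose mutual
reinforcement makes each problem's rate proportional to the average
progress across all of them,

    dx_i/dt = (r/n) \sum_j x_j .

Summing over i, the total S = \sum_i x_i obeys dS/dt = r S, so
S(t) = S(0) e^{r t}: coupled problems compound exponentially, where
uncoupled ones (dx_i/dt = const) would only grow linearly.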

My optimism about DeepMind is increased by several facts: Demis is a rare
genius who has already shown unusual success in multiple intellectual
challenges; he has hundreds of world-class people working for him; his
project is backed by the resources of one of the world's largest, richest,
and most advanced computing companies; and his papers show he is properly
focusing on some of the most important capabilities required to make
artificial brains.

Ben, you once told me that when you tried to build your Webmind AI near
the end of the dot-com boom, one of the biggest problems you faced was
tuning the combinatorial parameter space required to get a complex
semantic network to work well.  In my mind, one of the major problems
standing between humanity and powerful AGI is the amount of
experimentation required to tune brain architectures and their parameters.
But even with far-from-optimal tuning, the architectural changes Hassabis
proposes could be very commercially valuable -- even if the resulting
systems are substantially sub-human in many respects.

Google and every other company that wants to be a major player in AGI,
and has the money to do so, should spend billions of dollars following
Hassabis's general roadmap.  It would be a very wise investment for any
company that can attract the talent.

Ed Porter

On Thu, Jul 27, 2017 at 8:50 PM, Mike Archbold <[email protected]> wrote:

> I read the article (but not yet the paper) and, no disrespect to the
> researchers, it sounds like recycled arguments and suggestions from
> the last 5,000 posts I've read about AGI.  As a side note, the kind of
> issues we used to talk about, formerly considered crackpot, are now
> the basis of runaway hype.  People I know are convinced that strong AI
> is imminent.   I don't think the field of AGI is hyped, by the way.
> It's more like the media and general climate of opinion.
>
> On 7/27/17, Ben Goertzel <[email protected]> wrote:
> > http://goertzel.org/Neural_Foundations_Symbolic_Thought.pdf
> >
> > On Thu, Jul 27, 2017 at 9:54 PM, EdFromNH . <[email protected]> wrote:
> >
> >> Ben, could you please send me a free author's copy of the paper at
> >> http://ieeexplore.ieee.org/document/6889662/ .  Ed Porter
> >>
> >> On Thu, Jul 27, 2017 at 12:44 AM, Nanograte Knowledge Technologies <
> >> [email protected]> wrote:
> >>
> >>> Ben
> >>>
> >>> Conceptually, I like where you are going with this. Your team's work
> >>> with
> >>> human-language-based robotic communication is astounding.
> >>>
> >>> I think your idea of a universal attractor has merit. I suppose, in the
> >>> end, when matter exists, it generates an electromagnetic field. In a
> >>> genetic sense, the flux of such a field would act as an open and
> >>> closed-loop communications network. In this sense, the relevant data,
> >>> information, and a relative perspective of knowledge, would all be
> >>> packaged
> >>> within relative, genomic code. In other words, we are imagining a
> >>> relative system of relative systems from which reality would
> >>> functionally emerge.
> >>>
> >>> Given my systems methodology, what remains to be done in order to
> >>> visualize a model of human-like machine reasoning is to be able to
> >>> link your "attractor" value to the information, from which it should
> >>> become possible to systematically emerge any informational concept at
> >>> any level of abstraction within any dimension of reasoning. The
> >>> genetics of resultant information would in theory make forward and
> >>> backchaining possible, and much more.
> >>>
> >>> The completeness schema of functional, attractor values seems to be a
> >>> critical machine-reasoning component to pursue. It would probably also
> >>> assume the role of a priority systems constraint. I've been doing much
> >>> thinking about this as a next-step for my own research.
> >>>
> >>> I think you've got this. Keep up the great work.
> >>>
> >>> Rob
> >>>
> >>> ------------------------------
> >>> *From:* Ben Goertzel <[email protected]>
> >>> *Sent:* 27 July 2017 04:57 AM
> >>> *To:* AGI
> >>> *Subject:* Re: [agi] Neuroscience-Inspired AI
> >>>
> >>>
> >>> Well I would say that none of the work done at Deep Mind and also none
> >>> of
> >>> the ideas in Demis etc.'s paper address the questions I raised in this
> >>> paper
> >>>
> >>> http://ieeexplore.ieee.org/document/6889662/
> >>>
> >>>
> >>> (sorry for the paywall ... use sci-hub.cc ...)
> >>>
> >>> So there is no real plan for how to achieve abstract symbolic
> >>> reasoning as needed for human level general intelligence within a
> >>> purely formal-NN type approach
> >>>
> >>>
> >>> Obviously in opencog we are taking more of a symbolic-neural
> >>> approach so we don't have issues with abstraction
> >>>
> >>> Also if you look at the recent Markram et al paper on algebraic
> >>> topology and mesoscopic brain structure, there is nothing in the
> >>> Hassabis etc. universe that seems to address how such structures
> >>> would be learned or would emerge
> >>>
> >>>
> >>>
> >>> But sure in a big-picture historical sense the progress happening
> >>> these days on "narrow AI verging toward AGI" and on "making complex
> >>> cognitive architectures finally do stuff" is super exciting.  We are
> >>> on the verge of multiple breakthroughs within the next few years.
> >>> Woo hoo !!
> >>>
> >>> -- Ben
> >>>
> >>>
> >>> On Thu, Jul 27, 2017 at 5:55 AM, EdFromNH . <[email protected]>
> >>> wrote:
> >>>
> >>>> About the above linked Hassabis paper, Ben said, "It's sort of a high
> >>>> level inspirational paper... it does lay down pretty clearly what sort
> >>>> of
> >>>> thinking and approach Deep Mind is likely to be taking in the next
> >>>> years
> >>>> ... there are no big surprises here though as this has been Demis's
> >>>> approach, bias and interest all along, right?"
> >>>>
> >>>> From my knowledge of several articles and videos by, or about,
> >>>> Hassabis -- I totally agree.  But I am a little less ho-hum than
> >>>> Ben, perhaps because I'm not as up on the current state of AGI as
> >>>> Ben.
> >>>>
> >>>> Reading Hassabis's paper makes me bullish about how close we are to
> >>>> powerful, if not fully human-level AGI, within 5 years.
> >>>>
> >>>> Why?  Because all of the unsolved challenges Hassabis discusses
> >>>> seem like they could be easily solved if enough engineering and
> >>>> programming talent were thrown at them.  I feel like I could
> >>>> relatively easily -- within a few months -- weave plausible high
> >>>> level architectural descriptions for solving all of these problems,
> >>>> as, presumably, people like Demis and Ben could do even better.
> >>>> (Perhaps that is why Ben is so ho-hum about the paper.)  With the
> >>>> money that's being thrown into AGI, and the much greater ease of
> >>>> doing cognitive architectural experiments made possible with Neural
> >>>> Turing Machines -- which allow programmable, modular plug-and-play
> >>>> with pre-designed and pre-trained neural net modules -- the world
> >>>> is going to get weird fast.
> >>>>
> >>>> Tell me why I am wrong.
> >>>>
> >>>> On Sun, Jul 23, 2017 at 8:29 PM, Ed Pell <[email protected]>
> >>>> wrote:
> >>>>
> >>>>> https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5467749/
> >>>>>
> >>>>>
> >>>>> On 7/23/2017 4:18 PM, Giacomo Spigler wrote:
> >>>>>
> >>>>>>
> >>>>>> An Approximation of the Error Backpropagation
> >>>>>> Algorithm in a Predictive Coding Network
> >>>>>> with Local Hebbian Synaptic Plasticity
> >>>>>>
> >>>>>
> >>>>>
> >>>>>
> >>>>>
> >>>>
> >>>>
> >>>
> >>>
> >>>
> >>> --
> >>> Ben Goertzel, PhD
> >>> http://goertzel.org
> >>>
> >>> "I am God! I am nothing, I'm play, I am freedom, I am life. I am the
> >>> boundary, I am the peak." -- Alexander Scriabin
> >>>
> >>
> >>
> >
> >
> >
> > --
> > Ben Goertzel, PhD
> > http://goertzel.org
> >
> > "I am God! I am nothing, I'm play, I am freedom, I am life. I am the
> > boundary, I am the peak." -- Alexander Scriabin
> >
> >
> >
> >
>
>
>


