Ben, could you please send me a free author's copy of the paper at
http://ieeexplore.ieee.org/document/6889662/ .  Ed Porter

On Thu, Jul 27, 2017 at 12:44 AM, Nanograte Knowledge Technologies <
[email protected]> wrote:

> Ben
>
> Conceptually, I like where you are going with this. Your team's work with
> human-language-based robotic communication is astounding.
>
> I think your idea of a universal attractor has merit. I suppose, in the
> end, wherever matter exists, it generates an electromagnetic field. In a
> genetic sense, the flux of such a field would act as an open- and
> closed-loop communications network. In this sense, the relevant data,
> information, and a relative perspective of knowledge would all be packaged
> within relative genomic code. In other words, we are imagining a relative
> system of relative systems from which reality would functionally emerge.
>
> Given my systems methodology, what remains to be done in order to
> visualize a model of human-like machine reasoning is to link your
> "attractor" value to the information, from which it should become
> possible to systematically emerge any informational concept at any level
> of abstraction within any dimension of reasoning. The genetics of the
> resultant information would in theory make forward and backward chaining
> possible, and much more (a toy sketch of both follows below).
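>
> A minimal Python sketch, under my own assumptions, of what forward and
> backward chaining over such a rule base could look like. The propositions
> and the rule encoding here are hypothetical illustrations, not Rob's
> attractor model:
>
>     # Each head maps to a body of propositions that must all hold;
>     # a fact is simply a rule with an empty body.
>     rules = {
>         "socrates_is_mortal": ["socrates_is_a_man"],
>         "socrates_is_a_man": [],
>     }
>
>     def backward_chain(goal):
>         # Prove a goal by recursively proving the body of its rule.
>         return goal in rules and all(backward_chain(g) for g in rules[goal])
>
>     def forward_chain():
>         # Fire rules whose bodies are satisfied until nothing new derives.
>         known, changed = set(), True
>         while changed:
>             changed = False
>             for head, body in rules.items():
>                 if head not in known and all(b in known for b in body):
>                     known.add(head)
>                     changed = True
>         return known
>
>     print(backward_chain("socrates_is_mortal"))  # True
>     print(forward_chain())                       # derives both propositions
>
> The toy deliberately omits variables and cycle detection; a real reasoner
> would need both.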
>
> The completeness schema of functional attractor values seems to be a
> critical machine-reasoning component to pursue. It would probably also
> assume the role of a priority systems constraint. I've been doing much
> thinking about this as a next step for my own research.
>
> I think you've got this. Keep up the great work.
>
> Rob
>
> ------------------------------
> *From:* Ben Goertzel <[email protected]>
> *Sent:* 27 July 2017 04:57 AM
> *To:* AGI
> *Subject:* Re: [agi] Neuroscience-Inspired AI
>
>
> Well I would say that none of the work done at Deep Mind, and none of the
> ideas in Demis et al.'s paper, address the questions I raised in this paper:
>
> http://ieeexplore.ieee.org/document/6889662/
> "How might the brain represent complex symbolic knowledge?" (IEEE Xplore)
>
>
> (sorry for the paywall ... use sci-hub.cc ...)
>
> So there is no real plan for how to achieve the abstract symbolic
> reasoning needed for human-level general intelligence within a purely
> formal-NN type approach.
>
>
> Obviously in OpenCog we are taking more of a symbolic-neural approach, so
> we don't have issues with abstraction.
>
> Also, if you look at the recent Markram et al. paper on algebraic topology
> and mesoscopic brain structure, there is nothing in the Hassabis et al.
> universe that seems to address how such structures would be learned or
> would emerge.
>
>
>
> But sure, in a big-picture historical sense, the progress happening these
> days on "narrow AI verging toward AGI" and on "making complex cognitive
> architectures finally do stuff" is super exciting. We are on the verge of
> multiple breakthroughs within the next few years. Woo hoo!!
>
> -Ben
>
>
> On Thu, Jul 27, 2017 at 5:55 AM, EdFromNH . <[email protected]> wrote:
>
>> About the above linked Hassabis paper, Ben said, "It's sort of a high
>> level inspirational paper... it does lay down pretty clearly what sort of
>> thinking and approach Deep Mind is likely to be taking in the next years
>> ... there are no big surprises here though as this has been Demis's
>> approach, bias and interest all along, right?"
>>
>> From my knowledge of several articles and videos by or about Hassabis,
>> I totally agree. But I am a little less ho-hum than Ben, perhaps because
>> I'm not as up on the current state of AGI as Ben is.
>>
>> Reading Hassabis's paper makes me bullish that we are close to powerful,
>> if not fully human-level, AGI within 5 years.
>>
>> Why? Because all of the unsolved challenges Hassabis discusses seem like
>> they could be solved if enough engineering and programming talent were
>> thrown at them. I feel like I could relatively easily -- within a few
>> months -- weave plausible high-level architectural descriptions for
>> solving all of these problems, and, presumably, people like Demis and Ben
>> could do even better. (Perhaps that is why Ben is so ho-hum about the
>> paper.) With the money that's being thrown into AGI, and the much greater
>> ease of doing cognitive-architecture experiments made possible by Neural
>> Turing Machines -- which allow programmable, modular plug-and-play with
>> pre-designed and pre-trained neural net modules -- the world is going to
>> get weird fast.
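>>
>> Since NTMs come up here: the core mechanism behind that plug-and-play is
>> differentiable content-based memory addressing. A minimal numpy sketch
>> under my own assumptions (the function and variable names are my
>> hypothetical illustration, not DeepMind's code):
>>
>>     import numpy as np
>>
>>     def content_read(memory, key, beta=5.0):
>>         # Cosine similarity of the key against every memory row,
>>         # sharpened by beta and softmaxed into attention weights.
>>         # The 1e-8 term guards against division by zero.
>>         sims = memory @ key / (
>>             np.linalg.norm(memory, axis=1) * np.linalg.norm(key) + 1e-8)
>>         w = np.exp(beta * (sims - sims.max()))
>>         w /= w.sum()
>>         return w @ memory  # soft, differentiable blended read
>>
>>     M = np.random.randn(8, 4)   # 8 memory slots, 4-dim contents
>>     r = content_read(M, M[3])   # querying with row 3 reads back ~row 3
>>
>> Because the read is a soft weighted blend rather than a hard lookup,
>> gradients flow through the addressing itself, which is what would let
>> pre-trained modules be wired together and fine-tuned end to end.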
>>
>> Tell me why I am wrong.
>>
>> On Sun, Jul 23, 2017 at 8:29 PM, Ed Pell <[email protected]> wrote:
>>
>>> https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5467749/
>>>
>>>
>>> On 7/23/2017 4:18 PM, Giacomo Spigler wrote:
>>>
>>>>
>>>> "An Approximation of the Error Backpropagation Algorithm in a
>>>> Predictive Coding Network with Local Hebbian Synaptic Plasticity"
>>>>
>>>
>>>
>>>
>>>
>>
>>
>
>
>
> --
> Ben Goertzel, PhD
> http://goertzel.org
>
> "I am God! I am nothing, I'm play, I am freedom, I am life. I am the
> boundary, I am the peak." -- Alexander Scriabin
>



