Interesting.  Is there any good knowledge about how episodic memory is
organized in bird brains?

We need to beef up OpenCog's episodic memory shortly, so it's a topic
on my mind...

OpenBirdBrain? ;D

-- ben


On Sat, May 23, 2015 at 8:06 AM, ARAKAWA Naoya <[email protected]> wrote:
> Hello Benjamin & al.
>
> I'm reacting to the word 'birds' (semi-automatically :-) as I'm
> a fan of bird (or corvidae) intelligence.
> E.g.,
> http://www.democraticunderground.com/10024455481
> http://www.researchgate.net/profile/Sabine_Tebbich/publication/241274123_Social_manipulation_causes_cooperation_in_keas/links/00b7d528aeb185cda5000000.pdf
>
> Somehow their intelligence seems specialized in the time domain
> (planning, episodic memory, etc.).
>
> As for anatomical comparison, I found this article interesting:
> http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2507884/
>
> Birds seem to have the pallium instead of the neocortex.
>
> I wonder if the Prefrontal Cortex Basal Ganglia Working Memory hypothesis
> http://en.wikipedia.org/wiki/Prefrontal_Cortex_Basal_Ganglia_Working_Memory ,
> which may explain temporal information processing of mammals
> to a certain extent, also applies to birds...
>
> -- Naoya Arakawa
>
> 2015-05-23 3:25, Benjamin Kapp <[email protected]> wrote:
>
>> I was thinking about birds today. It seems as though they are under high 
>> selection pressure to have lightweight brains, and as such only the most 
>> essential parts of the brain would be retained through evolution. I wonder 
>> if anatomical comparison between bird brains and other kinds of brains 
>> could shed light on those aspects of the brain which are (and perhaps just 
>> as importantly are not) absolutely critical for an AGI to have. Perhaps 
>> this could help us prioritize which aspects of the mind we focus our 
>> efforts on creating first?  Thoughts?
>>
>>
>> On Fri, May 22, 2015 at 1:31 PM, Piaget Modeler <[email protected]> 
>> wrote:
>> If Watson were front-ended with a coherent chatbot, then it would be the 
>> equivalent of SAL in the movie "2010".
>> Right now most chatbots are incoherent in that they don't maintain an 
>> adequate model of the user(s) they interact with, or an adequate 
>> conversation history. But if a chatbot were able to retrieve information 
>> using a Watson API, it would be formidable.
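>> A minimal sketch of that design in Python, assuming nothing about the 
>> real Watson API: `Chatbot` and `toy_backend` below are hypothetical 
>> stand-ins for whatever retrieval service the chatbot is front-ending.

```python
# Sketch of a chatbot that keeps per-user state and a conversation history,
# delegating factual lookups to a pluggable retrieval backend.  The real
# Watson API is not modeled here; `retrieve` is any callable mapping a
# question string to an answer string.

class Chatbot:
    def __init__(self, retrieve):
        self.retrieve = retrieve    # e.g. a wrapper around a Watson-style QA API
        self.history = []           # full conversation history, (speaker, text)
        self.user_models = {}       # per-user facts gathered so far

    def handle(self, user, utterance):
        self.history.append((user, utterance))
        model = self.user_models.setdefault(user, {"turns": 0})
        model["turns"] += 1
        answer = self.retrieve(utterance)   # retrieval stays swappable
        self.history.append(("bot", answer))
        return answer

# Stand-in backend for demonstration only.
def toy_backend(question):
    return {"capital of France?": "Paris"}.get(question, "I don't know.")

bot = Chatbot(toy_backend)
print(bot.handle("alice", "capital of France?"))  # Paris
```

>> The point of the sketch is that the coherence lives in the `history` and 
>> `user_models` state, while the retrieval backend remains interchangeable.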
>>
>> ~PM
>>
>> Date: Fri, 22 May 2015 12:57:59 -0400
>> Subject: Re: [agi] H-AGI towards S-AGI
>> From: [email protected]
>> To: [email protected]
>>
>>
>> For AGI, I wonder how general an AI has to be in order to be considered 
>> AGI.  If an AI system can only play chess, we would say that is a bit too 
>> narrow to be considered AGI.  If it can play a bunch of Atari games, then 
>> certainly this is far more general than being designed to play a single 
>> game.  Would this be AGI?  I don't think you can call something AGI based 
>> solely on its results (the number of games it can play), because I could 
>> wire together a bunch of narrow AIs, each specifically designed for one of 
>> the games.  For example, I could have one for playing chess, a different 
>> one for playing Breakout, a third for Space Invaders, and so on and so 
>> forth.  Then I could have a system that detects which game we are 
>> presented with and selects the appropriate narrow AI to play it.  The 
>> system as a whole would appear to be a general AI based on its results, 
>> but of course its essential nature would be that of a narrow AI.  As such 
>> you can't classify a system as narrow AI or AGI solely based on results; 
>> the implementation details are needed to make the classification.
>>
>> Does this make sense?
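>> As a toy illustration of that composite system (the game names and solver 
>> stubs here are purely hypothetical, not real game-playing programs):

```python
# Sketch of the "bundle of narrow AIs" system described above: each solver
# only knows one game, and a detector picks which solver to dispatch to.

def chess_solver(state):
    return "chess-move"        # stands in for a dedicated chess engine

def breakout_solver(state):
    return "breakout-action"   # stands in for a dedicated Breakout player

SOLVERS = {
    "chess": chess_solver,
    "breakout": breakout_solver,
}

def detect_game(state):
    # Trivial detector: the game identifies itself in the state.
    return state["game"]

def act(state):
    """Looks general from the outside, but is just a lookup over narrow AIs."""
    return SOLVERS[detect_game(state)](state)

print(act({"game": "chess"}))     # chess-move
print(act({"game": "breakout"}))  # breakout-action
```

>> Swapping `detect_game` for a learned classifier wouldn't change the 
>> essential nature: each solver still only knows its one game.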
>>
>> On Fri, May 22, 2015 at 12:17 PM, Logan Streondj <[email protected]> wrote:
>> Watson is as much or more of an AGI as OpenCog, applying the same core to 
>> different domains and getting good results, for example Jeopardy, cooking 
>> and medicine.
>>
>> Dorian Aur <[email protected]> wrote:
>> Ben, very useful survey, excellent key points:
>> 1. Training on text-based models does not generate AGI (IBM's Watson).
>> 2. The essential part of the system that was creating AGI would be my 
>> brain, not Google.
>> Conclusion: wiring together a bunch of non-AGI systems may never generate 
>> AGI.
>>
>> Mike: "I don't like the way that people create things that are intentionally 
>> difficult and known only to the in-group."
>> You are right; we should try to avoid anything that is too 
>> specific/specialized (e.g. biological engineering of pluripotent cells and 
>> related topics), as it makes little sense in other fields.
>>
>> 1. The paper should present our general vision, in simple sentences easy 
>> to understand in computer science or engineering.
>> 2. The basic idea is simple: working on a "reduced model" of computation 
>> (digital, i.e. Turing) may never lead to AGI.
>> In addition to algorithms that can run on digital computers, one can use 
>> biological building blocks to build a "full model of computation". One can 
>> shape and "program" a biological structure and "connect" it with digital 
>> computers to develop human-like intelligence. It will be the new tool for 
>> discovery, far more powerful than any digital system alone.
>> 3. At least two phases are needed to construct "a mind" using biological 
>> building blocks; see the two-step implementation (A & B), which needs to 
>> be briefly mentioned. Details regarding other sub-steps in the biological 
>> engineering implementation should be the subject of a more specialized 
>> paper.
>>
>> At this point in time everyone can understand that we need to solve a 
>> technological problem. Many academic labs are highly specialized and can 
>> be our collaborators. They may have the knowledge; however, they do not 
>> have enough resources, and their main goal is not to pursue bigger 
>> technological projects (see similar projects: the Manhattan Project 
>> (government), von Braun's German rocket technology (government), Jobs's 
>> computer and iPhone technology (private), Venter's technology (private)).
>>
>>
>> Why might we need political lobbying? Because people have been strongly 
>> misled into believing that our brain can be thoroughly mapped and fully 
>> simulated on digital computers.
>>
>>
>> Note: The two-step implementation is just one way to approach the 
>> development of H-AGI.
>>
>>
>> On Wed, May 20, 2015 at 7:48 PM, Mark Seveland <[email protected]> wrote:
>> Just a suggestion. Google+ Meetups are a good way for everyone to meet each 
>> other, and in live voice and/or video chat discuss topics.
>>
>> On Wed, May 20, 2015 at 7:33 PM, Colin Hales <[email protected]> wrote:
>> Hi Dorian et al.,
>> I am having trouble getting time to properly participate here because of 
>> family stuff and my other commitments. I'm checking in to acknowledge how 
>> encouraging it is to see the activity is ongoing, and the birth of a 
>> possible paper that might underpin whatever this IGI initiative turns into.
>>
>> I'd like to focus my efforts on the paper primarily as a way to discover IGI 
>> directions. So if you could bear with a patchy contribution from me for a 
>> little while it would be greatly appreciated. I have a particularly 
>> difficult week ahead of me. There's no huge crashing need for speed here, so 
>> I'm hoping slow and steady might be OK.
>>
>> Whatever form this website takes: fantastic. It may only ever be a 'line in 
>> the sand'. But it's a significant one in the greater scheme of AGI futures 
>> and really good to see after being sidelined for so long. Yay!
>>
>> cheers
>> Colin Hales
>>
>>
>>
>> On Thu, May 21, 2015 at 10:07 AM, Mike Archbold <[email protected]> wrote:
>> Why don't you just call it "AI" and if somebody asks THEN you can
>> clarify it?  I mean, why be arcane about it?  One of the reasons I got
>> into AI is because I don't like the way that people create things that
>> are intentionally difficult and known only to the in-group.  Now here
>> you go with a boatload of new acronyms, known only to the select tiny
>> group that knows the secret meaning behind it.  So, I guess I am
>> getting into Alan Grimes vent space with this.
>>
>> On 5/20/15, Dorian Aur <[email protected]> wrote:
>> > *Colin et al,*
>> >
>> >
>> > A possible plan for H-AGI towards S-AGI paper
>> >
>> >
>> >
>> > *Hybrid artificial general intelligent systems towards S-AGI*
>> >
>> > *Introduction* – a short presentation of AI systems and general goal to
>> > build human general intelligence
>> >
>> > Why H-AGI?
>> >
>> >    - Present different forms of computation (particular forms of
>> >    computation: analog, and digital, i.e. Turing machines)
>> >    - Computations in the brain (examples of computations that are hard
>> >    to replicate on digital computers)
>> >    - H-AGI can include all forms of computation, algorithmic /
>> >    non-algorithmic, analog, digital, *quantum and classical*, since a
>> >    biological structure is incorporated in the system
>> >
>> > *Steps to develop  H-AGI*
>> >
>> >    - A.  Build the structure: using either natural stem cells or
>> >    induced pluripotent cells, grow a three-dimensional vascularized
>> >    structure; test 3D printing possibilities
>> >    - Shape the structure and control the spatial organization of cells
>> >    - Detect the need for neurotrophic factors, nutrients and oxygen...
>> >    use nanosensor devices, carbon nanotubes...
>> >    - Regulate and control the entire phenomenon using a computer
>> >    interface, with the ability to combine analog/digital and biophysical
>> >    computations
>> >
>> > B. Train the hybrid system
>> >
>> >    - Enhance bidirectional communication between the biological
>> >    structure and computers
>> >    - Create and use a virtual world to provide accelerated training;
>> >    use machine learning, DL, digital/algorithmic AI, or AGI if something
>> >    is developed on digital systems
>> >    - The interactive training system should also shape the evolution of
>> >    the biological structure; natural language and visual information can
>> >    be progressively included
>> >
>> >  See details in "Can we build a conscious machine?",
>> > http://arxiv.org/abs/1411.5224
>> >
>> >
>> > *Goals of H-AGI*
>> >
>> > H-AGI can be seen as a transitional step required to understand which
>> > parts can be fully replicated in a synthetic form to build a more
>> > powerful system:
>> >
>> > ·        Natural language processing, robotics...
>> >
>> > ·        Space exploration, colonization... etc.
>> >
>> > ·        Techniques for therapy (brain diseases, cancer...), since we
>> > will learn how to shape biological structure
>> >
>> >
>> >
>> >
>> > Dorian
>> >
>> >
>> > PS: This brief presentation may also provide an idea about the possible
>> > collaborations, list 1 - list 3.
>> >
>> >
>> >
>> > On Tue, May 19, 2015 at 11:20 PM, Mike Archbold <[email protected]>
>> > wrote:
>> >
>> >> > A summary... we are looking at the idea that there are 2 fundamental
>> >> > kinds of putative AGI, (1) & (3), and their hybrid (2) that forms a
>> >> > third approach, as follows:
>> >> >
>> >> > (1) C-AGI      computer substrate only. Neuromorphic equivalents of it.
>> >> > (2) H-AGI      hybrid of (1) and (3). The inorganic version is a new
>> >> > kind of neuromorphic chip. The organic version has... erm... organics
>> >> > in it.
>> >> > (3) S-AGI      synthetic AGI. Organic or inorganic. Natural brain
>> >> > physics only. No computer.
>> >> >
>> >> > (aside: S-AGI just came out of my fingers. I hope this is OK, Dorian!)
>> >> >
>> >>
>> >> This is a cool idea, somewhat mind boggling in its possibilities.
>> >> Cool though!
>> >>
>> >> Personally I would favor something more like "EM-AGI" for
>> >> electromagnetic AGI.  I mean, I don't understand the details of the
>> >> approach, only the generalities.  But, "S" seems a bit vague/ambiguous
>> >> while EM hits it more or less on target IMHO.
>> >>
>> >> Mike A
>> >>
>> >>
>> >> > Think this way: What we have now is 100% computer. S-AGI is 100%
>> >> > natural physics (organic or inorganic). H-AGI is set somewhere in
>> >> > between. It's the level of computer computation/natural computation
>> >> > that is at issue. All are computation.
>> >> >
>> >> > The human brain is a natural version of (3) with a neuronal/astrocyte
>> >> > substrate. (3) has no computer whatever in it. It retains all the
>> >> > natural physics (whatever that is). H-AGI targets the inclusion of the
>> >> > essential natural brain physics in the substrate of (2), and
>> >> > incorporates (1) computer substrates and software to an extent to be
>> >> > determined. In my case an H-AGI would be inorganic. Others see
>> >> > differently.
>> >> >
>> >> > Where might you have a stake in this?
>> >> >
>> >> > The history of AGI can be summed up as an experiment that seeks to
>> >> > see if the role of (1) C-AGI as a brain is fundamentally
>> >> > indistinguishable from (3) S-AGI under all conditions. That is the
>> >> > hypothesis: the 65-year-old bet that has attracted 100% of the
>> >> > investment to date. H-AGI does not make that presupposition and seeks
>> >> > to contrast (1) and (3) in revealing ways that then allow us to speak
>> >> > authoritatively about the (1)/(3) relationship in AGI potential. Only
>> >> > then will we really understand the difference between (1) and (3). So
>> >> > far that difference is entirely an intuition. A good one. But only
>> >> > intuition. It's time for that intuition to be turned into science.
>> >> > Experiments in (1) have ruled to date. Now we seek to do some (2)...
>> >> > I.e. we have 65 years of 'control' subject. H-AGI builds the first
>> >> > 'test' subject.
>> >> >
>> >> > How about this?
>> >> >
>> >> > What would be super cool is if this mighty AGI beast you intend
>> >> > making could be turned into the brain of a robot. Then we could
>> >> > contrast what it does with what an IGI candidate brain does in an
>> >> > identical robot in the same test. That kind of testing vision (as far
>> >> > off as it may seem) is a potential way your work and the IGI might
>> >> > interface. Which candidate robot best encounters radical novelty,
>> >> > without any human intervention/involvement whatever? ... is a really
>> >> > good question. To do this test you'd not need to reveal anything
>> >> > about its workings. Observed robot behaviour is decisive.
>> >> >
>> >> > It seems to me that whatever venture you plan, it might be wise to
>> >> > keep an eye on any (2)/(3) approaches, IGI or not, because it is
>> >> > directly informing expectations of outcomes in (1). We are currently
>> >> > asking the question "*If H-AGI were to be championed into existence,
>> >> > what would the first vehicle for that look like?*" If the enthusiasm
>> >> > maintains, it will be sketched into a web page and we'll see what it
>> >> > tells us and what to do next. It may halt. It may go. I don't know.
>> >> > Worth a shot? You bet.
>> >> >
>> >> > With your (1) C-AGI glasses firmly strapped to your head, your
>> >> > wisdom at all stages in this would be well received, whatever the
>> >> > messages. So if you have time to keep an eye on happenings, I for one
>> >> > would appreciate it.
>> >> >
>> >> > regards
>> >> >
>> >> > Colin Hales
>> >> >
>> >> >
>> >> >
>> >> > On Wed, May 20, 2015 at 6:58 AM, Peter Voss <[email protected]> wrote:
>> >> >
>> >> >> Thanks for asking. Haven’t followed the IGI discussions.
>> >> >>
>> >> >>
>> >> >>
>> >> >> Is this about non-computer based approaches to AGI?  If so, I don’t
>> >> >> think I have anything positive to contribute.
>> >> >>
>> >> >>
>> >> >>
>> >> >> More generally, non-profit orgs need strong focus and champions.  And
>> >> >> specific goals.
>> >> >>
>> >> >>
>> >> >>
>> >> >> *From:* Benjamin Kapp [mailto:[email protected]]
>> >> >> *Sent:* Tuesday, May 19, 2015 12:23 PM
>> >> >> *To:* AGI
>> >> >> *Subject:* Re: [agi] Institute of General Intelligence (IGI)
>> >> >>
>> >> >>
>> >> >>
>> >> >> Mr. Voss,
>> >> >>
>> >> >> Given your understanding of the AGI community, do you believe an
>> >> >> IGI would be redundant?  Would your organization be open to
>> >> >> collaborating with the IGI?  Do you have any advice for how we could
>> >> >> be successful in starting up this organization?  Perhaps you would
>> >> >> be open to being a member of the board?
>> >> >>
>> >> >>
>> >> >>
>> >> >> On Tue, May 19, 2015 at 2:03 PM, Peter Voss <[email protected]> wrote:
>> >> >>
>> >> >> Not something that can be adequately covered in a few words, but….
>> >> >> “We’re building a fully integrated, top-down & bottom-up, real-time,
>> >> >> adaptive knowledge (& skill) representation, learning and reasoning
>> >> >> engine.  We’re using a combination of graph representation and NN
>> >> >> techniques overlaid with fuzzy, adaptive rule systems” – ha!
>> >> >>
>> >> >> Here again are links for some clues:
>> >> >>
>> >> >> http://www.kurzweilai.net/essentials-of-general-intelligence-the-direct-path-to-agi
>> >> >>
>> >> >> http://www.realagi.com/index.html
>> >> >>
>> >> >> https://www.facebook.com/groups/RealAGI/
>> >> >>
>> >> >>
>> >> >> *From:* Benjamin Kapp [mailto:[email protected]]
>> >> >>
>> >> >> Mr. Voss,
>> >> >>
>> >> >> Since you are the founder, I'm certain you know what agi-3's
>> >> >> methodology is.  In a few words (maybe more?) could you share with
>> >> >> us what that is?
>> >> >>
>> >> >> On Tue, May 19, 2015 at 1:24 PM, Peter Voss <[email protected]> wrote:
>> >> >>
>> >> >> *>* http://www.agi-3.com  They just glue together anything and
>> >> >> everything that works.
>> >> >>
>> >> >> Actually, no.  We have a very specific theory of AGI and architecture.
>> >> >>
>> >> >> *Peter Voss*
>> >> >>
>> >> >> *Founder, AGI Innovations Inc.*
>
>
>
> -------------------------------------------
> AGI
> Archives: https://www.listbox.com/member/archive/303/=now
> RSS Feed: https://www.listbox.com/member/archive/rss/303/212726-deec6279
> Modify Your Subscription: https://www.listbox.com/member/?&;
> Powered by Listbox: http://www.listbox.com



-- 
Ben Goertzel, PhD
http://goertzel.org

"The reasonable man adapts himself to the world: the unreasonable one
persists in trying to adapt the world to himself. Therefore all
progress depends on the unreasonable man." -- George Bernard Shaw


