I understand that AGI specifically pertains to machine intelligence replicating 
human functionality. Further, I understand how XAI might reach well beyond AGI, 
being concerned with tracing and predicting highly abstract decision making. 
Last, I understand why the preoccupation with studying the human brain, as the 
most suitable model to inform AGI, seemingly makes sense.

What I do not understand is the assumption that the human brain is the only 
feasible model of intelligence to inform future AI. This assumption is worth 
testing. For example, consider Toxoplasma gondii and the latest findings on its 
probable impact on the human brain. Further, consider the effects of 
electromagnetic waves and microwaves on brain functioning. Last, consider a 
shock to the brain.

Any of these events may seriously degrade the reliability of the brain in 
serving the survival of its host. In an evolutionary sense, such an event would 
to some degree short-circuit the survivability of the affected host and, after 
the fact, of all its direct descendants. We may conclude that any one of these 
events may negatively affect general intelligence.

Why would science and commerce trust an instrument of intelligence that has 
critical, inherent flaws and vulnerabilities? Would we really want machine 
intelligence to be equally fallible?

Suppose, then, that the original assumption is incorrect. What are the other 
options for naturally selected models of intelligence? I am excluding nurture 
from this argument because, as a standard operating model of intelligence, I 
think it too is seriously flawed.


Your thoughts and comments would be appreciated.

Robert Benjamin

________________________________
From: Jim Bromer <jimbro...@gmail.com>
Sent: 20 February 2017 03:27 PM
To: AGI
Subject: Re: [agi] IIT: Conscious Programming Structures

Now that I think about it, I have seen glimmers of this kind of
self-awareness in Watson, but since Watson was not able to follow up
and learn something new from these glimmers, I concluded that it was
probably a bot-like algorithm that someone had pasted onto Watson.
Jim Bromer


On Mon, Feb 20, 2017 at 6:51 AM, Jim Bromer <jimbro...@gmail.com> wrote:
> I started reading a couple of the links to Integrated Information
> Theory that Logan supplied, and I really do not see how it can be
> considered relevant to AI or AGI. To me it looks like a case study of
> how an attempt to make philosophy more formal and more like a
> technical problem, by over-abstracting philosophical methodologies,
> can go wrong. We do not know how consciousness in all of its forms
> arises. We can't use contemporary science to explain the causes of
> consciousness, as Chalmers described in his Hard Problem. To say that
> it simply exists as an axiom is fine, but Logan (to the best of my
> understanding) started this thread by trying to apply that axiom to
> minimal computer algorithmic methods or circuits. Logan's initial
> question was interesting to me when I interpreted 'consciousness' in
> a way that could reasonably be considered for an AI program. That is,
> are there minimal sub-programs (abstractions of computer programs)
> which, for example, might explain self-awareness? Going from there,
> are there minimal abstractions of programs which might be capable of
> more efficient integration and differentiation of knowledge,
> especially concerning self-awareness? We might, and should, ask about
> self-awareness of our own thinking, how it might be used to further
> understanding, and how this kind of knowledge might be used to
> develop better AI and AGI programs.
>
> My view is that GOFAI should have worked. The questions, then, are
> why it didn't and how it might. We should see glimmers of AGI capable
> of self-awareness in at least the minimal sense of useful insight
> into what the program is itself doing, and of discovering reasons why
> it responded in a way that was not insightful. I say this kind of
> artificial self-awareness should be feasible for a computer program.
> I also thought that this is a minimal form of consciousness that
> could be relevant to our discussions. I haven't seen a glimmer of
> this kind of conscious self-awareness in AI. So is there something
> about minimal self-awareness for computer programs that could be
> easily tested and used to start a more robust form of AI? Could some
> kind of computer methodology be developed that could explain
> artificial self-awareness and which could be used to simplify the
> problem of creating an AI program?
> Jim Bromer

