I started reading a couple of the links to Integrated Information
Theory that Logan supplied, and I really do not see how it can be
considered relevant to AI or AGI. To me it looks like a case study of
how over-abstracting philosophical methodologies, in an attempt to
make the philosophy more formal and more like a technical problem, can
go wrong. We do not know how consciousness in all of its forms arises.
We can't use contemporary science to explain the causes of
consciousness, as Chalmers described in his Hard Problem. To say that
it simply exists as an axiom is fine, but Logan (to the best of my
understanding) started this thread by trying to apply that axiom to
minimal computer algorithms or circuits. Logan's initial question was
interesting to me when I interpreted 'consciousness' in a way that
could reasonably be considered for an AI program. That is, are there
minimal sub-programs (abstractions of computer programs) which might,
for example, explain self-awareness? Going from there, are there
minimal abstractions of programs which might be capable of more
efficient integration and differentiation of knowledge, especially
concerning self-awareness? We can and should ask about self-awareness
of our own thinking, how it might be used to further understanding,
and how this kind of knowledge might be used to develop better AI and
AGI programs.

My view is that GOFAI should have worked. The questions, then, are why
didn't it and how might it? We should see glimmers of AGI capable of
self-awareness in at least the minimal sense: useful insight about
what the program itself is doing, and the ability to discover reasons
why it responded in a way that was not insightful. I believe this kind
of artificial self-awareness should be feasible for a computer
program. I also think this is a minimal form of consciousness that
could be relevant to our discussions. I haven't seen a glimmer of this
kind of conscious self-awareness in AI. So is there something about
minimal self-awareness for computer programs that could be easily
tested and used to start a more robust form of AI? Could some kind of
computer methodology be developed that could explain artificial
self-awareness and which could be used to simplify the problem of
creating an AI program?
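To make the "easily tested" part concrete, here is a minimal sketch of
what I mean by a program with a trace of its own reasoning. Everything
in it (the class name, the rules, the trace format) is my own toy
invention, not an existing system: a rule-based responder that records
which internal rule produced each answer, so it can later report the
reason behind a response, including the case where it had no good
basis for one.

```python
# Toy sketch of "minimal self-awareness": a classifier that keeps a
# trace of which rule produced each answer, and can introspect that
# trace to explain its last response. All names here are hypothetical.

RULES = [
    # (label, predicate, human-readable reason)
    ("even", lambda n: n % 2 == 0, "divisible by 2"),
    ("positive", lambda n: n > 0, "greater than zero"),
]

class SelfMonitoringClassifier:
    def __init__(self, rules):
        self.rules = rules
        self.trace = []  # record of (input, rule label, reason)

    def classify(self, n):
        for label, predicate, reason in self.rules:
            if predicate(n):
                self.trace.append((n, label, reason))
                return label
        # No rule fired: record that, too, so the failure is inspectable.
        self.trace.append((n, None, "no rule matched"))
        return "unknown"

    def explain_last(self):
        """Introspect the trace: report why the last answer was given."""
        n, label, reason = self.trace[-1]
        if label is None:
            return f"For input {n}, no rule matched; I had no basis to answer."
        return f"For input {n}, I answered '{label}' because it is {reason}."

clf = SelfMonitoringClassifier(RULES)
print(clf.classify(4))       # a rule fires
print(clf.explain_last())
print(clf.classify(-3))      # no rule fires
print(clf.explain_last())
```

This is obviously nowhere near self-awareness in any interesting
sense, but it is testable: one can check that the program's
explanation of its own behavior matches what it actually did, which is
the minimal property I have in mind.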
Jim Bromer


-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/21088071-f452e424
