Steve,
A quick response for now. I was going to reply to an earlier post of yours, in
which you made the most important point for me:
"The difficulties in proceeding in both neuroscience and AI/AGI is NOT a lack
of technology or clever people to apply it, but is rather a lack of
understanding of the real world and how to effectively interact within it."
I had already had a go at expounding this, and I think I've got a better way
now. (It's actually v. important to philosophically conceptualise it precisely
- and you're not quite managing it any more than I was).
I think it's this:
everyone in AGI is almost exclusively interested in general intelligence as
INFORMATION PROCESSING - as opposed to KNOWLEDGE (about the world).
IOW everyone is mainly interested in the problems of storing and manipulating
information via hardware and software, and what logic/maths/programs etc. to
use - which is, of course, what they know all about, and is essential.
People aren't interested, though, in what is also essential: the problems of
acquiring knowledge about the world. For them knowledge is all "data."
Different kinds and forms of knowledge? "Dude, they're just bandwidth."
To draw an analogy, it's like being interested only in developing a wonderfully
powerful set of cameras, and not in photography. To be a photographer, you have
to know about your subject as well as your machine and its s/ware. You have to
know, say, human beings and how their faces change and express emotions, if you
want to be a portrait photographer - or animals and their behaviour if you want
to photograph them in the wild. You have to know the problems of acquiring
knowledge re particular parts of the world. And the same is true of AGI.
This lack of interest in knowledge is at the basis of the fantasy of a superAGI
taking off. That's an entirely mathematical fantasy derived from thinking
purely about the information processing side of things. "Computers are getting
more and more powerful; as my computer starts to build a body of data, it will
build faster and faster, get recursively better and better... and whoops..
it'll take over the world." On an information processing basis, that seems
reasonable - for computers definitely will keep increasing amazingly in
processing power.
From a knowledge POV, though, it's an absurd fantasy. As soon as you think in
terms of acquiring knowledge and solving problems about any particular area of
the world, you realise that knowledge doesn't simply expand mathematically.
Everywhere you look, you find messy problems and massive areas of ignorance
that can only be solved creatively. The brain - all this neuroscience and we
still don't know the "engram" principle. The body - endless diseases we
haven't solved. Women - what the heck *do* they want? And so on and on. And
unfortunately the solution of these problems - creativity - doesn't run to
mathematical timetables. If only..
And as soon as you think in knowledge as opposed to information terms, you
realise that current AGI is based on an additional absurd fantasy - "the
bookroom fantasy." When you think just in terms of data, well, it seems
reasonable that you can simply mine the texts of the world, esp. via the Net,
and supplement that with instruction from human teachers, and become ever more
superintelligent. You or your agent, says the fantasy, can just sit in a room
with your books and net connection, and perhaps a few visitors, and learn all
about the world.
Apparently, you don't actually have to go out in the world at all - you can
learn all about Kazakhstan without ever having been there, or sex without ever
having had sex, or sports without ever having played them, or diseases without
ever having been in surgeries and hospitals and sickrooms etc. etc.
When you think in terms of knowledge, you quickly realise that to know and
solve problems about the world or any part, you need not just information in
texts, you need EXPERIENCE, OBSERVATION, INVESTIGATION, EXPERIMENT, and
INTERACTION with the subject, and maybe a stiff drink. A computer sitting in a
room, or a billion computers in a billion rooms, are not going to solve the
problems of the world in magnificent isolation. (They'll help an awful lot, but
they won't finally solve the problems).
Just thinking in terms of science as one branch of knowledge, and how science
solves problems, would tell you this. Science without in-the-lab experiment and
in-the-field observation is unthinkable.
The bookroom fantasy is truly absurd if you think about it in knowledge terms,
but AGI-ers just aren't thinking in those terms.
You, Steve, it seems to me, are unusual here because you have had to think very
extensively in terms of knowledge - of a particular subject area, i.e.
health - and so you're acutely and unusually aware of the problems of acquiring
knowledge there rather than just data.
It has to be said that it's v. hard to think about intelligence from the
knowledge side - (although easy to start talking about it for a while) -
because it comes under philosophy as well as psychology, and a branch of
philosophy that doesn't exist. There is a branch of philosophy for every branch
of knowledge - from sci. and tech to arts, history, and management and
business. But there is no overall branch that surveys the whole of knowledge.
So different branches, like philosophy of science, can tell you something about
the problems of acquiring knowledge in particular areas. But there is no
super-branch that can generalise about all the different problems in different
areas.
Anyway, I'll stop there for now...
Mike Tintner, et al,
After failing to get ANY response to what I thought was an important point
(Paradigm Shifting regarding Consciousness) I went back through my AGI inbox to
see what other postings by others weren't getting any responses. Mike Tintner
was way ahead of me in no-response postings.
A quick scan showed that these also tended to address high-level issues that
challenge the contemporary herd mentality. In short, most people on this list
appear to be interested only in HOW to straight-line program an AGI (with the
implicit assumption that we operate anything at all like we appear to operate),
but not in WHAT to program, and most especially not in any apparently
insurmountable barriers to successful open-ended capabilities, where attention
would seem to be crucial to ultimate success.
Anyone who has been in high-tech for a few years KNOWS that success can come
only after you fully understand what you must overcome to succeed. Hence, based
on my own past personal experiences and present observations here, present
efforts here would seem to be doomed to fail - for personal if not for
technological reasons.
Normally I would simply dismiss this as rookie error, but I know that at
least some of the people on this list have been around as long as I have been,
and hence they certainly should know better since they have doubtless seen many
other exuberant rookies fall into similar swamps of programming complex systems
without adequate analysis.
Hey you guys with some gray hair and/or bald spots, WHAT THE HECK ARE YOU
THINKING?
Steve Richfield
-------------------------------------------
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/