Vladimir and Mike,
For humans, much of our experience is grounded on sensory information, and
thus much of our understanding is based on experiences and analogies
derived largely from the physical world. So, Mike, you are right that for
us humans, much of our thinking is based on recasting of
RE: [agi] Do the inference rules.. P.S. Edward,
Thank you for putting your POV very clearly - and the hope, perhaps, of most
AGI-ers still.
I think you are dead wrong - and it's the most expensive professional mistake
of any AGI'er's life. I would suggest thinking a great deal further about
On 10/10/2007, Richard Loosemore [EMAIL PROTECTED] wrote:
Am I the only one, or does anyone else agree that politics/political
theorising is not appropriate on the AGI list?
Agreed. There are many other forums where political ideology can be debated.
-
This list is sponsored by AGIRI:
I also agree except ... I think political and economic theories can inform AGI
design, particularly in areas of AGI decision making and
friendliness/roboethics. I wasn't familiar with the theory of Comparative
Advantage until Josh and Eric brought it up. (Josh discusses in conjunction
with
I think that building a human-like reasoning system without /visual/
perception is theoretically possible, but not feasible in practice. But
how is it human like without vision? Communication problems will
arise. Concepts cannot be grounded without vision.
It is impossible to completely
Yes, I think that too.
On the practical side, I think that investing in AGI requires
significant tax cuts, and we should elect a candidate that would do that
(Ron Paul). I think that the government has to have more respect for
potential weapons (like AGI), so we should elect a candidate who is
Dear indefinite article,
Agreed, a human-like reasoning system -- that is one that has
associations for concepts similar to a human -- requires human-like
grounding. I have said exactly that for years.
Edward W. Porter
Porter Associates
24 String Bridge S12
Exeter, NH 03833
(617) 494-1722
Fax
I agree, though there may be some room for discussing AGI dealing with
politics as a complex system: how an AGI would interact politically with
groups of people, and AGI embedded in governmental computer systems.
And AGI running for office? Gosh. Let's kill it before it happens :)
John
Concepts cannot be grounded without vision.
So . . . . explain how people who are blind from birth are functionally
intelligent.
It is impossible to completely understand natural language without
vision.
So . . . . you believe that blind-from-birth people don't completely
understand
I agree . . . . there are far too many people spouting off without a clue
already, without allowing them to spout off off-topic as well . . . .
- Original Message -
From: Richard Loosemore [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Thursday, October 11, 2007 4:44 PM
Subject: [agi] Re:
Mark Waser wrote:
Concepts cannot be grounded without vision.
So . . . . explain how people who are blind from birth are
functionally intelligent.
It is impossible to completely understand natural language without
vision.
So . . . . you believe that blind-from-birth people don't
I'll buy internal spatio-perception (i.e. a three-d world model) but not the
visual/vision part (which I believe is totally unnecessary).
Why is *vision* necessary for grounding or to completely understand
natural language?
- Original Message -
From: a [EMAIL PROTECTED]
To:
I also agree. Hell, if nothing else move it to the Singularity listserv. But
I'd rather see the political discussed only in the narrow circumstances
where it's directly relevant to building an AGI or (if on the Singularity
listserv) directly related to the Singularity.
Josh Cowan
Yes, IMHO, this discussion is perfectly acceptable and directly
relevant to the Singularity listserv, but not this listserv. This
discussion is about advocating and promoting AGI through political
change, and agrees with its description: Maintained by SIAI Director of
Outreach, Bruce Klein,
Mark Waser wrote:
I'll buy internal spatio-perception (i.e. a three-d world model) but
not the visual/vision part (which I believe is totally unnecessary).
Why is *vision* necessary for grounding or to completely understand
natural language?
My mistake. I misinterpreted the definitions of
spatial perception cannot exist without vision.
How does someone who is blind from birth have spatial perception then?
Vision is one particular sense that can lead to a 3-dimensional model of the
world (spatial perception), but there are others (touch and echo-location via
hearing, to name two).
Mark Waser wrote:
spatial perception cannot exist without vision.
How does someone who is blind from birth have spatial perception then?
Vision is one particular sense that can lead to a 3-dimensional model
of the world (spatial perception), but there are others (touch and
echo-location via hearing
Mark Waser wrote:
Why can't echo-location lead to spatial perception without vision?
Why can't touch?
For instance, how can humans mentally manipulate or mentally rotate
spatial objects without visualizing them?
...and also, why can't a 3D world model just be described abstractly, by
presenting the intelligent agent with a bunch of objects with attached
properties and relations between them that preserve certain
invariants? The spatial part of the world model doesn't seem to be more
complex than the general problem of
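A minimal sketch of Vladimir's suggestion, assuming one concrete reading of it: a world model as a set of objects with attached properties, plus relation triples, plus an invariant that any state must satisfy. All names and the example invariant here are illustrative assumptions, not from the thread.

```python
# Sketch: an abstract "world model" as objects with properties and
# relations, with an invariant that updates are expected to preserve.
# Class name, relation names, and the invariant are all hypothetical.

class World:
    def __init__(self):
        self.objects = {}       # object name -> dict of properties
        self.relations = set()  # (relation, a, b) triples

    def add_object(self, name, **props):
        self.objects[name] = props

    def relate(self, rel, a, b):
        self.relations.add((rel, a, b))

    def invariant_holds(self):
        # Example invariant: the "left-of" relation is asymmetric,
        # i.e. nothing is both left of and right of the same object.
        return not any(("left-of", b, a) in self.relations
                       for (rel, a, b) in self.relations
                       if rel == "left-of")

w = World()
w.add_object("cup", color="red")
w.add_object("table")
w.relate("left-of", "cup", "table")
assert w.invariant_holds()
```

The point of the sketch is only that "spatial" structure can be carried by purely symbolic relations and constraints, with no visual channel anywhere.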
Consider, however, the case of someone who was not only blind, but also
deaf and incapable of taste, smell, tactile, or goniometric perception.
I would be dubious about the claim that such a person understood
English. I might be dubious about any claim that such a person was
actually
MW:
Concepts cannot be grounded without vision.
So . . . . explain how people who are blind from birth are functionally
intelligent.
It is impossible to completely understand natural language without
vision.
MW:So . . . . you believe that blind-from-birth people don't completely
understand
This is how I envision it when a text-only AGI is fed its world view as
text. The more text it processes, the more a spatial physical system
would emerge internally, assuming it is fed text that describes physical
things. Certain relational laws are common across texts that describe and
reflect
Re: postings of John Rose and Vladimir Nesov below:
I generally agree with both of your postings. Grounding is a relative
concept. There are many different degrees of grounding, some much more
powerful than others. Many expert systems had a degree (a relatively low
one) of grounding in their
Vladimir: ...and also, why can't a 3D world model just be described abstractly,
by presenting the intelligent agent with a bunch of objects with attached
properties and relations between them that preserve certain
invariants? The spatial part of the world model doesn't seem to be more
complex than the general
It's impossible even for a human reading a book written in an exotic foreign
language, so you are going too far. It's like cracking a Rijndael-encrypted
file with a 1000-bit key size, but worse: infinitely many possible
interpretations.
John G. Rose wrote:
This is how I envision it when a
In 2000, Hutter [21,22] proved that finding the optimal behavior of a
rational agent is equivalent to compressing its observations.
Essentially he proved Occam's Razor [23]: the simplest answer is
usually the correct one.
Vision is the simplest answer.
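As a toy illustration of the compression framing mentioned above (purely a sketch: the quantity in Hutter's setting is uncomputable, so zlib's compressed size is standing in here as an assumed, crude proxy for description length):

```python
import zlib

def description_length(data: bytes) -> int:
    # Compressed size as a computable stand-in for Kolmogorov
    # complexity; the true measure is uncomputable.
    return len(zlib.compress(data, 9))

regular = b"ab" * 500               # highly regular "observations"
less_regular = bytes(range(256)) * 4  # far less internal repetition

# Occam-style preference: the observations with the shorter
# description are "simpler" under this proxy.
assert description_length(regular) < description_length(less_regular)
```

This only illustrates the preference ordering; it says nothing by itself about whether vision is the simplest modality.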
This is in response to Mike Tintner's 10/11/2007 7:53 PM post. My
response is in all-caps.
Vladimir: ...and also, why can't a 3D world model just be described
abstractly, by presenting the intelligent agent with a bunch of objects
with attached properties and relations between them that preserve
RE: [agi] Do the inference rules.. P.S. Edward,
Thanks for the interesting info - but if I may press you once more. You talk of
different systems, but you don't give one specific example of the kind of
useful (significant for AGI) inferences any of them can produce - as I do with
my cat example.