- We know that these problems are not inherently insurmountable,
because we deal with them every day and we don't usually crash. If we
can do it, we can program an AGI to do it better.
- It should be bloody obvious that intelligence is an attribute of an
individual, not society. An individual is much more clearly defined
than a society is, and the individual is the primary design-and-test
unit for the evolutionary process that designed us. If I want to know
all the prime numbers between 300 and 400, I can sit down with pencil
and paper and work it out; no other species can do this kind of thing
(see the sketch just after this point).
For more on how the social sciences have been led astray into "culture
worship", see Tooby & Cosmides, "The Psychological Foundations of
Culture" (1992): http://207.210.67.162/~striz/docs/tooby-1992-pfc.pdf.
- A good computer virus, a very simple self-replicating agent, can
thrive in the real world with absolutely no human assistance (and
often in the face of active human opposition).
- What evidence do you have for the idea that a single AGI can never
follow its own goals without working within our society? You simply
state it as fact.
- Our problems are roughly as difficult to us as a chimpanzee's
problems are to a chimp. This does not mean that our problems or chimp
problems are *inherently* difficult; difficulty is relative to the mind
attacking the problem, not some kind of magical sticky stuff that hides
within the problem itself.
- If you walked up to your average, college-educated rational person
in 1950, and asked them about the feasibility of flying to the Moon,
they would probably say it was effectively impossible. After all, it
would require billions of dollars in spending, rockets dozens of times
larger than anything built so far, and the solutions to a huge number
of technological problems, many of which weren't yet even understood.
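- As for the formula you ask for below: decision theory already has the
skeleton of one. Keep gathering evidence while the expected marginal value
of the next piece exceeds its cost, and stop when it doesn't. A toy sketch
in Python (the geometric decay and the costs here are invented purely for
illustration, not a claim about any real problem):

    # A standard value-of-information stopping rule, with made-up numbers.
    def gather_evidence(marginal_value, cost_per_item, max_items=10**6):
        """Collect items while each new one is expected to pay for itself."""
        n = 0
        while n < max_items and marginal_value(n) > cost_per_item:
            n += 1
        return n

    # Value of the n-th item decays geometrically, so collection halts at
    # a finite n, limited by practicality rather than reason, as you say.
    print(gather_evidence(lambda n: 100.0 * 0.9 ** n, cost_per_item=1.0))
    # -> 44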

 - Tom

On Oct 29, 2007 8:13 PM, Mike Tintner <[EMAIL PROTECTED]> wrote:
>
>
> Ben,
>
> Yes, this is the general type of thing I was referring to in calling for a
> mathematical expression of problematic problems. Of course, you're focussing
> in your paper on the impossibility of guaranteeing friendliness. It's the
> actual nature of the problems that we - and therefore any superAGI, and
> indeed any ordinary AGI - will have to face that I'm concerned with. Part of their
> problematicity is that there are often many more factors than you can
> possibly know about. Another is that the known factors are unstable and even
> potentially contradictory - the person or people who loved you yesterday
> may hate you today through no action of yours.  Another part of the
> problematicity is the amount of evidence that can be gathered - how much
> evidence should you gather if you're defending OJ/a murderer, or writing an
> essay on AGI, or want to bet on a stockmarket movement? Ideally, an infinite
> amount. The only limit is practicality rather than reason. (Shouldn't be too
> hard to put all that into a formula?!)
>
> A separate point: the EMBEDDEDNESS of intelligence. I went through your
> paper very quickly so I may have missed something on this. Ironically, I had
> just come to a similar idea before I saw your expression. I'm not sure how
> much you are thinking on similar lines to me.
>
> The idea is: here we are talking about "intelligence" as if it were the
> property of an individual (human/animal/AGI). Actually, human intelligence -
> the finest example we know of - is the property of individuals working
> within a society with a very complex culture (including science - collective
> knowledge about the world - and technology - collective know-how about how
> to deal with the world).  Our individual intelligence is extremely dependent
> on that of our society -  we each stand, pace Newton, on the shoulders of a
> vast pyramid of other people - and also dependent on a vast collection of
> artefacts and machines.
>
> No AGI or agent can truly survive and thrive in the real world, if it is not
> similarly part of a collective society and a collective science and
> technology - and that is because the problems we face are so-o-o
> problematic. Correct me, but my impression of all the discussion here is
> that it assumes some variation of the classic science fiction scenario, pace
> 2001/The Power etc., where an individual computer takes power, if not takes
> off by itself. Ain't gonna happen - no isolated individual can truly be
> intelligent.
>
>
> Ben:
>
>
>
> Please check out an essay I wrote a couple years ago,
>
> http://www.goertzel.org/papers/LimitationsOnFriendliness.pdf
>
> which is related to the issues you mention.  As I note there
>
> "
> My goal in this essay is to explore some particular aspects of the
> difficulty of
> creating Friendly AI, which ensue not from the subtleties of AI design but
> rather from the
> complexity of the notion of Friendliness itself, and the complexity of the
> world in which
> both humans and AI's are embedded.
>
> ...
>
> ... the basic arguments I present here regarding Friendliness are as
> follows:
>
> • Creating accurate formalizations of current human notions of action-based
> Friendliness, while perhaps possible in the future with very significant
> effort, is
> unlikely to lead to notions of action-based Friendliness that will be robust
> with
> respect to future developments in the world and in humanity itself
> • The world appears to be sufficiently complex that it is essentially
> impossible for
> seriously resource-bounded systems like humans to guarantee that any
> system's
> actions are going to have beneficent outcomes.  I.e., guaranteeing (or
> coming
> anywhere near to guaranteeing) outcome-based Friendliness is effectively
> impossible.  And this conclusion holds for basically any highly specific
> property,
> not just for Friendliness as conventionally defined.  (What is meant by a
> "highly
> specific property" will be defined below.)
>
> "
>
> I don't conclude that the complexity of the world means AGI is impossible
> though.  I just conclude that it means that creating very powerful AGI's
> with
> predictable effects is quite possibly not possible ;-)
>
> -- Ben G
>
>
>
> On 10/29/07, Mike Tintner < [EMAIL PROTECTED]> wrote:
> >
> >
> > Check out
> >
> >
> http://environment.newscientist.com/article/dn12833-climate-is-too-complex-for-accurate-predictions.html
> >
> > which argues:
> >
> > "Climate change models, no matter how powerful, can never give a precise
> prediction of how greenhouse gases will warm the Earth, according to a new
> study."
> >
> > What's that got to do with superAGI's? This: the whole idea of a superAGI
> "taking off" rests on the assumption that the problems we face in life are
> soluble if only we - or superAGI's- have more brainpower.
> >
> > The reality is that the problems we face are actually infinite or
> "practically endless."  Problems like predicting the weather, working out
> what to do in Iraq, how to seduce or persuade another person, working out
> what career path to follow, deciding how to invest on the stockmarket etc.
> You can think about them forever and screw up just as badly or worse than if
> you think about them for a minute. And a superAGI may be just as capable of
> losing a bundle on the market as we are, or producing a product that no one
> wants.
> >
> > That doesn't mean that a superior brain wouldn't have advantages, but
> rather that there would be considerable limits to its powers. Even a vast
> brain will have problems dealing with problematic, infinite problems. (And
> even mighty America with all its collective natural and artificial
> brainpower still has problems dealing with dumb peasants).
> >
> > What is rather disappointing to me , given that there is an awful lot of
> mathematical brainpower around here, is that there seems to be no interest
> in giving mathematical expression to the ideas I have just expressed.

-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=4007604&id_secret=58872477-91144c
