>> I disagree strenuously.  If our arguments will apply to *all* intelligences
>> (/intelligent architectures) -- like Omohundro attempts to do --  instead of
>> just certain AGI subsets, then I believe that our lack of knowledge about
>> particular subsets is irrelevant.
> yes, but I don't think these general arguments are going to tell us
> all that much
> about particular AGI systems ... they can go only so far, and not far 
> enough...

Huh?  If an argument applies to *all* systems, it will apply to *any* 
particular system.

The problem with Omohundro's arguments is that they provably don't apply to 
*all* systems (if, indeed, *any* systems) since obvious counter-examples exist 
(human beings).

>> I believe that there is a location in the state space of intelligence that
>> is a viable attractor that equates to Friendliness and morality.  I think
>> that a far more effective solution to the Friendliness problem would be to
>> ensure that we place an entity in that attractor rather than attempt to
>> control its behavior via its architecture.
> 
> Ah, so you're OK with "beliefs" but not "intuitions" ???   '-)

:-)  Great jab  :-)

Beliefs and intuitions are exact synonyms as far as I'm concerned.  I'm okay 
with both as long as they are both required to be backed by facts and logical 
reasoning -- even if they aren't 100% provable.  My problem with "intuitions" 
is that most people think that they have greater weight than mere opinions (or 
beliefs :-).  I'm currently finishing writing up my description of the 
attractor so that people can throw darts at it.  You may laugh at me if you 
complete the initial OpenCog docs before you see it.  ;-)

> I hope such an attractor exists but I'm not as certain as you seem to be

A fair statement.  I've put *a lot* of time, work, and research into this that 
you don't have the benefit of -- yet.

>>> What we can do is conduct experiments designed to gather data about
>>> AGI goal systems and AGI dynamics, which can lead us to more robust
>>> AGI theories, which can lead us to detailed plans about how to ensure
>>> AGI safety (or, pessimistically, detailed knowledge as to why this is
>>> not possible)
>>
>> I think that this is all spurious pseudo-scientific BS.  I think that the
>> state space is way too large to be thoroughly explored from first
>> principles.  Start with human friendliness and move out and you stand a
>> chance.  Trying to compete with billions of years of evolution and its
>> parallel search over an unimaginably large number of entities by
>> re-inventing the wheel is just plain silly.
> 
> I disagree.  Obviously you could make the same argument about airplanes.

No.  The airplane argument is a silly strawman.  Airplanes have nothing to do 
with birds.  Airplanes are based upon exactly *one* scientific principle that 
is demonstrated easily by Cub Scouts.  Airplanes were difficult when the 
(easily isolated) prerequisites weren't there (like lightweight, powerful 
engines and reasonable control structures).  I can easily make (and have made) an 
airplane out of balsa wood and a rubber band.  Everything else is just an 
elaboration on that seed.  If your experiments started with a seed AGI, then I 
would buy your arguments -- but not otherwise.

> Experiments with different shaped wings helped us to refine the relevant
> specializations of fluid dynamics theory, which now let us calculate a bunch
> more relevant stuff from first principles than we could before these
> experiments and this theory were done.  But we still can't solve the
> Navier-Stokes Equation in general in any useful way.

But the point is -- functioning wings existed before the refinement 
experiments.  Omohundro and others are trying to perform analysis and 
refinement of *non-existent* systems.  The Navier-Stokes equations (plural is 
correct) describe a complex system (fluid dynamics).  It is entirely 
possible/probable that there NEVER will be a totally general, easily calculable 
solution for all circumstances -- but a totally general solution is not 
necessary.  In many areas of the fluid dynamics state space (particularly the 
most accessible ones), there are more than enough accurate simplifying 
assumptions and attractors that we don't need a general solution.  Any wing 
that has a certain general shape and texture and doesn't try to move faster 
than a certain speed relative to the fluid (which cannot have greater than a 
certain viscosity) will provide lift *regardless* of the exact details of the 
system.  In order for your "scientific" experiments to work though, you need to 
prove that the same things are true of goals, ethics, and friendliness.  And 
these proofs are *always* derived by varying from existing solutions -- just as 
they were for wings.
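
To make that concrete with a back-of-the-envelope sketch (the numbers and the 
little helper function below are purely illustrative assumptions of mine, not 
from any real design): within the ordinary subsonic regime, lift collapses to a 
handful of aggregate parameters, and the fine-grained details of the flow 
simply drop out.

    # Standard lift equation: L = 0.5 * rho * v^2 * S * C_L.
    # Only meaningful in the regime described above (modest speed, modest
    # viscosity); the exact wing details all hide inside the single C_L term.
    def lift_newtons(rho_kg_m3, v_m_s, wing_area_m2, c_lift):
        return 0.5 * rho_kg_m3 * v_m_s ** 2 * wing_area_m2 * c_lift

    # Made-up numbers for a balsa-and-rubber-band sized wing at sea level:
    print(lift_newtons(1.225, 8.0, 0.05, 0.8))   # roughly 1.6 newtons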

A clearer example of why this sort of reasoning without examples is foolish is 
the stupid oft-repeated statement that "According to science, bumblebees 
shouldn't be able to fly."  Science explicitly acknowledges that there could 
always be variables we haven't identified that flip a system from impossible to 
possible or vice versa.  Decomposition 
experiments on complex systems are known *NOT* to work in the vast majority of 
cases.  Yet, that is what you are proposing and what I am calling 
pseudo-scientific BS.  (No?  Or am I missing something that will make your 
experiments a valid *scientific* endeavor?)  What if your "science" says that 
your AGI can't be unfriendly?  Omohundro's "science" says that it can't help 
*BUT* be unfriendly.  Who should I believe and why?

> In a field without any solid theory yet, how do you choose which experiments
> to run, except via intuition [or replace some related word if you
> don't like that one]

How about logical reasoning?  Instead of saying "my intuition tells me . . . . 
", why don't we use formalizations like "well, these three examples seem to 
indicate X but they don't rule out Y, so we will test it by . . . . ".  
Intuition is just BS code for "I can't/don't want to explain *why* I'm doing 
what I'm doing."  You might as well just say "I'm guessing" or "I believe" and 
leave it at that.  There is *NO* special cachet to intuition, and there is a 
HUGE downside in that, for some reason, people treat it as more reliable 
than guessing.

>> I would scientifically/logically argue that your "intuition" is correct
>> because it is more possible to analyze, evaluate, and *redirect* a
>> goal-achievement architecture than a system with inscrutable neural net
>> dynamics at its core.
>>
>> Your "intuition" wasn't particularly helpful because it gave no reasoning or
>> basis for your belief.  My statement was more worthwhile because it gave
>> reasons that can be further analyzed, refined, and/or disproved.
> 
> I have reasoning and basis for that intuition but I omitted it due to not 
> having
> time to write a longer email.  Also I thought the reasoning and basis were
> obvious.

To me, sure.  To everyone, I doubt it.  And if it was that obvious, why did you 
bother saying it again?  When you say things and base them upon intuition, it 
gives everyone else a license to do the same.  Part of the reason I call *you* 
on it -- despite the fact that I *know* that your intuitions most frequently 
have provable facts and solid reasoning behind them -- is that EVERYBODY 
*needs* to be called on it (including me).  That's the only way to make 
scientific progress.  Intuition doesn't advance anything.

> Note however that NN dynamics are not totally inscrutable, e.g. folks have
> analyzed the innards of backprop neural nets using PCA and other statistical
> methods.  And of course a huge logic system with billions of propositions
> continually combining via inference may be pretty damn inscrutable.

I used the term "inscrutable neural network dynamics".  You said "of course a 
huge logic system with billions of propositions continually combining via 
inference may be pretty damn inscrutable."  So why are you arguing?
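
For concreteness, the kind of analysis Ben mentions looks something like the 
following (a minimal sketch; the toy dataset, the tiny network size, and every 
name in it are my own illustrative choices, assuming numpy and scikit-learn are 
available): fit a small backprop net, recompute its hidden-layer activations 
from the learned weights, and run PCA over them to see how much structure is 
actually recoverable.

    import numpy as np
    from sklearn.datasets import load_iris
    from sklearn.decomposition import PCA
    from sklearn.neural_network import MLPClassifier

    X, y = load_iris(return_X_y=True)
    net = MLPClassifier(hidden_layer_sizes=(8,), activation='relu',
                        max_iter=2000, random_state=0).fit(X, y)

    # Recompute the hidden-layer activations from the learned weights.
    hidden = np.maximum(0.0, X @ net.coefs_[0] + net.intercepts_[0])

    # Project the 8-dimensional hidden code onto its top two principal
    # components; the explained variance hints at how "scrutable" it is.
    pca = PCA(n_components=2).fit(hidden)
    print(pca.explained_variance_ratio_)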

> So the point is not irrefutable by any means, which is why I called
> it an intuitive argument rather than a rigorous one.

While the point is not totally irrefutable, there is a heavy preponderance of 
evidence going one way which *MUST* be accounted for before you can 
convincingly argue the reverse.  The implication that lack of 100% surety means 
that we must treat all alternatives as equally likely is ridiculous (like the 
argument against global warming).  An intuitive argument with no explanation is 
no better than a guess.  An argument with good solid reasoning behind it is far 
better than a guess -- even if it turns out to be incorrect, because you can 
then analyze where it went wrong and why.  That's known as science.  This 
intuition BS gives us absolutely nothing to build on -- no way to look for 
possible failure modes, no explanations, no nothing.  If you're going to argue 
something, please take the time to argue it, and don't pretend that you can 
magically solve it all with your "guesses" (I mean, intuition).


