Josh: Any well-designed AI system should not have the masturbatory tendencies
to take unjustified risks.

Josh,

Jeez, you guys will not face reality. MOST of the problems we deal with involve
risks (and uncertainty). That's what human intelligence does most of the time -
that's what any adaptive intelligence does and will have to do: deal with
problems involving risks and uncertainty. Even your superduperintelligence will
have to face those too. (Who knows how those pesky, rebellious humans may try
to resist its benevolent decisions?) Let me quote again:
Most of the problems that we face in our everyday lives are ill-defined 
problems. In contrast, psychologists have focussed mainly on well-defined 
problems. Why have they done this? One important reason is because well-defined 
problems have a best strategy for their solution. As a result it is usually 
easy to identify the errors and deficiencies in the strategies adopted by human 
problem-solvers.

Michael Eysenck, Principles of Cognitive Psychology, East Sussex: Psychology 
Press 2001

You guys are similarly copping out - dealing with the easy rather than the hard
(risk and uncertainty) problems - the ones with the neat answers, as opposed to
the ones that are open-ended. The AI as opposed to the AGI problems.

Won't somebody actually deal with the problem - how will your AGI system
decide whether or not to invest $10,000 in a Chinese mutual fund tomorrow?
(You guys are supposed to be in the problem-solving business.)



Mark, it seems that you're missing the point.  We as humans aren't ABSOLUTELY
CERTAIN of anything.  But we are perfectly capable of operating on the fine
line between assumed certainty and uncertainty.  We KNOW that molecules are
made up of bonded atoms, but past a certain point, we can't say what a basic
unit of energy is.  Yet we know what a molecule looks like, so we can bond atoms
to form them.  Much the same holds for intelligence.  We simply mimic behaviors
and find parallels in code and systems.  If we can prove that a Turing machine
is universal, or that a code base is universal, then there must be some
configuration of code that is capable of representing every subatomic
interaction occurring in our universe down to the most minute detail, thus
duplicating our experience entirely (regardless of the fact that this would
take several universe lifetimes to do manually).  So if this is plausible, it's
perfectly sound to discuss optimization of the stated code.  Our brains (and
their emergent trait, our fancy intelligence) are a much smaller piece of
this (computable) chemical puzzle.  Since they were derived through evolution
(the high-intelligence, low-efficiency topic), there are many inefficient
mechanisms that have evolved for reasons other than exponentially increasing
our brains' processing power.  Why is this hard to grasp?

In terms of your investment question, it's all a matter of needs.  That is a
simple risk assessment.  To any intelligent being, the money gained is only a
means to an end.  To an AI interested in furthering its knowledge, or
bettering mankind (or machine-kind), money simply means more energy, power,
resources, etc.  Ultimately, if your goal is just to amass money without any
reasoning, your goal system is flawed.  Any well-designed AI system should not
have the masturbatory tendencies to take unjustified risks.  We're talking
about multiple priority levels here.  The Nash equilibrium would be sought
after on many levels.  The computer is going to give the system its best shot
and guess.
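The kind of risk assessment described above can be sketched as a toy expected-utility comparison. Every probability and return in the sketch is a made-up illustration of the mechanism, not a claim about any real market:

```python
import math

def expected_utility(outcomes):
    """outcomes: list of (probability, resulting_wealth) pairs."""
    # Log utility encodes risk aversion: a loss hurts more
    # than an equally sized gain helps.
    return sum(p * math.log(w) for p, w in outcomes)

stake = 10_000

# Hypothetical beliefs about the Chinese fund over one year:
# either the market keeps rising or the bubble bursts.
fund = [(0.5, stake * 1.30),   # up 30%
        (0.5, stake * 0.60)]   # down 40%

# The savings account: a near-certain small gain.
savings = [(1.0, stake * 1.05)]

decision = "invest" if expected_utility(fund) > expected_utility(savings) else "save"
```

With these invented beliefs a log-utility agent picks the savings account, because the possible 40% loss outweighs the equally likely 30% gain - change the numbers and the decision flips, which is exactly the "guess under uncertainty" point.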



  On 5/17/07, Mike Tintner <[EMAIL PROTECTED]> wrote:

    Pei: AI is about "what is the best solution to a problem if
    the system has INSUFFICIENT knowledge and resources".

    Just so. I have just spent the last hour thinking about this area, and you
    have spoken the line I allotted to you almost perfectly.

    Your definition is a CONTRADICTION IN TERMS.

    If a system has insufficient knowledge there is NO BEST, no "optimal,"
    solution to any decision - that's a fiction. 
    If a system is uncertain, there is no certain way to deal with it.
    If there is no right answer, there really is no right answer.

    The whole of science has spent the last hundred years caught in the above
    contradiction. Recognizing uncertainty and risk everywhere - and then trying
    to find a "certain," "right," "optimal" way to deal with them. This runs
    right through science. "Oh yes, we can see that life is incredibly
    problematic and uncertain... but this is the certain, best way to deal with
    it."

    So science has developed game theory - arguably the most important theory
    of behaviour - and then spent its energies trying to find the right way to
    play games. The perfect equilibrium, etc. And missed the whole point. There
    is no right way to play games - on the few occasions that one is discovered
    (and there are a few occasions and situations), people STOP PLAYING IT -
    because it ceases to be a proper game, and it ceases to be a model of real
    life. In one sense, it's no wonder Nash went mad.
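    The "perfect equilibrium" can be made concrete with the simplest solved
    game. A minimal sketch using rock-paper-scissors, whose known
    mixed-strategy Nash equilibrium is uniform random play:

```python
# Rock-paper-scissors: a textbook case of a "solved" game.
MOVES = ["rock", "paper", "scissors"]
BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}

def payoff(a, b):
    """Payoff to the player choosing a against b: +1 win, -1 loss, 0 tie."""
    if a == b:
        return 0
    return 1 if BEATS[a] == b else -1

# The known mixed-strategy Nash equilibrium: each move with probability 1/3.
equilibrium = {m: 1 / 3 for m in MOVES}

# Against equilibrium play, every counter-strategy earns the same expected
# payoff (zero) - once the "right way to play" is known, nothing you do matters.
expected = {a: sum(equilibrium[b] * payoff(a, b) for b in MOVES) for a in MOVES}
```

    Every entry of `expected` is zero, which is one way of putting the point
    above: once the right way to play is found, the game stops rewarding play.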

    But scientists and techies so badly want a right answer, they haven't been
    able to admit there isn't one.

    "What made [Stephen Jay] Gould unique as a scientist was that he had a 
    historian's mind and not an engineer's. He liked mess, confusion and
    contradiction. Most scientists, in my experience, are the opposite. They are
    engineers at heart. They think the world is made up of puzzles, and 
    somewhere out there is the one correct solution to every puzzle."

    Andrew Brown

    Hey, this is a fundamentally pluralistic world, not a [behaviourally]
    monistic one. Deal with it:

    Here's a simple problem for you with insufficient knowledge and resources:
    you have $10,000 to invest tomorrow. You're thinking of investing in a
    Chinese stock-market mutual fund, because the market is on the up and you
    reckon there could be a lot of money still to be made. (And there really
    could.) On the other hand, maybe it's a crazy bubble about to burst, and you
    could lose a lot of money too. So what do you do tomorrow - buy or do
    nothing - invest or keep your money in that savings account?

    What's the "best" decision, or the best way of reaching a decision, or the
    best way of finding a way of reaching a decision - in the next 24 hours (or,
    in the end, in any time period)? [And what do you think, Ben?]

    If you don't have the "best" answer, then your whole approach both to
    defining and implementing intelligence is fundamentally flawed.  A few
    hundred million investors will be waiting to hear your reply. They'd love to
    know the best answer.  No more need for all these different schools of
    investors to argue so furiously, no more need for all these schools just of
    investment AI/computation alone to keep arguing either. Pei's cracked it,
    guys. Over here.

    You really would do well to think very long and hard about that simple
    problem - it will change your life. I hope you will have the courage to
    answer the problem.

    (BTW MOST of the problems humans face in everyday life can be represented as
    investment problems - it's a basic, not an eccentric, problem.)



    -----
    This list is sponsored by AGIRI: http://www.agiri.org/email
    To unsubscribe or change your options, please go to: 
    http://v2.listbox.com/member/?&;





  -- 
  Josh Treadwell
     [EMAIL PROTECTED]
     480-206-3776 



