>> I then asked if anyone in the room had a 98.6F body temperature, and NO ONE 
>> DID. 

Try this in a room with "normal" people and you'll get almost the same answer.  
98.6F is just the Fahrenheit conversion of a rounded Celsius value (37C) -- not 
an accurate gauge.  My own baseline temperature is 96.8F -- almost two degrees 
low -- and this is perfectly NORMAL.  Any good medical professional understands 
this.  Don't criticize others based on your assumptions about what they believe.
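The arithmetic behind this point is easy to check. A minimal sketch (the function name is mine, just for illustration):

```python
def c_to_f(celsius):
    """Convert a Celsius temperature to Fahrenheit: F = C * 9/5 + 32."""
    return celsius * 9 / 5 + 32

# 98.6F is exactly the conversion of a rounded 37C:
print(c_to_f(37))  # 98.6
# One degree C lower maps to 1.8F lower:
print(c_to_f(36))  # 96.8
```

So the familiar 98.6 figure inherits the rounding of the Celsius value, and a baseline of 96.8F corresponds to exactly 36C.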
  ----- Original Message ----- 
  From: Steve Richfield 
  To: [email protected] 
  Sent: Sunday, April 13, 2008 4:42 PM
  Subject: Re: [agi] Comments from a lurker...


  Mike,


  On 4/12/08, Mike Tintner <[EMAIL PROTECTED]> wrote: 
    Steve:If you've
    got a messy real-world problem, you know little, if you have an
    algorithm giving the solution, you know all. 

    This is the bit where, like most, you skip over the nature of AGI -  messy 
real-world problems. What you're saying is: "hey if you've got a messy problem, 
it's great, nay perfect if you have a neat solution." Contradiction in terms 
and reality. If it's messy, there isn't a neat solution.

  However, there are MANY interesting points in between these two extremes. 
Typically, given the best "experts" available (quotes used to highlight the fact 
that claiming expertise in something that is poorly understood, as doctors 
routinely do, is a bit of an oxymoron), you can identify several 
cause-and-effect chain links that are contributing to your problem, even though 
most of the problem remains beyond your understanding. If you can identify a 
cure for even ONE link between the root cause and the self-sustaining loop at 
the end, and any way at all to temporarily interrupt (doctors call this a 
"treatment") any link in that self-sustaining loop, you can permanently cure 
the difficult problem, even though most of it remains a complete mystery. That 
this simple fact has remained hidden has misled AI and AGI, and will continue 
to mislead them until everyone involved understands it.
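The causal structure described above can be sketched as a toy directed graph: a chain of links from a root cause feeding into a terminal cycle. The node names are invented purely for illustration; the point is that removing any single edge inside the cycle is enough to stop it sustaining itself.

```python
# Hypothetical causal graph: root cause -> chain -> self-sustaining loop.
edges = {
    "root_cause": ["link_1"],
    "link_1": ["loop_a"],    # chain link feeding the loop
    "loop_a": ["loop_b"],
    "loop_b": ["loop_c"],
    "loop_c": ["loop_a"],    # closes the self-sustaining cycle
}

def has_cycle(graph):
    """Return True if any node can reach itself, i.e. the loop is 'running'."""
    def reaches_itself(start):
        seen, stack = set(), list(graph.get(start, []))
        while stack:
            node = stack.pop()
            if node == start:
                return True
            if node not in seen:
                seen.add(node)
                stack.extend(graph.get(node, []))
        return False
    return any(reaches_itself(node) for node in graph)

print(has_cycle(edges))  # True: the loop sustains itself

# A "treatment": temporarily interrupt one link inside the loop.
edges["loop_b"] = []
print(has_cycle(edges))  # False: the loop can no longer sustain itself
```

Under this (admittedly simplified) model, a temporary interruption of any one loop edge is sufficient, even though every other part of the graph is left untouched and unexplained.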

    Take most cancers. If you have one, what do you do? Well, there are a lot 
of people out there offering you a lot of v. conflicting treatments and 
proposals, and there is no neat, definitive answer to your problem.

  Only because various misdirected interests are misleading the process. To 
illustrate, about a year ago I delivered a presentation to a roomful of cancer 
survivors (and people who were trying to survive it). I explained the complex 
part that body temperature apparently played, and exactly why it was almost 
unknown for a cancer patient to have a "normal" 98.6F = 37C body temperature. I 
then asked if anyone in the room had a 98.6F body temperature, and NO ONE DID. 
THERE is a pretty definitive answer, but getting it out to the "experts" is 
probably impossible because they have other dysfunctional models to use. I have 
an article about this if you would like it. There is a safe and simple one-day 
cure for erroneous body temperature, yet no cancer sufferer that I know of has 
ever done it!


    That's the kind of problem a human general intelligence has to deal with, 
and was designed to deal with.

  Above is a simple case where, even when presented with the answer, there is 
no way of propagating it to the rest of the human race. I have a friend who is 
the Director of Research for the Medical Center of a major University, whose 
own personal surgical experiences supported everything I said, so he openly 
accepted it. I spent 4 hours discussing various approaches to getting this 
message out. His take: there was no path he could identify to accomplish this. 
The detailed explanations of the paths that we considered would fill a small 
book. Places like Wikipedia have a filtering process that is guaranteed to 
block any such postings.

  In short, I wouldn't look at "human general intelligence" too closely, as 
except for some rare cases, it too is an oxymoron. It would be MUCH easier to 
build a really intelligent system than to build a "humanly intelligent" system.


    Not the neat ones.

    (And how do I communicate that to you - get you & other AGI-ers to focus on 
that? Because what you'll do is say: "Oh sure it's messy, but there's gotta be 
a neat solution." You won't be able to stay with the messiness. It's too 
uncomfortable. My "communication problem" is in itself a messy one - like most 
problems of communicating to other people, e.g. how do you sell your AGI system 
or get funding?)

  YES, there IS a topic of mutual interest. There used to be people called 
"venture capitalists", but people performing that function no longer exist. 
There are now people calling themselves "venture capitalists" whom people used 
to call "investment bankers". There are "angel investors" who do the initial 
seed investing, but who lack the resources to follow up with major investments 
once the seed investment has succeeded. In short, I have more or less given up 
on finding anyone who has the CAPACITY to invest in any sort of AI/AGI, as all 
investors have money raised on a prospectus which, upon careful reading, 
guarantees that they will NOT invest in AI/AGI. Some of the common exclusionary 
reasons include:
  1.  Where are your paying customers?
  2.  What prior University research is this built upon?
  3.  Where is your intellectual property protection?
  4.  Where am I going to find other investors with whom to share the risk?

  Steve Richfield



-------------------------------------------
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244&id_secret=98558129-0bdb63
Powered by Listbox: http://www.listbox.com
