James,
Your comments are appreciated. A few comments below.
Stan


James Ratcliff wrote:
Your train of reasoning is somewhat lacking in many areas and does not directly point to your main assertion.
Thanks for the feedback. As I follow other discussions and read the papers they refer to, I realize that my writings are lacking; perhaps they are more blog-like than scientific.


The problem of calculating the values of certain states is a difficult one, and one that a good AGI MUST be able to solve, using facts about the world as well as subjective beliefs and measures.
I'm not sure I get the MUST part. Is this for troubleshooting purposes or for trust issues? Or is it required for "steering" the contemplation or attention of the machine?

Whether healthcare or education spending is more beneficial must be calculated, and the two compared against each other, based on facts, beliefs, past data and statistics, and trial and error. And these subjective beliefs are ever changing and cyclical. A better example would be a limited AGI whose job is to balance the national budget; it would have to choose the best projects to spend money on. Maximizing Benefit Units (BU), as a measure of the 'worth' of each project, is required here. One intelligence (a human) may be overwhelmed by the sheer amount of data and statistics involved in coming to the best decision. An AGI with subjective beliefs about the benefit of each project could potentially use more of the data to come to a more maximized solution.
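
To make the budget example concrete: choosing projects to maximize total BU under a fixed budget is essentially a knapsack problem. Here is a minimal sketch in Python; the project names, costs, and BU figures are invented for illustration, and in practice the BU estimates would themselves come from the subjective, ever-changing beliefs mentioned above.

# Hypothetical sketch: pick projects to maximize total Benefit Units (BU)
# without exceeding the budget (0/1 knapsack via dynamic programming).
# All project names, costs, and BU values are illustrative.

def best_projects(projects, budget):
    """projects: list of (name, cost, bu) tuples with integer costs.
    Returns (max_total_bu, names_of_chosen_projects)."""
    # best[b] = (max BU achievable with budget b, chosen project names)
    best = [(0, [])] * (budget + 1)
    for name, cost, bu in projects:
        # Iterate budgets downward so each project is used at most once.
        for b in range(budget, cost - 1, -1):
            prev_bu, prev_names = best[b - cost]
            if prev_bu + bu > best[b][0]:
                best[b] = (prev_bu + bu, prev_names + [name])
    return best[budget]

if __name__ == "__main__":
    projects = [("healthcare", 40, 9), ("education", 35, 8), ("roads", 25, 5)]
    print(best_projects(projects, budget=70))
    # -> (14, ['healthcare', 'roads']); healthcare + education would yield
    #    more BU (17) but costs 75, which exceeds the budget of 70.

The optimization itself is the easy part; producing defensible BU numbers is the hard part, which is exactly where the subjective beliefs come in.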

It is the "future scenarios" that are often the most compelling justification or "evidence" for the value of something, and, in my opinion, the most unreliable. Whether it is man or machine making the case, there will be speculation involved in the common-sense domain.

Will the scenario be "You say this... now prove it. If you can't prove it, don't use it in the justification..."? Very limiting.


On your other note, about any explanation being too long or too complicated to understand: any decision must be explainable. The explanation can be given at different levels and expanded as much as the AGI is told to do so (see the sketch below), but there should be NO decision where you ask the machine "Why did you decide X?" and the answer is nothing, or "I don't know."
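
As a minimal sketch of "explained at different levels," assume each decision node stores a one-line reason plus sub-reasons, and the machine expands only as deep as it is asked to; the class and field names below are my own illustration, not an actual AGI design.

# Hypothetical sketch: a decision that can always explain itself, at
# whatever depth it is asked for. All names are illustrative.

class Decision:
    def __init__(self, choice, reason, subreasons=None):
        self.choice = choice                 # what was decided
        self.reason = reason                 # one-line justification
        self.subreasons = subreasons or []   # deeper Decision nodes

    def explain(self, depth=1, indent=0):
        """Return explanation lines, expanding sub-reasons down to `depth`."""
        lines = [" " * indent + f"{self.choice}: {self.reason}"]
        if depth > 1:
            for sub in self.subreasons:
                lines += sub.explain(depth - 1, indent + 2)
        return lines

d = Decision("fund education", "highest estimated BU per dollar",
             [Decision("rejected roads", "BU estimate below threshold")])
print("\n".join(d.explain(depth=1)))  # terse one-line answer
print("\n".join(d.explain(depth=2)))  # expanded, as far as it is told to

Under this scheme the answer is never nothing: even at depth 1 there is always at least the top-level reason.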

If the architecture of the machine is "flow" based, that is, if prior events help determine current events, then the burden of explaining would overwhelm the system. Even if it is only logic based, as you pointed out, the "values" will be dynamic, and to explain a decision one would need to keep a record of the values that went into the decision process: a snapshot of the "world" as it was at the time.
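
One way to sketch such a snapshot, assuming decisions are scored against a set of mutable value weights; everything below (names, weights, numbers) is invented for illustration.

# Hypothetical sketch of the "snapshot of the world": deep-copy the value
# weights and inputs at decision time, so the explanation survives even
# after the live values drift.

import copy
import time

class DecisionLog:
    def __init__(self):
        self.records = []

    def record(self, decision, inputs, value_weights):
        self.records.append({
            "time": time.time(),
            "decision": decision,
            "inputs": copy.deepcopy(inputs),               # frozen history
            "value_weights": copy.deepcopy(value_weights), # frozen history
        })

    def explain(self, i):
        r = self.records[i]
        return (f"Chose {r['decision']!r} given inputs {r['inputs']} "
                f"under value weights {r['value_weights']}")

log = DecisionLog()
weights = {"economic_prosperity": 0.7, "co2_reduction": 0.3}
log.record("fund project A", {"cost": 40, "bu": 9}, weights)
weights["co2_reduction"] = 0.6   # the live values later drift...
print(log.explain(0))            # ...but the snapshot still explains it

The cost of this is exactly the burden described above: every decision drags a copy of its world along with it.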

What if the system attempted to explain and finally concluded, "If I were making the decision right now, it would be different"? We wouldn't consider it especially brilliant, since we hear that all the time.

Any machine we create that has answers without the reasoning is very scary.

And maybe more than scary if it is optimized to offer reasoning that people will buy, especially the line "trust me."


James Ratcliff



Stan Nilsen <[EMAIL PROTECTED]> wrote:

    Greetings Samantha,

    I'll not bother with detailed explanations, since they are easily
    dismissed with a hand wave and a categorization of "irrelevant."

    For anyone who might be interested in the question of:
    Why wouldn't a super intelligence be better able to explain the aspects
    of reality? (assuming the point is providing explanations for choices)
    I've placed an example case online at

    http://www.footnotestrongai.com/examples/bebillg.html

    It's an "exploration" based on becoming Bill Gates (at least having
    control over his money) and how a supercomputer might offer
    "explanations" given the situation. Pretty painless, easy read.

    I find the values-based nature of our world highly relevant to the
    concept of an emerging "super brain" that will make super decisions.

    Stan Nilsen


    Samantha Atkins wrote:
     >
     > On Dec 26, 2007, at 7:21 AM, Stan Nilsen wrote:
     >
     >> Samantha Atkins wrote:
     >>>
     >>
     >>> In what way? The limits of human probability computation to form
     >>> accurate opinions are rather well documented. Why wouldn't a mind
     >>> that could compute millions of times more quickly and with far
     >>> greater accuracy be able to form much more complex models that were
     >>> far better at predicting future events and explaining those aspects
     >>> of reality which are its inputs? Again we need to get beyond the
     >>> [likely religion-instilled] notion that only "absolute knowledge" is
     >>
     >> Allow me to address what I think the questions are (I'll paraphrase):
     >>
     >> Q1. in what way are we going to be "short" of super intelligence?
     >>
     >> resp: The simple answer is that the most intelligent of future
     >> intelligences will not be able to make decisions that are clearly
     >> superior to the best of human judgment. This is not to say that
     >> weather forecasting might not improve as technology does, but meant
     >> to say that predictions and decisions regarding the "hard" problems
     >> that fill reality will remain hard and defy the intelligentsia's
     >> efforts to fully grasp them.
     >
     > This is a mere assertion. Why won't such computationally much more
     > powerful intelligences make better decisions than humans can or will?
     >
     >>
     >>
     >> Q2. why wouldn't a mind with characteristics of ... be able to form
     >> more complex models?
     >>
     >> resp: By "more complex" I presume you mean having more "concepts" and
     >> "relevance" connections between concepts. If so, I submit that the
     >> Wikipedia estimate of 1 to 5 quadrillion synapses in the human brain
     >> is major complexity, and if all those connections were properly
     >> tuned, that would be awesome computing. Tuning seems to be the issue.
     >>
     >>
     >
     > I mean having more active data, better memory, tremendously more
     > accurate and powerful computation. How complex our brain is at the
     > synaptic level has not all that much to do with how complex a model
     > we can hold in our awareness and manipulate accurately. We have no
     > way of "tuning the mind," and you would likely get a biological
     > computing vegetable if you could. A great deal of our brain is
     > designed for and supports functions that have nothing to do with
     > modeling or abstract computation.
     >
     >
     >> Q3 why wouldn't a mind with characteristics of ... be able to build
     >> models that "are far better at predicting future events"?
     >>
     >> resp: This is very closely related to the limits of intelligence,
     >> but not the only factor contributing to intelligence. Predictable
     >> events are easy in a few domains, but are they an abundant part of
     >> life? Abundant enough to say that we will be able to make "super"
     >> predictions? Billions of daily decisions are made, and any one of
     >> them could have a butterfly effect.
     >>
     >>
     >
     > Not really, and it ignores the actual question. If a given set of
     > factors of interest is inter-related with a larger number of
     > variables than humans can deal with, then an intelligence that can
     > work with such more complex inter-dependencies will make better
     > decisions in those areas. We already have expert systems that make
     > better decisions more dependably in specialized areas than even most
     > human experts in those domains. I see no reason to expect this to
     > decrease or hit a wall. And this is just using weak AI.
     >
     >> Q4 why wouldn't a mind... be far better able to explain "aspects of
     >> reality"?
     >>
     >> resp: May I propose a simple exercise? Consider yourself to be Bill
     >> Gates in philanthropic mode (ready to give to the world). Make a few
     >> decisions about how to do so, then explain why you chose the avenue
     >> you took. If you didn't delegate this to a committee, would you be
     >> able to explain how the checks you wrote were the best choices in
     >> "reality"?
     >>
     >
     > This is not relevant to the question at hand. Do you think an
     > intelligence with greater memory, computational capacity and vastly
     > greater speed can keep track of more data and generate better
     > hypotheses to explain the data, and tests and refinements of those
     > hypotheses? I think the answer is obvious.
     >
     >>
     >>
     >>>>
     >>>>
     >>>> Deeper thinking - that means considering more options, doesn't it?
     >>>> If so, does extra thinking provide benefit if the evaluation
     >>>> system is only at level X?
     >>
     >>
     >>> What does this mean? How would you separate "thinking" from the
     >>> "evaluation system"? What sort of "evaluation system" do you
     >>> believe can actually exist in reality that has characteristics
     >>> different from those you appear to consider woefully limited?
     >>
     >> Q5 - what does it mean, or how do you separate thinking from an
     >> evaluation system?
     >>
     >> resp: Simple example in two statements:
     >> 1. Apple A is bigger than Apple B.
     >> 2. Apples are better than oranges.
     >>
     >> Does it matter how much you know about apples and oranges? Will deep
     >> thinking about the DNA of apples, the proteins of apples, the color
     >> of apples or the history of apples help to prove the second
     >> statement? Will deep analysis of oranges prove anything?
     >>
     >> Will fast and accurate recall of every related fact about apples and
     >> oranges help in our proof of statement 2? Even if the second
     >> statement had been "Apple A is better than Apple B," we would have
     >> had trouble deciding whether the superior color of A outweighs the
     >> better taste of B.
     >>
     >
     > This is a silly argument, as (2) is a subjective value judgment
     > having nothing to do with more or less intelligence.
     >
     >> This is what I mean by an evaluation system. Foolish example? Think
     >> instead "economic prosperity" is better than "CO2 pollution" if you
     >> want to be real-world.
     >>
     >> Q6 - what sort of "evaluation system" can exist that has
     >> characteristics differing from what I consider woefully limited.
     >>
     >> resp: I'm not clear what communicated the idea that I consider
     >> either the machine intelligence or the human intelligence to be
     >> woefully limited. I concede that machine intelligence will likely be
     >> as good as human intelligence and maybe better than the average
     >> human. Is this super?
     >> Was the "woefully inadequate" in reference to a personal opinion?
     >> Those are not my words; I consider human intelligence a work of art,
     >> brilliant.
     >>
     >>
     >
     > You assert it will not be "super," but you have not made an effective
     > argument for your position. Perhaps you will on the website you
     > mention, but I doubt it.
     >
     > Exactly what is it about human minds that makes us better decision
     > makers and more capable than any other creatures on the planet? Do
     > you believe that we are some pinnacle of intelligence and nothing
     > can come along significantly smarter than us? You seem to be arguing
     > such a position.
     >
     > If you do not believe this, then why would you think it is
     > impossible to build an AGI significantly smarter than us?
     >
     >
     > - samantha





_______________________________________
James Ratcliff - http://falazar.com
Looking for something...


-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=86390938-eb4e16
