On Sun, Aug 17, 2008 at 10:52 PM, Charles Hixson
<[EMAIL PROTECTED]> wrote:
> Well, one point where we disagree is on whether "truth" can actually be
> known by anything.  I don't think this is possible.  So to me that which is
> called truth is just something with a VERY high probability, and which is
> also consistent with the mental models that one is employing to describe
> "what's out there".

We may comprehend a truth, but that does not mean that we can assess
the truth of anything.  I agree with that.  But the same thing goes
for probability.

> I don't understand how you can assert that estimation of probability isn't
> necessary.  One can't even walk across the room without estimating that one
> will find a floor under one's feet.

I was talking about the use of a metric of probability.  Notice that
your example of 'probability' did not require a numerical estimate.
My complaint with using numerical probability is that you are
assigning a (probably) false estimate of probability which is then
mixed with other (probably) false estimates.  Or, let's just call
them questionable estimates of probability.
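To make this concrete, here is a toy sketch (the numbers are invented purely for illustration) of how modest errors compound when questionable estimates are multiplied together:

```python
# Toy illustration: four 'true' probabilities and four questionable
# estimates, each off by only 0.05.
true_probs = [0.9, 0.8, 0.7, 0.6]
estimates = [0.95, 0.85, 0.75, 0.65]

def joint(probs):
    """Multiply independent probability estimates together."""
    result = 1.0
    for p in probs:
        result *= p
    return result

true_joint = joint(true_probs)   # 0.3024
est_joint = joint(estimates)     # ~0.3937
# Modest individual errors have compounded into a ~30% relative error.
print(f"true={true_joint:.4f}  estimated={est_joint:.4f}  "
      f"relative error={(est_joint - true_joint) / true_joint:.0%}")
```

Each estimate alone looks reasonable; it is the mixing that quietly inflates the error.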

> I don't think it desirable to segment knowledge into "general knowledge" and
> "opinion".  To me "general knowledge" describes beliefs about the world that
> are presumed to be commonly shared (by some group), and opinion is
> "personal" beliefs that are based on not only general knowledge, but also on
> the entire rest of one's modeling of the world.

Well, this seems like a very personal opinion about opinion.

I don't feel that we can generally separate opinion from general
knowledge. However, we can examine the reliability of the 'facts' that
are the basis of our own opinions.  This is the basis of science and
this idea suggests that it might be the basis of higher intelligence
as well.

Originally I said, "But we can make choices about things that are not
known based on opinion."

What I meant was that we can make choices about things (with or
without the use of numerical estimates) based on general knowledge
(even if that knowledge is more opinion than fact).  So an operational
definition to make the distinction between opinion and fact is not
necessary for an AGI program - as far as this one statement goes.
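For what it's worth, here is a hedged sketch of what such an operational definition might look like if one did want it.  The rule and the data shapes are entirely my own invention; the point is only that any such rule ends up being an arbitrary choice:

```python
# Hypothetical sketch: an 'operational definition' of opinion as a
# predicate over a statement structure.  The rule used here (any
# unverified support makes it an opinion) is one arbitrary choice
# among many possible ones.
def is_opinion(statement):
    """Return True if the statement rests on unverified support."""
    supports = statement.get("supports", [])
    if not supports:
        return True  # no support at all: treat as pure opinion
    return any(not s.get("verified", False) for s in supports)

fact = {"text": "water boils at 100 C at sea level",
        "supports": [{"text": "repeated measurement", "verified": True}]}
opinion = {"text": "numerical probability is overused in AI",
           "supports": [{"text": "personal impression", "verified": False}]}

print(is_opinion(fact))     # False
print(is_opinion(opinion))  # True
```

Notice that the predicate only pushes the problem back a level: deciding what counts as "verified" is itself a judgment call.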

Most AI people who are heavily dependent on the use of numerical
probability (but who were not experts in statistics before they became
interested in AI) don't understand how ideas can be integrated without
the use of some kind of general method like logic, heuristics or
probability.  You may not realize that I am not arguing for a choice
between traditional AI paradigms of (Logic, Neural Networks,
Heuristics, GAs) vs (Probability-Statistics).  What I am saying is
that greater insight into the nature of how ideas work can produce
more sophisticated results in AI than these other more established
paradigms.

Jim Bromer


On Sun, Aug 17, 2008 at 10:52 PM, Charles Hixson
<[EMAIL PROTECTED]> wrote:
> This is probably quibbling over a definition, but:
> Jim Bromer wrote:
>>
>> On Sat, Aug 9, 2008 at 5:35 PM, Charles Hixson
>> <[EMAIL PROTECTED]> wrote:
>>
>>>
>>> Jim Bromer wrote:
>>>
>>>>
>>>> As far as I can tell, the idea of making statistical calculations about
>>>> what we don't know is only relevant under three conditions:
>>>> 1. The accuracy of the calculation is not significant.
>>>> 2. The evaluation is near 1 or 0.
>>>> 3. The problem of what is not known is clearly within a generalization
>>>> category, and a measurement of the uncertainty is also made within a
>>>> generalization category valid for the other generalization category.
>>>>
>>>> But we can make choices about things that are not known based on
>>>> opinion.
>>>>
>>
>>
>>>
>>> Could you define "opinion" in an operational manner, i.e. in such a way
>>> that
>>> it was specified whether a particular structure in a database satisfied
>>> that
>>> or not?  Or a particular logical operation?
>>> Otherwise I am forced to consider opinion as a conflation of probability
>>> estimates and desirability evaluations.  This doesn't seem consistent
>>> with
>>> your assertion (i.e., if you intended opinion to be so defined, you
>>> wouldn't
>>> have responded in that way), but I have no other meaning for it.
>>>
>>
>>
>> I could define the difference between opinion and general knowledge
>> with abstract terms but it is extremely difficult to come up with an
>> operational principle that could be used to reliably detect opinion.
>> This is true when dealing with human opinion, so why wouldn't it be
>> true when dealing with AI opinion?  Most facts are supported by
>> opinion and most opinions are supported by some facts, although the
>> connection may be somewhat difficult to see in some cases.
>>
>> Your opinion that opinion itself can be defined, 'as a conflation of
>> probability estimates and desirability evaluations,' avoids the
>> difficulty of the definition by making it dependent on two concepts,
>> neither of which is necessary and both of which would require some
>> kind of arbitrary evaluation system for most cases.  Opinion can be
>> derived without probability or an evaluation of desirability.  And
>> opinion is not necessarily dependent on some kind of weighted system
>> of numerical measurement.
>>
>> But while I cannot provide an operational definition that is
>> absolutely reliable for all cases, I can begin to discuss it as if it
>> were still an open question (as opposed to an arbitrary definition).
>>
>> Opinion will be mixed with facts in almost all cases.  One can only
>> start to distinguish them by devising standard systems that attempt to
>> separate and categorize them.  This system is going to be imperfect
>> just as it is in everyday life.  This idea of creating standard
>> methods that can be used for general classes of things is
>> significant because it is related to the problem of 'grounding'
>> opinions or theories onto 'observable events'.
>>
>> My imaginary AI program would use categorical reasoning but it would
>> also be able to learn.  I would use text-based IO at first.  So in
>> this sense 'grounding' would have to be based on textual interactions.
>> This kind of grounding would be weaker than the grounding that humans
>> are capable of, but people are limited too, in their own way.
>>
>> Since opinion and fact seem to be gnarly and intertwined, I feel that
>> the use of standard methods to examine the problem is necessary.  Why
>> 'standard methods'?  Because standard methods would be established
>> only after passing a series of tests to demonstrate the kind of
>> reliability that would be desirable for the kinds of problems that
>> they would be applied to. This kind of reliability could be measurable
>> in some cases, but measurability is not a necessary aspect of
>> detecting opinion.  And another aspect of developing standard methods
>> is that by relying on highly reliable components and by narrowing the
>> variations of individual interpretation, these standard methods could
>> act as a base for methods of grounding.  Ironically, this helps to
>> bind together individual opinions across human society about what is
>> fact and what is not, but this process is helpful as long as it is not
>> totalitarian.
>>
>> So an opinion that contains some truth, but cannot attain a standard
>> of reliability based on the use of established standard methods to
>> examine similar problems, would have to continue to be considered an
>> opinion.  Of course, a theory might only be considered to be an
>> opinion after the thinking device is exposed to an alternative theory
>> that explains some reference data in another way.
>>
>> This problem is directly related to the greater problem of artificial
>> judgment and the lack of elementary methods that could act as the
>> 'independent variables' to produce higher AI.  That means that I think
>> the problem is AI-Complete (to use an interesting phrase that someone
>> in the group has used).
>>
>> Jim Bromer
>>
>
> Well, one point where we disagree is on whether "truth" can actually be
> known by anything.  I don't think this is possible.  So to me that which is
> called truth is just something with a VERY high probability, and which is
> also consistent with the mental models that one is employing to describe
> "what's out there".
>
> I don't think it desirable to segment knowledge into "general knowledge" and
> "opinion".  To me "general knowledge" describes beliefs about the world that
> are presumed to be commonly shared (by some group), and opinion is
> "personal" beliefs that are based on not only general knowledge, but also on
> the entire rest of one's modeling of the world.  As such I think of them as
> estimates of how things probably will work out (did work out?  tense is
> situational) that cannot be expected to be generally shared...but which may
> well be shared by some subset of the class with "general knowledge".
> Note that since opinions, by my estimation, are based on modeling actions,
> they are less certain than the facts and models that they are based on.
> Mistakes in application are possible.  Also it's usually true that
> heuristics are used in creating opinions, so they are even less certain.
>
> Note that "facts" are only memories of past experiences, and are therefore
> possibly incorrect.  Not only is the process of storage subject to errors,
> but "facts" aren't the experiences, but are rather only a compressed (via a
> lossy compression) rendition of the experience.  I suspect that these
> features will be necessary in any intelligent system.  Retrieval is also a
> problem.  The needed facts may not be remembered when any particular cycle
> of running the model is in process, and were they present then they might
> affect the result.  Perfection is not in this universe.  (I note that we are
> probably using the term "facts" differently.  This depends on precisely what
> you mean when you say "Most facts are supported by opinion".)  I would also
> assert that ALL opinions are supported by SOME facts.  Just not enough, so
> one is forced to various means of estimation.  Often the means of estimation
> used will be that which is most convenient rather than that which is most
> accurate.
>
> I don't understand how you can assert that estimation of probability isn't
> necessary.  One can't even walk across the room without estimating that one
> will find a floor under one's feet.  Desirability estimates are necessary
> because without desirability one wouldn't bother to form an opinion.
> Calling the weights numeric is reflecting the implementation.  Other means
> of estimating probability are possible (e.g., it can be done with shifting
> weights), but numbers (or rather their electrical analog) are what a
> computer deals with, so a computer model uses numbers (or rather a pattern
> of electrical charges that is manipulated in a means isomorphic to the way
> that numbers are manipulated).  This feels like extreme quibbling.  Analog
> systems aren't numeric; digital systems are (up to isomorphism, and
> excluding errors).
>
> An operational definition would be one that I could use as a programmer
> examining the code.  I don't expect to reliably be able to tell the
> difference between a statement of opinion and a statement of fact or belief
> when examining the outputs.  (As you say, you can't do it with people,
> either.)
>
> As to definitions, either a word can be defined, or it's a meaningless
> noise.  The definition may not be totally precise (few are outside of
> mathematics...and not all of those), but it draws a fuzzy boundary.
> Operational just means that you can use it, or theoretically could use it,
> to determine whether in a particular instance it applied.  In a program,
> given the inputs and the outputs, any sufficiently explicit definition could
> be made operational, but frequently this would involve lots of nitpicking
> over the precise meaning of the words used in the definition.  So it's
> simpler to just start off by asking for an operational definition, i.e., one
> you could use in a test.  (Note that in any particular instance my proposed
> definition would also require detailed examination of the code and tracing
> the flow between inputs and outputs, so as a matter of practice it would be
> essentially impossible to apply it operationally.  Still, it could
> theoretically be so applied, and that is an important distinction.  And one
> could intentionally design a system that followed that definition.)
>
> I think I'll stop now, unless you want comments on the remainder of the
> post.
>
>
>
> -------------------------------------------
> agi
> Archives: https://www.listbox.com/member/archive/303/=now
> RSS Feed: https://www.listbox.com/member/archive/rss/303/
> Modify Your Subscription:
> https://www.listbox.com/member/?&;
> Powered by Listbox: http://www.listbox.com
>

