On Sat, Aug 9, 2008 at 5:35 PM, Charles Hixson
<[EMAIL PROTECTED]> wrote:
> Jim Bromer wrote:
>> As far as I can tell, the idea of making statistical calculations about
>> what we don't know is only relevant under three conditions:
>> 1. The accuracy of the calculation is not significant.
>> 2. The evaluation is near 1 or 0.
>> 3. The problem of what is not known falls clearly within a generalization
>> category, and a measurement of the uncertainty is also made within a
>> generalization category valid for the other generalization category.
>>
>> But we can make choices about things that are not known based on opinion.

>
> Could you define "opinion" in an operational manner, i.e. in such a way that
> it was specified whether a particular structure in a database satisfied that
> or not?  Or a particular logical operation?
> Otherwise I am forced to consider opinion as a conflation of probability
> estimates and desirability evaluations.  This doesn't seem consistent with
> your assertion (i.e., if you intended opinion to be so defined, you wouldn't
> have responded in that way), but I have no other meaning for it.


I could define the difference between opinion and general knowledge in
abstract terms, but it is extremely difficult to come up with an
operational principle that could be used to reliably detect opinion.
This is true when dealing with human opinion, so why wouldn't it be
true when dealing with AI opinion?  Most facts are supported by
opinion, and most opinions are supported by some facts, although the
connection may be difficult to see in some cases.

Your opinion that opinion itself can be defined 'as a conflation of
probability estimates and desirability evaluations' avoids the
difficulty of the definition by making it dependent on two concepts,
neither of which is necessary, and both of which would require some
kind of arbitrary evaluation system in most cases.  Opinion can be
derived without probability or an evaluation of desirability, and
opinion is not necessarily dependent on some kind of weighted system
of numerical measurement.

But while I cannot provide an operational definition that is
absolutely reliable in all cases, I can begin to discuss it as if it
were still an open question (as opposed to settling on an arbitrary
definition).

Opinion will be mixed with facts in almost all cases.  One can only
start to distinguish them by devising standard systems that attempt to
separate and categorize them.  Such a system is going to be imperfect,
just as it is in everyday life.  This idea of creating standard
methods that can be applied to general classes of things is
significant because it is related to the problem of 'grounding'
opinions or theories in 'observable events'.
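To make the idea of a "standard system" for separating opinion from fact a
little more concrete, here is a toy sketch.  The marker list, threshold-free
rule, and function name are all invented for this example; no one should
mistake it for a proposal about how such detection would really work:

```python
# Toy illustration only: a crude "standard method" that flags a
# statement as likely-opinion when it contains hedging or evaluative
# cues.  The marker set below is invented for the example.
OPINION_MARKERS = {"i think", "i feel", "probably", "seems",
                   "should", "best", "worst", "in my view"}

def looks_like_opinion(statement: str) -> bool:
    """Return True when the statement contains an opinion marker."""
    text = statement.lower()
    return any(marker in text for marker in OPINION_MARKERS)

statements = [
    "Water boils at 100 C at sea level.",
    "I think text-based grounding is weaker than human grounding.",
]
for s in statements:
    label = "opinion?" if looks_like_opinion(s) else "fact?"
    print(label, "-", s)
```

Of course such a method is exactly the kind of imperfect system described
above: it will misclassify unhedged opinions and hedged facts, which is why
it would have to earn its status by testing rather than by definition.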

My imaginary AI program would use categorical reasoning, but it would
also be able to learn.  I would use text-based IO at first, so in
this sense 'grounding' would have to be based on textual interactions.
This kind of grounding would be weaker than the grounding that humans
are capable of, but people are limited too, in their own way.

Since opinion and fact seem to be gnarled and intertwined, I feel that
the use of standard methods to examine the problem is necessary.  Why
'standard methods'?  Because a standard method would be established
only after passing a series of tests demonstrating the kind of
reliability that would be desirable for the kinds of problems it
would be applied to.  This reliability could be measurable in some
cases, but measurability is not a necessary aspect of detecting
opinion.  Another aspect of developing standard methods is that, by
relying on highly reliable components and by narrowing the variation
in individual interpretation, these standard methods could act as a
base for methods of grounding.  Ironically, this also helps to bind
together individual opinions from human society about what is fact and
what is not, but that process is helpful as long as it is not
totalitarian.
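The promotion of a method to "standard" status only after it passes a test
suite can be sketched mechanically.  Everything here (the registry shape, the
placeholder method, the tests) is hypothetical scaffolding for illustration,
not a claim about how real AI components would be certified:

```python
# Toy sketch: a method is promoted from "candidate" to "standard" only
# after it passes every test in the suite registered for it.  The
# example method and its tests are placeholders.

def make_registry():
    # Two pools: methods under evaluation, and accepted standard methods.
    return {"candidate": {}, "standard": {}}

def propose(registry, name, method, tests):
    """Register a candidate method along with its reliability tests."""
    registry["candidate"][name] = (method, tests)

def evaluate(registry, name):
    """Promote the candidate to standard if all of its tests pass."""
    method, tests = registry["candidate"][name]
    if all(test(method) for test in tests):
        registry["standard"][name] = method
        del registry["candidate"][name]
        return True
    return False

# Example: a trivial "method" and the tests it must survive.
double = lambda x: 2 * x
tests = [lambda m: m(2) == 4, lambda m: m(0) == 0]

reg = make_registry()
propose(reg, "double", double, tests)
print(evaluate(reg, "double"))        # → True
print("double" in reg["standard"])    # → True
```

The design point is simply that acceptance is gated by demonstrated
reliability on a defined class of problems, not by anyone's say-so, which is
what keeps the process from becoming totalitarian.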

So an opinion that contains some truth, but cannot attain a standard
of reliability when the established standard methods for examining
similar problems are applied to it, would have to continue to be
considered an opinion.  Of course, a theory might only come to be
considered an opinion after the thinking device is exposed to an
alternative theory that explains some of the reference data in another
way.

This problem is directly related to the greater problem of artificial
judgment and to the lack of elementary methods that could act as the
'independent variables' needed to produce higher AI.  That means I
think the problem is AI-Complete (to use an interesting phrase that
someone in the group has used).

Jim Bromer

