An assertion carries the notion of its own truth as something that can be
considered (which is what I meant to say, of course).

On Sun, Mar 17, 2013 at 3:06 PM, Jim Bromer <[email protected]> wrote:

> I didn't mean to say that using probability and weighted reasoning is
> wrong or anything like that.  I just meant that you cannot use probability
> without the supposition of a logically sound frame (or something like one).
> Jim Bromer
>
> On Sun, Mar 17, 2013 at 9:44 AM, Jim Bromer <[email protected]> wrote:
>
>> Charles,
>> The notion of the probability of a prediction (an expectation, in general
>> human language) is nonsensical if you have ruled out assertions.  Since you
>> are not ruling out assertions, you are unwittingly letting the notion of
>> "truth" in the front door even as you are chasing it out the back.  An
>> assertion is a notion of the truth of the assertion as a likely
>> possibility.  If you are saying that an AGI program is capable of somewhat
>> reliably deducing the probability of a prediction, then you are asserting
>> that the process was based on the strength and the truth of the application
>> of the methods used to derive those probabilities.
>>
>> If it were easy for a computer program to estimate the probability of an
>> event from observations of past events, then this kind of discussion
>> would not be relevant to AGI.  The problem is that events are actually
>> complexes, composed not only of distinct 'kinds' of events and some
>> background 'noise' but of different variations of those 'kinds' of events
>> and a great many other events besides.  The science of applying
>> probability and statistics is premised on the methodical actions of an
>> agent who is not only intelligent but highly trained in the applied
>> statistical methods.  The idea that intelligence can be founded on
>> statistics is backwards.
>>  Jim Bromer
>>
>> On Tue, Feb 19, 2013 at 7:56 PM, Charles Hixson <
>> [email protected]> wrote:
>>
>>>  On 02/19/2013 11:03 AM, Piaget Modeler wrote:
>>>
>>>
>>>  I'm sure this topic has been discussed before.  Sorry for rehashing it
>>> if so.  I have a specific question I'd like answered.
>>>
>>>
>>>  In designing a cognitive system, someone made a criticism that utterly
>>> confounded me, and it got me thinking.
>>>
>>>  The system receives sensory data sets from the world and transforms
>>> them into percept propositions which it asserts to
>>> its memory.  Each percept proposition is activated when it is asserted.
>>>  Inferences are made from these percepts.
>>> These initial percepts and the inferences from them are called
>>> "Observables".  All observables can be activated, but there is only a
>>> notion of activation.
>>>
>>>  Next, the system can predict that these observables will recur at some
>>> point.  But the prediction refers only to predicting
>>> the re-activation of observables.
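[Editor's note: a minimal sketch of the pipeline described above, in Python.
All class and method names here are hypothetical illustrations, not part of
PM's actual system: percept propositions are asserted to memory and
activated, inferences over active observables yield new observables, and a
"prediction" refers only to the re-activation of a known observable.]

```python
# Hypothetical sketch of the described percept/observable pipeline.
class Memory:
    def __init__(self):
        self.observables = {}  # proposition -> activation flag

    def assert_percept(self, proposition):
        # Asserting a percept proposition activates it.
        self.observables[proposition] = True

    def infer(self, premise, conclusion):
        # An inference from an active observable yields a new observable,
        # which is also activated.
        if self.observables.get(premise):
            self.observables[conclusion] = True

    def decay(self):
        # Deactivate everything; the propositions themselves remain known.
        for p in self.observables:
            self.observables[p] = False

    def predict_reactivation(self, proposition):
        # A "prediction" is only the claim that an already-known observable
        # will be activated again; there is no notion of truth, only of
        # (re-)activation.
        return proposition in self.observables

m = Memory()
m.assert_percept("red-patch")
m.infer("red-patch", "object-present")
m.decay()
print(m.predict_reactivation("object-present"))  # True: a known observable
```

Note that nothing in this sketch assigns a truth value; the only state an
observable has is whether it is currently activated.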
>>>
>>>  Then someone asked: where is the notion of TRUTH in your system?  I
>>> was flabbergasted.  Speechless.  Then I asked,
>>> well, what is truth?  I checked Wikipedia.  (
>>> http://en.wikipedia.org/wiki/Truth )
>>>
>>>
>>>  It turns out that when someone says something is true, it can mean many
>>> things:
>>>
>>>  a) It means that the statement is logically consistent (validity),
>>> b) that the statement corresponds, concurs, or conforms to reality
>>> (verity),
>>> c) that one is sure of the statement (certainty / confidence),
>>> d) that the statement is likely to occur rather than unlikely
>>> (likelihood), and
>>> e) that we agree with the statement (agreement).
>>>
>>>  So my questions are:
>>>
>>>  (1) Is truth necessary or important to a cognitive system?
>>> (2) Which notion of truth should a cognitive system model?
>>> (3) How do we ascribe truth (values) to sensory input or inferences
>>> derived from sensory input?
>>>
>>>  Your thoughts?
>>>
>>>  ~PM.
>>>
>>>
>>>
>>> Truth is an illusion.  It is the belief that what you believe to be most
>>> likely is, in fact, inevitable.
>>>
>>> An AI doesn't need the concept of truth...except to communicate with
>>> people.  Internally it can operate on graded degrees of probability,
>>> cost, benefit, etc.  When communicating with people it needs to condense
>>> that, so that when something has more than a certain amount of
>>> probability, and the benefit of asserting it is sufficiently large, and
>>> the cost of being wrong is sufficiently small, it summarizes all of this
>>> by proclaiming "truth".  It's my belief that people operate in the same
>>> way, though this is disguised because different people use different
>>> constraints on things like "What is probable enough?".  Also note that
>>> the cost and the benefit are figured on the basis of the cost/benefit to
>>> the entity proclaiming a truth rather than to those accepting it.
>>>
>>> So perhaps we would want a sufficiently capable AI to avoid talking
>>> about truth, and instead talk about what the probabilities are, and what
>>> costs and benefits can be expected.  It's a bit harder to understand, but
>>> it strikes me as much safer.
>>>
>>> --
>>> Charles Hixson
>>>
>>>
>>
>>
>
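[Editor's note: Charles's "condensation" of graded internal state into a
proclaimed "truth" can be sketched as a simple decision rule.  The threshold
values below are illustrative assumptions only; he specifies no particular
numbers, and his point is precisely that different people use different
thresholds.]

```python
# Sketch of the described condensation rule: the agent keeps graded
# probabilities, costs, and benefits internally, and proclaims "truth"
# only when all three clear thresholds.  Thresholds are illustrative.
def proclaim_truth(probability, benefit_of_asserting, cost_of_being_wrong,
                   p_min=0.95, benefit_min=1.0, cost_max=0.5):
    """Return True when the agent would condense its graded internal
    state into the public claim 'this is true'."""
    return (probability > p_min
            and benefit_of_asserting > benefit_min
            and cost_of_being_wrong < cost_max)

# Two agents with the same internal estimates can disagree about "truth"
# simply because the cost of being wrong differs for each of them.
print(proclaim_truth(0.99, benefit_of_asserting=2.0,
                     cost_of_being_wrong=0.1))  # True
print(proclaim_truth(0.99, benefit_of_asserting=2.0,
                     cost_of_being_wrong=5.0))  # False
```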



-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/21088071-f452e424
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=21088071&id_secret=21088071-58d57657
Powered by Listbox: http://www.listbox.com
