From: Stanley Nilsen <senil...@ghvalley.net>
Sent: 13 April 2017 02:10 AM
To: AGI
Subject: Re: [agi] I Still Do Not Believe That Probability Is a Good Basis for 
AGI - with responses

"I will have to plead that I don't really understand "existential" logic, so I 
can't really guess as to how it solves the examples."

When objective-enough facts can be established, the need for guesswork is 
removed. Here is an example of existential logic. You and I are busy having a 
conversation on the topic of AGI. Can it be proven that:
a) The context of this conversation exists?
b) Your persona with name X exists?
c) My persona with name X exists?

If the answer to any one of these questions were "Yes", and physical evidence 
could be produced to substantiate the claim (email record, profile data, 
in-session camera data, etc.), then a conclusion could be reached that the 
whole pertaining to the "Yes" exists. On this basis a computer "brain" could 
progress to testing the assumption that both you and I are probably human, and 
publish the substantiated decision accordingly.
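
A minimal sketch of that test, assuming an existence claim is accepted only 
when at least one piece of physical evidence is on file (the claim names and 
the evidence store below are hypothetical):

    # Hypothetical evidence store: claim -> pieces of physical evidence.
    EVIDENCE = {
        "conversation_context": ["email record"],
        "your_persona": ["profile data"],
        "my_persona": [],  # nothing substantiating yet
    }

    def exists(claim):
        """A claim is held to exist only if substantiating evidence is on file."""
        return bool(EVIDENCE.get(claim))

    # If any one claim is substantiated, the whole pertaining to the "Yes"
    # is accepted, and the next hypothesis can be queued for testing.
    if any(exists(claim) for claim in EVIDENCE):
        print("Proceed to test: both parties are probably human")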


 "But I would like to venture into the area of "sense of correctness is born in 
the mind of humans" as a way to make a case for
abstraction."

Abstraction is provable via function and relational theory.
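
One hedged way to read that claim: treat an abstraction as a function f from 
concrete items to abstract classes; the relation "f(a) == f(b)" it induces is 
then provably an equivalence relation. A small sketch (the mapping below is 
hypothetical):

    from itertools import product

    # Hypothetical abstraction: a function from concrete items to classes.
    f = {"sparrow": "bird", "eagle": "bird", "salmon": "fish", "trout": "fish"}
    items = list(f)

    def related(a, b):
        """The relation induced by the abstraction function f."""
        return f[a] == f[b]

    # The induced relation is an equivalence relation:
    assert all(related(a, a) for a in items)                        # reflexive
    assert all(related(b, a) for a, b in product(items, repeat=2)
               if related(a, b))                                    # symmetric
    assert all(related(a, c) for a, b, c in product(items, repeat=3)
               if related(a, b) and related(b, c))                  # transitive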


"In the mind of a human we often "feel" the sense of what choice is correct.  
We generally do not fill in a spreadsheet that gives us a
correct answer.   Most would agree that a "feeling" is more abstract than a 
reason.   In Antonio Damasio's book "The Feeling of What Happens" chapter 2 
Emotion and Feeling, he mentions clinical cases that document difficulty in 
decision making for persons who have brain damage affecting emotions.  The 
"rationale" part of the person is intact but the emotional person is broken.  
Such people can have great difficulty making a decision."

Pun intended, this should be a no-brainer. Impaired brain = impaired decision 
making. A feeling could be more abstract than a reason, unless the reason for 
the feeling was a chemical reaction that occurred automatically in response to 
a particular sensory stimulus. Again, my point about a relative-reality view 
and the need for a coherent methodological approach stands.


"The relationship to abstraction is that there is some mechanism that is able 
to take a set of facts and produce a feeling.  The "stronger" or "better" the 
feeling, the more we incline to that decision."

True, shit indeed floats (pardon the directness). As long as we know what the 
substance is and have the means to sift the wheat from the chaff, we would 
retain a way to navigate complex-adaptive reality.


"Somewhere along the line our artificial intelligence will need to implement 
abstraction.  And, don't be surprised if when it is asked
how it arrived at the decision to do A instead of B, it replies that A just 
felt better.  : )"

XAI (explainable AI) hopes for the machine to tell us exactly what logic it 
followed to arrive at a decision.


"And so often we reply "it may be correct in your mind..."

Actually, the group-average method works pretty well. It takes the mean of a 
group's "feels most correct to my mind" responses as the most probable answer.


"I prefer to think in terms of adoption or not adopted.  We buy it or reject.  
When given an assumption, I choose to employ it into my thinking or not."

Curve-ball for you: Does free will really exist?


"And, most of the time people are okay with acceptance based on trust of the 
presenter,  or shallow argument.  Plenty of assumptions are adopted on surface 
knowledge rather than "testable" experience."

If we all participated in a decision-making process, then agreed on the result, 
it may be counted as "testable" experience.


"And, that is pretty much my point, the AI will use references and trusted 
sources (the programmer?) rather than provable or
testable assertions."

I'll have to disagree with you here. The AGI would use its own brain to collect 
data, discover potential domain knowledge, make
assumptions, and learn from its decision processes.


"It would be ideal if one could give the reality test to assumptions - that is, 
we had enough chances to "field" test every assumption.  Granted, our "science" 
slowly improves our understanding of reality, and our projections are better.  
But, to get the ball rolling we
need to rely on assumptions."

I think this is a rather significant point you're making. A hypothesis is also 
an assumption. It takes research methodology to turn it into a
tangible result.


________________________________
From: Stanley Nilsen <senil...@ghvalley.net>
Sent: 13 April 2017 02:10 AM
To: AGI
Subject: Re: [agi] I Still Do Not Believe That Probability Is a Good Basis for 
AGI

On 04/12/2017 01:50 PM, Nanograte Knowledge Technologies wrote:

"That is the abstraction problem of AGI.  For example, is health a more 
significant domain than finance?  Is public service better for the AGI than 
bettering the skills of the AGI?"

I see no abstraction problem with AGI. The examples you posed as problems are 
fairly easily resolvable via existential logic. "Domain Provable" probabilistic 
choices flow from existential logic. That is where a sense of correctness is 
born in the mind of humans, and it could be so in computerized machines also. 
And once sense is established, consciousness becomes possible. However, it does 
return me to the obvious need for an adequate 'de-abstraction' methodology.


I will have to plead that I don't really understand "existential" logic, so I 
can't really guess as to how it solves the examples.  But I would like to 
venture into the area of "sense of correctness is born in the mind of humans" 
as a way to make a case for abstraction.

In the mind of a human we often "feel" the sense of what choice is correct.  We 
generally do not fill in a spreadsheet that gives us a correct answer.   Most 
would agree that a "feeling" is more abstract than a reason.   In Antonio 
Damasio's book "The Feeling of What Happens" chapter 2 Emotion and Feeling, he 
mentions clinical cases that document difficulty in decision making for persons 
who have brain damage affecting emotions.  The "rational" part of the person 
is intact but the emotional person is broken.  Such people can have great 
difficulty making a decision.

The relationship to abstraction is that there is some mechanism that is able to 
take a set of facts and produce a feeling.  The "stronger" or "better" the 
feeling, the more we incline to that decision. Hence, I believe the human is 
abstracting as a decision-making method.
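
A speculative sketch of such a mechanism: some function collapses a set of 
facts into a single scalar "feeling", and the stronger feeling wins. The 
weights and facts below are hypothetical:

    # Hypothetical weighting of fact types; tuning these is the hard part.
    weights = {"safety": 0.6, "cost": -0.3, "novelty": 0.1}

    def feeling(facts):
        """Abstract a set of weighted facts into one scalar 'feeling'."""
        return sum(weights.get(k, 0.0) * v for k, v in facts.items())

    option_a = {"safety": 0.9, "cost": 0.5, "novelty": 0.2}
    option_b = {"safety": 0.4, "cost": 0.2, "novelty": 0.9}

    # The "stronger" feeling inclines the decision.
    choice = "A" if feeling(option_a) > feeling(option_b) else "B"
    print(choice, "just felt better")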

Somewhere along the line our artificial intelligence will need to implement 
abstraction.  And, don't be surprised if when it is asked how it arrived at the 
decision to do A instead of B, it replies that A just felt better.  : )


"The issue is, where will AGI get the assumptions?  And, how rigorous will the 
process be for accepting a new assumption?"

The relative terms 'right' and 'wrong', 'good' and 'bad' etc. carry their own 
poison. I prefer to use the term 'correct', to relate a decision to a scenario 
option. Indeed, 'correct' also denotes a judgment, but it more strongly denotes 
the testable outcome of an assumption, relative to a knowledge base.


And so often we reply "it may be correct in your mind..."

I prefer to think in terms of adopted or not adopted.  We buy it or we reject 
it.  When given an assumption, I choose to employ it in my thinking or not.  And, 
most of the time people are okay with acceptance based on trust of the 
presenter,  or shallow argument.  Plenty of assumptions are adopted on surface 
knowledge rather than "testable" experience.
And, that is pretty much my point, the AI will use references and trusted 
sources (the programmer?) rather than provable or testable assertions.


AGI would get its assumptions by learning, per contextual schema, what a scale 
of correctness would yield. Instead of just the two poles of 'correct' and 
'not correct', many other points of correctness could be defined and placed on 
such a scale to introduce decision granularity, and so increase the overall 
probability of an assumption becoming testable relative to reality.
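
A speculative sketch of such a scale, assuming tested outcomes land in [0, 1]; 
the grade labels and cutoffs below are hypothetical:

    # Hypothetical points of correctness between the two poles.
    SCALE = [
        (0.9, "correct"),
        (0.7, "mostly correct"),
        (0.5, "partially correct"),
        (0.3, "mostly incorrect"),
        (0.0, "not correct"),
    ]

    def grade(score):
        """Map a tested outcome in [0, 1] onto the correctness scale."""
        for cutoff, label in SCALE:
            if score >= cutoff:
                return label
        return "not correct"

    print(grade(0.75))  # -> mostly correct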


It would be ideal if one could give the reality test to assumptions - that is, 
we had enough chances to "field" test every assumption.  Granted, our "science" 
slowly improves our understanding of reality, and our projections are better.  
But, to get the ball rolling we need to rely on assumptions.

The granularity begins to sound like "subtle" feeling assessments - more 
abstraction.


How rigorous will the process be for accepting a new assumption? Not rigorous 
at all. The "most true, or most correct" result would always inform the 
validity and reliability of any assumption. The strongest genes would survive.
        For example:
            Start: apply logic, then form an assumption, else chain-react.
            Loop until the value on the "correctness" scale reaches <x>.
            Exit. End.
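
A minimal sketch of that loop, assuming a field test that returns a 
correctness score in [0, 1]; test_assumption() below is a hypothetical 
stand-in:

    import random

    def test_assumption(assumption):
        """Hypothetical field test; a real one would measure outcomes."""
        return random.random()

    def adopt(assumption, x=0.8, max_trials=100):
        """Loop until the "correctness" value reaches x, then exit."""
        for _ in range(max_trials):
            if test_assumption(assumption) >= x:
                return True   # exit: the assumption survives
        return False          # never reached x: the "gene" dies off

    print(adopt("health outranks finance"))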

Results are hard to validate except under rigorous lab conditions. The 
"general" world is not so controlled as to distinguish exactly which assumption 
will reliably produce the "better" result.

Given the computational resources of the universe, we may compute for billions 
of years and find out that the universe doesn't find any particular 
configuration to be "better" (if you prefer the Godless view of things).


________________________________
From: Nanograte Knowledge Technologies <nano...@live.com>
Sent: 12 April 2017 09:29 PM
To: a...@listbox.com
Subject: Re: [agi] I Still Do Not Believe That Probability Is a Good Basis for 
AGI


"Okay, but this begs the question of how you define AGI.  Domain knowledge is 
the distinguishing point of what might be called regular AI.  It is the General 
part of AGI that doesn't allow a domain-intense approach."

I do not have my own definition of AGI. Any accepted definition is fine by me, 
but I understand AGI to mean that a computerized machine would be able to 
exhibit human functionality via human-like brain functionality, as sentient 
intelligence. In the main, domain knowledge pertains to knowledge about any 
domain. Knowledge to me is not AI, but it could be argued to be so. To me, AI 
is reasoning towards knowledge.

On the contrary, I would contend that it is exactly the General part of AGI 
that most allows for a domain-intense approach. If we replaced the broader 
term 'domain' with the more specialized term 'schema', and expanded it to 
specifically mean 'contextual schema', would your argument still hold equally 
strongly?



________________________________
From: Stanley Nilsen <senil...@ghvalley.net>
Sent: 12 April 2017 05:16 PM
To: AGI
Subject: Re: [agi] I Still Do Not Believe That Probability Is a Good Basis for 
AGI

On 04/11/2017 10:00 PM, Nanograte Knowledge Technologies wrote:

The moment relationships of any functional value (associations) and any 
framework of hierarchy (systems) can be established and tested against all 
known (domain) knowledge, and even changed if the rules driving such a 
hierarchy should change (adaptation), the result may be regarded as a concrete 
version of a probabilistic framework.


Okay, but this begs the question of how you define AGI.  Domain knowledge is 
the distinguishing point of what might be called regular AI.  It is the General 
part of AGI that doesn't allow a domain-intense approach.

Is it accepted that the "general" indicates that we are looking across domains 
into the realm of all domains?  And, we have to choose between actions coming 
from multiple domains.   One might call this "meta-domain" knowledge.  Such 
knowledge, I believe, would require abstraction.  That is the abstraction 
problem of AGI.  For example, is health a more significant domain than finance? 
 Is public service better for the AGI than bettering the skills of the AGI? 
Choices, choices, choices...


To contend: Probability may not be a "good" basis for AGI, much as love may 
not be a good basis for marriage, but what might just be a "good" basis is a 
reliable engine (a reasoning and unreasoning computational framework) for 
managing relativity. This is where philosophy started: unraveling a reasoning 
ontology.


I don't think probability is a problem.  A piece of knowledge may increase the 
chance that we see the situation accurately, and accuracy will help us be more 
specific about our response.  That said, it is the way we put assumptions 
together that will determine our final action.

Probability has been used in that we think our assumptions are "probably" 
right.   It is the qualifying of our assumptions that distinguishes the quality 
of our actions.  Adopt sloppy assumptions and your results will probably not 
always be appropriate or best - not super intelligent.

An "advanced" system will have some mechanism for adopting assumptions (most 
currently rely on the judgment of the programmer.)   It is this process of 
evaluating assumptions that we tend to get abstract.  Since we are calling 
these "heuristics" assumptions, there is an implication that we can't prove 
this premise that we are adopting.   Most likely we can't prove because the 
premise we choose to build on is abstract - at least has elements of 
abstraction that won't allow a clear logical conclusion.

The issue is, where will AGI get the assumptions?  And, how rigorous will the 
process be for accepting a new assumption?



