"That is the abstraction problem of AGI. For example, is health a more
significant domain than finance? Is public service better for the AGI than
bettering the skills of the AGI?"
I see no abstraction problem with AGI. The examples you posed as problems are
fairly easily resolved via existential logic. "Domain Provable" probabilistic
choices flow from existential logic. That is where a sense of correctness is
born in the human mind, and it could be so in computerized machines as well.
Once a sense of correctness is established, consciousness becomes possible.
However, this does return me to the obvious need for an adequate
<deabstraction> methodology.
"The issue is, where will AGI get the assumptions? And, how rigorous will the
process be for accepting a new assumption?"
The relative terms 'right' and 'wrong', 'good' and 'bad', and so on carry
their own poison. I prefer the term 'correct' to relate a decision to a
scenario option. Indeed, 'correct' also denotes a judgment, but it more
strongly denotes the testable outcome of an assumption relative to a knowledge
base.
AGI would get its assumptions from learning, per contextual schema, where an
outcome falls on a scale of correctness. Instead of just the two poles of
'correct' and 'not correct', many intermediate points of correctness could be
defined and placed on such a scale to introduce decision granularity, and so
increase the overall probability of an assumption becoming testable against
reality.
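As a minimal sketch of what such a scale might look like in Python (the labels
and values here are my own illustrative assumptions; the real points would be
learned per contextual schema):

# Graded correctness instead of the two poles 'correct' / 'not correct'.
CORRECTNESS_SCALE = {
    "incorrect":         0.00,
    "mostly incorrect":  0.25,
    "partially correct": 0.50,
    "mostly correct":    0.75,
    "correct":           1.00,
}

def grade(outcome: float) -> str:
    """Map a tested outcome in [0, 1] to the nearest point on the scale."""
    return min(CORRECTNESS_SCALE,
               key=lambda label: abs(CORRECTNESS_SCALE[label] - outcome))

print(grade(0.6))  # "partially correct": finer-grained than a bare yes/no

The finer the scale, the finer the distinctions a decision procedure can act
on.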
How rigorous will the process be for accepting a new assumption? Not rigorous
at all. The "most true, or most correct" result would always inform the
validity and reliability of any assumption; the strongest genes would survive.
For example: Start - apply logic, then form an assumption, else trigger a
chain reaction of revisions. The value on the "correctness" scale would
provide the loop-until value <x>. Exit. End.
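A rough Python sketch of that loop (the toy knowledge base, the candidate
assumptions, and the threshold x are all illustrative assumptions of mine):

# Toy knowledge base: observed cases, each a (situation, outcome) pair.
knowledge_base = [(n, n % 2 == 0) for n in range(20)]

def correctness(assumption, kb):
    """Score an assumption: the fraction of known cases it accounts for."""
    return sum(assumption(n) == outcome for n, outcome in kb) / len(kb)

def accept(candidates, kb, x=0.9):
    """Start with logic, test each candidate assumption in turn, and loop
    until one reaches the loop-until value <x> on the correctness scale."""
    for assumption in candidates:
        if correctness(assumption, kb) >= x:
            return assumption          # the strongest gene survives; exit
    return None                        # no candidate proved correct enough

# Candidate assumptions, weakest to strongest:
candidates = [lambda n: True, lambda n: n < 10, lambda n: n % 2 == 0]
survivor = accept(candidates, knowledge_base)

Here the last candidate scores 1.0 against the knowledge base and survives;
the other two score 0.5 and are discarded.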
________________________________
From: Nanograte Knowledge Technologies <[email protected]>
Sent: 12 April 2017 09:29 PM
To: [email protected]
Subject: Re: [agi] I Still Do Not Believe That Probability Is a Good Basis for
AGI
"Okay, but this begs the question of how you define AGI. Domain knowledge is
the distinguishing point of what might be called regular AI. It is the General
part of AGI that doesn't allow a domain intense approach."
I do not have my own definition of AGI. Any accepted definition is fine by me,
but I understand AGI to mean that a computerized machine would be able to
exhibit human functionality via human-like brain functionality, as sentient
intelligence. In the main, domain knowledge pertains to knowledge about a
particular domain. Knowledge, to me, is not AI, though it could be argued to
be so. To me, AI is reasoning towards knowledge.
On the contrary, I would contend that it is exactly the General part of AGI
which best allows for a domain-intensive approach. If we replaced the broader
term 'domain' with the more specialized term 'schema', and expanded it to mean
specifically 'contextual schema', would your argument still hold as strongly?
________________________________
From: Stanley Nilsen <[email protected]>
Sent: 12 April 2017 05:16 PM
To: AGI
Subject: Re: [agi] I Still Do Not Believe That Probability Is a Good Basis for
AGI
On 04/11/2017 10:00 PM, Nanograte Knowledge Technologies wrote:
The moment relationships of any functional value (associations) and any
framework of hierarchy (systems) can be established and tested against all
known (domain) knowledge, and even changed if the rules driving such a
hierarchy should change (adapted), it may be regarded as a concrete version of
a probabilistic framework.
Okay, but this begs the question of how you define AGI. Domain knowledge is
the distinguishing point of what might be called regular AI. It is the General
part of AGI that doesn't allow a domain-intensive approach.
Is it accepted that the "general" indicates that we are looking across
domains, into the realm of all domains? And that we have to choose between
actions coming from multiple domains? One might call this "meta-domain"
knowledge. Such
knowledge, I believe, would require abstraction. That is the abstraction
problem of AGI. For example, is health a more significant domain than finance?
Is public service better for the AGI than bettering the skills of the AGI?
Choices, choices, choices...
To contend: probability may not be a "good" basis for AGI, much as love may
not be a good basis for marriage; but what might just be a "good" basis is a
reliable engine (a reasoning and unreasoning computational framework) with
which to manage relativity. This is where philosophy started: the unraveling
of a reasoning ontology.
I don't think probability is a problem. A piece of knowledge may increase the
chance that we see the situation accurately, and accuracy will help us be more
specific about our response. That said, it is the way we put assumptions
together that will determine our final action.
Probability has been used in the sense that we think our assumptions are
"probably" right. It is the qualifying of our assumptions that distinguishes
the quality of our actions. Adopt sloppy assumptions and your results will not
always be appropriate or best - not super intelligent.
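To make that concrete, a minimal sketch (the assumption names and confidence
values are invented for illustration): an action built on several qualified
assumptions can be no more reliable than their combination, so one sloppy
assumption drags the whole action down.

# Each assumption carries a confidence; treating them as independent,
# an action is only as probable as the product of what it rests on.
assumptions = {
    "sensor is calibrated": 0.95,
    "model covers this case": 0.70,
    "training data is current": 0.60,
}

def action_confidence(required):
    p = 1.0
    for name in required:
        p *= assumptions[name]
    return p

print(action_confidence(["sensor is calibrated", "training data is current"]))
# 0.57: one weak assumption pulls the action well below "probably right"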
An "advanced" system will have some mechanism for adopting assumptions (most
currently rely on the judgment of the programmer.) It is this process of
evaluating assumptions that we tend to get abstract. Since we are calling
these "heuristics" assumptions, there is an implication that we can't prove
this premise that we are adopting. Most likely we can't prove because the
premise we choose to build on is abstract - at least has elements of
abstraction that won't allow a clear logical conclusion.
The issue is, where will AGI get the assumptions? And, how rigorous will the
process be for accepting a new assumption?