On 04/12/2017 01:50 PM, Nanograte Knowledge Technologies wrote:
"That is the abstraction problem of A*G*I. For example, is health a
more significant domain than finance? Is public service better for
the AGI than bettering the skills of the AGI?"
I see no abstraction problem with AGI. The examples you posed as
problems are fairly easily resolvable via existential logic. "Domain
Provable" probabilistic choices flow from existential logic. That is
where a sense of correctness is born in the mind of humans, and it
could be so in computerized machines also. And once sense is
established, consciousness becomes possible. However, it does return
me to the obvious need for an adequate <deabstraction> methodology.
I will have to plead that I don't really understand "existential" logic,
so I can't really guess as to how it solves the examples. But I would
like to venture into the area of "sense of correctness is born in the
mind of humans" as a way to make a case for abstraction.
In the mind of a human we often "feel" the sense of what choice is
correct. We generally do not fill in a spreadsheet that gives us a
correct answer. Most would agree that a "feeling" is more abstract
than a reason. In Antonio Damasio's book "The Feeling of What Happens"
chapter 2 Emotion and Feeling, he mentions clinical cases that document
difficulty in decision making for persons who have brain damage
affecting emotions. The "rational" part of the person is intact, but
the emotional part is broken. Such people can have great difficulty
making a decision.
The relationship to abstraction is that there is some mechanism that is
able to take a set of facts and produce a feeling. The "stronger" or
"better" the feeling, the more we incline toward that decision. Hence, I
believe the human is using abstraction as a decision-making method.
Somewhere along the line our artificial intelligence will need to
implement abstraction. And don't be surprised if, when asked how it
arrived at the decision to do A instead of B, it replies that A just
felt better. : )
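To make that concrete, here is a minimal sketch of the idea in Python.
Everything in it - the feature names, the weights, the scoring function -
is my own invented placeholder, not anything from Damasio or from a real
system; the point is only that a pile of facts gets collapsed into one
scalar "feeling" per option, and the option with the stronger feeling wins.

# Hypothetical sketch: facts -> scalar "feeling" -> decision.
# The features and weights are invented for illustration only.

def feeling(facts: dict[str, float], weights: dict[str, float]) -> float:
    """Collapse a set of facts into one abstract 'feeling' score."""
    return sum(weights.get(name, 0.0) * value for name, value in facts.items())

def decide(options: dict[str, dict[str, float]], weights: dict[str, float]) -> str:
    """Pick the option whose facts produce the strongest feeling."""
    return max(options, key=lambda name: feeling(options[name], weights))

# Toy example: choosing between actions A and B.
weights = {"risk": -1.0, "reward": 1.0, "familiarity": 0.3}
options = {
    "A": {"risk": 0.2, "reward": 0.7, "familiarity": 0.9},
    "B": {"risk": 0.5, "reward": 0.8, "familiarity": 0.1},
}
print(decide(options, weights))  # the system can only say: this one "felt better"

The weights stand in for whatever opaque emotional machinery produces
the feeling. The system can report which option scored higher, but not a
clean logical derivation - it just "felt better."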
"The issue is, where will AGI get the assumptions? And, how rigorous
will the process be for accepting a new assumption?"
The relative terms 'right' and 'wrong', 'good' and 'bad' etc. carry
their own poison. I prefer to use the term 'correct', to relate a
decision to a scenario option. Indeed, 'correct' also denotes a
judgment, but it more strongly denotes the testable outcome of an
assumption, relative to a knowledge base.
And so often we reply "it may be correct in your mind..."
I prefer to think in terms of adopted or not adopted. We buy it or we
reject it. When given an assumption, I choose whether or not to employ
it in my thinking. And most of the time people are okay with acceptance
based on trust of the presenter, or on a shallow argument. Plenty of
assumptions are adopted on surface knowledge rather than "testable"
experience.
And, that is pretty much my point, the AI will use references and
trusted sources (the programmer?) rather than provable or testable
assertions.
AGI would get its assumptions from learning, per contextual schema,
what a scale of correctness would result in. Instead of just the two
poles of 'correct' and 'not correct', many other points of correctness
could be defined and placed on such a scale to introduce decision
granularity, and so increase the overall probability of an assumption
becoming testable relative to reality.
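A graded scale like that is easy enough to sketch. The grade labels and
cut-off values below are my own invented placeholders, purely for
illustration:

# Hypothetical sketch of a graded correctness scale.
# The grade names and thresholds are illustrative placeholders only.

CORRECTNESS_SCALE = [
    (0.95, "correct"),
    (0.75, "mostly correct"),
    (0.50, "plausible"),
    (0.25, "doubtful"),
    (0.00, "not correct"),
]

def grade(correctness: float) -> str:
    """Map a correctness value in [0, 1] to a point on the scale."""
    for threshold, label in CORRECTNESS_SCALE:
        if correctness >= threshold:
            return label
    return "not correct"

print(grade(0.8))   # "mostly correct" rather than a bare yes/no

The extra points between the two poles are what give the decision
process its granularity.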
It would be ideal if one could give assumptions the reality test - that
is, if we had enough chances to "field" test every assumption.
Granted, our "science" slowly improves our understanding of reality, and
our projections are better. But, to get the ball rolling we need to
rely on assumptions.
The granularity begins to sound like "subtle" feeling assessments - more
abstraction.
How rigorous will the process be for accepting a new assumption? Not
rigorous at all. The "most true, or most correct" result would always
inform the validity and reliability of any assumption. The strongest
genes would survive.
For example: Start - logic, then an assumption, then the resulting chain
reaction. The value on the "correctness" scale would provide the
loop-until value <x>. Exit.
End.
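If I read that right, a minimal sketch of the loop might look like this.
The propose and test functions, and the default threshold standing in
for <x>, are all hypothetical stand-ins:

import random

# Hypothetical sketch of the "loop until correctness reaches <x>" idea.
# propose_assumption() and test_against_reality() are invented stand-ins.

def propose_assumption() -> str:
    """Start from logic: produce a candidate assumption."""
    return f"assumption-{random.randint(0, 999)}"

def test_against_reality(assumption: str) -> float:
    """Chain reaction: act on the assumption and score the outcome in [0, 1]."""
    return random.random()  # placeholder for a real field test

def adopt_assumption(x: float = 0.9) -> str:
    """Loop until the correctness value reaches the loop-until value <x>."""
    while True:
        candidate = propose_assumption()
        correctness = test_against_reality(candidate)
        if correctness >= x:      # exit condition: good enough to adopt
            return candidate      # the "strongest gene" survives

print(adopt_assumption())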
Results are hard to validate except under rigorous lab conditions. The
"general" world is not so controlled as to distinguish exactly which
assumption will reliably produce the "better" result.
Given the computational resources of the universe, we may compute for
billions of years and find out that the universe doesn't find any
particular configuration to be "better" (if you prefer the Godless
view of things).
------------------------------------------------------------------------
*From:* Nanograte Knowledge Technologies <[email protected]>
*Sent:* 12 April 2017 09:29 PM
*To:* [email protected]
*Subject:* Re: [agi] I Still Do Not Believe That Probability Is a Good
Basis for AGI
"Okay, but this begs the question of how you define AGI. Domain
knowledge is the distinguishing point of what might be called regular
AI. It is the General part of AGI that doesn't allow a domain intense
approach."
I do not have my own definition of AGI. Any accepted definition is
fine by me, but I understand AGI to mean that a computerized machine
would be able to exhibit human functionality via human-like brain
functionality, as sentient intelligence. In the main, domain knowledge
pertains to knowledge about any domain. Knowledge to me is not AI, but
it could be argued to be so. To me, AI is reasoning towards knowledge.
On the contrary, I would contend that it is exactly the General part
of AGI which most allows for a domain-intense approach. If we
replaced the broader term 'domain' with a more specialized term,
'schema', and expanded it to specifically mean 'contextual schema',
would your argument still hold equally strongly?
------------------------------------------------------------------------
*From:* Stanley Nilsen <[email protected]>
*Sent:* 12 April 2017 05:16 PM
*To:* AGI
*Subject:* Re: [agi] I Still Do Not Believe That Probability Is a Good
Basis for AGI
On 04/11/2017 10:00 PM, Nanograte Knowledge Technologies wrote:
The moment relationships of any functional value (associations), and
any framework of hierarchy (systems) can be established and tested
against all known (domain) knowledge, and even changed if the rules
driving such a hierarchy should change (adapted), it may be regarded
as a concrete version of a probabilistic framework.
Okay, but this begs the question of how you define AGI. Domain
knowledge is the distinguishing point of what might be called regular
AI. It is the General part of AGI that doesn't allow a domain intense
approach.
Is it accepted that the "general" indicates that we are looking across
domains into the realm of all domains? And, we have to choose between
actions coming from multiple domains. One might call this
"meta-domain" knowledge. Such knowledge, I believe, would require
abstraction. That is the abstraction problem of A*G*I. For example,
is health a more significant domain than finance? Is public service
better for the AGI than bettering the skills of the AGI? Choices,
choices, choices...
To contend: Probability may not be a "good" basis for AGI, just as
love may not be a good basis for marriage, but what might just be
a "good" basis is a reliable engine (a reasoning and unreasoning
computational framework) for managing relativity. This is where
philosophy started from: unraveling a reasoning ontology.
I don't think probability is a problem. A piece of knowledge may
increase the chance that we see the situation accurately, and accuracy
will help us be more specific about our response. That said, it is
the way we put assumptions together that will determine our final action.
Probability has been used in the sense that we think our assumptions are
"probably" right. It is the qualifying of our assumptions that
distinguishes the quality of our actions. Adopt sloppy assumptions
and your results will probably not always be appropriate or best - not
super intelligent.
An "advanced" system will have some mechanism for adopting assumptions
(most currently rely on the judgment of the programmer). It is in this
process of evaluating assumptions that we tend to get abstract. Since
we are calling these "heuristic" assumptions, there is an implication
that we cannot prove the premise we are adopting. Most likely
we cannot prove it because the premise we choose to build on is abstract -
or at least it has elements of abstraction that won't allow a clear logical
conclusion.
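A toy sketch of such a mechanism, tying in the earlier point about
adopting on trust of the presenter versus testable experience - the
fields and the adoption rule below are invented placeholders, not a
real proposal:

from dataclasses import dataclass
from typing import Optional

# Hypothetical sketch of an assumption-adoption mechanism.
# The fields and the adoption rule are invented placeholders; a real
# system would need a far richer notion of evidence and trust.

@dataclass
class Assumption:
    statement: str
    source_trust: float                          # trust in the presenter, in [0, 1]
    tested_correctness: Optional[float] = None   # None until field-tested

def adopt(a: Assumption, trust_floor: float = 0.8, test_floor: float = 0.6) -> bool:
    """Adopt on testable evidence when we have it; otherwise fall back on trust."""
    if a.tested_correctness is not None:
        return a.tested_correctness >= test_floor
    return a.source_trust >= trust_floor         # surface knowledge / trust of presenter

# Adopted on trust of the presenter alone - no testable experience yet:
print(adopt(Assumption("domain X matters more than domain Y", source_trust=0.9)))

Most current systems sit in the second branch: the assumption is adopted
because the programmer (the trusted presenter) supplied it, not because
it was field-tested.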
The issue is, where will AGI get the assumptions? And, how rigorous
will the process be for accepting a new assumption?