On 04/12/2017 01:29 PM, Nanograte Knowledge Technologies wrote:
"Okay, but this begs the question of how you define AGI. Domain
knowledge is the distinguishing point of what might be called regular
AI. It is the General part of AGI that doesn't allow a domain intense
approach."
I do not have my own definition of AGI. Any accepted definition is
fine by me, but I understand AGI to mean that a computerized machine
would be able to exhibit human functionality via human-like brain
function - that is, sentient intelligence. In the main, domain knowledge
pertains to knowledge about any domain. Knowledge, to me, is not AI,
though it could be argued to be so. To me, AI is reasoning towards knowledge.
It wasn't my intent to try to define AGI, but there have been several
discussions of the differences between AGI and regular AI. Perhaps my idea
of "domain" is the problem. I see a domain as a specialized area
where one can encounter experts of that domain or system. I'm not sure
I understand the statement above, "In the main, domain knowledge pertains
to knowledge about any domain." Are you defining a term called "domain
knowledge" that is the domain of knowledge of the general
characteristics of domains? Or are you writing of an ability to generalize
principles from one domain to other domains?
On the contrary, I would contend that it is exactly the General part
of AGI which most allows for a domain-intense approach. If we
replaced the broader term 'domain' with a more specialized term,
'schema', and expanded it to specifically mean 'contextual schema',
would your argument still hold equally strongly?
Call it a schema if you want, but to me that just means you have a set of
relationships between objects that are well understood, or well
documented. It is the comparison of objects from different schemas that is
difficult. Within a schema you may know how a "piece" fits, and deduce
something about the significance of that piece.
It is harder (not impossible) to compare the value of two pieces that
come from different schemas - which one is more important? I'm just
saying that in making such a comparison, you venture into a more
abstract task. A domain is larger than a few aspects, and therefore the
sum of those aspects has to go into an evaluation of the whole domain
(for comparison with another domain).
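To make that concrete, here is a toy sketch (hypothetical names and
numbers, nothing more): within one schema a comparison is well defined,
because both pieces sit on the same scale; across schemas you first have
to invent a common scale, and that invented mapping is exactly the
abstract step.

    # Toy illustration (all names and numbers hypothetical): within a
    # schema, two pieces share one well-understood scale; across schemas
    # we must first map onto some invented common scale.

    chess = {"queen": 9, "rook": 5, "pawn": 1}      # material value
    finance = {"liquidity": 0.8, "leverage": 0.3}   # risk weight

    def more_important_within(schema, a, b):
        # Easy: both pieces are scored on the same scale.
        return a if schema[a] >= schema[b] else b

    def more_important_across(a, score_a, b, score_b, to_common):
        # Hard: the answer depends entirely on the chosen mapping
        # to a common scale - the abstract step described above.
        return a if to_common(score_a) >= to_common(score_b) else b

    print(more_important_within(chess, "queen", "rook"))    # queen
    # A naive identity mapping produces an "answer", but it is an
    # artifact of the mapping, not a fact about the pieces:
    print(more_important_across("queen", 9, "liquidity", 0.8,
                                to_common=lambda s: s))     # queen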
My argument then becomes: if one considers AGI to be like human
intelligence (HI), then it is probable that AGI will not need
domain-intensive knowledge (expertise) in numerous domains (humans often
have limited expertise).
------------------------------------------------------------------------
*From:* Stanley Nilsen <[email protected]>
*Sent:* 12 April 2017 05:16 PM
*To:* AGI
*Subject:* Re: [agi] I Still Do Not Believe That Probability Is a Good
Basis for AGI
On 04/11/2017 10:00 PM, Nanograte Knowledge Technologies wrote:
The moment relationships of any functional value (associations) and
any framework of hierarchy (systems) can be established and tested
against all known (domain) knowledge, and even changed if the rules
driving such a hierarchy should change (adapted), it may be regarded
as a concrete version of a probabilistic framework.
Okay, but this begs the question of how you define AGI. Domain
knowledge is the distinguishing point of what might be called regular
AI. It is the General part of AGI that doesn't allow a domain intense
approach.
Is it accepted that the "general" indicates that we are looking across
domains into the realm of all domains? And that we have to choose among
actions coming from multiple domains? One might call this
"meta-domain" knowledge. Such knowledge, I believe, would require
abstraction. That is the abstraction problem of A*G*I. For example,
is health a more significant domain than finance? Is public service
better for the AGI than bettering the AGI's own skills? Choices,
choices, choices...
To contend: Probability may not be a "good" basis for AGI, just as
love may not be a good basis for marriage, but what might just be
a "good" basis is a reliable engine (a reasoning and unreasoning
computational framework) for managing relativity. This is where
philosophy started: unraveling a reasoning ontology.
I don't think probability is a problem. A piece of knowledge may
increase the chance that we see the situation accurately, and accuracy
will help us be more specific about our response. That said, it is
the way we put assumptions together that will determine our final action.

Probability enters in that we think our assumptions are
"probably" right. It is the qualifying of our assumptions that
distinguishes the quality of our actions. Adopt sloppy assumptions and
your results will probably not always be appropriate or best - not
super intelligent.
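A rough numeric sketch of what I mean (the numbers are made up, and
treating the assumptions as independent is itself an assumption): when a
conclusion rests on a chain of assumptions, confidence in the whole falls
off quickly with each sloppy link.

    # Sketch with hypothetical numbers: confidence in a conclusion
    # built on a chain of (naively independent) assumptions.
    from math import prod

    careful = [0.95, 0.90, 0.92]   # well-qualified assumptions
    sloppy = [0.70, 0.60, 0.80]    # sloppy assumptions

    print(prod(careful))   # ~0.79 - the action is probably appropriate
    print(prod(sloppy))    # ~0.34 - "probably right" pieces, unreliable whole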
An "advanced" system will have some mechanism for adopting assumptions
(most currently rely on the judgment of the programmer.) It is this
process of evaluating assumptions that we tend to get abstract. Since
we are calling these "heuristics" assumptions, there is an implication
that we can't prove this premise that we are adopting. Most likely
we can't prove because the premise we choose to build on is abstract -
at least has elements of abstraction that won't allow a clear logical
conclusion.
The issue is, where will AGI get the assumptions? And, how rigorous
will the process be for accepting a new assumption?
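One hypothetical shape such a process could take (all numbers invented,
and a plain Bayes update stands in for whatever mechanism a real system
would use): keep a candidate assumption on probation, update belief in it
as evidence arrives, and adopt it only once belief clears a threshold.
The rigor then lives in the threshold and in how honestly the likelihoods
are estimated.

    # Sketch of one possible adoption mechanism (hypothetical numbers):
    # a candidate assumption is adopted only after evidence pushes its
    # probability past a threshold.

    def bayes_update(prior, p_obs_if_true, p_obs_if_false):
        # Posterior probability the assumption is true after one observation.
        numerator = p_obs_if_true * prior
        return numerator / (numerator + p_obs_if_false * (1 - prior))

    def consider(prior, observations, threshold=0.95):
        belief = prior
        for p_true, p_false in observations:
            belief = bayes_update(belief, p_true, p_false)
        return belief, belief >= threshold

    # Three observations, each fitting the assumption better than its negation:
    belief, adopted = consider(0.5, [(0.8, 0.3), (0.9, 0.4), (0.85, 0.2)])
    print(round(belief, 3), adopted)   # belief ~0.96, adopted at 0.95 threshold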