Jim,
I'm trying to understand. Could you show how your conceptual network
would "look to see how the parts are being used and whether that makes
sense for the central concept"? And what would the result be, depending
on how much sense was made? A hypothetical example is what I'd like to see.
Thanks,
Dimitry
On 10/6/2012 7:12 AM, Jim Bromer wrote:
I am presenting a rough idea of a conceptual network as a potential
advance over earlier ideas like semantic networks. Looking on
Wikipedia I found some examples of semantic networks. In a semantic
network the nodes are the "concepts" and the edges are "relations
between concepts". A semantic network was usually defined with a
conveniently finite number of definitions of the edges (as types of
relations between concepts) and a lot of nodes (which were the
concepts). One difference, then, is that the conceptual network I
envision will not be limited in the number of relations between
concepts. This initial framing is a little misleading, however,
because an inspection of a semantic network makes it obvious that the
edges, the so-called "relations between concepts," are concepts
themselves. So in the conceptual network, a relation could become a
concept in its own right. And the conceptual
network that I am thinking of does not have a single systematic method
of being 'activated' in some way (although searches would be made
through it). Furthermore, the network does not have to be envisioned
as a single network, but since different kinds of concepts may be
associated arbitrarily the potential for interrelations would tend to
be extensive.
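A rough sketch of what that reification might look like in Python (the
class names and the taxonomy example are my own invention, just for
illustration, not a fixed design):

```python
# Minimal sketch: a conceptual network in which a relation is itself
# a concept, so an edge can be the endpoint of another edge.

class Concept:
    def __init__(self, name):
        self.name = name

class Relation(Concept):
    """A relation is itself a concept, so it can participate
    in further relations."""
    def __init__(self, name, source, target):
        super().__init__(name)
        self.source = source
        self.target = target

# Ordinary semantic-network edge: bird --is_a--> animal
bird, animal = Concept("bird"), Concept("animal")
is_a = Relation("is_a", bird, animal)

# Because the relation is a concept, we can relate *to it*:
# e.g. classify the is_a link itself as a taxonomic relation.
taxonomy = Concept("taxonomic relation")
about = Relation("instance_of", is_a, taxonomy)

print(about.source.name)  # -> is_a (the relation is a node here)
```

The point of the sketch is only that nothing in the data structure
limits the set of relation types in advance.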
Since this network is not as simple as a semantic network, the
utilization of the parts of the conceptual network would probably be
defined as they are used. So the different parts would not all work
just the same way. (However, the underlying methodology of how the
different parts are used might be drawn from a standard system).
Finally, since the network is not used in one simple way, deduction
(derived from conceptual knowledge) would also rely on what I call
structural relations. Different concepts would have different
structural relations when used with other concepts. This way an
expectation of structural relations concerning a central concept can
help to derive meaning from a sentence or an observation. So if the
central concepts of a sentence (for example) were recognized then
other parts of the sentence that were directly related to the central
concepts could be found by fitting them to some of the potential
structural relationships that had been previously defined for those
central concepts.
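To make the "fitting" idea concrete, here is a toy sketch. The slot
names (agent, recipient, object) and the little lexicon are
hypothetical placeholders of my own, just to show the shape of the
process:

```python
# Toy sketch: fit the remaining parts of a sentence into the
# structural relations previously defined for a central concept.

structural_relations = {
    "give": {"agent": "person", "recipient": "person", "object": "thing"},
}

lexicon = {  # toy type knowledge about the other sentence parts
    "Mary": "person", "John": "person", "book": "thing",
}

def fit(central, parts):
    """Assign each sentence part to the first open structural
    slot whose expected type it satisfies."""
    slots = structural_relations[central]
    assignment = {}
    for word in parts:
        for slot, expected in slots.items():
            if slot not in assignment and lexicon.get(word) == expected:
                assignment[slot] = word
                break
    return assignment

print(fit("give", ["Mary", "John", "book"]))
# -> {'agent': 'Mary', 'recipient': 'John', 'object': 'book'}
```

A real version would of course need to resolve ambiguity (here "Mary"
and "John" are distinguished only by order), but it shows how
expectations attached to a central concept can organize the rest.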
Different people have different kinds of knowledge about things, so
the structural relations that I am talking about are not (usually)
normative. For instance, a causal relation is a structural relation,
but different people will believe different kinds of things so there
would be no pre-defined underlying normative system of causality for
the AGI program. However, the program would be interested in trying to
understand what other people are describing and if this model of
structural relations could be used as a successful basis for an AGI
program then it would learn something about how people structure their
own conceptual relations. Many other kinds of relations between
concepts could be considered as structural; I mentioned causality only
because it is such a familiar concept.
The structural-concept approach I am thinking about is distinctly
different from (what I call) the funneling AGI models. Conclusions are
not derived through a funneling of deductions or weight-based
reasoning. Yes, I would use deduction and weight-based reasoning and
yes the reaching of a conclusion would have a terminal point, but the
structural concept method means that you don't just try to smush a
measurement of the validity of all ideas that are related to some
central concept into a common hopper even when the conclusion would
not be homogeneous for that combination of things. Instead the program
would look to see how the parts are being used and whether or not that
makes sense for the kind of central concepts that are being considered
at that moment. (I am using the term "structural" to denote the fact
that interrelated concepts should not all be funneled through one
single circuit of reasoning).
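The contrast might be sketched like this (the example items, roles,
and weights are all invented for illustration only):

```python
# Sketch of the contrast: a "funneling" model pools every related
# idea's weight into one score, while the structural approach first
# asks whether each part is being used in a way that makes sense
# for the central concept under consideration.

related = [
    ("fire causes smoke", "causal", 0.9),
    ("fire is four letters", "orthographic", 0.8),
]

def funneled_score(items):
    # everything goes into one common hopper
    return sum(w for _, _, w in items) / len(items)

def structural_score(items, sensible_roles):
    # only parts whose role makes sense for the central concept count
    used = [w for _, role, w in items if role in sensible_roles]
    return sum(used) / len(used) if used else 0.0

print(funneled_score(related))                # pooled: ~0.85
print(structural_score(related, {"causal"}))  # 0.9
```

The funneled score is dragged around by a fact that is true but
irrelevant to the causal question; the structural score first checks
how each part is being used.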
While many people have come to the conclusion that my ideas about
conceptual structure only represented a high-level form of GOFAI or
that they were the same as the desired high level products of machine
learning, my theory is that the structural relations between
(individuated and instanced) concepts have to be seen as part of the
basis of reasoning, not just a result of it. So while the
individuated structural relations between concepts in a particular
instance would (usually) be learned, the underlying programming has to
take their usage into account. I believe that the use of conceptual
structure concerning some central idea that is to be considered has to
be a part of the foundational process of artificial intelligence. And
this idea can be used as an explanation of how we can derive meaning
from combinations of ideas that are somewhat novel.
This is not an easy model but I believe it could be developed and at
least tested with some simple cases.
Jim Bromer
On Fri, Oct 5, 2012 at 2:16 PM, Piaget Modeler
<[email protected] <mailto:[email protected]>> wrote:
Sure.
~PM
------------------------------------------------------------------------
I am curious about something. Is anyone interested in discussing
my ideas about conceptual structure?
Jim Bromer
AGI | Archives: https://www.listbox.com/member/archive/303/=now |
RSS: https://www.listbox.com/member/archive/rss/303/10215994-5ed4e9d1 |
Modify Your Subscription: https://www.listbox.com/member/?& |
Powered by Listbox: http://www.listbox.com