What’s your O.D.? What’s the end-product of your program? Drawings?
Buildings? Text-readings? Wtf is it going to DO? Or is that too difficult for
you to say?
Providing empirical support is not “a little too stuffy” – it’s a little too
hard for you. You never do it. And so you waste your life. People progress, and
science and technology progress, by testing theories – or, in your case, wild
guesses – against evidence, and then refining the theories in light of it.
From: Jim Bromer
Sent: Monday, April 15, 2013 2:05 PM
To: AGI
Subject: Re: [agi] Re: Summary of My Current Theory For an AGI Program.
Wikipedia does give a definition of a scientific theory:
In modern science, the term "theory" refers to scientific theories, a
well-confirmed type of explanation of nature, made in a way consistent with
scientific method, and fulfilling the criteria required by modern science. Such
theories are described in such a way that any scientist in the field is in a
position to understand and either provide empirical support ("verify") or
empirically contradict ("falsify") it.
But the ancient, more general definition of a theory is:
Theory is a contemplative and rational type of abstract or generalizing
thinking, or the results of such thinking. Depending on the context, the
results might for example include generalized explanations of how nature works,
or even how divine or metaphysical matters are thought to work. The word has
its roots in ancient Greek, but in modern use it has taken on several different
related meanings.
Notice that some sciences are still in a speculative stage, so technically the
first definition is a little too stuffy. If we were to claim that our
discussions are a kind of science appropriate for a nascent technology, then
others in the field would be in a position to provide evidence to support
or contradict my theories. The modern concept of a scientific theory was
developed in part from observations of the progress that cart makers made.
That fact could be used to support the idea that technological development is
a valid scientific field. While a criticism could be leveled at me and many
others that we spend
too much time talking and not enough time programming, I would point out that
this summary is part of an actual testing program that I am planning to start
next month.
Your criticism that the part of my summary you have read so far lacks an
Operational Definition is nonsense. Since you are not an active programmer or
programmer analyst in the nascent field of AGI, you are in no position to
understand a speculative scientific theory of AGI.
Jim Bromer
On Mon, Apr 15, 2013 at 5:03 AM, Mike Tintner <[email protected]> wrote:
What you have is a v. vague *hypothesis*. A *theory* involves evidence as to
why it may work.
And you have no Operational Definition of what effect you’re trying to
achieve. Not even the teeniest weeniest hint of an O.D.
Tch, tch.
From: Jim Bromer
Sent: Monday, April 15, 2013 4:14 AM
To: AGI
Subject: [agi] Re: Summary of My Current Theory For an AGI Program.
Part 4
Artificial imagination is also necessary for AGI. Imagination can take place
simply by creating associations between concepts, but obviously the best forms
of imagination are going to be based on rational meaningfulness. An
association between concepts (or concept objects) which cannot be interpreted
as meaningful is not usually very useful. So it seems that a relationship
which is both imaginative and potentially meaningful would be advantageous. An
association formed by a categorical substitution is more likely to be
meaningful, so I consider this a rational form of imagination. However, you can
find many examples where a categorical substitution does not produce a
meaningful association, so perhaps my claim that it is a rational process is
dependent on the likelihood that the process will turn up a greater proportion
of meaningful relations than purely random associations. Some imaginative
relations may exist just as entertainment, but I believe that the application
of the imagination is one of the more important steps toward understanding. In
fact, I believe that all understanding is essentially a form of imaginative
projection, where you project previously formed ideas onto an ongoing situation
which is recognized or thought to share some characteristics with the projected
ideas. So from this point of view, the reliance on previously learned
knowledge is really an application of the imagination. Perhaps it is a special
form of imagination, but it is imagination nonetheless. Anyway, once an
imaginative association or relation is created, it has to be tested. I feel that
relations of understanding cannot be appreciated out of context. The basic
rule of thumb is that it takes knowledge of many things to understand one
thing. This creates a problem when trying to test or validate an insight which
was partially produced by the imagination or which had to be fitted using
imaginative projection. The only way an AGI program is going to be able to
validate a new idea is by seeing how well it fits and how well it works in a
variety of related contexts. This is what I call a structural integration. It
not only represents a single concept but it also carries a lot of other
information with it that can seemingly explain a lot of other small facts as
well. A new idea seems to make sense if it fits in with a number of insights
that were previously acquired.
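
As a rough illustration of how categorical substitution might generate
candidate associations, consider the following Python sketch. All of the data
and names here are invented for illustration, not taken from the planned
program; the point is only that substituting a concept for a category sibling
produces candidates that still need to be tested for meaningfulness:

# Illustrative sketch of categorical substitution as a "rational" form of
# imagination: new candidate relations are produced by swapping a concept
# in a known relation for a sibling drawn from the same category.
# All data and names are hypothetical examples.

CATEGORIES = {
    "bird": {"sparrow", "penguin", "ostrich"},
    "vehicle": {"car", "bicycle", "sled"},
}

KNOWN_RELATIONS = {
    ("sparrow", "can", "fly"),
    ("sparrow", "has", "feathers"),
    ("penguin", "has", "feathers"),
}

def category_of(concept):
    return next((c for c, members in CATEGORIES.items() if concept in members), None)

def imagine_by_substitution(relation):
    """Yield candidate relations formed by substituting the subject with a
    category sibling. Each candidate still has to be tested in context."""
    subject, verb, obj = relation
    category = category_of(subject)
    if category is None:
        return
    for sibling in CATEGORIES[category] - {subject}:
        candidate = (sibling, verb, obj)
        if candidate not in KNOWN_RELATIONS:
            yield candidate

for candidate in imagine_by_substitution(("sparrow", "can", "fly")):
    print("imagined:", candidate)

Note that this toy run imagines ("penguin", "can", "fly") alongside
("ostrich", "can", "fly"): plausible-looking but false candidates are exactly
why an imagined relation has to be validated against a wider context before
it is kept.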
On Sun, Apr 14, 2013 at 3:30 PM, Jim Bromer <[email protected]> wrote:
Part 3
The program will make extensive use of generalizations and
cross-generalization. The program will need to be able to discover
abstractions. These abstractions may typically be used to develop
generalizations. A generalization may be formed from a group in which all the
members share some common characteristics. However, generalizations may also be
formed by various arbitrary processes. And, if the program works,
generalizations may be formed in response to some educational instruction. The
most typical example of cross-generalization may be the consideration of
similarities across individual systems of taxonomies or classes or subclasses.
In this broad definition of generalization, the collections do not have to be
grouped by any common characteristic, and the same goes for
cross-categorizations. Although this might be a misuse of the term
generalization, the generalizations that my program will create may not be
strict trees, because they can potentially branch off in different directions
and overlap. Indexes into data for internal searches may be formed in a
similar way, but I will have to think about whether the variety of branching
makes sense as I am developing the program. I believe that, because of the
variety of forms of generalization or categorization that the program will
use, it is necessary for the program to keep track of the different kinds of
categorization and generalization that it
develops. And it will put transcendent boundaries around portions of the
generalizations that it develops as it uses them in particular ways. These
boundaries are transcendent in that overlapping relations may be considered
across them (as in cross-generalization or cross-categorization). Perhaps the
terms relation and categorization are more abstract than the term
generalization. So the program will be able to develop abstractions of
relations and then build categorizations from these relations. The categories
that I have in mind may be somewhat free-wheeling. Cross-categorization will
be important because it will help the program find and consider similarities
across the categorical structures. These categorical structures may need to be
bounded, but since bounded categories may still be related across a relatively
dominant categorical relation, they can be transcended by other associative
relations.
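
One possible reading of such free-wheeling, boundary-crossing generalizations
is a graph rather than a strict tree: a concept may sit under several
generalizations at once, and cross-links relate nodes across separate
hierarchies. The following minimal Python sketch only illustrates that
reading; the program's actual data structures are not specified, and every
name below is invented:

from collections import defaultdict

# Hypothetical sketch: generalizations as a graph rather than a strict tree.
# A concept may sit under several generalizations at once, and "transcendent"
# cross-links relate nodes that belong to different hierarchies.

class ConceptGraph:
    def __init__(self):
        self.parents = defaultdict(set)      # concept -> generalizations above it
        self.cross_links = defaultdict(set)  # concept -> related concepts elsewhere

    def generalize(self, concept, generalization):
        self.parents[concept].add(generalization)

    def cross_link(self, a, b):
        # A relation that crosses the boundary between two hierarchies.
        self.cross_links[a].add(b)
        self.cross_links[b].add(a)

    def ancestors(self, concept):
        """All generalizations reachable above a concept; they may belong
        to different hierarchies, so the result is not a single tree path."""
        seen, stack = set(), [concept]
        while stack:
            for parent in self.parents[stack.pop()]:
                if parent not in seen:
                    seen.add(parent)
                    stack.append(parent)
        return seen

g = ConceptGraph()
g.generalize("dolphin", "mammal")     # a biological taxonomy
g.generalize("dolphin", "swimmer")    # a functional categorization
g.cross_link("swimmer", "submarine")  # cross-categorization across hierarchies
print(g.ancestors("dolphin"))         # {'mammal', 'swimmer'} -- not a single tree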
On Sat, Apr 13, 2013 at 7:34 AM, Jim Bromer <[email protected]> wrote:
Part 2
I believe that it takes a great deal of knowledge to 'understand' one
thing. A statement has to be integrated into a greater collection of knowledge
in order for the relations of understanding to be formed. And the knowledge of
a single statement has to be integrated into a greater field of knowledge
concerning the central features of the subject for the intelligent entity to
truly understand the statement. While conceptual integration, by some name,
has always been a primary subject in AI/AGI, I think it was relegated to a
subservient position by those who originally stressed the formal methods of
logic, linguistics, psychology, numerics, probability, and neural networks.
Because the details of how ideas work in actual thinking were treated either
as part of some predawn-of-science philosophy or as the turn-of-the-crank
product of successfully applied formal methods, a focus on the details of how
ideas work in actual problems was seen as naïve. This problem, where the
smartest thinkers spend their lives pursuing abstract problems without taking
the time to carefully examine many real-world cases, occurs often in
science. It is amplified by ignorance. If no one knows how to create a
practical application, then the experts in the field may become overly
preoccupied with the formal methods that have been proposed to them.
Formal methods are important, but they are each only one kind of thing. It
takes a great deal of knowledge about many different things to 'understand' one
kind of thing. A reasonable rule of thumb is that formal methods have to be
tried and shaped based on exhaustive applications of the methods to real world
problems.
In order to integrate new knowledge, the new idea being introduced
usually has to be verified in many steps to show that it holds. Since there
is no absolute insight into truth for this kind of thing, knowledge has to be
integrated in a more thorough trial-and-error manner. The program has to
create new theories about statements or reactions it is considering. This
would extend to interpretations of observations for systems where other kinds
of sensory systems were used. A single experiment does not 'prove' a new
theory in science. A large number of experiments are required, and most of
those experiments have to demonstrate that the application of the theory can
lead to a better understanding of other related effects. It takes knowledge of
a great many things to verify a statement about one thing. In order for the
knowledge represented by a statement to be verified and comprehended it has to
be related to, and integrated with, a great many other statements concerning
the primary subject matter. It is necessary to see how the primary subject
matter may be used in many different kinds of thoughts to be able to understand
it.
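
The rule that it takes knowledge of many things to verify a statement about
one thing could be caricatured in code as scoring a candidate statement by how
well it coheres with the statements already integrated. The following toy
Python sketch is an invented illustration, not the program's actual method,
and its consistency check is deliberately naive:

# Toy sketch: a candidate statement is not accepted after one test; it is
# scored by how well it coheres with many previously integrated statements.
# The consistency check is a crude stand-in for the richer contextual
# fitting an actual program would need.

def consistent(candidate, statement):
    """Naive check: the statements conflict only if they make different
    claims about the same subject and property."""
    subj, prop, value = candidate
    s, p, v = statement
    return not (s == subj and p == prop and v != value)

def integration_score(candidate, knowledge_base):
    """Fraction of related accepted statements the candidate coheres with."""
    related = [s for s in knowledge_base if s[0] == candidate[0]]
    if not related:
        return 0.0  # nothing to integrate with, so no understanding yet
    return sum(consistent(candidate, s) for s in related) / len(related)

kb = [("whale", "is", "mammal"),
      ("whale", "lives_in", "water"),
      ("whale", "breathes", "air")]

print(integration_score(("whale", "is", "fish"), kb))    # lower: conflicts with 'mammal'
print(integration_score(("whale", "is", "mammal"), kb))  # 1.0: fits existing knowledge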
On Sat, Apr 13, 2013 at 6:39 AM, Jim Bromer <[email protected]> wrote:
Part 1
I feel that complexity is a major problem facing contemporary AGI. It
is true that, for most human reasoning, we do not need to figure out
complicated problems precisely in order to take the first steps toward
competency, but so far AGI has not been able to get very far beyond the
narrow-AI barrier.
I am going to start with a text-based AGI program. I agree that more
kinds of IO modalities would make an effective AGI program better. However, I
am not aware of any evidence that sensory-based AGI, multi-modal
sensory-based AGI, or robotics-based AGI has been able to achieve something
greater than other efforts. The core of AGI is not going to be found in the
peripherals. And it is clear that starting with complicated IO accessories
would make AGI programming more difficult. It seems obvious that IO is
necessary for AI/AGI, and this abstraction is probably a more appropriate
basis for the requirements of AGI.
My AGI program is going to be based on discrete references. I feel that
the argument that only neural networks are able to learn or are able to
incorporate different kinds of data objects into an associative field is not
accurate. I do, however, feel that more attention needs to be paid to concept
integration. And I think that many of us recognize that a good AGI model is
going to create an internal reference model that is a kind of network. The
discrete reference model more easily allows the program to retain the
components of an agglomeration in a way in which the traditional neural
network does not. This means that it is more likely that the parts of an
associative agglomeration can be detected. On the other hand, since the
program will develop its own internal data objects, these might be formed in
such a way that the original parts might be difficult to detect. With a more
conscious effort to better understand concept integration, I think that the
discrete conceptual network model will prove itself fairly easily.
I am going to use weighted reasoning and probability, but only to a
limited extent.
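
As an illustration of what a discrete-reference model might look like in
practice, here is a minimal Python sketch (all names and fields are invented;
the actual program's representation is unspecified). Each composite object
keeps explicit references to its components, so the parts of an agglomeration
remain detectable in a way that distributed network weights are not:

from dataclasses import dataclass, field

# Minimal sketch of a discrete-reference model: every composite concept
# object keeps explicit references to its components, so the parts of an
# agglomeration remain recoverable, unlike a vector of trained weights.
# The names and fields are invented for illustration.

@dataclass
class ConceptObject:
    name: str
    components: list = field(default_factory=list)  # discrete references to parts
    weight: float = 1.0                             # weighting used only to a limited extent

    def leaves(self):
        """Recover the primitive parts of an agglomeration."""
        if not self.components:
            return [self]
        parts = []
        for component in self.components:
            parts.extend(component.leaves())
        return parts

wings = ConceptObject("wings")
feathers = ConceptObject("feathers")
bird = ConceptObject("bird", components=[wings, feathers])
flock = ConceptObject("flock", components=[bird], weight=0.8)

print([part.name for part in flock.leaves()])  # ['wings', 'feathers'] -- the parts stay detectable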