Re: [agi] The Effect of Application of an Idea

2008-03-26 Thread Jim Bromer
On Tue, Mar 25, 2008 at 7:19 PM, Vladimir Nesov [EMAIL PROTECTED] wrote:


 Certainly ambiguity (= applicability to multiple contexts in different
 ways) and the presence of rich structure in presumably simple 'ideas',
 as you call them, are known issues. Even the interaction between the
 concept clouds evoked by a pair of words is a nontrivial process (the
 'triangular lightbulb'). In a way, the whole operation can be modeled
 by such interactions, where sensory input/recall is taken to present a
 stream of triggers that evoke concept cloud after concept cloud, with
 associations and compound concepts forming at the overlaps. But of
 course it's too hand-wavy without a more restricted model of what's
 going on. Communicating something that exists solely at a high level is
 very inefficient, and most of such content can turn out to be wrong.
 Back to prototyping...

 --
 Vladimir Nesov
 [EMAIL PROTECTED]



I agreed with you up until your conclusion.  While the problems I talked
about may be known issues, they are discussed almost exclusively using
intuitive models, like the ones we used, or by referring to ineffective
models, such as network theories that do not explicitly show how their
associative interrelations would deal with the intricate conceptual
details that these issues require and that an effective solution would
produce.  I have never seen a theory designed specifically to address the
range of situations I am thinking of, although most earlier AI models were
intended to deal with similar issues, and I have seen some exemplary
models that used controlled settings to show how some of these
interrelations might be modeled.  These intuitive discussions, and the
exaggerated effectiveness of inadequate programs, create a concept cloud
of their own, and the problem is that the knowledgeable listener feels he
understands the problem even without having made any kind of commitment to
exploring an effective solution.



Although I have not detailed how the effects of the application of ideas
might be modeled in an actual AI program (or in the extremely simple model
that I would use to start studying the problem), my whole point is that if
you are interested in advancing AI programming, then the issue my theory
addresses is a problem that cannot be dismissed with a wave of the hand.
The next step for me is to find a model strong enough to hold up to
genuine extensible learning.



If you are deciding how much time to spend thinking about this based only
on whether you have thought about similar problems, then I believe you
have already considered some sampling of the kind of problems my theory is
meant to address.


Jim Bromer



Re: [agi] The Effect of Application of an Idea

2008-03-26 Thread Vladimir Nesov
On Wed, Mar 26, 2008 at 4:27 PM, Jim Bromer [EMAIL PROTECTED] wrote:

 I agreed with you up until your conclusion.  While the problems I talked
 about may be known issues, they are discussed almost exclusively using
 intuitive models, like the ones we used, or by referring to ineffective
 models, such as network theories that do not explicitly show how their
 associative interrelations would deal with the intricate conceptual
 details that these issues require and that an effective solution would
 produce.  I have never seen a theory designed specifically to address the
 range of situations I am thinking of, although most earlier AI models were
 intended to deal with similar issues, and I have seen some exemplary
 models that used controlled settings to show how some of these
 interrelations might be modeled.  These intuitive discussions, and the
 exaggerated effectiveness of inadequate programs, create a concept cloud
 of their own, and the problem is that the knowledgeable listener feels he
 understands the problem even without having made any kind of commitment to
 exploring an effective solution.

 Although I have not detailed how the effects of the application of ideas
 might be modeled in an actual AI program (or in the extremely simple model
 that I would use to start studying the problem), my whole point is that if
 you are interested in advancing AI programming, then the issue my theory
 addresses is a problem that cannot be dismissed with a wave of the hand.
 The next step for me is to find a model strong enough to hold up to
 genuine extensible learning.

 If you are deciding how much time to spend thinking about this based only
 on whether you have thought about similar problems, then I believe you
 have already considered some sampling of the kind of problems my theory is
 meant to address.


What you describe is essentially my own path up to this point: I started
by considering high-level capabilities and gradually worked towards an
implementation that seems able to exhibit them. At the end of my last
message I referred to a pragmatic problem. The substrate I now experiment
with is essentially a very simple recurrent network with seemingly
insignificant tweaks. Without a high-level view of how to make it exhibit
high-level capabilities, I'd never look at it twice. Convincing someone
else that it is that capable would take a rather long description, and I
may well turn out to be wrong (so people have a perfectly good reason not
to listen). It seems more sensible to stick to prototyping and wait for
more solid results, either changing the theory or demonstrating its
potential.
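
(A rough sketch only, for readers unfamiliar with the term: a "very simple
recurrent network" in the generic sense below, with none of the specific
tweaks mentioned above; the class name and sizes are made up.)

# Illustrative sketch of a plain recurrent substrate: the hidden state is
# carried over between steps, so the same input can produce different output.
import numpy as np

class SimpleRecurrentNet:
    def __init__(self, n_in, n_hidden, n_out, seed=0):
        rng = np.random.default_rng(seed)
        self.W_in = rng.normal(0, 0.1, (n_hidden, n_in))
        self.W_rec = rng.normal(0, 0.1, (n_hidden, n_hidden))
        self.W_out = rng.normal(0, 0.1, (n_out, n_hidden))
        self.h = np.zeros(n_hidden)              # persistent internal state

    def step(self, x):
        # the new hidden state depends on the input and on the previous state
        self.h = np.tanh(self.W_in @ x + self.W_rec @ self.h)
        return self.W_out @ self.h

net = SimpleRecurrentNet(n_in=4, n_hidden=8, n_out=2)
for t in range(3):
    print(net.step(np.ones(4)))                  # same input, changing output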

-- 
Vladimir Nesov
[EMAIL PROTECTED]



Re: [agi] The Effect of Application of an Idea

2008-03-26 Thread William Pearson
On 25/03/2008, Vladimir Nesov [EMAIL PROTECTED] wrote

 Simple systems can be computationally universal, so it's not an issue in
  itself. On the other hand, no learning algorithm is universal; there are
  always distributions that a given algorithm will learn miserably. The
  problem is to find a learning algorithm/representation that has the right
  kind of bias to implement human-like performance.

First a riddle: What can be all learning algorithms, but is none?

I'd disagree. Okay, simple systems can be computationally universal, but
what does that really mean?

Computational universality means being able to represent any computable
function, where the domain and range of the function are assumed to be
the natural numbers.

Most AI formulations, when they claim to be computationally universal, are
only talking about a function F: I → O, where I is the input and O is the
output. This includes the formulations of neural networks, GAs, etc. that
I have seen. However, there are lots of interesting programs in computers
that do not simply map input to output. Humans also do not just map input
to output; we also think, ruminate, model and remember. This does not
affect the range of functions from input to output, but it does change how
quickly the system can move between them. What I am interested in is
systems where the ranges and domains of the functions are entities inside
the system.

That is, F: I → S, F: S → O, and F: S → S are what matter, and each should
potentially be computationally universal, where S is the internal memory
of the system. This allows the system to be any possible learning
algorithm (although only one at any time), yet also to be no particular
algorithm (otherwise F: I × S → S would be fixed).

General-purpose desktop computers are exactly these kinds of systems. If
they weren't, how else could we implement any type of learning system on
them? Thus the answer to my riddle.
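
(A toy sketch of the distinction, with all names invented: the update rule
lives inside the state S, so F: I × S → S is not fixed and the system can
stop being one learning algorithm and become another.)

# Illustrative only: a system whose state S contains its own update function,
# so the "learning algorithm" it implements can be replaced at run time.
from typing import Any, Callable

State = dict  # S: internal memory, including the current update rule

def run_step(state: State, inp: Any) -> State:
    # the transition applied is whatever rule the state currently holds
    update: Callable[[State, Any], State] = state["update"]
    return update(state, inp)

def rote_memoriser(state: State, inp: Any) -> State:
    state["memory"].append(inp)          # one possible learning behaviour
    return state

def switch_to_counting(state: State, inp: Any) -> State:
    # an update rule that installs a different update rule: the system
    # stops being one learning algorithm and becomes another
    state["count"] = 0
    state["update"] = lambda s, x: {**s, "count": s["count"] + 1}
    return state

s = {"memory": [], "update": rote_memoriser}
s = run_step(s, "apple")                 # behaves as a rote memoriser
s["update"] = switch_to_counting         # reprogram: new learning behaviour
s = run_step(s, "banana")
s = run_step(s, "cherry")
print(s["memory"], s["count"])           # ['apple'] 1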

The question I have been trying to answer precisely is how to govern
these sorts of systems so they roughly do what you want, without you
having to give precise instructions.

  Will Pearson



Re: [agi] The Effect of Application of an Idea

2008-03-26 Thread Jim Bromer
On Wed, Mar 26, 2008 at 10:17 AM, Vladimir Nesov [EMAIL PROTECTED] wrote:


 What you describe is essentially my own path up to this point: I started
 by considering high-level capabilities and gradually worked towards an
 implementation that seems able to exhibit them. At the end of my last
 message I referred to a pragmatic problem. The substrate I now experiment
 with is essentially a very simple recurrent network with seemingly
 insignificant tweaks. Without a high-level view of how to make it exhibit
 high-level capabilities, I'd never look at it twice. Convincing someone
 else that it is that capable would take a rather long description, and I
 may well turn out to be wrong (so people have a perfectly good reason not
 to listen). It seems more sensible to stick to prototyping and wait for
 more solid results, either changing the theory or demonstrating its
 potential.

 --
 Vladimir Nesov

I do not know much about neural networks, but from what I have read, I
always felt that a recurrent network would be the only way you could
feasibly get an ANN to represent (excuse my French) distinct items without
absurdly huge and noisy expansions.  So I am curious about what you are
talking about.  When you mention prototyping, are you talking about
prototyping the neural network with high-level concepts for easier
demonstrations, or something like that?  I think there was some discussion
about using 'labels' in neural networks in one of those links to online
videos that were recently posted.  Is this similar to what you mean by
prototyping?
Jim Bromer



Re: [agi] The Effect of Application of an Idea

2008-03-26 Thread Jim Bromer
2008/3/26 William Pearson [EMAIL PROTECTED]:

 On 25/03/2008, Vladimir Nesov [EMAIL PROTECTED] wrote
 
  Simple systems can be computationally universal, so it's not an issue in
   itself. On the other hand, no learning algorithm is universal; there are
   always distributions that a given algorithm will learn miserably. The
   problem is to find a learning algorithm/representation that has the right
   kind of bias to implement human-like performance.

 First a riddle: What can be all learning algorithms, but is none?


Excellent philosophical point!!


 Okay, simple systems can be computationally universal, but what does that
 really mean?

 Computational universality means being able to represent any computable
 function, where the domain and range of the function are assumed to be
 the natural numbers.


--- I think Gödel would disagree.


 Most AI formulations, when they claim to be computationally universal, are
 only talking about a function F: I → O, where I is the input and O is the
 output. This includes the formulations of neural networks, GAs, etc. that
 I have seen. However, there are lots of interesting programs in computers
 that do not simply map input to output. Humans also do not just map input
 to output; we also think, ruminate, model and remember. This does not
 affect the range of functions from input to output, but it does change how
 quickly the system can move between them. What I am interested in is
 systems where the ranges and domains of the functions are entities inside
 the system.

 That is, F: I → S, F: S → O, and F: S → S are what matter, and each should
 potentially be computationally universal, where S is the internal memory
 of the system. This allows the system to be any possible learning
 algorithm (although only one at any time), yet also to be no particular
 algorithm (otherwise F: I × S → S would be fixed).

 General-purpose desktop computers are exactly these kinds of systems. If
 they weren't, how else could we implement any type of learning system on
 them? Thus the answer to my riddle.

 The question I have been trying to answer precisely is how to govern
 these sorts of systems so they roughly do what you want, without you
 having to give precise instructions.

  Will Pearson


--- I am going to read this more carefully later.  However, the first part
of the answer to your last question is that the governance of these kinds
of systems will be based on general rules (or methods of generality), so
you do not need to define all the precise instructions that would otherwise
be needed.  But there is not just one level of universality; there are
potentially infinite levels of generalization, and they do not all mesh
together perfectly.  Although this kind of talk may not solve the problem,
I believe that this is where we are going to end up working if we continue
to work on the problem.
Jim Bromer



Re: [agi] The Effect of Application of an Idea

2008-03-26 Thread Vladimir Nesov
On Wed, Mar 26, 2008 at 8:47 PM, Jim Bromer [EMAIL PROTECTED] wrote:

 I do not know much about neural networks, but from what I have read, I
 always felt that a recurrent network would be the only way you could
 feasibly get an ANN to represent (excuse my French) distinct items without
 absurdly huge and noisy expansions.  So I am curious about what you are
 talking about.  When you mention prototyping, are you talking about
 prototyping the neural network with high-level concepts for easier
 demonstrations, or something like that?  I think there was some discussion
 about using 'labels' in neural networks in one of those links to online
 videos that were recently posted.  Is this similar to what you mean by
 prototyping?


For now the objective is to try to achieve the basic high-level dynamics
that this architecture was designed to implement, and thus to partially
establish the consistency of the many-faceted high-level design with a
simple network implementation. If this stage succeeds, then after a bit of
scalability work it should be possible to start teaching it more
impressive high-level feats.

-- 
Vladimir Nesov
[EMAIL PROTECTED]



I know, I KNOW :-) WAS Re: [agi] The Effect of Application of an Idea

2008-03-26 Thread Mark Waser

First a riddle: What can be all learning algorithms, but is none?


A human being!




Re: I know, I KNOW :-) WAS Re: [agi] The Effect of Application of an Idea

2008-03-26 Thread William Pearson
On 26/03/2008, Mark Waser [EMAIL PROTECTED] wrote:
  First a riddle: What can be all learning algorithms, but is none?

  A human being!


Well, my answer was a common PC, which I hope is more illuminating
because we know it well.

But 'a human being' works too, as does any future AI design, as far as I am concerned.

  Will Pearson



Re: [agi] The Effect of Application of an Idea

2008-03-25 Thread William Pearson
On 24/03/2008, Jim Bromer [EMAIL PROTECTED] wrote:



 To try to understand what I am talking about, start by imagining a
 simulation of some physical operation, like a part of a complex factory in
 a Sim City kind of game.  In this kind of high-level model no one would
 ever imagine that all of the objects should interact in one stereotypical
 way; different objects would interact with other objects in different
 kinds of ways.  And no one would imagine that the machines that operated
 on other objects in the simulation were not also objects in their own
 right.  For instance, the machines used in production might require the
 use of other machines to fix or enhance them.  And the machines might
 produce or operate on objects that were themselves machines.  When you
 think about a simulation of some complicated physical systems it becomes
 very obvious that different kinds of objects can have different effects on
 other objects.  And yet, when it comes to AI, people go on and on about
 systems that totally disregard this seemingly obvious divergence of effect
 that is so typical of nature.  Instead, most theories treat insight as if
 it could be funneled through some narrow rational system, or through other
 less rational field operations, where the objects of the operations are
 seen only as the ineffective objects of the pre-defined operations of the
 program.



How would this differ from the sorts of computational systems I have been
muttering about, where an active bit of code or program is equivalent to
an object in the above paragraph? Also have a look at Eurisko by Doug
Lenat.

   Will Pearson



Re: [agi] The Effect of Application of an Idea

2008-03-25 Thread Jim Bromer
On Tue, Mar 25, 2008 at 11:23 AM, William Pearson [EMAIL PROTECTED]
wrote:



  On 24/03/2008, Jim Bromer [EMAIL PROTECTED] wrote:
 
 
 
  To try to understand what I am talking about, start by imagining a
  simulation of some physical operation, like a part of a complex factory in
  a Sim City kind of game.  In this kind of high-level model no one would
  ever imagine that all of the objects should interact in one stereotypical
  way; different objects would interact with other objects in different
  kinds of ways.  And no one would imagine that the machines that operated
  on other objects in the simulation were not also objects in their own
  right.  For instance, the machines used in production might require the
  use of other machines to fix or enhance them.  And the machines might
  produce or operate on objects that were themselves machines.  When you
  think about a simulation of some complicated physical systems it becomes
  very obvious that different kinds of objects can have different effects on
  other objects.  And yet, when it comes to AI, people go on and on about
  systems that totally disregard this seemingly obvious divergence of effect
  that is so typical of nature.  Instead, most theories treat insight as if
  it could be funneled through some narrow rational system, or through other
  less rational field operations, where the objects of the operations are
  seen only as the ineffective objects of the pre-defined operations of the
  program.
 


 How would this differ from the sorts of computational systems I have been
 muttering about, where an active bit of code or program is equivalent to
 an object in the above paragraph? Also have a look at Eurisko by Doug
 Lenat.

Will Pearson



There is no reason to believe that anything I might imagine would be the
same as something that was created 35 years ago!

I have a lot of trouble explaining myself on some days.  The idea of the
effect of the application of ideas is that most people do not consciously
think about the subject, and so, just by becoming aware of it, one can
change how one's program works regardless of how automated the program is.
It can work with strictly defined logical systems, with inductive systems
that can be extended creatively, or with systems that are capable of
learning.  However, it is not a complete solution to AI; it is more like
something that you will need to think about if you plan to write a
seriously innovative AI application in the near future.  So, I haven't
written such a program, but I do have something to say.

A system whose heuristics can modify the system's own heuristics is
important, and such a system does implement what I am talking about.
However, the point is that Lenat never seemed to completely accept the
range that such a thing would have to have to generate true intelligence.
The reason is that it would become so complicated that it would make any
feasible AI program impossible.  And the reason that a truly intelligent
AI program is still not feasible is precisely that it would be so
complicated.

I am saying that the method of recognizing and defining the effect of
ideas on other ideas would not, by itself, make it all work, but rather it
would help us better understand how to automate the kind of extensive
complications of effect that would be necessary.

I am thinking of writing about a simple imaginary model that could be
incrementally extended.  This model would not be useful, because it would
be too simple.  But I should be able to give you some idea of what I am
thinking about.

As any program becomes more and more complicated, the programmer has to
think more and more about how various combinations of data and processes
will interact.  Why would anyone think that an advanced AI program would be
any simpler?

Ideas affect other ideas.  Heuristics that can act on other heuristics are
a basis of this kind of thing, but it has to be much more complicated than
that.  So while I don't have the answers, I can begin to think of
hand-crafting a model where such a thing could be examined, by recognizing
that the application of ideas to other ideas will have complicated effects
that need to be defined.  A more automated AI program would have to use
some system to shape these complicated interactions, but the effect of
those heuristics would be modifiable by further learning (to some extent).
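
(A toy, hypothetical sketch of 'heuristics acting on heuristics' -- here
heuristics are ordinary data that other heuristics may rewrite; none of
these names come from Lenat's system or any actual implementation.)

# Illustrative sketch: some heuristics act on task data, others act on the
# pool of heuristics itself (e.g. adjusting another heuristic's weight).
class Heuristic:
    def __init__(self, name, apply_fn, weight=1.0, meta=False):
        self.name, self.apply_fn, self.weight, self.meta = name, apply_fn, weight, meta

def prefer_shorter(data, pool):
    return sorted(data, key=len)          # acts on the task data

def boost_useful(data, pool):
    # acts on other heuristics: reward the highest-weighted object-level
    # heuristic (a crude stand-in for measured usefulness)
    best = max((h for h in pool if not h.meta), key=lambda h: h.weight)
    best.weight *= 1.1
    return data

pool = [Heuristic("prefer_shorter", prefer_shorter),
        Heuristic("boost_useful", boost_useful, meta=True)]

data = ["banana", "fig", "apple"]
for h in sorted(pool, key=lambda h: -h.weight):
    data = h.apply_fn(data, pool)
print(data)                               # ['fig', 'apple', 'banana']
print([(h.name, round(h.weight, 2)) for h in pool])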

Jim Bromer



Re: [agi] The Effect of Application of an Idea

2008-03-25 Thread Vladimir Nesov
On Tue, Mar 25, 2008 at 2:17 AM, Jim Bromer [EMAIL PROTECTED] wrote:

 A usage evaluation could be taken as an example of an effect of application,
 because the idea of usage and of statistical evaluation can be combined with
 the object of consideration along with other theories that detail how such
 combinations could be usefully applied to some problem.  But it is obviously
 not the only effective process that would be necessary to understand
 complicated systems.  No one would use only statistical models to discuss
 the management and operations of a real factory, for example.  It is rather
 obvious that such limited methods would be grossly inadequate.  Why would
 anyone imagine that a narrow operational system would be adequate for an AI
 program?  The theory of the effect of application of an idea tries to
 address this inadequacy by challenging the programmer to begin to think
 about and program applications that can detail how simple interactive
 effects can be combined with novel insights in a feasible, extensible object.
 So while I don't have the solution, I believe I can see a path.


Simple systems can be computationally universal, so it's not an issue in
itself. On the other hand, no learning algorithm is universal; there are
always distributions that a given algorithm will learn miserably. The
problem is to find a learning algorithm/representation that has the right
kind of bias to implement human-like performance.
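
(A small illustration of the 'no universal learner' point, not taken from
the thread: a 1-nearest-neighbour rule, whose bias is that inputs with
similar bit patterns share a label, does well on a threshold target and
miserably on parity, where flipping any single bit flips the label.)

# Illustrative sketch: the same learner succeeds or fails depending on
# whether its bias matches the target distribution.
import random

def nearest_neighbour(train, x):
    # predict the label of the training point closest in Hamming distance
    return min(train, key=lambda pair: bin(pair[0] ^ x).count("1"))[1]

def accuracy(target):
    xs = list(range(256))                        # all 8-bit inputs
    random.seed(0)
    random.shuffle(xs)
    train = [(x, target(x)) for x in xs[:128]]   # half for training
    test = xs[128:]
    return sum(nearest_neighbour(train, x) == target(x) for x in test) / len(test)

print(accuracy(lambda x: int(x >= 128)))           # high: bias matches target
print(accuracy(lambda x: bin(x).count("1") % 2))   # near zero: bias mismatched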

It's more or less clear that such a representation needs to have
higher-level concepts that refine interactions between lower-level
concepts and are learned incrementally, built on existing concepts.
Association-like processes can port existing high-level circuits to novel
tasks for which they were not originally learned, which allows some
measure of general knowledge.

As I see it, the issue you are trying to solve is the porting of
structured high-level competencies, which looks equivalent to the general
problem of association-building between structured representations. Is
that roughly a correct characterization of what you are talking about?

-- 
Vladimir Nesov
[EMAIL PROTECTED]



Re: [agi] The Effect of Application of an Idea

2008-03-25 Thread Vladimir Nesov
On Tue, Mar 25, 2008 at 11:30 PM, Jim Bromer [EMAIL PROTECTED] wrote:

 I am saying that the method of recognizing and defining the effect of
 ideas on other ideas would not, by itself, make it all work, but rather it
 would help us better understand how to automate the kind of extensive
 complications of effect that would be necessary.


It's interesting, but first the structure of 'ideas' needs to be
described; otherwise it doesn't help.


 As any program becomes more and more complicated, the programmer has to
 think more and more about how various combinations of data and processes
 will interact.  Why would anyone think that an advanced AI program would be
 any simpler?

 Ideas affect other ideas.  Heuristics that can act on other heuristics are
 a basis of this kind of thing, but it has to be much more complicated than
 that.  So while I don't have the answers, I can begin to think of
 hand-crafting a model where such a thing could be examined, by recognizing
 that the application of ideas to other ideas will have complicated effects
 that need to be defined.  A more automated AI program would have to use
 some system to shape these complicated interactions, but the effect of
 those heuristics would be modifiable by further learning (to some extent).


Modularity fights this problem in programming, helping to keep track of
*code*. But this code is built on top of models of the program's behavior
that exist in programmers' minds. Programmers manually determine the
applicability of code. It's often possible to solve a wide variety of
problems with an existing codebase, but a programmer is needed to
contextually match and assemble the pathways that solve any given problem.
We don't currently have practically applicable methods to extend the
context in which code can be applied, and to build on these extended
contexts.

I think that one of the most important features of an AGI system must be
automated extensibility. It should be possible to teach it new things
without breaking it. It should be able to correct its performance to
preserve previously learned skills, so that teaching only needs to focus
on a few high-level performance properties, regardless of how much is
already learned.

-- 
Vladimir Nesov
[EMAIL PROTECTED]



Re: [agi] The Effect of Application of an Idea

2008-03-25 Thread Jim Bromer
On Tue, Mar 25, 2008 at 4:42 PM, Vladimir Nesov [EMAIL PROTECTED] wrote:



 Simple systems can be computationally universal, so it's not an issue in
 itself. On the other hand, no learning algorithm is universal; there are
 always distributions that a given algorithm will learn miserably. The
 problem is to find a learning algorithm/representation that has the right
 kind of bias to implement human-like performance.

 It's more or less clear that such a representation needs to have
 higher-level concepts that refine interactions between lower-level
 concepts and are learned incrementally, built on existing concepts.
 Association-like processes can port existing high-level circuits to novel
 tasks for which they were not originally learned, which allows some
 measure of general knowledge.

 As I see it, the issue you are trying to solve is the porting of
 structured high-level competencies, which looks equivalent to the general
 problem of association-building between structured representations. Is
 that roughly a correct characterization of what you are talking about?

 Vladimir Nesov
 [EMAIL PROTECTED]


Can you give some more indication about what you mean by porting of
structured high-level competencies and the problem of association-building
between structured representations?

I do not know where you got the term 'porting' from, since I have only
seen it used in reference to porting code from one machine to another.  I
assume that you are using it as a kind of metaphor, or as the application
of an idea very similar to 'porting' to AGI.

Let's suppose that I claim that Ed bumped into me.  Right away we can see
that the word-concept 'bumped' has some effect on any ideas you might have
about Ed, about me, and about Ed and me.  My claim here is that the effect
of the interaction of ideas goes beyond semantics into the realm of ideas
proper.  If it turned out that I got into Ed's way (perhaps intentionally),
then one might wonder whether the claim that Ed bumped into me was a
correct or adequate description of what happened.  On the other hand, such
detail might not be interesting or necessary in some other conversation,
so the effect of the idea of 'bumping' and the idea of 'getting in the way
of' may or may not be of interest in all conversations about the event.
Furthermore, the idea of 'getting in the way of' may not be relevant to
some examinations of what happened, as in the case where a judge might
want to focus on whether or not the bumping actually took place.  With
this kind of focus, the question of whether or not I got in Ed's way might
then become evidence of whether or not the bump actually took place, but
it would not otherwise be relevant to the judge's examination of the
incident.

Presentations like the one that I just made have been made often before.
What I am saying is that the effect of the application of different ideas
may be more clearly delineated in stories like this, and that process can
be seen as a generalization of form that may be used with representations
to help show what kind of structure would be needed to create and maintain
such complexes of potential relations between ideas.
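
(One possible toy rendering of that kind of structure, with invented names:
the effect of applying one idea to another is stored per
examination-context, so 'got in the way of' matters differently in a
causal account than in the judge's examination.)

# Illustrative sketch: the relevance of an idea's effect depends on context.
from collections import defaultdict

class IdeaGraph:
    def __init__(self):
        # (idea, other_idea) -> {context: description of the effect}
        self.effects = defaultdict(dict)

    def apply(self, idea, other, context, effect):
        self.effects[(idea, other)][context] = effect

    def relevant(self, idea, other, context):
        return self.effects[(idea, other)].get(context, "not relevant here")

g = IdeaGraph()
g.apply("got in the way of", "Ed bumped into me", "causal account",
        "undermines 'bumped' as an adequate description")
g.apply("got in the way of", "Ed bumped into me", "judge's examination",
        "only evidence bearing on whether the bump took place")

print(g.relevant("got in the way of", "Ed bumped into me", "judge's examination"))
print(g.relevant("got in the way of", "Ed bumped into me", "casual conversation"))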

While I do not know the details of how I might go about creating a program
to build structure like that, the view that it is only a 'porting of
structure' implies that the method might be applied in some simple manner.
While it can be applied in a simple manner to a simple model, my interest
in the idea is that I could also take the idea further in more complicated
models.

The point that the method can be used in a simplistic, constrained model is
significant because the potential problem is so complex that constrained
models may be used to study details that would be impossible in more dynamic
learning models.

Jim Bromer



Re: [agi] The Effect of Application of an Idea

2008-03-25 Thread Vladimir Nesov
On Wed, Mar 26, 2008 at 1:27 AM, Jim Bromer [EMAIL PROTECTED] wrote:

 Let's suppose that I claim that Ed bumped into me.  Right away we can see
 that the word-concept 'bumped' has some effect on any ideas you might have
 about Ed, about me, and about Ed and me.  My claim here is that the effect
 of the interaction of ideas goes beyond semantics into the realm of ideas
 proper.  If it turned out that I got into Ed's way (perhaps intentionally),
 then one might wonder whether the claim that Ed bumped into me was a
 correct or adequate description of what happened.  On the other hand, such
 detail might not be interesting or necessary in some other conversation,
 so the effect of the idea of 'bumping' and the idea of 'getting in the way
 of' may or may not be of interest in all conversations about the event.
 Furthermore, the idea of 'getting in the way of' may not be relevant to
 some examinations of what happened, as in the case where a judge might
 want to focus on whether or not the bumping actually took place.  With
 this kind of focus, the question of whether or not I got in Ed's way might
 then become evidence of whether or not the bump actually took place, but
 it would not otherwise be relevant to the judge's examination of the
 incident.

 Presentations like the one that I just made have been made often before.
 What I am saying is that the effect of the application of different ideas
 may be more clearly delineated in stories like this, and that process can
 be seen as a generalization of form that may be used with representations
 to help show what kind of structure would be needed to create and maintain
 such complexes of potential relations between ideas.

 While I do not know the details of how I might go about creating a program
 to build structure like that, the view that it is only a 'porting of
 structure' implies that the method might be applied in some simple manner.
 While it can be applied in a simple manner to a simple model, my interest
 in the idea is that I could also take the idea further in more complicated
 models.

 The point that the method can be used in a simplistic, constrained model is
 significant because the potential problem is so complex that constrained
 models may be used to study details that would be impossible in more dynamic
 learning models.


Certainly ambiguity (= applicability to multiple contexts in different
ways) and the presence of rich structure in presumably simple 'ideas',
as you call them, are known issues. Even the interaction between the
concept clouds evoked by a pair of words is a nontrivial process (the
'triangular lightbulb'). In a way, the whole operation can be modeled
by such interactions, where sensory input/recall is taken to present a
stream of triggers that evoke concept cloud after concept cloud, with
associations and compound concepts forming at the overlaps. But of
course it's too hand-wavy without a more restricted model of what's
going on. Communicating something that exists solely at a high level is
very inefficient, and most of such content can turn out to be wrong.
Back to prototyping...
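
(A crude sketch of the concept-cloud picture, using an invented association
table: each word evokes a cloud of associated concepts, and the overlap of
two clouds is where compound concepts would form. Purely illustrative.)

# Illustrative sketch: evoke a cloud per word, intersect the clouds.
ASSOCIATIONS = {                     # hypothetical association data
    "triangular": {"shape", "three sides", "geometry", "pointed"},
    "lightbulb":  {"glass", "light", "idea", "filament", "pointed"},
    "idea":       {"thought", "light", "insight"},
}

def concept_cloud(word, depth=1):
    cloud = set(ASSOCIATIONS.get(word, set())) | {word}
    if depth > 0:
        for concept in list(cloud):
            cloud |= concept_cloud(concept, depth - 1)
    return cloud

def interact(word_a, word_b):
    a, b = concept_cloud(word_a), concept_cloud(word_b)
    return a & b                     # the overlap: where compounds could form

print(interact("triangular", "lightbulb"))   # e.g. {'pointed'}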

-- 
Vladimir Nesov
[EMAIL PROTECTED]



Re: [agi] The Effect of Application of an Idea

2008-03-24 Thread Jim Bromer
Thanks for asking.  I will try to come up with a simple model during the
next week. I can create an example because the principle can be used in
well-defined constrained models or in more extensible models.



The theory does not answer all questions about AGI.  I would think that
should be taken as a reasonable assumption about any single theory;
however, I believe that it can help in the discovery of more dynamic and
flexible principles that may be of some use in AI.  The reason I think
this is that the theory explicitly deals with an issue that has not been
abstracted and highlighted in any discussions that I can recall.  So while
the idea of an effect of application of an idea may have been implicitly
invoked in any number of discussions, I don't think that it has ever
really emerged as a fundamental subject matter in its own right.



Concept grounding could be taken as an example of the effect of application.
The association of a concept with some data that exemplifies it within the
greater data environment would naturally produce some kinds of knowledge
that could affect other kinds of knowledge.



To try to understand what I am talking about, start by imagining a
simulation of some physical operation, like a part of a complex factory in
a Sim City kind of game.  In this kind of high-level model no one would
ever imagine that all of the objects should interact in one stereotypical
way; different objects would interact with other objects in different
kinds of ways.  And no one would imagine that the machines that operated
on other objects in the simulation were not also objects in their own
right.  For instance, the machines used in production might require the
use of other machines to fix or enhance them.  And the machines might
produce or operate on objects that were themselves machines.  When you
think about a simulation of some complicated physical systems it becomes
very obvious that different kinds of objects can have different effects on
other objects.  And yet, when it comes to AI, people go on and on about
systems that totally disregard this seemingly obvious divergence of effect
that is so typical of nature.  Instead, most theories treat insight as if
it could be funneled through some narrow rational system, or through other
less rational field operations, where the objects of the operations are
seen only as the ineffective objects of the pre-defined operations of the
program.
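
(A toy sketch of the factory point, purely illustrative and with made-up
names: machines are objects in their own right, they act on other objects,
and the objects acted on may themselves be machines needing repair.)

# Illustrative sketch: divergent effects between different kinds of objects.
class Part:
    def __init__(self, name):
        self.name = name

class Machine(Part):                      # a machine is also an object/part
    def __init__(self, name, wear_limit=3):
        super().__init__(name)
        self.uses = 0
        self.wear_limit = wear_limit

    def broken(self):
        return self.uses >= self.wear_limit

    def operate_on(self, target):
        # different kinds of objects respond differently to the same machine
        self.uses += 1
        if isinstance(target, Machine) and target.broken():
            target.uses = 0               # acting on a machine: repair it
            return f"{self.name} repaired {target.name}"
        return f"{self.name} processed {target.name}"

press, repair_bot = Machine("press"), Machine("repair bot")
print(press.operate_on(Part("sheet metal")))
for _ in range(3):
    press.operate_on(Part("sheet metal"))
print(repair_bot.operate_on(press))       # a machine operating on a machine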



A usage evaluation could be taken as an example of an effect of application,
because the idea of usage and of statistical evaluation can be combined with
the object of consideration along with other theories that detail how such
combinations could be usefully applied to some problem.  But it is obviously
not the only effective process that would be necessary to understand
complicated systems.  No one would use only statistical models to discuss
the management and operations of a real factory, for example.  It is rather
obvious that such limited methods would be grossly inadequate.  Why would
anyone imagine that a narrow operational system would be adequate for an AI
program?  The theory of the effect of application of an idea tries to
address this inadequacy by challenging the programmer to begin to think
about and program applications that can detail how simple interactive
effects can be combined with novel insights in a feasible, extensible object.
So while I don't have the solution, I believe I can see a path.



I feel that by using the principle of the effect of the application of
ideas, one could build simple extensible models. The models would start out
as being simplistic.  But by carefully studying how complicated interactions
interfere or cohere I believe that some new AI principles may be found.  I
will try to come up with a simple model during the next week.



Jim Bromer



On Sun, Mar 23, 2008 at 4:53 AM, Vladimir Nesov [EMAIL PROTECTED] wrote:

 Jim,

 It sounds like something about concept grounding, but that's all I
 got. Can you give an example that demonstrates the structure of what
 you are talking about?

 --
 Vladimir Nesov
 [EMAIL PROTECTED]



Re: [agi] The Effect of Application of an Idea

2008-03-23 Thread Vladimir Nesov
Jim,

It sounds like something about concept grounding, but that's all I
got. Can you give an example that demonstrates the structure of what
you are talking about?

-- 
Vladimir Nesov
[EMAIL PROTECTED]



[agi] The Effect of Application of an Idea

2008-03-22 Thread Mark Waser
Jim,

You are absolutely correct.  Try the following logical definition/meme-map,
where all links are tautologies (and therefore transitive):

                                Spread
              |            |            |                 |
              v            v            v                 v
Intelligent -- Friendly -- Ethical -- {Core of all religions} -- {Any current
               religion minus irrational, unethical stupidities}
      ^            ^                    ^                  ^
      |            |                    |                  |
      v            v                    v                  v
   Rational -- Self-Interest    {Play Well With Others} -- Love

The meme itself is a successful implementation of Seed Friendliness (and 
Intelligence) if successfully implanted.



  - Original Message - 
  From: Jim Bromer 
  To: agi@v2.listbox.com 
  Sent: Saturday, March 22, 2008 5:33 PM
  Subject: **SPAM** [agi] The Effect of Application of an Idea


  I want to quickly mention an idea I had a few years ago.  New knowledge can 
be thought of as data stored somewhere in some kind of database, but it also 
has a greater potential of effect when it is used intelligently.  New knowledge 
is not just a dull object of information to be stored, because it has the 
potential to be far reaching when it is used.  It even has the potential to 
change the course of one's thinking.



  Knowledge is varied, and the usefulness, adequacy and accuracy will not be 
the same for each piece of information.  Some information will constitute a 
meaningful idea and some will only constitute a fragment of an idea.  Knowledge 
constitutes understanding only when it is combined with more knowledge that is 
relevant to it and can be used effectively to integrate it into a greater sense 
of the subject matter.  So even though a piece of knowledge may be recognized 
as meaningful, that meaning exists because it can be related to many other 
kinds of knowledge.



  Knowledge is not just a piece of information that lies dully on a dusty 
shelf, so to speak; it has the potential to dynamically relate to other kinds 
of knowledge.  In my view of advanced AI, knowledge has to play different 
computational roles with other kinds of knowledge.



  A few years ago, I realized that different kinds of knowledge can have a 
varied range of effect when applied to other knowledge.  Let's say that I want 
to store the information that parts x1332-b and z733-c are somehow related 
into a database record.  If no other kind of information has been stored about 
those parts, or if no relations with other relevant information have been 
programmed into the database, then that information will exist only as a 
rather dull fragment of information that is unrelated to any other 
information.  And the vast majority of people who read this will quickly 
forget that parts x1332-b and z733-c are somehow related, just because it 
holds so little meaning for them.  On the other hand, if I convince you that 
you should be a little more open-minded, you might take that thought and apply 
it to a wide variety of situations.  The difference is that a persuasive 
argument to change the way you think about things in general can potentially 
have a greater effect of application than a piece of information concerning 
some trivial fact that has little meaning to you.
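
  (A toy contrast, with hypothetical part numbers and rules: a stored fact 
relates two specific items and stays inert, while an 'idea' like being more 
open-minded gets applied across a whole belief store and so has a far wider 
effect of application.)

# Illustrative sketch only: a narrow fact versus a broadly applied idea.
facts = {("x1332-b", "z733-c"): "somehow related"}   # inert unless asked about

beliefs = {"stranger's claim": 0.2, "own first impression": 0.9}

def be_more_open_minded(beliefs):
    # one idea, applied across the whole belief store
    return {k: min(1.0, v + 0.2) if v < 0.5 else v for k, v in beliefs.items()}

print(facts.get(("x1332-b", "z733-c")))   # touches exactly one relation
print(be_more_open_minded(beliefs))       # touches every qualifying belief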



  One of the great things about this theory of the effect of application is 
that it can be modeled in a simple closed artificial system and studied, but 
it can also subsequently be used in an extension of the system.  It is, in 
effect, a foundation of conceptual integration, and by recognizing the 
significance of the theory, I believe that some important new areas of 
research may be discovered.  The theory can help advanced students 
imaginatively discover how ideas can play different roles with other ideas, 
and help them to see beyond the narrow application of fundamental primitives 
that dominate (and, in my opinion, often disable) current research into AGI.



  Although the theory can be used as a fundamental primitive in simple 
experiments, it can also be used in the definition of extensive conceptual 
systems.  Its beauty is that it is not just a fundamental piece of knowledge; 
it is also a principle in a theory of conceptualization.  Ideas have effects 
on other ideas.



  Jim Bromer 
