Re: [agi] Intelligence: a pattern discovery algorithm of scalable complexity.

2008-03-30 Thread Vladimir Nesov
Hello Boris, and welcome to the list.

I didn't understand your algorithm; you use many terms that you didn't
define. It would probably be clearer if you used some kind of
pseudocode and systematically described every procedure that occurs. But I
think the more fundamental questions that need clarifying won't depend on
these.

What is it that your system tries to predict? Does it predict only
specific terminal inputs, the values at the ends of its sensors? Or
something else? When does prediction occur?

What is this prediction for? How does it help? How does the system use
it? What use is this ability to us?

-- 
Vladimir Nesov
[EMAIL PROTECTED]



Re: [agi] Intelligence: a pattern discovery algorithm of scalable complexity.

2008-03-30 Thread Kingma, D.P.
Although I sympathize with some of Hawkins's general ideas about
unsupervised learning, his current HTM framework is unimpressive in
comparison with state-of-the-art techniques such as Hinton's RBMs, LeCun's
convolutional nets and the promising low-entropy coding variants.

But it should be quite clear that such methods could eventually be very
handy for AGI. For example, many of you would agree that a reliable,
computationally affordable solution to Vision is a crucial factor for AGI:
much of the world's information, even on the internet, is encoded in
audiovisual form. Extracting (sub)symbolic semantics from these
sources would open a world of learning data to symbolic systems.

An audiovisual perception layer generates semantic interpretation on the
(sub)symbolic level. How could a symbolic engine ever reason about the real
world without access to such information?

Vision may be classified under Narrow AI, but I reckon that an AGI can
never understand our physical world without a reliable perceptual system.
Therefore, perception is essential for any AGI reasoning about physical
entities!

Greets, Durk

On Sun, Mar 30, 2008 at 4:34 PM, Derek Zahn [EMAIL PROTECTED] wrote:


 It seems like a reasonable and not uncommon idea that an AI could be built
 as a mostly-hierarchical autoassociative memory.  As you point out, it's not
 so different from Hawkins's ideas.  Neighboring pixels will correlate in
 space and time; features such as edges should become principal components
 given enough data, and so on.  There is a bunch of such work on
 self-organizing the early visual system like this.
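
A minimal Python sketch of that "edges become principal components" point (the
toy image and the function below are purely illustrative, not taken from any
system discussed in this thread):

import numpy as np

def patch_pca(image, patch=8, n_patches=5000, seed=0):
    # Sample many small patches and return their principal components.
    rng = np.random.default_rng(seed)
    h, w = image.shape
    ys = rng.integers(0, h - patch, n_patches)
    xs = rng.integers(0, w - patch, n_patches)
    data = np.stack([image[y:y + patch, x:x + patch].ravel()
                     for y, x in zip(ys, xs)])
    data = data - data.mean(axis=0)          # center the patch vectors
    _, _, vt = np.linalg.svd(data, full_matrices=False)
    return vt                                # rows reshape to patch-sized filters

# Toy "camera input" with smooth spatial structure; on natural images the
# leading components tend to look like oriented edge / gradient detectors.
yy, xx = np.mgrid[0:128, 0:128]
img = np.sin(xx / 7.0) + np.cos(yy / 11.0)
img += 0.1 * np.random.default_rng(1).normal(size=img.shape)
components = patch_pca(img)
print(components.shape)                      # (64, 64): 64 filters of 8x8 pixels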

 That overall concept doesn't get you very far though; the trick is to make
 it work past the first few rather obvious feature extraction stages of
 sensory data, and to account for things like episodic memory, language use,
 goal-directed behavior, and all other cognitive activity that is not just
 statistical categorization.

 I sympathize with your approach and wish you luck.  If you think you have
 something that produces more than Hawkins has with his HTM, please explain it
 with enough precision that we can understand the details.



Re: [agi] Intelligence: a pattern discovery algorithm of scalable complexity.

2008-03-30 Thread Vladimir Nesov
On Sun, Mar 30, 2008 at 7:23 PM, Kingma, D.P. [EMAIL PROTECTED] wrote:

  Although I sympathize with some of Hawkins's general ideas about
  unsupervised learning, his current HTM framework is unimpressive in
  comparison with state-of-the-art techniques such as Hinton's RBMs, LeCun's
  convolutional nets and the promising low-entropy coding variants.

 But it should be quite clear that such methods could eventually be very
 handy for AGI. For example, many of you would agree that a reliable,
 computationally affordable solution to Vision is a crucial factor for AGI:
 much of the world's information, even on the internet, is encoded in
 audiovisual information. Extracting (sub)symbolic semantics from these
 sources would open a world of learning data to symbolic systems.

 An audiovisual perception layer generates semantic interpretation on the
 (sub)symbolic level. How could a symbolic engine ever reason about the real
 world without access to such information?

 Vision may be classified under Narrow AI, but I reckon that an AGI can
 never understand our physical world without a reliable perceptual system.
 Therefore, perception is essential for any AGI reasoning about physical
 entities!


At this point I think that although vision doesn't seem absolutely
necessary, it may as well be implemented, if it can run on the same
substrate as everything else (it probably can). It may prove to be a
good playground for prototyping. If it's implemented with a moving fovea
(which is essentially what LeCun's hack is about) and relies on
selective attention (so that only the gist of the scene is perceived and
supported on higher levels), it shouldn't require an insane amount of
resources compared to the rest of the reasoning engine. Alas, in this
picture I give up my previous assessment (of a few months back) that
reasoning can be implemented efficiently, so that only a few active
concepts need to figure into the computation on each cycle. In my current
model all concepts compute all the time...
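
A rough Python sketch of the moving-fovea idea (the frame sizes and names are
my own assumptions, not part of any proposed design):

import numpy as np

def foveate(frame, fix_y, fix_x, fovea=32, stride=8):
    # Keep a small full-resolution crop at the fixation point plus a coarsely
    # downsampled periphery; higher levels only ever see this reduced "gist".
    h, w = frame.shape
    y0 = int(np.clip(fix_y - fovea // 2, 0, h - fovea))
    x0 = int(np.clip(fix_x - fovea // 2, 0, w - fovea))
    fovea_patch = frame[y0:y0 + fovea, x0:x0 + fovea]   # full resolution
    periphery = frame[::stride, ::stride]               # coarse overview
    return fovea_patch, periphery

frame = np.random.default_rng(0).random((480, 640))
fov, gist = foveate(frame, 200, 300)
print(frame.size, fov.size + gist.size)   # 307200 vs. 5824: ~2% of the pixels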

-- 
Vladimir Nesov
[EMAIL PROTECTED]



RE: [agi] Intelligence: a pattern discovery algorithm of scalable complexity.

2008-03-30 Thread Derek Zahn
[EMAIL PROTECTED] writes:
 
 But it should be quite clear that such methods could eventually be very handy 
 for AGI.
 
I agree with your post 100%; this type of approach is the most interesting 
AGI-related stuff to me.
 
 An audiovisual perception layer generates semantic interpretation on the  
 (sub)symbolic level. How could a symbolic engine ever reason about the real  
 world without access to such information?
 
Even more interesting:  How could a symbolic engine ever reason about the real 
world *with* access to such information? :)



Re: [agi] Intelligence: a pattern discovery algorithm of scalable complexity.

2008-03-30 Thread William Pearson
On 30/03/2008, Kingma, D.P. [EMAIL PROTECTED] wrote:
 Although I sympathize with some of Hawkins's general ideas about unsupervised
learning, his current HTM framework is unimpressive in comparison with
state-of-the-art techniques such as Hinton's RBMs, LeCun's
convolutional nets and the promising low-entropy coding variants.

 But it should be quite clear that such methods could eventually be very handy
 for AGI. For example, many of you would agree that a reliable, computationally
 affordable solution to Vision is a crucial factor for AGI: much of the world's
 information, even on the internet, is encoded in audiovisual information.
 Extracting (sub)symbolic semantics from these sources would open a world of
 learning data to symbolic systems.

 An audiovisual perception layer generates semantic interpretation on the
 (sub)symbolic level. How could a symbolic engine ever reason about the real
 world without access to such information?

So a deafblind person couldn't reason about the real world? Put earmuffs
and a blindfold on, and see what you can figure out about the world
around you. Less, certainly, but then you could figure out more about
the world if you had a magnetic sense like pigeons.

Intelligence is not about the modalities of the data you get, it is
about what you do with the data you do get.

All of the data on the web is encoded in electronic form; it is only
because of our comfort with incoming photons and phonons that it is
translated to video and sound. This fascination with A/V is useful,
but does not help us figure out the core issues that are holding us up
whilst trying to create AGI.

  Will Pearson



Re: [agi] Intelligence: a pattern discovery algorithm of scalable complexity.

2008-03-30 Thread Stephen Reed
Derek: How could a symbolic engine ever reason about the real world *with* 
access to such information? 

I hope my work eventually demonstrates a solution to your satisfaction.  In the 
meantime there is evidence from robotics, specifically driverless cars, that 
real world sensor input can be sufficiently combined and abstracted for use by 
symbolic route planners.
 
-Steve


Stephen L. Reed

Artificial Intelligence Researcher
http://texai.org/blog
http://texai.org
3008 Oak Crest Ave.
Austin, Texas, USA 78704
512.791.7860

- Original Message 
From: Derek Zahn [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Sunday, March 30, 2008 11:21:52 AM
Subject: RE: [agi] Intelligence: a pattern discovery algorithm of scalable 
complexity.

[EMAIL PROTECTED] writes:
  
  But it should be quite clear that such methods could eventually be very 
  handy for AGI.
  
 I agree with your post 100%, this type of approach is the most interesting 
AGI-related stuff to me.
  
  An audiovisual perception layer generates semantic interpretation on the 
 (sub)symbolic level. How could a symbolic engine ever reason about the real 
 world without access to such information?
  
 Even more interesting:  How could a symbolic engine ever reason about the real 
world *with* access to such information? :)



Re: [agi] Intelligence: a pattern discovery algorithm of scalable complexity.

2008-03-30 Thread Mike Tintner
Durk,

Absolutely right about the need for what is essentially an imaginative level of 
mind. But wrong in thinking:

Vision may be classified under Narrow AI

You seem to be treating this extra audiovisual perception layer as a purely 
passive layer. The latest psychology & philosophy recognize that this is in 
fact a level of very active thought and intelligence. And our culture is only 
starting to understand imaginative thought generally.

Just to begin reorienting your thinking here, I suggest you consider how much 
time people spend on audiovisual information (esp. tv) vs purely symbolic 
information (books).  And allow for how much and how rapidly even academic 
thinking is going audiovisual.  

Know of anyone trying to give computers that extra layer? I saw some vague 
reference to this recently, of which I have only a confused memory.


  Durk: Although I sympathize with some of Hawkins's general ideas about 
unsupervised learning, his current HTM framework is unimpressive in comparison 
with state-of-the-art techniques such as Hinton's RBMs, LeCun's convolutional 
nets and the promising low-entropy coding variants.

  But it should be quite clear that such methods could eventually be very handy 
for AGI. For example, many of you would agree that a reliable, computationally 
affordable solution to Vision is a crucial factor for AGI: much of the world's 
information, even on the internet, is encoded in audiovisual information. 
Extracting (sub)symbolic semantics from these sources would open a world of 
learning data to symbolic systems.

  An audiovisual perception layer generates semantic interpretation on the 
(sub)symbolic level. How could a symbolic engine ever reason about the real 
world without access to such information?

  Vision may be classified under Narrow AI, but I reckon that an AGI can 
never understand our physical world without a reliable perceptual system. 
Therefore, perception is essential for any AGI reasoning about physical 
entities!

  Greets, Durk


  On Sun, Mar 30, 2008 at 4:34 PM, Derek Zahn [EMAIL PROTECTED] wrote:


It seems like a reasonable and not uncommon idea that an AI could be built 
as a mostly-hierarchical autoassociative memory.  As you point out, it's not so 
different from Hawkins's ideas.  Neighboring pixels will correlate in space 
and time; features such as edges should become principal components given 
enough data, and so on.  There is a bunch of such work on self-organizing the 
early visual system like this.
 
That overall concept doesn't get you very far though; the trick is to make 
it work past the first few rather obvious feature extraction stages of sensory 
data, and to account for things like episodic memory, language use, 
goal-directed behavior, and all other cognitive activity that is not just 
statistical categorization.
 
I sympathize with your approach and wish you luck.  If you think you have 
something that produce more than Hawkins has with his HTM, please explain it 
with enough precision that we can understand the details.
 





RE: [agi] Intelligence: a pattern discovery algorithm of scalable complexity.

2008-03-30 Thread Derek Zahn

Stephen Reed writes:
 
 How could a symbolic engine ever reason about the real world *with* access 
 to such information? 
 
 I hope my work eventually demonstrates a solution to your satisfaction.  
 
Me too!
 
 In the meantime there is evidence from robotics, specifically driverless 
 cars,  that real world sensor input can be sufficiently combined and 
 abstracted for use  by symbolic route planners.
 
True enough, that is one answer:  by hand-crafting the symbols and the 
mechanics for instantiating them from subsymbolic structures.  We of course 
hope for better than this but perhaps generalizing these working systems is a 
practical approach.



Re: [agi] Intelligence: a pattern discovery algorithm of scalable complexity.

2008-03-30 Thread Kingma, D.P.
Mike, you seem to have misinterpreted my statement. Perception is certainly
not 'passive', as it can be described as active inference using a (mostly
actively) learned world model. Inference is done on many levels, and could
integrate information from various abstraction levels, so I don't see it as
an isolated layer.

On Sun, Mar 30, 2008 at 6:27 PM, Mike Tintner [EMAIL PROTECTED]
wrote:

  Durk,

 Absolutely right about the need for what is essentially an imaginative
 level of mind. But wrong in thinking:

 Vision may be classified under Narrow AI

 You seem to be treating this extra audiovisual perception layer as a
 purely passive layer. The latest psychology & philosophy recognize that this
 is in fact a level of very active thought and intelligence. And our
 culture is only starting to understand imaginative thought generally.

 Just to begin reorienting your thinking here, I suggest you consider how
 much time people spend on audiovisual information (esp. tv) vs purely
 symbolic information (books).  And allow for how much and how rapidly even
 academic thinking is going audiovisual.

 Know of anyone trying to give computers that extra layer? I saw some vague
 reference to this recently, of which I have only a confused memory.



 Durk: Although I sympathize with some of Hawkins's general ideas about
 unsupervised learning, his current HTM framework is unimpressive in
 comparison with state-of-the-art techniques such as Hinton's RBMs, LeCun's
 convolutional nets and the promising low-entropy coding variants.


 But it should be quite clear that such methods could eventually be very
 handy for AGI. For example, many of you would agree that a reliable,
 computationally affordable solution to Vision is a crucial factor for AGI:
 much of the world's information, even on the internet, is encoded in
 audiovisual information. Extracting (sub)symbolic semantics from these
 sources would open a world of learning data to symbolic systems.

 An audiovisual perception layer generates semantic interpretation on the
 (sub)symbolic level. How could a symbolic engine ever reason about the real
 world without access to such information?

 Vision may be classified under Narrow AI, but I reckon that an AGI can
 never understand our physical world without a reliable perceptual system.
 Therefore, perception is essential for any AGI reasoning about physical
 entities!

 Greets, Durk

 On Sun, Mar 30, 2008 at 4:34 PM, Derek Zahn [EMAIL PROTECTED] wrote:

 
  It seems like a reasonable and not uncommon idea that an AI could be
  built as a mostly-hierarchical autoassociative memory.  As you point out,
  it's not so different from Hawkins's ideas.  Neighboring pixels will
  correlate in space and time; features such as edges should become
  principal components given enough data, and so on.  There is a bunch of such
  work on self-organizing the early visual system like this.
 
  That overall concept doesn't get you very far though; the trick is to
  make it work past the first few rather obvious feature extraction stages of
  sensory data, and to account for things like episodic memory, language use,
  goal-directed behavior, and all other cognitive activity that is not just
  statistical categorization.
 
  I sympathize with your approach and wish you luck.  If you think you
  have something that produce more than Hawkins has with his HTM, please
  explain it with enough precision that we can understand the details.
 


Re: [agi] Intelligence: a pattern discovery algorithm of scalable complexity.

2008-03-30 Thread Kingma, D.P.
On Sun, Mar 30, 2008 at 6:48 PM, William Pearson [EMAIL PROTECTED] wrote:

 On 30/03/2008, Kingma, D.P. [EMAIL PROTECTED] wrote:
  An audiovisual perception layer generates semantic interpretation on the
  (sub)symbolic level. How could a symbolic engine ever reason about the real
  world without access to such information?

  So a deafblind person couldn't reason about the real world? Put earmuffs
  and a blindfold on, and see what you can figure out about the world
  around you. Less, certainly, but then you could figure out more about
  the world if you had a magnetic sense like pigeons.

  Intelligence is not about the modalities of the data you get, it is
  about what you do with the data you do get.

 All of the data on the web is encoded in electronic form, it is only
 because of our comfort with incoming photons and phonons that it is
 translated to video and sound. This fascination with A/V is useful,
 but does not help us figure out the core issues that are holding us up
 whilst trying to create AGI.

  Will Pearson

Intelligence is not *only* about the modalities of the data you get,
but modalities are certainly important. A deafblind person can still
learn a lot about the world with taste, smell, and touch, but the
senses one has access to define the limits of the world model one can
build.

If I put on earmuffs and a blindfold right now, I can still reason
quite well using touch, since I have access to a world model built
using, e.g., vision. If you had been deafblind and paralysed since
birth, would you have any possibility of spatial reasoning? No, except
maybe for some extremely crude genetically coded heuristics.

Sure, you could argue that an intelligence purely based on text,
disconnected from the physical world, could be intelligent, but it
would have a very hard time reasoning about the interaction of entities in
the physical world. It would be unable to understand humans in many
aspects: I wouldn't call that generally intelligent.

Perception is about learning and using a model of our physical
world. Input is often high-bandwidth, while output is often
low-bandwidth and useful for high-level processing (e.g. reasoning and
memory). Luckily, efficient methods are emerging, so I'm quite
optimistic about progress towards this aspect of intelligence.



Re: [agi] Intelligence: a pattern discovery algorithm of scalable complexity.

2008-03-30 Thread Vladimir Nesov
On Sun, Mar 30, 2008 at 10:16 PM, Kingma, D.P. [EMAIL PROTECTED] wrote:

  Intelligence is not *only* about the modalities of the data you get,
  but modalities are certainly important. A deafblind person can still
  learn a lot about the world with taste, smell, and touch, but the
  senses one has access to define the limits of the world model one can
  build.

  If I put on earmuffs and a blindfold right now, I can still reason
  quite well using touch, since I have access to a world model built
  using, e.g., vision. If you had been deafblind and paralysed since
  birth, would you have any possibility of spatial reasoning? No, except
  maybe for some extremely crude genetically coded heuristics.

  Sure, you could argue that an intelligence purely based on text,
  disconnected from the physical world, could be intelligent, but it
  would have a very hard time reasoning about the interaction of entities in
  the physical world. It would be unable to understand humans in many
  aspects: I wouldn't call that generally intelligent.

  Perception is about learning and using a model of our physical
  world. Input is often high-bandwidth, while output is often
  low-bandwidth and useful for high-level processing (e.g. reasoning and
  memory). Luckily, efficient methods are emerging, so I'm quite
  optimistic about progress towards this aspect of intelligence.


One of the requirements that I try to satisfy with my design is the
ability to equivalently perceive information encoded by seemingly
incompatible modalities. For example, a visual stream can be encoded
using a set of pairs (tag, color), where tags are unique labels that
correspond to positions of pixels. This set of pairs can be shuffled
and supplied as serial input (where tags and colors are encoded as
binary words of activation), and the system must be able to reconstruct
a representation as good as that supplied by a naturally arranged video
input. Of course getting to that point requires careful incremental
teaching, but after that there should be no real difference (aside
from bandwidth, of course).
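
A toy Python illustration of that (tag, color) scheme (my own sketch of the
encoding described above, not the actual system):

import numpy as np

rng = np.random.default_rng(0)
h, w = 4, 6
image = rng.integers(0, 256, size=(h, w))

# Sender: label every pixel with a unique tag, then shuffle the pairs before
# pushing them through a serial channel.
pairs = list(zip(range(h * w), image.ravel()))
order = rng.permutation(len(pairs))
shuffled = [pairs[i] for i in order]

# Receiver: assume the tag -> (row, col) mapping has already been learned
# through the "careful incremental teaching" mentioned above.
learned_position = {tag: (tag // w, tag % w) for tag in range(h * w)}
reconstructed = np.zeros_like(image)
for tag, color in shuffled:
    y, x = learned_position[tag]
    reconstructed[y, x] = color

print(np.array_equal(image, reconstructed))   # True: same representation recovered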

It might be useful to look at all concepts as 'modalities': you can
'see' your thoughts; when you know a certain theory, you can 'see' how
it's applied, how its parts interact, and what the obvious conclusions are.
Prewiring sensory input in a certain way merely pushes learning in a
certain direction, just like inbuilt drives bias action in theirs.

This way, for example, it should be possible to teach a 'modality' for
understanding simple graphs encoded as text, so that on one hand
text-based input is sufficient, and on the other hand the system
effectively perceives simple vector graphics. This trick can be used
to explain spatial concepts from natural language. But, again, a video
camera might be a simpler and more powerful way to the same end, even
if visual processing is severely limited.

-- 
Vladimir Nesov
[EMAIL PROTECTED]



[agi] Symbols

2008-03-30 Thread Derek Zahn
Related obliquely to the discussion about pattern discovery algorithms: What 
is a symbol?
 
I am not sure that I am using the words in this post in exactly the same way 
they are normally used by cognitive scientists; to the extent that causes 
confusion, I'm sorry.  I'd rather use words in their strict conventional sense 
but I do not fully understand what that is.  These thoughts are fuzzier than 
I'd like; if I was better at de-fuzzifying them I might be a pro instead of an 
amateur!
 
Proposition:  a symbol is a token with both denotative and model-theoretic 
semantics.
 
The denotative semantics are what make a symbol refer to something or be 
about something.  The model-theoretic semantics allow symbol processing 
operations to occur (such as reasoning).
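
To make the proposition concrete, here is a minimal Python sketch of that
two-sided reading of a symbol (the types and the example facts are mine, not
a standard formalization):

from dataclasses import dataclass, field

@dataclass
class Symbol:
    name: str
    denotation: object = None                      # what the token refers to
    relations: set = field(default_factory=set)    # facts usable for reasoning

# Denotative side: each token points at something in the world (or in sensor data).
fred = Symbol("Fred", denotation="person observed at time B")
nail = Symbol("Fred's pinky fingernail",
              denotation="light reflecting off a small region at time B",
              relations={("part_of", "Fred")})

# Model-theoretic side: relations license symbol-processing steps such as inference.
def is_part_of(sym, whole):
    return ("part_of", whole) in sym.relations

print(is_part_of(nail, "Fred"))   # True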
 
I believe this is a somewhat more restrictive use of the word symbol than is 
necessarily implied by Newell and Simon in the Physical Symbol System 
Hypothesis, but my aim is engineering rather than philosophy.
 
I'm actually somewhat skeptical that human beings use symbols in this sense for 
much of our cognition.  We appear to be a million times better at it than any 
other animal, and that is the special thing that makes us so great, but we 
still aren't very good at it.  However, most of the things we want to build AGI 
*for* require us to greatly expand the symbol processing capabilities of mere 
humans.  I think we're mostly interested in building artificial scientists and 
engineers rather than artificial musicians.  Since computer programs, 
engineering drawings, and physics theories are explicitly symbolic constructs, 
we're more interested in effectively creating symbols than in the totality of 
the murky subsymbolic world supporting it.  To what extent can we separate 
them?  I wish I knew.
 
In this view, subsymbolic simply refers to tokens that lack some of the 
features of symbols.  For example, a representation of a pixel from a camera 
has clear denotational semantics but it is not elaborated as well as a better 
symbol would be (the light coming from direction A at time B is not as useful 
as the light reflecting off of Fred's pinky fingernail).  Similarly, and more 
importantly, subsymbolic products of sensory systems lack useful 
model-theoretic semantics.  The origin of symbols problem involves how those 
semantics arise -- and to me it's the most interesting piece of the AGI puzzle.
 
Is anybody else interested in this kind of question, or am I simply inventing 
issues that are not meaningful and useful?



Re: [agi] Intelligence: a pattern discovery algorithm of scalable complexity.

2008-03-30 Thread Kingma, D.P.
Vladimir, I agree with you on many issues, but...

On Sun, Mar 30, 2008 at 9:03 PM, Vladimir Nesov [EMAIL PROTECTED] wrote:
  This way, for example, it should be possible to teach a 'modality' for
  understanding simple graphs encoded as text, so that on one hand
  text-based input is sufficient, and on the other hand system
  effectively perceives simple vector graphics. This trick can be used
  to explain spatial concepts from natural language. But, again, a video
  camera might be a simpler and more powerful way to the same end, even
  if visual processing is severely limited.

Vector graphics can indeed be communicated to an AGI by relatively
low-bandwidth textual input. But, unfortunately,
the physical world is not made of vector graphics, so reducing the
physical world to vector graphics is quite lossy (and computationally
expensive in itself). I'm not sure whether you're assuming that vector
graphics is very useful for AGI, but I would disagree.

 Prewiring sensory input in a certain way merely pushes learning in
 certain direction, just like inbuilt drives bias action in theirs.

Who said perception needs to be prewired? Perception should be made
efficient by exploiting statistical regularities in the data, not by
assuming them per se. Regularities in the data (captured by your world
model) should tell you where to focus your attention *most* of the
time, not *all* the time ;)



Re: [agi] Intelligence: a pattern discovery algorithm of scalable complexity.

2008-03-30 Thread Vladimir Nesov
On Sun, Mar 30, 2008 at 11:33 PM, Kingma, D.P. [EMAIL PROTECTED] wrote:

  Vector graphics can indeed be communicated to an AGI by relatively
  low-bandwidth textual input. But, unfortunately,
  the physical world is not made of vector graphics, so reducing the
  physical world to vector graphics is quite lossy (and computationally
  expensive in itself). I'm not sure whether you're assuming that vector
  graphics is very useful for AGI, but I would disagree.


I referred to manually providing an explanation in conversational format.
Of course it's lossy, but whether it's lossy compared to the real
world is not the issue; it's much more important how it compares to the
'gist' scheme that we extract from full vision. It's clearly not much.
Vision allows one to attend to any of a huge number of details present in
the input, but only a few details are seen at a time. When a
specific issue needs a spatial explanation, it can be carried out by
explicitly specifying its structure in vector graphics.


   Prewiring sensory input in a certain way merely pushes learning in
   certain direction, just like inbuilt drives bias action in theirs.

  Who said perception needs to be prewired? Perception should be made
  efficient by exploiting statistical regularities in the data, not
  assuming them per se. Regularities in the data (captured by your world
  model) should tell you where to focus your attention on *most* of the
  time, not *all* the time ;)


By prewiring I meant a trivial level, like routing signals from the
retina to certain places in the brain, from the start suggesting that
nearby pixels on the retina are close together, and making the temporal
synchrony of signals approximately the same as in the image on the
retina. Bad prewiring would consist of sticking signals from pixels on
the retina to random parts of the brain, with random delays. It would
take much more effort to acquire good visual perception in this case
(and it would be impossible on brain wetware).
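
A small numerical illustration of that contrast (my own sketch; the smooth
"retina" signal is synthetic):

import numpy as np

rng = np.random.default_rng(0)
# 1000 frames of a 64-pixel retina: each frame is a smooth wave, so
# neighbouring pixels carry highly correlated signals over time.
phase = rng.uniform(0.0, 2 * np.pi, size=(1000, 1))
pos = np.arange(64) * (2 * np.pi / 64)
signals = np.sin(phase + pos) + 0.1 * rng.normal(size=(1000, 64))

def mean_neighbour_corr(x):
    return float(np.mean([np.corrcoef(x[:, i], x[:, i + 1])[0, 1]
                          for i in range(x.shape[1] - 1)]))

topographic = signals                           # wiring preserves retinal order
scrambled = signals[:, rng.permutation(64)]     # "bad" prewiring: random routing

print(round(mean_neighbour_corr(topographic), 2))  # ~0.97: neighbours agree
print(round(mean_neighbour_corr(scrambled), 2))    # ~0.0: local structure destroyed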

-- 
Vladimir Nesov
[EMAIL PROTECTED]



Re: [agi] Intelligence: a pattern discovery algorithm of scalable complexity.

2008-03-30 Thread Kingma, D.P.
Alright, agreed with all you say. If I understood correctly, your
system (at the moment) assumes scene descriptions at a level higher
than pixels, but certainly lower than objects. An application of such a
system seems to be a simulated, virtual world where such descriptions are
at hand... Is this indeed the direction you're going?

Greets,
Durk

On Sun, Mar 30, 2008 at 10:00 PM, Vladimir Nesov [EMAIL PROTECTED] wrote:

 On Sun, Mar 30, 2008 at 11:33 PM, Kingma, D.P. [EMAIL PROTECTED] wrote:
 
    Vector graphics can indeed be communicated to an AGI by relatively
    low-bandwidth textual input. But, unfortunately,
    the physical world is not made of vector graphics, so reducing the
    physical world to vector graphics is quite lossy (and computationally
    expensive in itself). I'm not sure whether you're assuming that vector
    graphics is very useful for AGI, but I would disagree.
 

  I referred to manually providing an explanation in conversational format.
  Of course it's lossy, but whether it's lossy compared to the real
  world is not the issue; it's much more important how it compares to the
  'gist' scheme that we extract from full vision. It's clearly not much.
  Vision allows one to attend to any of a huge number of details present in
  the input, but only a few details are seen at a time. When a
  specific issue needs a spatial explanation, it can be carried out by
  explicitly specifying its structure in vector graphics.


 
Prewiring sensory input in a certain way merely pushes learning in
certain direction, just like inbuilt drives bias action in theirs.
 
   Who said perception needs to be prewired? Perception should be made
   efficient by exploiting statistical regularities in the data, not
   assuming them per se. Regularities in the data (captured by your world
   model) should tell you where to focus your attention on *most* of the
   time, not *all* the time ;)
 

 By prewiring I meant a trivial level, like routing signals from the
 retina to certain places in the brain, from the start suggesting that
 nearby pixels on the retina are close together, and making temporal
 synchrony of signals to be approximately the same as in image on the
 retina. Bad prewiring would consist in sticking signals from pixels on
 the retina to random parts of the brain, with random delays. It would
 take much more effort to acquire good visual perception in this case
 (and would be impossible on brain wetware).

 --

 Vladimir Nesov
 [EMAIL PROTECTED]






Re: [agi] Intelligence: a pattern discovery algorithm of scalable complexity.

2008-03-30 Thread Vladimir Nesov
On Mon, Mar 31, 2008 at 12:21 AM, Kingma, D.P. [EMAIL PROTECTED] wrote:
 Alright, agreed with all you say. If I understood correctly, your
  system (at the moment) assumes scene descriptions at a level higher
  than pixels, but certainly lower than objects. An application of such a
  system seems to be a simulated, virtual world where such descriptions are
  at hand... Is this indeed the direction you're going?

I'm far from dealing with high-level stuff, so it's only in the design.
The vector graphics that I talked about were supposed to be provided
manually by a human. For example, they can be part of an explanation of
what the word 'between' is about. Alternatively, a kind of sketchpad can be
used. My point is that a 'modality' seems to be a learnable thing that
can be stimulated not only by direct sensory input, but also by
learned inferences coming from completely different modalities.
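
A toy Python sketch of how such a text-encoded 'sketchpad' could ground a
word like 'between' (the scene format and the helper below are my own
assumptions, not a worked-out proposal):

scene_text = "A 0 0; B 4 0; C 2 0"   # hypothetical textual scene description

def parse_scene(text):
    points = {}
    for item in text.split(";"):
        name, x, y = item.split()
        points[name] = (float(x), float(y))
    return points

def between(points, a, b, c, tol=1e-6):
    # c is 'between' a and b if it lies on the segment from a to b.
    (ax, ay), (bx, by), (cx, cy) = points[a], points[b], points[c]
    cross = (bx - ax) * (cy - ay) - (by - ay) * (cx - ax)   # collinearity test
    dot = (cx - ax) * (bx - ax) + (cy - ay) * (by - ay)     # projection onto a->b
    seg2 = (bx - ax) ** 2 + (by - ay) ** 2
    return abs(cross) < tol and 0 <= dot <= seg2

pts = parse_scene(scene_text)
print(between(pts, "A", "B", "C"))   # True: C lies between A and B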

-- 
Vladimir Nesov
[EMAIL PROTECTED]



Re: [agi] Logical Satisfiability...Get used to it.

2008-03-30 Thread Mark Waser
I agree with Richard and hereby formally request that Ben chime in.

It is my contention that SAT is a relatively narrow form of Narrow AI and not 
general enough to be on an AGI list.

This is not meant, in any way, shape, or form, to denigrate the work that you are 
doing.  It is very important work.

It's just that you're performing the equivalent of presenting a biology paper 
at a physics convention. :-)

  - Original Message - 
  From: Jim Bromer 
  To: agi@v2.listbox.com 
  Sent: Sunday, March 30, 2008 11:52 AM
  Subject: **SPAM** Re: [agi] Logical Satisfiability...Get used to it.





On the contrary, Vladimir is completely correct in requesting that the
discussion go elsewhere:  this has no relevance to the AGI list, and
there are other places where it would be pertinent.


Richard Loosemore


  If Ben doesn't want me to continue, I will stop posting to this group. 
Otherwise please try to understand what I said about the relevance of SAT to 
AGI and try to address the specific issues that I mentioned.  On the other 
hand, if you don't want to waste your time in this kind of discussion then do 
just that: Stay out of it.
  Jim Bromer


  Jim Bromer



Re: [agi] Symbols

2008-03-30 Thread Mark Waser
  From: Derek Zahn 
  Is anybody else interested in this kind of question, or am I simply inventing 
issues that are not meaningful and useful?

  The issues you bring up are key/core to a major part of AGI.  Unfortunately, 
they are also issues hashed over way too many times in a mailing list format 
where resolution is nearly impossible.

  Might I suggest attempting to do this in wiki format instead?  I would be 
very interested in participating.

  Mark



Re: [agi] Intelligence: a pattern discovery algorithm of scalable complexity.

2008-03-30 Thread Mark Waser
 True enough, that is one answer:  by hand-crafting the symbols and the 
 mechanics for instantiating them from subsymbolic structures.  We of course 
 hope for better than this but perhaps generalizing these working systems is a 
 practical approach.

Um.  That is what is known as the grounding problem.  I'm sure that Richard 
Loosemore would be more than happy to send references explaining why this is 
not productive.



Re: [agi] Intelligence: a pattern discovery algorithm of scalable complexity.

2008-03-30 Thread Mark Waser

From: Kingma, D.P. [EMAIL PROTECTED]

Sure, you could argue that an intelligence purely based on text,
disconnected from the physical world, could be intelligent, but it
would have a very hard time reasoning about the interaction of entities in
the physical world. It would be unable to understand humans in many
aspects: I wouldn't call that generally intelligent.


Given sufficient bandwidth, why would it have a hard time reasoning about 
the interaction of entities?  You could describe vision down to the pixel, 
hearing down to the pitch and decibel, touch down to the sensation, etc., and 
the system could internally convert it to exactly what a human feels.  You 
could explain to it all the known theories of psychology and give it the 
personal interactions of billions of people.  Sure, that's a huge amount of 
bandwidth, but it proves that your statement is inaccurate.






Re: [agi] Logical Satisfiability...Get used to it.

2008-03-30 Thread Ben Goertzel
My judgment as list moderator:

1)  Discussions of particular, speculative algorithms for solving SAT
are not really germane for this list

2)  Announcements of really groundbreaking new SAT algorithms would
certainly be germane to the list

3) Discussions of issues specifically regarding the integration of SAT solvers
into AGI architectures are highly relevant to this list

4) If you think some supernatural being placed an insight in your mind, you're
probably better off NOT mentioning this when discussing the insight in a
scientific forum, as it will just cause your idea to be taken way less seriously
by a vast majority of scientific-minded people...

-- Ben G, List Owner

On Sun, Mar 30, 2008 at 4:41 PM, Mark Waser [EMAIL PROTECTED] wrote:


 I agree with Richard and hereby formally request that Ben chime in.

 It is my contention that SAT is a relatively narrow form of Narrow AI and
 not general enough to be on an AGI list.

 This is not meant, in any way shape or form, to denigrate the work that you
 are doing.  It is very important work.

 It's just that you're performing the equivalent of presenting a biology
 paper at a physics convention.:-)




 - Original Message -
 From: Jim Bromer
 To: agi@v2.listbox.com
 Sent: Sunday, March 30, 2008 11:52 AM
 Subject: **SPAM** Re: [agi] Logical Satisfiability...Get used to it.





  On the contrary, Vladimir is completely correct in requesting that the
  discussion go elsewhere:  this has no relevance to the AGI list, and
  there are other places where it would be pertinent.
 
 
  Richard Loosemore
 
 

  If Ben doesn't want me to continue, I will stop posting to this group.
 Otherwise please try to understand what I said about the relevance of SAT to
 AGI and try to address the specific issues that I mentioned.  On the other
 hand, if you don't want to waste your time in this kind of discussion then
 do just that: Stay out of it.
 Jim Bromer


 Jim Bromer
  




-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

If men cease to believe that they will one day become gods then they
will surely become worms.
-- Henry Miller



Re: [agi] Intelligence: a pattern discovery algorithm of scalable complexity.

2008-03-30 Thread Mark Waser

From: Kingma, D.P. [EMAIL PROTECTED]

Vector graphics can indeed be communicated to an AGI by relatively
low-bandwidth textual input. But, unfortunately,
the physical world is not made of vector graphics, so reducing the
physical world to vector graphics is quite lossy (and computationally
expensive in itself).


Huh?  Intelligence is based upon lossiness, and the ability to lose rarely 
relevant (probably incorrect) outlier information is frequently the key to 
making problems tractable (though it can also set you up for failure when 
you miss a phase transition by mistaking it for just an odd outlier :-) ), 
since it forms the basis of discovery by analogy.


Matt Mahoney's failure to recognize this has him trapped in *exact* 
compression hell. ;-)



Who said perception needs to be prewired? Perception should be made
efficient by exploiting statistical regularities in the data, not
assuming them per se. Regularities in the data (captured by your world
model) should tell you where to focus your attention on *most* of the
time, not *all* the time ;)


Which is the correct answer to the grounding problem.  Thank you. 





Re: [agi] Logical Satisfiability...Get used to it.

2008-03-30 Thread Mark Waser
4) If you think some supernatural being placed an insight in your mind, you're
probably better off NOT mentioning this when discussing the insight in a
scientific forum, as it will just cause your idea to be taken way less seriously
by a vast majority of scientific-minded people...


Awesome answer!

However, only *some* religions believe in supernatural beings and I, 
personally, have never seen any evidence supporting such a thing.


Have you been having such experiences and been avoiding mentioning them 
because you're afraid for your reputation?


Ben, I'm worried about you now. ;-) 





RE: [agi] Intelligence: a pattern discovery algorithm of scalable complexity.

2008-03-30 Thread Derek Zahn
Mark Waser writes:
 
 True enough, that is one answer:  by hand-crafting the symbols and the
 mechanics for instantiating them from subsymbolic structures.  We of
 course hope for better than this but perhaps generalizing these working
 systems is a practical approach.

 Um.  That is what is known as the grounding problem.  I'm sure that
 Richard Loosemore would be more than happy to send references explaining
 why this is not productive.
 
It's not the grounding problem.  The symbols crashing around in these robotic 
systems are very well grounded.

The problem is that these systems are narrow, not that they manipulate 
ungrounded symbols.



Re: [agi] Symbols

2008-03-30 Thread Mike Tintner
In this and surrounding discussions, everyone seems deeply confused - it's 
nothing personal, so is our entire culture - about the difference between

SYMBOLS

1.  Derek Zahn, curly hair, big jaw, intelligent eyes, etc. etc.

and 

IMAGES

2. http://robot-club.com/teamtoad/nerc/h2-derek-sunflower.JPG

I suggest that every time you want to think about this area, you all put symbols 
beside the corresponding images, and slowly it will start to become clear that 
each does things the other CAN'T do, period.

We are all next to illiterate - and I mean, mind-blowingly ignorant - about how 
images function. What, for example, does an image of D.Z. or any person, do, 
that no amount of symbols - whether words, numbers, algebraic formulae, or 
logical propositions -  could ever do?

Why are images almost always more powerful than the corresponding symbols? Why 
do they communicate so much faster?

  Derek:

  Related obliquely to the discussion about pattern discovery algorithms: What 
is a symbol?
   
  I am not sure that I am using the words in this post in exactly the same way 
they are normally used by cognitive scientists; to the extent that causes 
confusion, I'm sorry.  I'd rather use words in their strict conventional sense 
but I do not fully understand what that is.  These thoughts are fuzzier than 
I'd like; if I was better at de-fuzzifying them I might be a pro instead of an 
amateur!
   
  Proposition:  a symbol is a token with both denotative and model-theoretic 
semantics.
   
  The denotative semantics are what make a symbol refer to something or be 
about something.  The model-theoretic semantics allow symbol processing 
operations to occur (such as reasoning).
   
  I believe this is a somewhat more restrictive use of the word symbol than 
is necessarily implied by Newell and Simon in the Physical Symbol System 
Hypothesis, but my aim is engineering rather than philosophy.
   
  I'm actually somewhat skeptical that human beings use symbols in this sense 
for much of our cognition.  We appear to be a million times better at it than 
any other animal, and that is the special thing that makes us so great, but we 
still aren't very good at it.  However, most of the things we want to build AGI 
*for* require us to greatly expand the symbol processing capabilities of mere 
humans.  I think we're mostly interested in building artificial scientists and 
engineers rather than artificial musicians.  Since computer programs, 
engineering drawings, and physics theories are explicitly symbolic constructs, 
we're more interested in effectively creating symbols than in the totality of 
the murky subsymbolic world supporting it.  To what extent can we separate 
them?  I wish I knew.
   
  In this view, subsymbolic simply refers to tokens that lack some of the 
features of symbols.  For example, a representation of a pixel from a camera 
has clear denotational semantics but it is not elaborated as well as a better 
symbol would be (the light coming from direction A at time B is not as useful 
as the light reflecting off of Fred's pinky fingernail).  Similarly, and more 
importantly, subsymbolic products of sensory systems lack useful 
model-theoretic semantics.  The origin of symbols problem involves how those 
semantics arise -- and to me it's the most interesting piece of the AGI puzzle.
   
  Is anybody else interested in this kind of question, or am I simply inventing 
issues that are not meaningful and useful?





Re: [agi] Logical Satisfiability...Get used to it.

2008-03-30 Thread Ben Goertzel
On Sun, Mar 30, 2008 at 5:09 PM, Mark Waser [EMAIL PROTECTED] wrote:
  4) If you think some supernatural being placed an insight in your mind, you're
   probably better off NOT mentioning this when discussing the insight in a
   scientific forum, as it will just cause your idea to be taken way less seriously
   by a vast majority of scientific-minded people...

  Awesome answer!

  However, only *some* religions believe in supernatural beings and I,
  personally, have never seen any evidence supporting such a thing.

I've got one in a jar in my basement ... but don't worry, I won't let him out
till the time is right ;-) ...

and so far, all his AI ideas have proved to be
absolute bullshit, unfortunately ... though he's done a good job of helping
me put hexes on my neighbors...


  Have you been having such experiences and been avoiding mentioning them
  because you're afraid for your reputation?

  Ben, I'm worried about you now.;-)








-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

If men cease to believe that they will one day become gods then they
will surely become worms.
-- Henry Miller



Re: [agi] Symbols

2008-03-30 Thread Vladimir Nesov
On Mon, Mar 31, 2008 at 12:02 AM, Mike Tintner [EMAIL PROTECTED] wrote:

 We are all next to illiterate - and I mean, mind-blowingly ignorant - about
 how images function. What, for example, does an image of D.Z. or any person,
 do, that no amount of symbols - whether words, numbers, algebraic formulae,
 or logical propositions -  could ever do?

 Why are images almost always more powerful than the corresponding symbols?
 Why do they communicate so much faster?


Because of higher bandwidth?

Mike, what is the point in crying ignorance, while providing no
constructive comment? Argument from awe can lead to all kinds of wrong
conclusions.

-- 
Vladimir Nesov
[EMAIL PROTECTED]



Re: [agi] Symbols

2008-03-30 Thread Mark Waser
 Why are images almost always more powerful than the corresponding symbols? 
 Why do they communicate so much faster?

Um . . . . dude . . . . it's just a bandwidth thing.

Think about images vs. visual symbols vs. word descriptions vs. names.

It's a spectrum from high-bandwidth information transfer to almost pure 
reference tags.

If it's something you've never run across before, images are best -- high 
bandwidth but then you end up with high mental processing costs.

For familiar items, word descriptions (or better yet, single word names) 
require little bandwidth and little in the way of subsequent processing costs.

  - Original Message - 
  From: Mike Tintner 
  To: agi@v2.listbox.com 
  Sent: Sunday, March 30, 2008 4:02 PM
  Subject: Re: [agi] Symbols


  In this  surrounding discussions, everyone seems deeply confused -  it's 
nothing personal, so is our entire culture - about the difference between

  SYMBOLS

  1.  Derek Zahn  curly hair big jaw  intelligent eyes  . etc. etc

  and 

  IMAGES

  2. http://robot-club.com/teamtoad/nerc/h2-derek-sunflower.JPG

  I suggest that every time you want to think about this area, you all put 
symbols beside the corresponding images, and slowly it will start to become 
clear that each does things the other CAN'T do, period.

  We are all next to illiterate - and I mean, mind-blowingly ignorant - about 
how images function. What, for example, does an image of D.Z. or any person, 
do, that no amount of symbols - whether words, numbers, algebraic formulae, or 
logical propositions -  could ever do?

  Why are images almost always more powerful than the corresponding symbols? 
Why do they communicate so much faster?

Derek:

Related obliquely to the discussion about pattern discovery algorithms 
What is a symbol?
 
I am not sure that I am using the words in this post in exactly the same 
way they are normally used by cognitive scientists; to the extent that causes 
confusion, I'm sorry.  I'd rather use words in their strict conventional sense 
but I do not fully understand what that is.  These thoughts are fuzzier than 
I'd like; if I was better at de-fuzzifying them I might be a pro instead of an 
amateur!
 
Proposition:  a symbol is a token with both denotative and 
model-theoretic semantics.
 
The denotative semantics are what make a symbol refer to something or be 
about something.  The model-theoretic semantics allow symbol processing 
operations to occur (such as reasoning).
 
I believe this is a somewhat more restrictive use of the word symbol than 
is necessarily implied by Newell and Simon in the Physical Symbol System 
Hypothesis, but my aim is engineering rather than philosophy.
 
I'm actually somewhat skeptical that human beings use symbols in this sense 
for much of our cognition.  We appear to be a million times better at it than 
any other animal, and that is the special thing that makes us so great, but we 
still aren't very good at it.  However, most of the things we want to build AGI 
*for* require us to greatly expand the symbol processing capabilities of mere 
humans.  I think we're mostly interested in building artificial scientists and 
engineers rather than artificial musicians.  Since computer programs, 
engineering drawings, and physics theories are explicitly symbolic constructs, 
we're more interested in effectively creating symbols than in the totality of 
the murky subsymbolic world supporting it.  To what extent can we separate 
them?  I wish I knew.
 
In this view, subsymbolic simply refers to tokens that lack some of the 
features of symbols.  For example, a representation of a pixel from a camera 
has clear denotational semantics but it is not elaborated as well as a better 
symbol would be (the light coming from direction A at time B is not as useful 
as the light reflecting off of Fred's pinky fingernail).  Similarly, and more 
importantly, subsymbolic products of sensory systems lack useful 
model-theoretic semantics.  The origin of symbols problem involves how those 
semantics arise -- and to me it's the most interesting piece of the AGI puzzle.
 
Is anybody else interested in this kind of question, or am I simply 
inventing issues that are not meaningful and useful?




  agi | Archives  | Modify Your Subscription   









---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244id_secret=98558129-0bdb63
Powered by Listbox: http://www.listbox.com

Re: [agi] Intelligence: a pattern discovery algorithm of scalable complexity.

2008-03-30 Thread William Pearson
On 30/03/2008, Kingma, D.P. [EMAIL PROTECTED] wrote:

 Intelligence is not *only* about the modalities of the data you get,
  but modalities are certainly important. A deafblind person can still
  learn a lot about the world with taste, smell, and touch, but the
  senses one has access to defines the limits to the world model one can
  build.

As long as you have one high bandwidth modality you should be able to
add on technological gizmos to convert information to that modality,
and thus be able to model the phenomenon from that part of the world.

Humans manage to convert modalities, e.g.

http://www.engadget.com/2006/04/25/the-brain-port-neural-tongue-interface-of-the-future/
Using touch on the tongue.

We don't do it very well, but that is mainly because we don't have to
do it very often.

AIs that are designed to have new modalities added to them, using
their major modality of their memory space+interrupts (or other
computational modality), may be even more flexible than humans and
able to adapt to a new modality as quickly as a current computer
is able to add a new device.
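
Purely as an illustration of that idea, here is a minimal Python sketch of a
modality being attached like a device; the interface and all the names are my
own invention for the sketch, not anyone's actual design:

from __future__ import annotations
from typing import Iterator, Protocol

class Modality(Protocol):
    # Any sensor can be "plugged in" by exposing a common event stream.
    def events(self) -> Iterator[tuple[float, bytes]]:
        ...

class SonarArray:
    # Hypothetical new sensor; it only needs to speak the common interface.
    def events(self) -> Iterator[tuple[float, bytes]]:
        yield (0.0, b"\x10\x22\x31")   # fake range readings

class Agent:
    def __init__(self) -> None:
        self.modalities: list[Modality] = []

    def attach(self, modality: Modality) -> None:
        # Analogous to plugging a new device into a computer.
        self.modalities.append(modality)

    def step(self) -> None:
        for m in self.modalities:
            for t, payload in m.events():
                self.remember(t, payload)   # everything lands in one memory space

    def remember(self, t: float, payload: bytes) -> None:
        print(f"t={t}: {payload!r}")

agent = Agent()
agent.attach(SonarArray())
agent.step()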


  If I put on ear muffs and a blindfold right now, I can still reason
  quite well using touch, since I have access to a world model built
  using e.g. vision. If you were deafblind and paralysed since your
  birth, would you have any possibility of spatial reasoning? No, maybe
  except for some extremely crude genetically coded heuristics.

Sure, if you don't get any spatial information you won't be able to
model spatially. But getting the information is different from having
a dedicated modality.  My point was that audiovisual is not the only
way to get spatial information. It may not even be the best way for
what we happen to want to do. So let's not get too hung up on any
specific modality when discussing intelligence.

  Sure, you could argue that an intelligence purely based on text,
  disconnected from the physical world, could be intelligent, but it
  would have a very hard time reasoning about interaction of entities in
   the physical world. It would be unable to understand humans in many
  aspects: I wouldn't call that generally intelligent.

I'm not so much interested in this case, but what about the case where
you have a robot with sonar, radar, and other sensors, but not the
normal 2-camera + 2-microphone setup people imply when they say
audiovisual?

  Will Pearson

---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244id_secret=98558129-0bdb63
Powered by Listbox: http://www.listbox.com


Re: [agi] Logical Satisfiability...Get used to it.

2008-03-30 Thread Richard Loosemore

Jim Bromer wrote:



On the contrary, Vladimir is completely correct in requesting that the
discussion go elsewhere:  this has no relevance to the AGI list, and
there are other places where it would be pertinent.


Richard Loosemore

 
If Ben doesn't want me to continue, I will stop posting to this group. 
Otherwise please try to understand what I said about the relevance of 
SAT to AGI and try to address the specific issues that I mentioned.  On 
the other hand, if you don't want to waste your time in this kind of 
discussion then do just that: Stay out of it.

Jim Bromer


Since diplomacy did not work, I will come to the point:  as far as I can 
see you have given no specific issues, only content-free speculation 
on topics of no relevance.




Richard Loosemore

---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244id_secret=98558129-0bdb63
Powered by Listbox: http://www.listbox.com


Re: [agi] Intelligence: a pattern discovery algorithm of scalable complexity.

2008-03-30 Thread Kingma, D.P.
Okay, with text, I mean natural language, in its usual
low-bandwidth form. That should clarify my statement. Any data can be
represented with text of course, but that's not the point... The point
that I was trying to make is that natural language is too
low-bandwidth to provide sufficient data to learn a sufficient model
about entities embedded in a complex physical world, such as humans.



On Sun, Mar 30, 2008 at 10:50 PM, Mark Waser [EMAIL PROTECTED] wrote:
 From: Kingma, D.P. [EMAIL PROTECTED]

  Sure, you could argue that an intelligence purely based on text,
   disconnected from the physical world, could be intelligent, but it
   would have a very hard time reasoning about interaction of entities in
   the physical world. It would be unable to understand humans in many
   aspects: I wouldn't call that generally intelligent.

  Given sufficient bandwidth, why would it have a hard time reasoning about
  interaction of entities?  You could describe vision down to the pixel,
  hearing down to the pitch and decibel, touch down to the sensation, etc. and
  the system could internally convert it to exactly what a human feels.  You
  could explain to it all the known theories of psychology and give it the
  personal interactions of billions of people.  Sure, that's a huge amount of
  bandwidth, but it proves that your statement is inaccurate.





  ---
  agi
  Archives: http://www.listbox.com/member/archive/303/=now
  RSS Feed: http://www.listbox.com/member/archive/rss/303/
  Modify Your Subscription: http://www.listbox.com/member/?;
  Powered by Listbox: http://www.listbox.com


---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244id_secret=98558129-0bdb63
Powered by Listbox: http://www.listbox.com


Re: [agi] Intelligence: a pattern discovery algorithm of scalable complexity.

2008-03-30 Thread Kingma, D.P.
On Sun, Mar 30, 2008 at 11:00 PM, Mark Waser [EMAIL PROTECTED] wrote:
 From: Kingma, D.P. [EMAIL PROTECTED]

 Vector graphics can indeed be communicated to an AGI by relatively
   low-bandwidth textual input. But, unfortunately,
   the physical world is not made of vector graphics, so reducing the
   physical world to vector graphics is quite lossy (and computationally
   expensive in itself).

  Huh?  Intelligence is based upon lossiness, and the ability to lose rarely
  relevant (probably incorrect) outlier information is frequently the key to
  making problems tractable (though it can also set you up for failure when
  you miss a phase transition by mistaking it for just an odd outlier :-)
  since it forms the basis of discovery by analogy.

  Matt Mahoney's failure to recognize this has him trapped in *exact*
  compression hell. ;-)

Agreed with that: exact compression is not the way to go if you ask
me. But that doesn't mean any lossy method is OK. Converting a scene
to vector graphics means throwing away much visual
information early in the process: visual information (e.g. texture)
that might be useful later in the process (e.g. for disambiguation).
I'm not stating that a vector description is not useful: I'm stating that
information is thrown away that could have been used to construct an
essential part of a world model that understands physical entities
down to the level of e.g. textures.

---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244id_secret=98558129-0bdb63
Powered by Listbox: http://www.listbox.com


Re: [agi] Intelligence: a pattern discovery algorithm of scalable complexity.

2008-03-30 Thread Kingma, D.P.
(Sorry for triple posting...)

On Sun, Mar 30, 2008 at 11:34 PM, William Pearson [EMAIL PROTECTED] wrote:
 On 30/03/2008, Kingma, D.P. [EMAIL PROTECTED] wrote:


  Intelligence is not *only* about the modalities of the data you get,
but modalities are certainly important. A deafblind person can still
learn a lot about the world with taste, smell, and touch, but the
senses one has access to defines the limits to the world model one can
build.

  As long as you have one high bandwidth modality you should be able to
  add on technological gizmos to convert information to that modality,
  and thus be able to model the phenomenon from that part of the world.

  Humans manage to convert modalities E.g.

  
 http://www.engadget.com/2006/04/25/the-brain-port-neural-tongue-interface-of-the-future/
  Using touch on the tongue.

Nice article. Apparently even the brain's region for perception of
taste is generally adaptable to new input.

  I'm not so much interested in this case, but what about the case where
  you have a robot with Sonar, Radar and other sensors. But not the
  normal 2 camera +2 microphone thing people imply when they say
  audiovisual.

That's an interesting case indeed. AGIs equipped with
sonar/radar/ladar instead of 'regular' vision should be perfectly
capable of certain forms of spatial reasoning, but still unable to
understand humans on certain subjects. Still, if you don't need your
agents to completely understand humans, audiovisual senses could go
out of the window. It depends on your agent's goals, I guess.

---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244id_secret=98558129-0bdb63
Powered by Listbox: http://www.listbox.com


Re: [agi] Symbols

2008-03-30 Thread Mike Tintner

  MW:
  MT: Why are images almost always more powerful than the corresponding 
symbols? Why do they communicate so much faster?

  Um . . . . dude . . . . it's just a bandwidth thing.

  Vlad: Because of higher bandwidth? 


  Well, guys, if the only difference between an image and, say, a symbolic - 
verbal or mathematical or programming - description is bandwidth, perhaps 
you'll be able to explain how you see the Cafe Wall illusion from a symbolic 
description:

  http://www.at-bristol.org.uk/Optical/cafewall_main.htm

  A symbolic description of the above will only describe a set of parallel 
lines and rectangles - and there will be no illusion.

  (You could also try a similar exercise with some of the other illusions 
there).

  Or you might try a symbolic description of the Mona Lisa, and explain to me 
how I will know from your description that she is smiling. You see, if you take 
that image to pieces - as you must do in forming a symbolic description - 
there is no smile!:

  http://gotart.wordpress.com/2007/01/26/mona-lisa-lisa-gherardini/

  And perhaps you can explain to me how you will see the final picture on any 
fully-formed jigsaw puzzle from just the pieces at the very beginning. Take a 
picture to pieces - and you don't get the picture any more. 

  Like I said, we are extremely ignorant about how images work.  (I'll explain 
more another time - but in the meantime, maybe Vlad can explain to us how and 
where the information that is lost in the above examples is encoded.)

---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244id_secret=98558129-0bdb63
Powered by Listbox: http://www.listbox.com


Re: [agi] Intelligence: a pattern discovery algorithm of scalable complexity.

2008-03-30 Thread Mark Waser

From: Kingma, D.P. [EMAIL PROTECTED]

Okay, with text, I mean natural language, in its usual
low-bandwidth form. That should clarify my statement. Any data can be
represented with text of course, but that's not the point... The point
that I was trying to make is that natural language is too
low-bandwidth to provide sufficient data to learn a sufficient model
about entities embedded in a complex physical world, such as humans.


Ah . . . . but natural language is *NOT* necessarily low-bandwidth.

As humans experience it with pretty much just a single focus of attention 
and only one set of eyes and ears that can only operate so fast -- Yes, it 
is low bandwidth.


But what about an intelligence with a hundred or more foci of attention and 
the ability to pull that many Wikipedia pages simultaneously at extremely 
high speed? 
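
Purely as an illustration of that kind of parallelism, a small Python sketch 
with simulated fetches (no real network access; the page titles are 
placeholders):

import asyncio

async def read_page(title: str) -> str:
    await asyncio.sleep(0.01)          # stand-in for a network fetch
    return f"article text for {title}"

async def main() -> None:
    titles = [f"Topic_{i}" for i in range(100)]   # a hundred foci of attention
    pages = await asyncio.gather(*(read_page(t) for t in titles))
    print(f"ingested {len(pages)} pages concurrently")

asyncio.run(main())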



---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244id_secret=98558129-0bdb63
Powered by Listbox: http://www.listbox.com


Re: [agi] Intelligence: a pattern discovery algorithm of scalable complexity.

2008-03-30 Thread Mark Waser

From: Kingma, D.P. [EMAIL PROTECTED]

Agreed with that, exact compression is not the way to go if you ask
me. But that doesn't mean any lossy method is OK. Converting a scene
to vector graphics will lead you to throwing away much visual
information early in the process: visual information (e.g. texture)
that might be useful later in the process (for e.g. disambiguation).
I'm not stating a vector description is not useful: I'm stating that
information is thrown away that could have been used to construct an
essential part of a world model that understands physical entities
down to the level of e.g. textures.


I would agree completely except that I would think that there is some way to 
include texture in the vector graphics in the same way in which color is 
included.
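
As a toy illustration of what carrying texture the way color is carried might 
look like (the field names and the particular statistics are my own guesses, 
not a claim about any existing vector format):

from dataclasses import dataclass

@dataclass
class Region:
    outline: list      # (x, y) control points -- the usual vector-graphics part
    fill_rgb: tuple    # color, which vector formats already carry
    texture: dict      # summary statistics standing in for the discarded detail

wall_tile = Region(
    outline=[(0, 0), (40, 0), (40, 20), (0, 20)],
    fill_rgb=(200, 180, 150),
    texture={"mean": 0.42, "contrast": 0.13, "orientation_rad": 1.1},
)
print(wall_tile)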



---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244id_secret=98558129-0bdb63
Powered by Listbox: http://www.listbox.com


Re: [agi] Symbols

2008-03-30 Thread Mark Waser
From: Mike Tintner 
   Well, guys, if the only difference between an image and, say, a symbolic - 
verbal or mathematical or programming - description is bandwidth, perhaps 
you'll be able to explain how you see the Cafe Wall illusion from a symbolic 
description:

  Sure!  The Cafe Wall illusion is a result of the interaction between two 
things: an image composed of four parallel horizontal lines dividing the image 
into five strips of alternating black and white bars, with the second and 
fourth strips slightly offset so as to trick the human eye into believing that 
the parallel lines aren't parallel; and the optimizing algorithms of the human 
eye.  I could go into enough detail to explain exactly how and why the trick 
works -- the fact that the eye is attempting to interpret a two-dimensional 
image as a three-dimensional scene -- but I think that I've made my point 
adequately.
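
  For what it's worth, that symbolic description is already enough to 
regenerate the pattern. A crude, purely illustrative Python sketch (ASCII tiles 
instead of pixels; the tile sizes and the half-tile offset are arbitrary 
choices):

# Rebuild a crude Cafe Wall pattern from the symbolic description: five
# strips of alternating dark/light tiles, the second and fourth strips
# offset, separated by thin "mortar" lines.
TILE_W, TILE_H, STRIPS, TILES = 6, 3, 5, 8

rows = []
for strip in range(STRIPS):
    offset = TILE_W // 2 if strip % 2 else 0       # offset strips 2 and 4
    for _ in range(TILE_H):
        row = ""
        for col in range(TILES * TILE_W):
            tile_index = (col - offset) // TILE_W
            row += "#" if tile_index % 2 == 0 else "."
        rows.append(row)
    if strip < STRIPS - 1:
        rows.append("-" * (TILES * TILE_W))        # the "parallel" mortar line

print("\n".join(rows))

  The illusion itself, of course, only appears when the regenerated image hits 
the human visual system -- which is exactly the point above.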

   A symbolic description of the above will only describe a set of parallel 
lines and rectangles - and there will be no illusion.

  Of course not, the illusion is a result of the image being implemented on the 
hardware of the human eye and brain.  Unless you describe the human eye and 
brain, you don't get the illusion -- but you can do so easily as I did above 
and the illusion re-appears.

   Or you might try a symbolic description of the Mona Lisa, and explain to 
me, how I will know from your description that she is smiling. You see if you 
take that image to pieces  - as you must do in forming a symbolic description - 
there is no smile!:

  Huh?  All I need to do is include the smile in the description.  You can both 
take the image to pieces *AND* describe the whole at the same time.

   And perhaps you can explain to me how you will see the final picture on any 
fully-formed jigsaw puzzle from just the pieces at the very beginning. Take a 
picture to pieces - and you don't get the picture any more. 

  Wrong.  Take a child's ten piece puzzle apart and re-arrange all the pieces.  
It's simple enough that your mind can hold all of it at once and get the 
picture.  It's only when you take it to too many pieces . . . 

   Like I said, we are extremely ignorant about how images work.  (I'll 
explain more another time - but in the meantime, maybe Vlad can explain to us 
how and where the information that is lost in the above examples, is encoded.).

  I would be extremely careful about throwing the word we around and assuming 
that everyone is just like you.  Why does everyone else have to be ignorant 
about a subject just because you don't understand it yet?

  Do you understand general relativity?  If not, does that suddenly mean that I 
don't understand it any more?  How about biochemistry, physical chemistry, 
thermodynamics, evolution, simulated annealing, etc.?

---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244id_secret=98558129-0bdb63
Powered by Listbox: http://www.listbox.com


Re: [agi] Intelligence: a pattern discovery algorithm of scalable complexity.

2008-03-30 Thread Boris Kazachenko

I'm going to attack you with questions again :-)


You're more than welcome to, sorry for being brisk. I did reply about RSS on 
the blog, but for some reason the post never made it through.

I don't know how RSS works, but you can subscribe via bloglines.com.


What are 'range' and 'complexity'? Is there a specific architecture of
'levels'? Why should higher-level concepts be multidimensional?


Levels are defined incrementally: a comparison adds a set of derivatives to 
the syntax (complexity) of the template pattern.
After sufficient accumulation (range) the pattern is evaluated & selectively 
transferred to a higher level, where these derivatives are also compared, 
forming yet another level of complexity.
Complexity generally corresponds to the range of search (& resulting 
projection) because it adds cost, which must be justified by the 
benefit: accumulated match (one of the derivatives).
The levels may differ in dimensionality (we do live in a 4D world) or 
modality integration, but this doesn't have to be designed in; the 
differences can be discovered by the system.
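
A deliberately oversimplified Python sketch of one level, to make the above 
concrete; the names, the particular match measure (min of the compared 
values), and the cost model are placeholders for illustration only, not 
definitions from the blog:

def run_level(inputs, eval_range=3, cost_per_derivative=2.0):
    higher_level_inputs = []
    pattern = None
    for x in inputs:
        if pattern is None:
            pattern = {"template": x, "derivatives": [], "match": 0.0}
            continue
        pattern["derivatives"].append(x - pattern["template"])  # derivatives -> added syntax
        pattern["match"] += min(x, pattern["template"])         # one possible "degree of match"
        if len(pattern["derivatives"]) >= eval_range:           # sufficient accumulation (range)
            cost = cost_per_derivative * len(pattern["derivatives"])
            if pattern["match"] > cost:                         # benefit must justify the cost
                higher_level_inputs.append(pattern)             # selective transfer upward
            pattern = None                                      # start a new template either way
    return higher_level_inputs

# The next level would compare the transferred patterns' derivatives in the
# same incremental way, forming yet another level of complexity.
print(run_level([3, 4, 4, 5, 2, 7, 6]))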



What is the dynamics of system's operation in time? Is inference
feed-forward and 'instantaneous', measuring by external clock? Can
capture time series?


Temporal, as well as spatial, range of search (duration is storage) 
increases with the level of generality; the feedback (projection) delay 
increases too.



By 'what prediction is for?' I mean connection to action. How does
prediction of inputs or features of inputs translate into action? If
this prediction activity doesn't lead to any result, it may as well be
absent.


The intellectual part of action is planning, which technically is a 
self-prediction.
Prediction is a pattern with adjusted focus: coordinates & resolution; sent 
down the hierarchy, these changes act as motor feedback.
Using that feedback, the system will focus on areas of the environment with 
the greatest predictive potential.

To do so, it will eventually learn to use tools.

Boris, http://scalable-intelligence.blogspot.com/ 



---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244id_secret=98558129-0bdb63
Powered by Listbox: http://www.listbox.com


Re: [agi] Intelligence: a pattern discovery algorithm of scalable complexity.

2008-03-30 Thread Boris Kazachenko

  It seems like a reasonable and not uncommon idea that an AI could be built as 
a mostly-hierarchical autoassiciative memory.  As you point out, it's not so 
different from Hawkins's ideas.  Neighboring pixels will correlate in space 
and time; features such as edges should become principle components given 
enough data, and so on.  There is a bunch of such work on self-organizing the 
early visual system like this.
   That overall concept doesn't get you very far though; the trick is to make 
it work past the first few rather obvious feature extraction stages of sensory 
data, and to account for things like episodic memory, language use, 
goal-directed behavior, and all other cognitive activity that is not just 
statistical categorization.
  I sympathize with your approach and wish you luck.  If you think you have 
something that produce more than Hawkins has with his HTM, please explain it 
with enough precision that we can understand the details.

  Good questions.

  I agree with you on Hawkins & HTM, but his main problem is conceptual.
  He seems to be profoundly confused as to what the hierarchy should select 
for: generality or novelty. He nominates both, apparently not realizing that 
they're mutually exclusive. This creates a difficulty in defining a 
quantitative criterion for selection, which is key for my approach. This 
internal inconsistency leads to haphazard hacking in the HTM. For example, he 
starts by comparing 2D frames in a binary fashion, which is pretty perverse for 
an incremental approach. I start from the beginning, by comparing pixels: the 
limit of resolution, & I quantify the degree of match right there, as a 
distinct variable. I also record & compare explicit coordinates & derivatives, 
while he simply junks all that information. His approach doesn't scale because 
it's not consistent & incremental enough.
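
  To make "comparing pixels at the limit of resolution" concrete, a minimal 
Python sketch of that comparison for a single row of pixels; the exact 
definitions of match and derivative here are simplified placeholders, not the 
definitions used on the blog:

def compare_row(pixels):
    records = []
    for x in range(1, len(pixels)):
        prior, current = pixels[x - 1], pixels[x]
        derivative = current - prior        # difference, kept explicitly
        match = min(prior, current)         # degree of match as a distinct variable
        records.append({"coord": x,         # explicit coordinate is recorded too
                        "input": current,
                        "derivative": derivative,
                        "match": match})
    return records

print(compare_row([12, 14, 14, 90, 91, 10]))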

  I disagree that we need to specifically code episodic memory, language, & 
action - to me these are emergent properties (damn, I hate that word :)).

  Boris.   
  http://scalable-intelligence.blogspot.com/

---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244id_secret=98558129-0bdb63
Powered by Listbox: http://www.listbox.com