Re: [agi] Building a machine that can learn from experience

2008-12-19 Thread Terren Suydam

Hi Colin,

Looking at 
http://en.wikipedia.org/wiki/Electromagnetic_theories_of_consciousness, does 
your position vary substantially from what is written there?

Thanks,
Terren

--- On Fri, 12/19/08, Colin Hales c.ha...@pgrad.unimelb.edu.au wrote:
From: Colin Hales c.ha...@pgrad.unimelb.edu.au
Subject: Re: [agi] Building a machine that can learn from experience
To: agi@v2.listbox.com
Date: Friday, December 19, 2008, 1:09 AM




  
YKY (Yan King Yin) wrote:

  
DARPA buys G. Tononi for $4.9 million! For what amounts to little more
than vague hopes that any of us here could have dreamed up. Here I am, up to
my armpits in an actual working proposition with a real science basis...
scrounging for pennies. Hmmm... maybe if I sidle up and adopt an aging Nobel
prizewinner... maybe that'll do it.

Nah. Too cynical for the festive season. There's always 2009! You never
know.

  
  You talked about building your 'chips'.  Just curious what are you
working on?  Is it hardware-related?

YKY


  

Hi,

I think I covered this in a post a while back, but FYI... I am a little
'left-field' in the AGI circuit in that my approach involves literal
replication of the electromagnetic field structure of brain material,
in contrast to a computational model of that field structure. The
process involves a completely new chip design which looks nothing like
what we're used to. I have a crucial experiment to run over the next
two years. The results should be (I hope) the basic parameters for an
early miniaturised prototype.



The part of my idea that freaks everyone out is that there is no
programming involved. You can adjust the firmware settings for certain
intrinsic properties of the dynamics of the EM fields. But none of
these things correspond in any direct way to 'knowledge' or
intelligence. The chips (will) do what brain material does, but without
all the bio-overheads.



The thing that caught my eye in the thread subject 'Building a machine
that can learn from experience'... is that if you asked Tononi or
anyone else exactly where the 'experience' is, they wouldn't be able to
tell you. The EM field approach deals with this very question first.
The net EM field structure expressed in space literally is the
experience. All learning is grounded in it (not in input/output signals).



I wonder how anyone can claim to build a machine that learns from
experience when they haven't really got a cogent, physically and
biologically plausible, neuroscience-informed view of what
'experience' actually is. But there you go... guys like Tononi get
listened to. And good luck to them!



So I guess my approach is likely to remain a bit of an oddity here
until I get my results into the literature. The machines I plan to
build will be very small and act like biology... I call them
artificial fauna. My fave is the 'cane toad killer', which gets its
kicks by killing nothing but cane toads (a major eco-disaster in
northern Australia). They can't reproduce, and their learning
capability (used to create them) is switched off. It's a bit like the
twenty-something neural dieback in humans... after that you're set in
your ways.



Initially I want to build something 'ant-like' to enter into RoboCup
as a proof of concept. Anyway, that's the plan.



So the basics are: all hardware. No programming. The chips don't exist
yet, only their design concept (in a provisional patent application
just now). 



I think you get the idea. Thanks for your interest.



cheers

colin

Re: [agi] Should I get a PhD?

2008-12-17 Thread Terren Suydam

I'm no expert in these matters, but it seems this conversation is lacking the 
following point(s).

Ability to score funding for any venture usually requires salesmanship and 
connections. Salesmanship in turn requires that you are able to tell a good 
story about your product/approach, and be able to dispel doubts. Charisma and 
confidence help, and these can be developed to some extent if you're not one of 
the lucky ones who are just natural salesmen.

So I'd say if your primary concern is scoring funding, a PhD is a costly way to 
gain credibility. Credibility can be earned in other ways (such as having 
demonstrable results), and other factors may ultimately be more important, such 
as the ability to create and develop relationships with the people in your 
field who may be in a good position to help you out.

Terren

--- On Wed, 12/17/08, YKY (Yan King Yin) generic.intellige...@gmail.com wrote:
  On the contrary, getting a PhD is an astoundingly poor
 strategy for raising
  $$ for a startup.  If you have a talent for biz
 sufficient to raise $$ for a
  startup, you can always get some prof to join your
 team to lend you academic
  credibility.
 
  It is also useful in terms of lending you more
 credibility when you talk
  about your own wacky research ideas.  This may be part
 of YKY's motivation,
  and it's a genuinely meaningful one.  But having
 credibility when talking
  about research ideas is not particularly well
 correlated with being able to
  raise business funding.
 
 Getting business funding may be an inherently hard thing to
 do.  So,
 other things being equal, spending some time + money on a
 PhD degree
 may still be better than all other options.  That's my
 current
 reasoning...

Re: FW: [agi] Lamarck Lives!(?)

2008-12-11 Thread Terren Suydam

After talking to an old professor of mine, it bears mentioning that epigenetic 
mechanisms such as methylation and histone remodeling are not the only means of 
altering transcription. A long established mechanism involves phosphorylation 
of transcription factors in the neuron (phosphorylation is a way of chemically 
enabling or disabling the function of a particular enzyme).

In light of that, I think there is some fuzziness around the use of 'epigenetic'
here, because you could conceivably consider the above phosphorylation mechanism
as epigenetic - functionally speaking, the effect is the same: an increase
or decrease in transcription. The only difference between that and methylation
etc. is transience: phosphorylation of transcription factors is less permanent
than altering the DNA.

He also shed some light on the effects on synapses due to epigenetic
mechanisms. Ed, you were wondering how synapse-specific changes could occur in
response to transcription mechanisms (which operate at the level of the
neuron's nucleus). There are two possible answers to that puzzle (that I am
aware of): 1) there is evidence of mRNA and translation machinery present in
dendrites at the site of synapses (see papers published by Oswald Steward); or
2) activity causes a specific synapse to be 'tagged' so that newly synthesized
proteins in the cell body are targeted specifically to the tagged synapses.

Terren

--- On Thu, 12/11/08, Ed Porter [EMAIL PROTECTED] wrote:
From: Ed Porter [EMAIL PROTECTED]
Subject: FW: [agi] Lamarck Lives!(?)
To: agi@v2.listbox.com
Date: Thursday, December 11, 2008, 10:32 AM



 


 








To save you the trouble, the most relevant language from the article cited
below is:

 

 

“While scientists don't yet know exactly how epigenetic regulation affects
memory, the theory is that certain triggers, such as exercise, visual
stimulation, or drugs, unwind DNA, allowing expression of genes involved in
neural plasticity. That increase in gene expression might trigger development
of new neural connections and, in turn, strengthen the neural circuits that
underlie memory formation. "Maybe our brains are using these epigenetic
mechanisms to allow us to learn and remember things, or to provide sufficient
plasticity to allow us to learn and adapt," says John Satterlee, program
director of epigenetics at the National Institute on Drug Abuse, in Bethesda, MD.

"We have solid evidence that HDAC inhibitors massively promote growth of
dendrites and increase synaptogenesis [the creation of connections between
neurons]," says Tsai. The process may boost memory or allow mice to regain
access to lost memories by rewiring or repairing damaged neural circuits.
"We believe the memory trace is still there, but the animal cannot
retrieve it due to damage to neural circuits," she adds.”

 

-Original Message-

From: Ed Porter
[mailto:[EMAIL PROTECTED] 

Sent: Thursday,
 December 11, 2008 10:28 AM

To: 'agi@v2.listbox.com'

Subject: FW: [agi] Lamarck
Lives!(?)

 

An article related to how changes in the epigenome could affect learning and
memory (the subject which started this thread a week ago):

 

 

http://www.technologyreview.com/biomedicine/21801/

Re: FW: [agi] Lamarck Lives!(?)

2008-12-11 Thread Terren Suydam

Evolution is not magic. You haven't addressed the substance of Matt's questions 
at all. What you're suggesting is magical unless you can talk about specific 
mechanisms, as Richard did last week. Richard's idea - though it is extremely 
unlikely and lacks empirical evidence to support it - is technically plausible. 
He proposed a logical chain of ideas, which can be supported and/or criticized, 
something you need to do if you expect to be taken seriously. 

There are obvious parallels here with AGI. It's very easy to succumb to magical 
or pseudo-explanations of intelligence. So talk specifically and technically 
about *mechanisms* (even if extremely unlikely) and you're not wasting anyone's 
time.

Terren

--- On Thu, 12/11/08, Eric Burton brila...@gmail.com wrote:

 From: Eric Burton brila...@gmail.com
 Subject: Re: FW: [agi] Lamarck Lives!(?)
 To: agi@v2.listbox.com
 Date: Thursday, December 11, 2008, 6:33 PM
 I don't think that each inheritor receives a full set of the original's
 memories. But there may have *evolved* in spite of the obvious barriers, a
 means of transferring primary or significant experience from one organism to
 another in genetic form... we can imagine such a thing given this news!

 On 12/11/08, Matt Mahoney matmaho...@yahoo.com wrote:
  --- On Thu, 12/11/08, Eric Burton brila...@gmail.com wrote:

   You can see though how genetic memory encoding opens the door to
   acquired phenotype changes over an organism's life, though, and those
   could become communicable. I think Lysenko was onto something like
   this. Let us hope all those Soviet farmers wouldn't have just starved! ;3

  No, apparently you didn't understand anything I wrote.

  Please explain how the memory encoded separately as one bit each in 10^11
  neurons through DNA methylation (the mechanism for cell differentiation,
  not genetic changes) is all collected together and encoded into genetic
  changes in a single egg or sperm cell, and back again to the brain when
  the organism matures.

  And please explain why you think that Lysenko's work should not have been
  discredited. http://en.wikipedia.org/wiki/Trofim_Lysenko

  -- Matt Mahoney, matmaho...@yahoo.com

  On 12/11/08, Matt Mahoney matmaho...@yahoo.com wrote:
   --- On Thu, 12/11/08, Eric Burton brila...@gmail.com wrote:

    It's all a big vindication for genetic memory, that's for certain. I
    was comfortable with the notion of certain templates, archetypes,
    being handed down as aspects of brain design via natural selection,
    but this really clears the way for organisms' life experiences to
    simply be copied in some form to their offspring. DNA form!

   No it's not.

   1. There is no experimental evidence that learned memories are passed
   to offspring in humans or any other species.

   2. If memory is encoded by DNA methylation as proposed in
   http://www.newscientist.com/article/mg20026845.000-memories-may-be-stored-on-your-dna.html
   then how is the memory encoded in 10^11 separate neurons (not to
   mention connectivity information) transferred to a single egg or sperm
   cell with less than 10^5 genes? The proposed mechanism is to activate
   one gene and turn off another -- 1 or 2 bits.

   3. The article at http://www.technologyreview.com/biomedicine/21801/
   says nothing about where memory is encoded, only that memory might be
   enhanced by manipulating neuron chemistry. There is nothing
   controversial here. It is well known that certain drugs affect
   learning.

   4. The memory mechanism proposed in
   http://www.ncbi.nlm.nih.gov/pubmed/16822969?ordinalpos=14itool=EntrezSystem2.PEntrez.Pubmed.Pubmed_ResultsPanel.Pubmed_DefaultReportPanel.Pubmed_RVDocSum
   is distinct from (2). It proposes protein regulation at the mRNA level
   near synapses (consistent with the Hebbian model) rather than DNA in
   the nucleus. Such changes could not make their way back to the nucleus
   unless there was a mechanism to chemically distinguish the tens of
   thousands of synapses and encode this information, along with the
   connectivity information (about 10^6 bits per neuron) back to the
   nuclear DNA.

   Last week I showed how learning could occur in neurons rather than
   synapses in randomly and sparsely connected neural networks where all
   of the outputs of a neuron are constrained to have identical weights.
   The network is trained by tuning neurons toward excitation or
   inhibition to reduce the output error. In general an arbitrary X to Y
   bit binary function with N = Y*2^X bits of complexity can be learned
   using about 1.5N to 2N neurons with ~N^1/2 synapses each and ~N log N
   training cycles. As an example I posted a program that learns a 3 by 3
   bit multiplier in about 20 minutes on a PC using 640 neurons with 36
   connections each.

   This is slower than Hebbian learning by a factor of
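
As an aside, the neuron-level learning scheme Matt describes at the end of the
quote above is a good example of the kind of concrete mechanism I mean. Here is
a minimal toy sketch in Python of that flavour of idea - a fixed sparse random
network where each neuron contributes through a single excitatory/inhibitory
gain, tuned by error-driven hill climbing. The sizes, target function and
update rule are my own guesses for illustration; this is not the program Matt
posted.

import random

# Toy neuron-level learning: fixed random sparse wiring, one adjustable gain
# per neuron shared by all of its outputs; training flips neurons between
# excitatory (+1) and inhibitory (-1) when that does not increase the error.
random.seed(0)
X, Y = 3, 3          # learn a 3-bit -> 3-bit boolean function
HIDDEN = 32          # hidden neurons (made-up size)
FAN_IN = 6           # fixed random connections per neuron

def target(bits):    # arbitrary example function to learn
    return [bits[0] ^ bits[1], bits[1] ^ bits[2], bits[0] & bits[2]]

inputs = [[(i >> k) & 1 for k in range(X)] for i in range(2 ** X)]
hid_src = [[random.randrange(X) for _ in range(FAN_IN)] for _ in range(HIDDEN)]
out_src = [[random.randrange(HIDDEN) for _ in range(FAN_IN)] for _ in range(Y)]
hid_gain = [random.choice([-1, 1]) for _ in range(HIDDEN)]
out_gain = [random.choice([-1, 1]) for _ in range(Y)]

def forward(bits):
    x = [2 * b - 1 for b in bits]                       # bipolar inputs
    hid = [1 if hid_gain[i] * sum(x[s] for s in hid_src[i]) > 0 else -1
           for i in range(HIDDEN)]
    return [1 if out_gain[j] * sum(hid[s] for s in out_src[j]) > 0 else 0
            for j in range(Y)]

def error():
    return sum(o != t for bits in inputs
               for o, t in zip(forward(bits), target(bits)))

best = error()
for _ in range(5000):
    layer = random.choice([hid_gain, out_gain])
    i = random.randrange(len(layer))
    layer[i] = -layer[i]            # tentatively flip one neuron
    e = error()
    if e <= best:
        best = e                    # keep flips that do not hurt
    else:
        layer[i] = -layer[i]        # otherwise revert
    if best == 0:
        break

print("remaining errors over all", len(inputs), "inputs:", best)

The final print simply reports how many output bits are still wrong after the
trial budget; nothing here is optimized, it is only meant to make the idea of
"tuning whole neurons toward excitation or inhibition" concrete.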

Re: FW: [agi] Lamarck Lives!(?)

2008-12-11 Thread Terren Suydam

That made almost no sense to me. I'm not trying to be rude here, but that 
sounded like the ramblings of one who doesn't have the necessary grasp of the 
key ideas required to speculate intelligently about these things. The fact that 
you once again managed to mention psilocybin does nothing to help your cause, 
either... and that's coming from someone who believes that psychedelics can be 
valuable, if used properly.

Terren

--- On Thu, 12/11/08, Eric Burton brila...@gmail.com wrote:

 From: Eric Burton brila...@gmail.com
 Subject: Re: FW: [agi] Lamarck Lives!(?)
 To: agi@v2.listbox.com
 Date: Thursday, December 11, 2008, 9:11 PM
 Ok.
 
 "We think we're seeing short-term memories forming in the hippocampus
 and slowly turning into long-term memories in the cortex," says Miller,
 who presented the results last week at the Society for Neuroscience
 meeting in Washington DC.
 
 It certainly sounds like the genetic changes are limited to
 the brain
 itself. Perhaps there is some kind of extra DNA scratch
 space allotted
 to cranial nerve cells. I understand that psilocybin, a
 phosphorylated
 serotonin-like neurotransmitter found in fungal mycelia,
 may have
 evolved as a phosphorous bank for all the DNA needed in
 spore
 production. The structure of fungal mycelia closely
 approximates that
 of the brains found in the animal kingdom, which may have
 evolved from
 the same or some shared point. Then we see how the brain
 can be viewed
 as a qualified, indeed purpose-built DNA recombination
 factory!
 
 Fungal mycelia could be approaching all this from the
 opposite
 direction, doing DNA computation incidentally so as to
 perform
 short-term weather forecasts and other environmental
 calculations,
 simply because there is so much of it about for the next
 sporulation.
 A really compelling avenue for investigation
 
 "The cool idea here is that the brain could be borrowing a form of
 cellular memory from developmental biology to use for what we think of
 as memory," says Marcelo Wood, who researches long-term memory at the
 University of California, Irvine.
 
 Yes. It is
 
 Eric B
 
 On 12/11/08, Eric Burton brila...@gmail.com wrote:
  I don't know how you derived the value 10^4, Matt,
 but that seems
  reasonable to me. Terren, let me go back to the
 article and try to
  understand what exactly it says is happening.
 Certainly that's my
  editorial's crux
 
  On 12/11/08, Matt Mahoney matmaho...@yahoo.com
 wrote:
  --- On Thu, 12/11/08, Eric Burton
 brila...@gmail.com wrote:
 
  I don't think that each inheritor receives
 a full set of the
  original's memories. But there may have
 *evolved* in spite of the
  obvious barriers, a means of transferring
 primary or significant
  experience from one organism to another in
 genetic form...
  we can imagine such a thing given this news!
 
  Well, we could, if there was any evidence
 whatsoever for Lamarckian
  evolution, and if we thought with our reproductive
 organs.
 
  To me, it suggests that AGI could be implemented
 with a 10^4 speedup over
  whole brain emulation -- maybe. Is it possible to
 emulate a sparse neural
  network with 10^11 adjustable neurons and 10^15
 fixed, random connections
  using a non-sparse neural network with 10^11
 adjustable connections?
 
  -- Matt Mahoney, matmaho...@yahoo.com

Re: [agi] Lamarck Lives!(?)

2008-12-03 Thread Terren Suydam

Hi Richard,

Thanks for the link, pretty intriguing. It's important to note that the 
mechanism proposed is just a switch that turns specific genes off... so 
properly understood, it's likely that the resolution required to model this 
mechanism would not necessarily require modeling the entire DNA strand. It 
seems more likely that these methylation caps are being applied to very 
specific genes that produce proteins heavily implicated in the dynamics of 
synapse creation/destruction (or some other process related to memory).  So 
modeling the phenomenon could very possibly be done functionally.
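
In other words, a functional model wouldn't need to represent the DNA strand at
all - a per-gene on/off flag gating a plasticity-related rate would capture the
proposed effect. A toy sketch in Python (the gene names and numbers are made up
purely for illustration):

# Toy functional stand-in for the proposed switch: a boolean "methylated"
# flag per gene that silences that gene's contribution to a plasticity rate.
class Neuron:
    def __init__(self, genes):
        self.genes = dict(genes)                   # gene -> baseline expression
        self.methylated = {g: False for g in genes}

    def set_methylation(self, gene, on):
        self.methylated[gene] = on                 # the switch itself

    def expression(self, gene):
        return 0.0 if self.methylated[gene] else self.genes[gene]

    def plasticity_rate(self):
        return sum(self.expression(g) for g in self.genes)

n = Neuron({"gene_a": 1.0, "gene_b": 0.5})
print(n.plasticity_rate())          # 1.5
n.set_methylation("gene_a", True)   # silencing one gene lowers the rate
print(n.plasticity_rate())          # 0.5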

Memories could only be passed to the child if 1) those DNA changes were also 
made in the germ cells (i.e. egg/sperm) and 2) the DNA changes involved 
resulted in a brain organization in the child that mimicked the parent's brain. 
 (1) is very unlikely but theoretically possible; (2) is impossible for two
reasons. First, the methylation patterns proposed involve a large number of
neurons converging on a pattern of methylation; in contrast, a germ cell would
only capture the methylation of a single cell (which would then be cloned in
the developing fetus).
a different medium of information storage in the mature brain than what is 
normally considered to be the role of DNA in the developing brain. It would 
truly be a huge leap to suggest that the information stored via this alteration 
of DNA would result in that information being preserved somehow in a developing 
brain. 

There are plenty of other epigenetic phenomena to get Lamarck fans excited, but 
this isn't one of them.

Terren

--- On Wed, 12/3/08, Richard Loosemore [EMAIL PROTECTED] wrote:

 From: Richard Loosemore [EMAIL PROTECTED]
 Subject: [agi] Lamarck Lives!(?)
 To: agi@v2.listbox.com
 Date: Wednesday, December 3, 2008, 11:11 AM
 Am I right in thinking that what these people:
 
 http://www.newscientist.com/article/mg20026845.000-memories-may-be-stored-on-your-dna.html
 
 
 are saying is that memories can be stored as changes in the
 DNA inside neurons?
 
 If so, that would upset a few apple carts.
 
 Would it mean that memories (including cultural
 adaptations) could be passed from mother to child?
 
 Implication for neuroscientists proposing to build a WBE
 (whole brain emulation):  the resolution you need may now
 have to include all the DNA in every neuron.  Any bets on
 when they will have the resolution to do that?
 
 
 
 Richard Loosemore

RE: [agi] Lamarck Lives!(?)

2008-12-03 Thread Terren Suydam

Ed,

That's a good point about synapses, but perhaps the methylation just affects 
the neuron's output, e.g., the targeted genes express proteins that only find a 
functional role in the axon.

Terren

--- On Wed, 12/3/08, Ed Porter [EMAIL PROTECTED] wrote:
 Richard,
 
 The role played by the epigenome in genetics actually does
 have a slightly
 Lamarckian tinge.  Nova had a show saying that when
 identical twins are born
 their epigenomes are very similar, but that as they age
 their epigenomes
 start to differ more an more, and that certain behaviors
 like drinking or
 smoking can increase the rate at which such changes take
 place.
 
 What I didn't understand about the article you linked
 to is that it appears
 they are changing the epigenome to change the expression of
 DNA, but as far
 as I know DNA only appears in the nucleus (with the
 exception of
 mitochondrial DNA), and thus would appear to affect the
 cell as a whole,
 and thus not be good at differentially affecting the
 strengths of different
 synapses --- as would presumably be required for most
 neuronal memory ---
 unless the nuclear DNA had some sort of mapping to
 individual synapses, or
 unless local changes to mitochondrial DNA, near a synapse
 are involved.  The
 article does not appear to shed any light on this issue
 of how changes in
 the expression of DNA would affect learning at the synapse
 level, where most
 people think it occurs.
 
 Ed Porter
 
 -Original Message-
 From: Richard Loosemore [mailto:[EMAIL PROTECTED] 
 Sent: Wednesday, December 03, 2008 11:12 AM
 To: agi@v2.listbox.com
 Subject: [agi] Lamarck Lives!(?)
 
 
 Am I right in thinking that what these people:
 
 http://www.newscientist.com/article/mg20026845.000-memories-may-be-stored-on
 -your-dna.html 
 
 
 are saying is that memories can be stored as changes in the
 DNA inside 
 neurons?
 
 If so, that would upset a few apple carts.
 
 Would it mean that memories (including cultural
 adaptations) could be 
 passed from mother to child?
 
 Implication for neuroscientists proposing to build a WBE
 (whole brain 
 emulation):  the resolution you need may now have to
 include all the DNA 
 in every neuron.  Any bets on when they will have the
 resolution to do that?
 
 
 
 Richard Loosemore

Re: [agi] Lamarck Lives!(?)

2008-12-03 Thread Terren Suydam
 I definitely agree that getting from there to a situation
 in which packages of information are being inserted into
 germ cell DNA is a long road, but this one new piece of
 research has - surprisingly - just cut the length of that
 road in half.

Half of infinity is still infinity ;-]

It's just not a possibility, which should be obvious if you look at the 
quantity of information involved. Let M be a measure of the information stored 
via distributed methylation patterns across some number of neurons N. The 
amount of information stored by a single neuron's methylated DNA is going to be 
much smaller than M (roughly M/N). A single germ cell which might conceivably 
inherit the methylation pattern from some single neuron would not be able to 
convey any more than a [1/N] piece of the total information that makes up M. 
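
To put rough numbers on that (the figures below are made-up, order-of-magnitude
placeholders only):

# Back-of-envelope, with illustrative numbers:
N = 10**11            # neurons participating in the distributed pattern
M = 10**13            # suppose M bits of memory are spread across them
per_neuron = M / N    # ~100 bits: the most a single germ cell could inherit
print(per_neuron, per_neuron / M)   # 100.0 bits, i.e. a 1e-11 fraction of M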

The real significance of this research has nothing to do with Lamarckian 
inheritance. It has to do with the proposed medium of memory, as a network of 
switched genes in neurons and perhaps other cells. It's a novel idea that is 
generative of a whole range of new hypotheses and applications (e.g. in the 
pharmaceutical space).

Terren

RE: [agi] Lamarck Lives!(?)

2008-12-03 Thread Terren Suydam

Ed,

Though it seems obvious that synapses are *involved* with memory storage, it's 
not proven that synapses individually *store* memories. Clearly memory is 
distributed, as evidenced by brain injury studies (a situation that led Karl 
Pribram/David Bohm to propose a holographic storage metaphor). In other words, 
memories might be stored as patterns of synaptic/neural dynamics, in which the 
relevant scope is well above the level of the individual synapse.

Given that memory storage is not so simple as to depend crucially on individual 
synapses, I see no serious problems with a neuron-wide mechanism of memory 
storage.

Also, think of Hebbian learning, in which synaptic strength is reinforced based 
on a neuron-wide signal.
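
To make that concrete, here is a minimal Hebbian-style sketch in Python in
which every synapse onto a neuron is updated from one neuron-wide
(postsynaptic) signal - an illustration of the idea only, not a model of real
neurons:

import random

# Minimal Hebbian-style update: all synapses onto the neuron are adjusted
# using a single neuron-wide (postsynaptic) signal.
RATE = 0.1

def output(weights, inputs):
    return 1.0 if sum(w * x for w, x in zip(weights, inputs)) > 0 else 0.0

def hebbian_update(weights, inputs):
    post = output(weights, inputs)             # the neuron-wide signal
    return [w + RATE * post * x                # strengthen co-active inputs
            for w, x in zip(weights, inputs)]

weights = [random.uniform(-0.5, 0.5) for _ in range(4)]
for pattern in ([1, 0, 1, 0], [1, 0, 1, 0], [0, 1, 0, 0]):
    weights = hebbian_update(weights, pattern)
print(weights)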

Terren

--- On Wed, 12/3/08, Ed Porter [EMAIL PROTECTED] wrote:

 From: Ed Porter [EMAIL PROTECTED]
 Subject: RE: [agi] Lamarck Lives!(?)
 To: agi@v2.listbox.com
 Date: Wednesday, December 3, 2008, 1:33 PM
 I don't really see how a change in gene expression in the nucleus of a neuron
 caused by methylation could store long-term memories, since most neural
 network models store almost all their information in the location and
 differentiation of their synapses.
 
 How is information in a neural net stored by making what would appear to be
 only neuron-wide changes? Such a global change might be valuable for
 signalling that a record of recent events in the neuron, at a given brief
 period of time, should be stored, but it would not appear to actually keep
 them stored over a long period of time.
 
 I think the article failed to mention an important part of
 the theory of
 what is going on.
 
 Ed Porter
 
 -Original Message-
 From: Terren Suydam [mailto:[EMAIL PROTECTED] 
 Sent: Wednesday, December 03, 2008 12:16 PM
 To: agi@v2.listbox.com
 Subject: RE: [agi] Lamarck Lives!(?)
 
 
 Ed,
 
 That's a good point about synapses, but perhaps the
 methylation just affects
 the neuron's output, e.g., the targeted genes express
 proteins that only
 find a functional role in the axon.
 
 Terren
 
 --- On Wed, 12/3/08, Ed Porter [EMAIL PROTECTED]
 wrote:
  Richard,
  
  The role played by the epigenome in genetics actually
 does
  have a slightly
  Lamarckian tinge.  Nova had a show saying that when
  identical twins are born
  their epigenomes are very similar, but that as they
 age
  their epigenomes
  start to differ more and more, and that certain
 behaviors
  like drinking or
  smoking can increase the rate at which such changes
 take
  place.
  
  What I didn't understand about the article you
 linked
  to is that it appears
  they are changing the epigenome to change the
 expression of
  DNA, but as far
  as I know DNA only appears in the nucleus (with the
  exception of
  mitochondrial DNA), and thus would appear to affect
 the
  cell as a whole,
  and thus not be good at differentially affecting the
  strengths of different
  synapses --- as would presumably be required for most
  neuronal memory ---
  unless the nuclear DNA had some sort of mapping to
  individual synapses, or
  unless local changes to mitochondrial DNA, near a
 synapse
  are involved.  The
  article does not appear to shed any light on this
 issue
  of how changes in
  the expression of DNA would affect learning at the
 synapse
  level, where most
  people think it occurs.
  
  Ed Porter
  
  -Original Message-
  From: Richard Loosemore [mailto:[EMAIL PROTECTED] 
  Sent: Wednesday, December 03, 2008 11:12 AM
  To: agi@v2.listbox.com
  Subject: [agi] Lamarck Lives!(?)
  
  
  Am I right in thinking that what these people:
  
 
 http://www.newscientist.com/article/mg20026845.000-memories-may-be-stored-on
  -your-dna.html 
  
  
  are saying is that memories can be stored as changes
 in the
  DNA inside 
  neurons?
  
  If so, that would upset a few apple carts.
  
  Would it mean that memories (including cultural
  adaptations) could be 
  passed from mother to child?
  
  Implication for neuroscientists proposing to build a
 WBE
  (whole brain 
  emulation):  the resolution you need may now have to
  include all the DNA 
  in every neuron.  Any bets on when they will have the
  resolution to do that?
  
  
  
  Richard Loosemore

Re: [agi] Lamarck Lives!(?)

2008-12-03 Thread Terren Suydam

 Does it work?

Assuming that the encodings between parent and child are compatible, it could 
work. But you'd still be limited to the total amount of information storage 
allowable in the junk DNA (which would necessarily be a miniscule fraction of 
the total information stored in the brain as memory). And you'd still need to 
identify the mechanism that writes to the junk DNA, which would involve some 
hefty molecular machinery (snipping DNA, synthesizing the new stuff, rejoining 
it, all while doing error correction and turning off the error correction 
involved with normal DNA synthesis/repair). Finally, the idea of junk DNA is 
getting smaller and smaller as we identify gene targets that are not 
necessarily proteins, but various RNA products; or sections of DNA that are 
simply there to anchor other sections, or to enable other methods of gene 
switching.

I know you're just playing here but it would be easy to empirically test this. 
Does junk DNA change between birth and death? Something tells me we would have 
discovered something that significant a long time ago.

Terren

--- On Wed, 12/3/08, Richard Loosemore [EMAIL PROTECTED] wrote:
 Okay, try this.
 
 [heck, you don't have to:  I am just playing with ideas
 here...]
 
 The methylation pattern has not necessarily been shown to
 *only* store information in a distributed pattern of
 activation - the jury's out on that one (correct me if
 I'm wrong).
 
 Suppose that the methylation end caps are just being used
 as a way station for some mechanism whose *real* goal is to
 make modifications to  some patterns in the junk DNA.  So,
 here I am suggesting that the junk DNA of any particular
 neuron is being used to code for large numbers of episodic
 memories (one memory per DNA strand, say), with each neuron
 being used as a redundant store of many episodes.  The same
 episode is stored in multiple neurons, but each copy is
 complete.  When we observe changes in the methylation
 patterns, perhaps these are just part of the transit
 mechanism, not the final destination for the pattern.  To
 put it in the language that Greg Bear would use, the endcaps
 were just part of the radio system.
 (http://www.gregbear.com/books/darwinsradio.cfm)
 
 Now suppose that part of the junk sequences that code for
 these memories are actually using a distributed coding
 scheme *within* the strand (in the manner of a good old
 fashioned backprop neural net, shall we say). That would
 mean that, contrary to what I said in the above paragraph,
 the individual strands were coding a bunch of different
 episodic memory traces, not just one.
 
 (It is even possible that the old idea of flashbulb
 memories may survive the critiques that have been launched
 against it ... and in that case, it could be that what we
 are talking about here is the mechanism for storing that
 particular set of memories.  And in that case, perhaps the
 system expects so few of them, that all DNA strands
 everywhere in the system are dedicated to storing just the
 individual's store of flashbulb memories).
 
 Now, finally, suppose that there is some mechanism for
 radioing these memories to distribute them
 around the system ... and that the radio network extends as
 far as the germ DNA.
 
 Now, the offspring could get the mixed flashbulb memories
 of its parents, in perhaps very dilute or noisy form.
 
 This assumes that whatever coding scheme is used to store
 the information can somehow transcend the coding schemes
 used by different individuals.  Since we do not yet know how
 much common ground there is between the knowledge storage
 used by individuals yet, this is still possible.
 
 There:  I invented a possible mechanism.
 
 Does it work?
 
 
 
 
 
 Richard Loosemore

Re: [agi] Lamarck Lives!(?)

2008-12-03 Thread Terren Suydam

Possibly... it has been shown with methylation. But I think the mechanism 
you're proposing could not involve methylation because (someone can correct me 
if wrong) methylation is only applicable to coding regions (methyl group only 
added to specific DNA sequences that mark the gene). That's not to say another 
switching mechanism on non-coding regions could not also be heritable (i.e., 
reproduced in the copied DNA strand).

Using DNA switches (such as methylation) is more tractable than DNA rewriting, 
but again, the amount of information storage is the limiting factor. Indeed, 
switching on and off sections of DNA implies a big reduction in information 
capacity (as compared to DNA rewriting), since gene switching applies to 
sections of DNA. I wonder how much memory would you expect to be able to pass 
on through this mechanism?

Also, you would need to propose the mechanism by which this form of storage 
would be read. Since junk DNA by definition doesn't code for anything, by 
what mechanism would these switches have an effect on cellular, neural, or 
otherwise cognitive processes?

Terren

--- On Wed, 12/3/08, Richard Loosemore [EMAIL PROTECTED] wrote:
 Ah, hang on folks:  what I was meaning was that the *state*
 of the junk DNA was being used, not the code.
 
 I am referring to the stuff that is dynamically
 interacting, as a result of which genes are switched on and
 off all over the place  so this is a gigantic network of
 switches.
 
 I wouldn't suggest that something is snipping and
 recombining the actual code of the junk DNA,
 only that the state of the switches is being used to code
 for something.
 
 Question is: can the state of the switches be preserved
 during reproduction?
 
 
 
 Richard Loosemore

Re: [agi] Lamarck Lives!(?)

2008-12-03 Thread Terren Suydam
http://en.wikipedia.org/wiki/Epigenetic_inheritance#DNA_methylation_and_chromatin_remodeling

The DNA sites where methylation can occur are rare, except in the regions where 
gene transcription occurs... which generally supports what I was saying about 
coding regions. However it is certainly possible that a different (as yet 
undiscovered) enzyme could methylate a different section of DNA that has no 
correlation at all with transcription.

The key point is that it's certainly possible in principle to have some kind of 
signaling mechanism that uses junk DNA as a substrate, and which can be 
inherited epigenetically. It doesn't seem likely that methylation (as we know 
it) fits the bill, so probably Richard would require an as yet unknown 
mechanism for switching junk DNA.


--- On Wed, 12/3/08, Ben Goertzel [EMAIL PROTECTED] wrote:
 Junk DNA doesn't code for protein, but it seems to carry
 out various
 control functions over the protein synthesis and
 interaction
 processes, no?
 
 ben g
 
 On Wed, Dec 3, 2008 at 4:02 PM, Terren Suydam
 [EMAIL PROTECTED] wrote:
 
  Possibly... it has been shown with methylation. But I
 think the mechanism you're proposing could not involve
 methylation because (someone can correct me if wrong)
 methylation is only applicable to coding regions (methyl
 group only added to specific DNA sequences that mark the
 gene). That's not to say another switching mechanism on
 non-coding regions could not also be heritable (i.e.,
 reproduced in the copied DNA strand).
 
  Using DNA switches (such as methylation) is more
 tractable than DNA rewriting, but again, the amount of
 information storage is the limiting factor. Indeed,
 switching on and off sections of DNA implies a big reduction
 in information capacity (as compared to DNA rewriting),
  since gene switching applies to sections of DNA. I wonder
  how much memory you would expect to be able to pass on
  through this mechanism.
 
  Also, you would need to propose the mechanism by which
 this form of storage would be read. Since junk
 DNA by definition doesn't code for anything, by what
 mechanism would these switches have an effect on cellular,
 neural, or otherwise cognitive processes?
 
  Terren
 
  --- On Wed, 12/3/08, Richard Loosemore
 [EMAIL PROTECTED] wrote:
  Ah, hang on folks:  what I was meaning was that
 the *state*
  of the junk DNA was being used, not the code.
 
  I am referring to the stuff that is dynamically
  interacting, as a result of which genes are
 switched on and
  off all over the place  so this is a gigantic
 network of
  switches.
 
  I wouldn't suggest that something is snipping
 and
  recombining the actual code of the
 junk DNA,
  only that the state of the switches is being used
 to code
  for something.
 
  Question is: can the state of the switches be
 preserved
  during reproduction?
 
 
 
   Richard Loosemore
 -- 
 Ben Goertzel, PhD
 CEO, Novamente LLC and Biomind LLC
 Director of Research, SIAI
 [EMAIL PROTECTED]
 
 I intend to live forever, or die trying.
 -- Groucho Marx

RE: [agi] Lamarck Lives!(?)

2008-12-03 Thread Terren Suydam

I think the key is to see the gene switching not as an information store per se 
but as part of a larger dynamic process (which might be similar in principle to 
simulated annealing), in which the contributions of whole neurons (e.g., the 
outputs) are switched in some way meaningful to the dynamic.
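
Just to illustrate the analogy (and nothing more - this is not a claim about
the biology): in simulated annealing, a configuration of binary switches is
repeatedly perturbed, and worse configurations are tolerated less and less as
the process cools. A toy sketch in Python:

import math
import random

# Toy simulated annealing over a vector of binary switches -- an illustration
# of the analogy only, not a model of gene switching.
random.seed(1)
TARGET = [random.randint(0, 1) for _ in range(20)]      # arbitrary goal pattern

def cost(state):
    return sum(s != t for s, t in zip(state, TARGET))   # mismatches to goal

state = [random.randint(0, 1) for _ in range(20)]
temperature = 2.0
while temperature > 0.01:
    i = random.randrange(len(state))
    candidate = list(state)
    candidate[i] ^= 1                                   # flip one switch
    delta = cost(candidate) - cost(state)
    # always keep improvements; keep worse states with shrinking probability
    if delta <= 0 or random.random() < math.exp(-delta / temperature):
        state = candidate
    temperature *= 0.99                                 # cool down

print(cost(state), "mismatches remaining")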

--- On Wed, 12/3/08, Ed Porter [EMAIL PROTECTED] wrote:
Ben, 

 

I basically agree.

 

There are many things going on in
the human brain. There are all the different neurochemicals, receptors,
and blockers, some of which are not only effective across individual synapses,
but often across broader distances. There is the fact that neuron
branches can apparently grow in directions guided by chemical gradients.
There are synchronies and brain waves, and the way in which they might
spatially encode or decode information. And so on.

 

So I admit the brain is
much more complicated than most neural net models. 

 

But I have not seen any
explanation of how changes in gene expression in a neuron's nucleus would store
memories, even given the knowledge that the epigenome can store information. 


 

If there is such an explanation,
either now or in the future, I would welcome hearing it.

 

Ed Porter

 

-Original Message-

From: Ben Goertzel [mailto:[EMAIL PROTECTED] 

Sent: Wednesday, December 03, 2008 3:24 PM

To: agi@v2.listbox.com

Subject: Re: [agi] Lamarck Lives!(?)

 

On Wed, Dec 3, 2008 at 3:19 PM, Ed Porter [EMAIL PROTECTED]
wrote:

 Terry and Ben,







 I never implied anything that could be considered a
memory at a conscious

 level is stored at just one synapse, but all the discussions I
have heard of

 learning in various brain science books and lectures imply
synaptic weights

 are the main place our memories are stored.

 

Nevertheless, although it's an oft-repeated and well-spread meme, the

available biological evidence shows only that **this is one aspect of

the biological basis of memory in organisms with complex brains**

 

There certainly is data about long-term potentiation and its

relationship to memory ... but the available data comes nowhere near

to justifying the sorts of assumptions made in setting up formal

neural net models, in which synaptic modification is assumed as the

sole basis of learning/memory...

 

ben g

[agi] consciousness is an umbrella

2008-11-12 Thread Terren Suydam

consciousness refers to too many competing concepts to be of value in 
analytical discourse.

It's an umbrella term that invokes, depending on the context, some combination 
of the following concepts: subjective experience, awareness, attention, 
self-awareness, self-reflectivity, intention (will), and certainly others I 
can't think of at the moment. 

If we ask whether a dog has consciousness, that question can mean a dozen 
things to a dozen people. Better is to ask if a dog has subjective experience, 
if it is aware, if it can pay attention, if it is self-aware, if it is 
self-reflective, if it can exercise will. These are all different questions, 
and more useful because we can talk with some precision.

btw, I'm not proposing these sub-concepts in any formal way, just as one 
possible way (of many) to break down consciousness into more useful 
sub-concepts. 

I would be in favor of abolishing the word consciousness from analytical 
discourse because of its total lack of precision.

Terren

--- On Wed, 11/12/08, Richard Loosemore [EMAIL PROTECTED] wrote:
 Hmmm... interesting angle. Everything you say from this
 point on seems to be predicated on the idea that a person
 can *choose* to define it any way they want, and then run
 with their definition.
 
 I notice that this is not possible with any other
 scientific concept - we don't just define an electron as
 Your Plastic Pal Who's Fun To Be With and
 then start drawing conclusions.
 
 The same is true of consciousness.

Re: [agi] Re: [redwood] ICBS Seminar: -PS

2008-11-04 Thread Terren Suydam

Mike,

I can totally understand Ben's frustration and boredom with your posts. You're 
somehow an expert on general intelligence, yet you refuse to eradicate your own 
ignorance about some of the most basic technical concepts. What are you doing 
on a mailing list about artificial general intelligence if you can't relate to 
the technical concepts that are necessary to discuss its feasibility and/or 
implementation?  

Your enthusiasm is great, but until you gain some competency in technical 
areas, you aren't much more than a troll on this list. You really are the 
embodiment of Eliezer Yudkowsky's point about people who can't see beyond their 
own level of intelligence. Your repeated accusations of ignorance against Ben 
bear that out. Maybe his approach will work and maybe it won't, but it ought to 
be obvious that this is a guy who does his homework. He has not only thought 
about creativity (one of the axes you endlessly grind), he has written a book 
about it, which in all likelihood you have not and will not read.

So maybe you should stop posting until you can demonstrate a grasp of technical 
concepts. I think that would be a reasonable requirement to make of those who 
would post here, to demonstrate some competency in basic computer science 
concepts.  That kind of rule is in force in other technical groups I 
participate in.

Terren

--- On Tue, 11/4/08, Mike Tintner [EMAIL PROTECTED] wrote:
From: Mike Tintner [EMAIL PROTECTED]
Subject: Re: [agi] Re: [redwood] ICBS Seminar: -PS
To: agi@v2.listbox.com
Date: Tuesday, November 4, 2008, 10:47 AM



 
 

 
Ben,

According to the known laws of physics, analog computers cannot compute 
anything different than what digital computers can...
if by compute you 
mean produce results observable by finite-precision instruments like human 
eyes 
and ears
 
Ben,
 
There is one other question here. Don't digital computers always add 
another layer? A line, say, always has to be translated into something else, 
like geometric formulae, in order for a digital computer to handle it, no?  
All information has to be coded and decoded, no?
 
I, as you've probably gathered, want a machine that can handle the line as 
a line, directly. A map as a map. No code. 
 
There is nothing comparable to the brain's maps in the physical layout of 
current computers, is there? Could there be? (Neural networks aren't quite the 
same, are they?)
 


Re: [agi] Understanding and Problem Solving

2008-10-23 Thread Terren Suydam

Once again, there is a depth to understanding - it's not simply a binary 
proposition.

Don't you agree that a grandmaster understands chess better than you do, even 
if his moves are understandable to you in hindsight?

If I'm not good at math, I might not be able to solve y=3x+4 for x, but I might 
understand that y equals 3 times x plus four. My understanding is superficial 
compared to someone who can solve for x. 

Finally, don't you agree that understanding natural language requires solving 
problems? If not, how would you account for an AI's ability to understand novel 
metaphor? 

Terren

--- On Thu, 10/23/08, Dr. Matthias Heger [EMAIL PROTECTED] wrote:
From: Dr. Matthias Heger [EMAIL PROTECTED]
Subject: [agi] Understanding and Problem Solving
To: agi@v2.listbox.com
Date: Thursday, October 23, 2008, 1:47 AM




 
 






Terren Suydam wrote:

   Understanding goes far beyond mere knowledge - understanding *is* the
   ability to solve problems. One's understanding of a situation or problem
   is only as deep as one's (theoretical) ability to act in such a way as to
   achieve a desired outcome.

I disagree. A grandmaster of chess can explain his decisions and I will
understand them. Einstein could explain his theory to other physicists (at
least a subset) and they could understand it.

I can read a proof in mathematics and I will understand it – because I only
have to understand (= check) every step of the proof.

Problem solving is much, much more than only understanding.

Problem solving is the ability to *create* a sequence of actions to change a
system's state from A to a desired state B.

For example: the problem "Find a path from A to B within a graph."

An algorithm which can check a solution and can answer details about the
solution is not necessarily able to find a solution.

-Matthias

   

[agi] natural language - algebra (was Defining AGI)

2008-10-21 Thread Terren Suydam

Matthias wrote:
 Your claim is that natural language understanding is
 sufficient for AGI. Then you must be able to prove that
 everything an AGI can do is also possible for a system which
 is able to understand natural language. An AGI can learn to
 solve x*3 = y for arbitrary y. And an AGI can do this with
 Mathematica or without Mathematica. Simply prove that a
 natural language understanding system must necessarily be
 able to do the same.

Here's my simple proof: algebra, or any other formal language for that matter, 
is expressible in natural language, if inefficiently. 

Words like quantity, sum, multiple, equals, and so on, are capable of conveying 
the same meaning that the sentence x*3 = y conveys. The rules for 
manipulating equations are likewise expressible in natural language. 

Thus it is possible in principle to do algebra without learning the 
mathematical symbols. Much more difficult for human minds perhaps, but possible 
in principle. Thus, learning mathematical formalism via translation from 
natural language concepts is possible (which is how we do it, after all). 
Therefore, an intelligence that can learn natural language can learn to do math.
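
As a toy illustration of that translation (Python with sympy; the word-to-symbol
table below is obviously a trivial stand-in for real natural language
understanding):

from sympy import Eq, solve, symbols

# Toy mapping from a worded equation to algebra: "three times x equals y"
# becomes Eq(3*x, y), which a solver can then rearrange.
x, y = symbols("x y")
WORDS = {"three": 3, "x": x, "y": y}

def translate(sentence):
    # handles only the pattern "<a> times <b> equals <c>"
    a, _times, b, _equals, c = sentence.split()
    return Eq(WORDS[a] * WORDS[b], WORDS[c])

print(solve(translate("three times x equals y"), x))   # -> [y/3]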

 I have given the model why we have the illusion that we
 believe our thoughts are built from language. 
 
. snipped description of model
 
 My model explains several phenomena:
 
 1. We hear our thoughts
 2. We think with the same speed as we speak (this is not
 trivial!)
 3. We hear our thoughts with our own voice (strong evidence
 for my model!)
 4. We have problems thinking in a very noisy and loud
 environment (because we have to listen to our thoughts)
 
I believe there are linguistic forms of thought (exactly as you describe) and 
non-linguistic forms of thought (as described by Einstein - thinking in 
'pictures'). I agree with your premise that thought is not necessarily 
linguistic (as I have in previous emails!). 

Your model (which is quite good at explaining internal monologue) - and list of 
phenomena above - does not apply to the non-linguistic form of thought (as I 
experience it) except perhaps for (4), but that could simply be due to 
sensorial competition for one's attention, not a need to hear thought. This 
non-linguistic kind of thought is much faster and obviously non-verbal - it is 
not 'heard'. It can be quite a struggle to express the products of such 
thinking in natural language. 

This faculty for non-linguistic mental manipulation is most likely exclusively 
how chimps, ravens, and other highly intelligent animals solve problems. But 
relying on this form of thought alone is not sufficient for the development of 
the symbolic conceptual framework necessary to perform human-level analytical 
thought.

Terren


Re: [agi] natural language - algebra (was Defining AGI)

2008-10-21 Thread Terren Suydam

As unpopular as philosophical discussions are lately, that is what this is - a 
debate about whether language is separable from general intelligence, in 
principle. So in-principle arguments about language and intelligence are 
relevant in that context, even if not embraced with open arms by the whole list.

Terren

--- On Tue, 10/21/08, Ben Goertzel [EMAIL PROTECTED] wrote:
OK, but I didn't think we were talking about what is possible in principle 
but may be unrealizable in practice...

It's possible in principle to create a supercomputer via training pigeons to 
peck in appropriate patterns, in response to the patterns that they notice 
other pigeons peck.  My friends in Perth and I designed such a machine once and 
called it the PC or Pigeon Computer.  I wish I'd retained the drawings and 
schematics!  We considered launching a company to sell them, IBM or 
International Bird Machines ... but failed to convince any VC's (even in the 
Internet bubble!!) and gave up...


ben g


Re: AW: AW: AW: [agi] Re: Defining AGI

2008-10-20 Thread Terren Suydam

Matthias, still awaiting a response to this post, quoted below.

Thanks,
Terren


Matthias wrote:
 I don't think that learning of language is the entire
 point. If I have only learned language I still cannot create anything. A human
 who can understand language is still far from being a good scientist. Intelligence
 means the ability to solve problems. Which problems can a system solve if it
 can do nothing but understand language?

Language understanding requires a sophisticated conceptual framework complete 
with causal models, because, whatever meaning means, it must be captured 
somehow in an AI's internal models of the world.

The Piraha tribe in the Amazon basin has a very primitive language compared to 
all modern languages - it has no past or future tenses, for example - and as a 
people they exhibit barely any of the hallmarks of abstract reasoning that are 
so common to the rest of humanity, such as story-telling, artwork, religion... 
see http://en.wikipedia.org/wiki/Pirah%C3%A3_people. 

How do you explain that?

 Einstein had to express his (non-linguistic) internal
 insights in natural
 language and in mathematical language.  In both
 modalities he had to use
 his intelligence to make the translation from his
 mental models.

 The point is that someone else could understand Einstein
 even without having the same intelligence. This is a proof that
 understanding AI1 does not necessarily imply having the intelligence of AI1.

I'm saying that if an AI understands and speaks natural language, you've solved 
AGI - your Nobel will be arriving soon.  The difference between AI1, which 
understands Einstein, and any AI currently in existence, is much greater than 
the difference between AI1 and Einstein.

 Deaf people speak in sign language, which is only
 different from spoken
 language in superficial ways. This does not tell us
 much about language
 that we didn't already know.

 But it is a proof that *natural* language understanding is
 not necessary for
 human-level intelligence.

Sorry, I don't see that, can you explain the proof?  Are you saying that sign 
language isn't natural language?  That would be patently false. (see 
http://crl.ucsd.edu/signlanguage/)

 I have already outlined the process of self-reflectivity:
 Internal patterns
 are translated into language.

So you're agreeing that language is necessary for self-reflectivity. In your 
models, then, self-reflectivity is not important to AGI, since you say AGI can 
be realized without language, correct?

 This is routed to the
 brain's own input
 regions. You *hear* your own thoughts and have the illusion
 that you think
 linguistically.
 If you can speak two languages then you can make an easy
 test: Try to think
 in the foreign language. It works. If language were
 inherently involved
 in the process of thought, then thinking alternately in
 two languages
 would cost the brain many resources. In fact you just need
 to use the other
 module for language translation. This is a big hint that
 language and
 thoughts do not have much in common.

 -Matthias

I'm not saying that language is inherently involved in thinking, but it is 
crucial for the development of *sophisticated* causal models of the world - the 
kind of models that can support self-reflectivity. Word-concepts form the basis 
of abstract symbol manipulation.

That gets the ball rolling for humans, but the conceptual framework that 
emerges is not necessarily tied to linguistics, especially as humans get 
feedback from the world in ways that are not linguistic (scientific 
experimentation/tinkering, studying math, art, music, etc).

Terren



Re: AW: AW: [agi] Re: Defining AGI

2008-10-19 Thread Terren Suydam

--- On Sun, 10/19/08, Dr. Matthias Heger [EMAIL PROTECTED] wrote:
 Every email program can receive meaning, store meaning and
 it can express it
 outwardly in order to send it to another computer. It can even
 do so without
 loss of any information. Regarding this point, it even
 outperforms humans
 already who have no conscious access to the full meaning
 (information) in
 their brains.

Email programs do not store meaning; they store data. The email program has no 
understanding of the stuff it stores, so this is a poor analogy. 
 
 The only thing which needs much intelligence, from today's
 point of view,
 is learning the process of outwardly expressing
 meaning, i.e. the
 learning of language. The understanding of language itself
 is simple.

Isn't the *learning* of language the entire point? If you don't have an answer 
for how an AI learns language, you haven't solved anything.  The understanding 
of language only seems simple from the point of view of a fluent speaker. 
Fluency, however, should not be confused with a lack of intellectual effort - 
rather, it's a state in which the effort involved is automatic and beyond 
awareness.

 To show that intelligence is separate from language
 understanding I have
 already given the example that a person could have spoken
 with Einstein without
 having the same intelligence. Another example
 is humans who
 cannot hear or speak but are intelligent. Their only
 problem is getting
 knowledge from other humans, since language is the
 common social
 communication protocol to transfer knowledge from brain to
 brain.

Einstein had to express his (non-linguistic) internal insights in natural 
language and in mathematical language.  In both modalities he had to use his 
intelligence to make the translation from his mental models. 

Deaf people speak in sign language, which is only different from spoken 
language in superficial ways. This does not tell us much about language that we 
didn't already know. 

 In my opinion language is overestimated in AI for the
 following reason:
 When we think, we believe that we think in our language.
 From this we
 conclude that our thoughts are inherently structured by
 linguistic elements.
 And if our thoughts are so deeply connected with language
 then it is a small
 step to conclude that our whole intelligence depends
 inherently on language.

It is surely true that much/most of our cognitive processing is not at all 
linguistic, and that there is much that happens beyond our awareness. However, 
language is a necessary tool, for humans at least, to obtain a competent 
conceptual framework, even if that framework ultimately transcends the 
linguistic dynamics that helped develop it. Without language it is hard to see 
how humans could develop self-reflectivity. 

Terren



Re: AW: AW: AW: [agi] Re: Defining AGI

2008-10-19 Thread Terren Suydam

Matthias wrote:
 I don't think that learning of language is the entire
 point. If I have only learned language I still cannot create anything. A human
 who can understand language is still far from being a good scientist. Intelligence
 means the ability to solve problems. Which problems can a system solve if it
 can do nothing but understand language?

Language understanding requires a sophisticated conceptual framework complete 
with causal models, because, whatever meaning means, it must be captured 
somehow in an AI's internal models of the world.

The Piraha tribe in the Amazon basin has a very primitive language compared to 
all modern languages - it has no past or future tenses, for example - and as a 
people they exhibit barely any of the hallmarks of abstract reasoning that are 
so common to the rest of humanity, such as story-telling, artwork, religion... 
see http://en.wikipedia.org/wiki/Pirah%C3%A3_people.  

How do you explain that?

 Einstein had to express his (non-linguistic) internal
 insights in natural
 language and in mathematical language.  In both
 modalities he had to use
 his intelligence to make the translation from his
 mental models. 
 
 The point is that someone else could understand Einstein
 even without having the same intelligence. This is a proof that
 understanding AI1 does not necessarily imply having the intelligence of AI1. 

I'm saying that if an AI understands and speaks natural language, you've solved 
AGI - your Nobel will be arriving soon.  The difference between AI1, which 
understands Einstein, and any AI currently in existence, is much greater than 
the difference between AI1 and Einstein.

 Deaf people speak in sign language, which is only
 different from spoken
 language in superficial ways. This does not tell us
 much about language
 that we didn't already know.
 
 But it is a proof that *natural* language understanding is
 not necessary for
 human-level intelligence.

Sorry, I don't see that, can you explain the proof?  Are you saying that sign 
language isn't natural language?  That would be patently false. (see 
http://crl.ucsd.edu/signlanguage/)

 I have already outlined the process of self-reflectivity:
 Internal patterns
 are translated into language. 

So you're agreeing that language is necessary for self-reflectivity. In your 
models, then, self-reflectivity is not important to AGI, since you say AGI can 
be realized without language, correct?

 This is routed to the
 brain's own input
 regions. You *hear* your own thoughts and have the illusion
 that you think
 linguistically.
 If you can speak two languages then you can make an easy
 test: Try to think
 in the foreign language. It works. If language were
 inherently involved
 in the process of thought, then thinking alternately in
 two languages
 would cost the brain many resources. In fact you just need
 to use the other
 module for language translation. This is a big hint that
 language and
 thoughts do not have much in common.
 
 -Matthias

I'm not saying that language is inherently involved in thinking, but it is 
crucial for the development of *sophisticated* causal models of the world - the 
kind of models that can support self-reflectivity. Word-concepts form the basis 
of abstract symbol manipulation. 

That gets the ball rolling for humans, but the conceptual framework that 
emerges is not necessarily tied to linguistics, especially as humans get 
feedback from the world in ways that are not linguistic (scientific 
experimentation/tinkering, studying math, art, music, etc). 

Terren



Re: AW: [agi] Re: Defining AGI

2008-10-18 Thread Terren Suydam

Nice post.

I'm not sure language is separable from any kind of intelligence we can 
meaningfully interact with.

It's important to note (at least) two ways of talking about language:

1. specific aspects of language - what someone building an NLP module is 
focused on (e.g. the rules of English grammar and such).

2. the process of language - the expression of the internal state in some 
outward form in such a way as to convey shared meaning. 

If we conceptualize language as in #2, we can be talking about a great many 
human activities besides conversing: playing chess, playing music, programming 
computers, dancing, and so on. And in each example listed there is a learning 
curve that goes from pure novice to halting sufficiency to masterful fluency, 
just like learning a language. 

So *specific* forms of language (including the non-linguistic) are not in 
themselves important to intelligence (perhaps this is Matthias' point?), but 
the process of outwardly expressing meaning is fundamental to any social 
intelligence.

Terren

--- On Sat, 10/18/08, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:

 From: [EMAIL PROTECTED] [EMAIL PROTECTED]
 Subject: Re: AW: [agi] Re: Defining AGI
 To: agi@v2.listbox.com
 Date: Saturday, October 18, 2008, 12:02 PM
 Matthias wrote:
 
  There is no big depth in the language. There is only
 depth in the
  information (i.e. patterns) which is transferred using
 the language.
 
 This is a claim with which I obviously disagree.  I imagine
 linguists
 would have trouble with it, as well.
 
 And goes on to conclude:
  Therefore I think, the ways towards AGI mainly by
 studying language
  understanding will be very long and possibly always go
 in a dead end.
 
 It seems similar to my point, too.  That's really what
 I see as a
 definition of AI-complete as well.  If you had something
 that could
 understand language, it would have to be able to do
 everything that a full
 intelligence would do.  It seems there is a claim here that
 one could have
 something that understands language but doesn't have
 anything else
 underneath it.  Or maybe that language could just be
 something separated
 away from some real intelligence lying underneath, and so
 studying just
 that would be limiting.  And that is a possibility.  There
 are certainly
 specific language modules that people have to
 assist them with their use
 of language, but it does seem like intelligence is more
 integrated with
 it.
 
 And somebody suggested that it sounds like Matthias has
 some kind of
 mentalese hidden down in there.  That spoken and written
 language is not
 interesting because it is just a rearrangement of whatever
 internal
 representation system we have.  That is a fairly bold
 claim, and has
 logical problems like a homunculus.  It is natural for a
 computer person
 to think that mental things can be modifiable and
 transmittable strings,
 but it would be hard to see how that would work with
 people.
 
 Also, I get a whole sense that Matthias is thinking there
 might be some
 small general domain where we can find a shortcut to AGI. 
 No way. 
 Natural language will be a long, hard road.  Any path going
 to a general
 intelligence will be a long, hard road.  I would guess.  It
 still happens
 regularly that people will say they're cooking up the
 special sauce, but I
 have seen that way too many times.
 
 Maybe I'm being too negative.  Ben is trying to push
 this list to being
 more positive with discussions about successful areas of
 development.  It
 certainly would be nice to have some domains where we can
 explore general
 mechanism.  I guess the problem a see with just math as a
 domain is that
 the material could get too narrow a focus.  If we want
 generality in
 intelligence, I think it is helpful to be able to have a
 possibility that
 some bit of knowledge or skill from one domain could be
 tried in a
 different area, and it is my claim that general language
 use is one of the
 few areas where that happens.
 
 
 andi
 
 
 


Re: [agi] META: A possible re-focusing of this list

2008-10-15 Thread Terren Suydam

Hi Ben,

I think that the current focus has its pros and cons and the more narrowed 
focus you suggest would have *its* pros and cons. As you said, the con of the 
current focus is the boring repetition of various anti positions. But the pro 
of allowing that stuff is for those of us who use the conflict among competing 
viewpoints to clarify our own positions and gain insight. Since you seem to be 
fairly clear about your own viewpoint, it is for you a situation of diminishing 
returns (although I will point out that a recent blog post of yours on the 
subject of play was inspired, I think, by a point Mike Tintner made, who is 
probably the most obvious target of your frustration). 

For myself, I have found tremendous value here in the debate (which probably 
says a lot about the crudeness of my philosophy). I have had many new insights 
and discovered some false assumptions. If you narrowed the focus, I would 
probably leave (I am not offering that as a reason not to do it! :-)  I would 
be disappointed, but I would understand if that's the decision you made.

Finally, although there hasn't been much novelty among the debate (from your 
perspective, anyway), there is always the possibility that there will be. This 
seems to be the only public forum for AGI discussion out there (are there 
others, anyone?), so presumably there's a good chance it would show up here, 
and that is good for you and others actively involved in AGI research.

Best,
Terren


--- On Wed, 10/15/08, Ben Goertzel [EMAIL PROTECTED] wrote:
From: Ben Goertzel [EMAIL PROTECTED]
Subject: [agi] META: A possible re-focusing of this list
To: agi@v2.listbox.com
Date: Wednesday, October 15, 2008, 11:01 AM


Hi all,

I have been thinking a bit about the nature of conversations on this list.

It seems to me there are two types of conversations here:

1)
Discussions of how to design or engineer AGI systems, using current computers, 
according to designs that can feasibly be implemented by moderately-sized 
groups of people


2)
Discussions about whether the above is even possible -- or whether it is 
impossible because of weird physics, or poorly-defined special characteristics 
of human creativity, or the so-called complex systems problem, or because AGI 
intrinsically requires billions of people and quadrillions of dollars, or 
whatever


Personally I am pretty bored with all the conversations of type 2.

It's not that I consider them useless discussions in a grand sense ... 
certainly, they are valid topics for intellectual inquiry.   


But, to do anything real, you have to make **some** decisions about what 
approach to take, and I've decided long ago to take an approach of trying to 
engineer an AGI system.

Now, if someone had a solid argument as to why engineering an AGI system is 
impossible, that would be important.  But that never seems to be the case.  
Rather, what we hear are long discussions of peoples' intuitions and opinions 
in this regard.  People are welcome to their own intuitions and opinions, but I 
get really bored scanning through all these intuitions about why AGI is 
impossible.


One possibility would be to more narrowly focus this list, specifically on 
**how to make AGI work**.

If this re-focusing were done, then philosophical arguments about the 
impossibility of engineering AGI in the near term would be judged **off topic** 
by definition of the list purpose.


Potentially, there could be another list, something like agi-philosophy, 
devoted to philosophical and weird-physics and other discussions about whether 
AGI is possible or not.  I am not sure whether I feel like running that other 
list ... and even if I ran it, I might not bother to read it very often.  I'm 
interested in new, substantial ideas related to the in-principle possibility of 
AGI, but not interested at all in endless philosophical arguments over various 
peoples' intuitions in this regard.


One fear I have is that people who are actually interested in building AGI, 
could be scared away from this list because of the large volume of anti-AGI 
philosophical discussion.  Which, I add, almost never has any new content, and 
mainly just repeats well-known anti-AGI arguments (Penrose-like physics 
arguments ... "mind is too complex to engineer, it has to be evolved" ... "no 
one has built an AGI yet therefore it will never be done" ... etc.)


What are your thoughts on this?

-- Ben




On Wed, Oct 15, 2008 at 10:49 AM, Jim Bromer [EMAIL PROTECTED] wrote:

On Wed, Oct 15, 2008 at 10:14 AM, Ben Goertzel [EMAIL PROTECTED] wrote:




 Actually, I think COMP=false is a perfectly valid subject for discussion on

 this list.



 However, I don't think discussions of the form "I have all the answers, but
 they're top-secret and I'm not telling you, hahaha" are particularly useful.



 So, speaking as a list participant, it seems to me this thread has probably

 met its natural end, with this reference to proprietary weird-physics IP.



 However, speaking 

Re: [agi] META: A possible re-focusing of this list

2008-10-15 Thread Terren Suydam

One other important point... if I were a potential venture capitalist or some 
other sort of funding decision-maker, I would be on this list and watching the 
debate. I'd be looking for intelligent defense of (hopefully) intelligent 
criticism to increase my confidence about the decision to fund.  This kind of 
forum also allows you to sort of advertise your approach to those who are new 
to the game, particularly young folks who might one day be valuable 
contributors, although I suppose that's possible in the more tightly-focused 
forum as well.

--- On Wed, 10/15/08, Terren Suydam [EMAIL PROTECTED] wrote:
From: Terren Suydam [EMAIL PROTECTED]
Subject: Re: [agi] META: A possible re-focusing of this list
To: agi@v2.listbox.com
Date: Wednesday, October 15, 2008, 11:29 AM


Hi Ben,

I think that the current focus has its pros and cons and the more narrowed 
focus you suggest would have *its* pros and cons. As you said, the con of the 
current focus is the boring repetition of various anti positions. But the pro 
of allowing that stuff is for those of us who use the conflict among competing 
viewpoints to clarify our own positions and gain insight. Since you seem to be 
fairly clear about your own viewpoint, it is for you a situation of diminishing 
returns (although I will point out that a recent blog post of yours on the 
subject of play was inspired, I think, by a point Mike Tintner made, who is 
probably the most obvious target of your frustration). 

For myself, I have found tremendous value here in the debate (which probably 
says a lot about the crudeness of my philosophy). I have had many new insights 
and discovered
 some false assumptions. If you narrowed the focus, I would probably leave (I 
am not offering that as a reason not to do it! :-)  I would be disappointed, 
but I would understand if that's the decision you made.

Finally, although there hasn't been much novelty among the debate (from your 
perspective, anyway), there is always the possibility that there will be. This 
seems to be the only public forum for AGI discussion out there (are there 
others, anyone?), so presumably there's a good chance it would show up here, 
and that is good for you and others actively involved in AGI research.

Best,
Terren


--- On Wed, 10/15/08, Ben Goertzel [EMAIL PROTECTED] wrote:
From: Ben Goertzel [EMAIL PROTECTED]
Subject: [agi] META: A possible re-focusing of this list
To: agi@v2.listbox.com
Date: Wednesday,
 October 15, 2008, 11:01 AM


Hi all,

I have been thinking a bit about the nature of conversations on this list.

It seems to me there are two types of conversations here:

1)
Discussions of how to design or engineer AGI systems, using current computers, 
according to designs that can feasibly be implemented by moderately-sized 
groups of people


2)
Discussions about whether the above is even possible -- or whether it is 
impossible because of weird physics, or poorly-defined special characteristics 
of human creativity, or the so-called complex systems problem, or because AGI 
intrinsically requires billions of people and quadrillions of dollars, or 
whatever


Personally I am pretty bored with all the conversations of type 2.

It's not that I consider them useless discussions in a grand sense ... 
certainly, they are valid topics for intellectual inquiry.   


But, to do anything real, you have to make **some** decisions about what 
approach to take, and I've decided long ago to take an approach of trying to 
engineer an AGI system.

Now, if someone had a solid argument as to why engineering an AGI system is 
impossible, that would be important.  But that never seems to be the case.  
Rather, what we hear are long discussions of peoples' intuitions and opinions 
in this regard.  People are welcome to their own intuitions and opinions, but I 
get really bored scanning through all these intuitions about why AGI is 
impossible.


One possibility would be to more narrowly focus this list, specifically on 
**how to make AGI work**.

If this re-focusing were done, then philosophical arguments about the 
impossibility of engineering AGI in the near term would be judged **off topic** 
by definition of the list purpose.


Potentially, there could be another list, something like agi-philosophy, 
devoted to philosophical and weird-physics and other discussions about whether 
AGI is possible or not.  I am not sure whether I feel like running that other 
list ... and even if I ran it, I might not bother to read it very often.  I'm 
interested in new, substantial ideas related to the in-principle possibility of 
AGI, but not interested at all in endless philosophical arguments over various 
peoples' intuitions in this regard.


One fear I have is that people who are actually interested in building AGI, 
could be scared away from this list because of the large volume of anti-AGI 
philosophical discussion.   Which, I add, almost never has any new content, and 
mainly just repeats well-known anti-AGI

Re: [agi] META: A possible re-focusing of this list

2008-10-15 Thread Terren Suydam

This is a publicly accessible forum with searchable archives... you don't 
necessarily have to be subscribed and inundated to find those nuggets. I don't 
know any funding decision makers myself, but if I were in control of a budget 
I'd be using every resource at my disposal to clarify my decision. If I were 
considering Novamente for example I'd be looking for exactly the kind of 
exchanges you and Richard Loosemore (for example) have had on the list, to gain 
a better understanding of possible criticism, and because others may be able to 
articulate such criticism far better than me.  Obviously the same goes for 
anyone else on the list who would look for funding... I'd want to see you 
defend your ideas, especially in the absence of peer-reviewed journals 
(something the JAGI hopes to remedy obv).

Terren

--- On Wed, 10/15/08, Ben Goertzel [EMAIL PROTECTED] wrote:
From: Ben Goertzel [EMAIL PROTECTED]
Subject: Re: [agi] META: A possible re-focusing of this list
To: agi@v2.listbox.com
Date: Wednesday, October 15, 2008, 3:37 PM


Terren,

I know a good number of VC's and government and private funding decision 
makers... and believe me, **none** of them has remotely enough extra time to 
wade through the amount of text that flows on this list, to find the nuggets of 
real intellectual interest!!!


-- Ben G

On Wed, Oct 15, 2008 at 12:07 PM, Terren Suydam [EMAIL PROTECTED] wrote:



One other important point... if I were a potential venture capitalist or some 
other sort of funding decision-maker, I would be on this list and watching the 
debate. I'd be looking for intelligent defense of (hopefully) intelligent 
criticism to increase my confidence about the decision to fund.  This kind of 
forum also allows you to sort of advertise your approach to those who are new 
to the game, particularly young folks who might one day be valuable 
contributors, although I suppose that's possible in the more tightly-focused 
forum as well.


--- On Wed, 10/15/08, Terren Suydam [EMAIL PROTECTED] wrote:

From: Terren Suydam [EMAIL PROTECTED]
Subject: Re: [agi] META: A possible re-focusing of this list
To:
 agi@v2.listbox.com
Date: Wednesday, October 15, 2008, 11:29 AM



Hi Ben,


I think that the current focus has its pros and cons and the more narrowed 
focus you suggest would have *its* pros and cons. As you said, the con of the 
current focus is the boring repetition of various anti positions. But the pro 
of allowing that stuff is for those of us who use the conflict among competing 
viewpoints to clarify our own positions and gain insight. Since you seem to be 
fairly clear about your own viewpoint, it is for you a situation of diminishing 
returns (although I will point out that a recent blog post of yours on the 
subject of play was inspired, I think, by
 a point Mike Tintner made, who is probably the most obvious target of your 
frustration). 

For myself, I have found tremendous value here in the debate (which probably 
says a lot about the crudeness of my philosophy). I have had many new insights 
and discovered
 some false assumptions. If you narrowed the focus, I would probably leave (I 
am not offering that as a reason not to do it! :-)  I would be disappointed, 
but I would understand if that's the decision you made.


Finally, although there hasn't been much novelty among the debate (from your 
perspective, anyway), there is always the possibility that there will be. This 
seems to be the only public forum for AGI discussion out there (are there 
others, anyone?), so presumably there's a good chance it would show up here, 
and that is good for you and others actively involved in AGI research.


Best,
Terren


--- On Wed, 10/15/08, Ben Goertzel [EMAIL PROTECTED] wrote:

From: Ben Goertzel [EMAIL PROTECTED]
Subject: [agi] META: A possible re-focusing of this list
To: agi@v2.listbox.com

Date: Wednesday,
 October 15, 2008, 11:01 AM


Hi all,

I have been thinking a bit about the nature of conversations on this list.

It seems to me there are two types of conversations here:


1)
Discussions of how to design or engineer AGI systems, using current computers, 
according to designs that can feasibly be implemented by moderately-sized 
groups of people


2)
Discussions about whether the above is even possible -- or whether it is 
impossible because of weird physics, or poorly-defined special characteristics 
of human creativity, or the so-called complex systems problem, or because AGI 
intrinsically requires billions of people and quadrillions of dollars, or 
whatever



Personally I am pretty bored with all the conversations of type 2.

It's not that I consider them useless discussions in a grand sense ... 
certainly, they are valid topics for intellectual inquiry.   



But, to do anything real, you have to make **some** decisions about what 
approach to take, and I've decided long ago to take an approach of trying to 
engineer an AGI system.

Now, if someone had a solid argument as to why engineering an AGI

RE: [agi] META: A possible re-focusing of this list

2008-10-15 Thread Terren Suydam

All that means is that they weren't as diligent as they could have been. Rule 
number one in investing is do your homework. Obviously there are other sources 
of information than this list, but this is the next best thing to 
journal-mediated peer review.

--- On Wed, 10/15/08, Peter Voss [EMAIL PROTECTED] wrote:
From: Peter Voss [EMAIL PROTECTED]
Subject: RE: [agi] META: A possible re-focusing of this list
To: agi@v2.listbox.com
Date: Wednesday, October 15, 2008, 4:51 PM




 
 







Not a single one of our current investors (a dozen) or potential investors has 
used AGI lists to evaluate our project (or the competition).

Peter Voss

a2i2



From: Terren Suydam
[mailto:[EMAIL PROTECTED] 

Sent: Wednesday, October 15, 2008 1:25 PM

To: agi@v2.listbox.com

Subject: Re: [agi] META: A possible re-focusing of this list 



   


 
  
  

  This is a publicly accessible forum with searchable archives... you don't
  necessarily have to be subscribed and inundated to find those nuggets. I
  don't know any funding decision makers myself, but if I were in control of a
  budget I'd be using every resource at my disposal to clarify my decision. If
  I were considering Novamente for example I'd be looking for exactly the kind
  of exchanges you and Richard Loosemore (for example) have had on the list, to
  gain a better understanding of possible criticism, and because others may be
  able to articulate such criticism far better than me.  Obviously the
  same goes for anyone else on the list who would look for funding... I'd want
  to see you defend your ideas, especially in the absence of peer-reviewed
  journals (something the JAGI hopes to remedy obv).

  

  Terren

  

  --- On Wed, 10/15/08, Ben Goertzel [EMAIL PROTECTED]
  wrote: 
  From: Ben Goertzel
  [EMAIL PROTECTED]

  Subject: Re: [agi] META: A possible re-focusing of this list

  To: agi@v2.listbox.com

  Date: Wednesday, October 15, 2008, 3:37 PM 
  
  
  

  Terren,

  

  I know a good number of VC's and government and private funding decision
  makers... and believe me, **none** of them has remotely enough extra time to
  wade through the amount of text that flows on this list, to find the nuggets
  of real intellectual interest!!!

  

  -- Ben G 
  
  On Wed, Oct 15, 2008 at 12:07 PM, Terren Suydam [EMAIL PROTECTED]
  wrote: 
  
   



One other important point... if I were a potential venture capitalist or
some other sort of funding decision-maker, I would be on this list and
watching the debate. I'd be looking for intelligent defense of (hopefully)
intelligent criticism to increase my confidence about the decision to
fund.  This kind of forum also allows you to sort of advertise your
approach to those who are new to the game, particularly young folks who
might one day be valuable contributors, although I suppose that's possible
in the more tightly-focused forum as well.



--- On Wed, 10/15/08, Terren Suydam [EMAIL PROTECTED]
wrote: 
From: Terren
Suydam [EMAIL PROTECTED]

Subject: Re: [agi] META: A possible re-focusing of this list 



To: agi@v2.listbox.com 

Date:
Wednesday, October 15, 2008, 11:29 AM 


   


 
  
  

  Hi Ben,

  

  I think that the current focus has its pros and cons and the more
  narrowed focus you suggest would have *its* pros and cons. As you said,
  the con of the current focus is the boring repetition of various anti
  positions. But the pro of allowing that stuff is for those of us who use
  the conflict among competing viewpoints to clarify our own positions and
  gain insight. Since you seem to be fairly clear about your own viewpoint,
  it is for you a situation of diminishing returns (although I will point
  out that a recent blog post of yours on the subject of play was inspired,
  I think, by a point Mike Tintner made, who is probably the most obvious
  target of your frustration). 

  

  For myself, I have found tremendous value here in the debate (which
  probably says a lot about the crudeness of my philosophy). I have had
  many new insights and discovered some false assumptions. If you narrowed
  the focus, I would probably leave (I am not offering that as a reason not
  to do it! :-)  I would be disappointed, but I would understand if
  that's the decision you made.

  

  Finally, although there hasn't been much novelty among the debate (from
  your perspective, anyway), there is always the possibility that there
  will be. This seems to be the only public forum for AGI discussion out
  there (are there others, anyone?), so presumably there's a good chance it
  would show up here, and that is good for you and others actively involved
  in AGI research.

  

  Best,

  Terren

  

  

  --- On Wed, 10/15/08, Ben Goertzel [EMAIL PROTECTED

Re: [agi] META: A possible re-focusing of this list

2008-10-15 Thread Terren Suydam

If you're trying to get an idea funded, and you're representing yourself in a 
public forum, then it is wise to approach the forum *as if* potential funding 
sources are reading, or may some day read. Which is also to say, a forum such 
as this one is potentially valuable for investors and engineers alike, even if 
they're not currently used that way. What investors currently or typically do 
is beside the point.

--- On Wed, 10/15/08, Ben Goertzel [EMAIL PROTECTED] wrote:
From: Ben Goertzel [EMAIL PROTECTED]
Subject: Re: [agi] META: A possible re-focusing of this list
To: agi@v2.listbox.com
Date: Wednesday, October 15, 2008, 5:09 PM


Terren,

What an investor will typically do, if they want to be very careful, is hire a 
few domain experts and have them personally evaluate the technology of the firm 
they are consider investing in.


I have played this role for some investors considering other technology 
investments, now and then...

-- Ben G

On Wed, Oct 15, 2008 at 5:06 PM, Terren Suydam [EMAIL PROTECTED] wrote:



All that means is that they weren't as diligent as they could have been. Rule 
number one in investing is do your homework. Obviously there are other sources 
of information than this list, but this is the next best thing to 
journal-mediated peer review.


--- On Wed, 10/15/08, Peter Voss [EMAIL PROTECTED] wrote:

From: Peter Voss [EMAIL PROTECTED]
Subject: RE: [agi] META: A possible re-focusing of this list
To: agi@v2.listbox.com

Date: Wednesday, October 15, 2008, 4:51 PM




 
 





Not a single one of our current investors (a dozen) or potential investors has 
used AGI lists to evaluate our project (or the competition).

Peter Voss

a2i2



From: Terren Suydam
[mailto:[EMAIL PROTECTED] 

Sent: Wednesday, October 15, 2008 1:25 PM

To: agi@v2.listbox.com

Subject: Re: [agi] META: A possible re-focusing of this list 



   


 
  
  

  This is a publicly accessible forum with searchable archives... you don't
  necessarily have to be subscribed and inundated to find those nuggets. I
  don't know any funding decision makers myself, but if I were in control of a
  budget I'd be using every resource at my disposal to clarify my decision. If
  I were considering Novamente for example I'd be looking for exactly the kind
  of exchanges you and Richard Loosemore (for example) have had on the list, to
  gain a better understanding of possible criticism, and because others may be
  able to articulate such criticism far better than me.  Obviously the
  same goes for anyone else on the list who would look for funding... I'd want
  to see you defend your ideas, especially in the absence of peer-reviewed
  journals (something the JAGI hopes to remedy obv).

  

  Terren

  

  --- On Wed, 10/15/08, Ben Goertzel [EMAIL PROTECTED]
  wrote: 
  From: Ben Goertzel
  [EMAIL PROTECTED]

  Subject: Re: [agi] META: A possible re-focusing of this list

  To: agi@v2.listbox.com

  Date: Wednesday, October 15, 2008, 3:37 PM 
  
  
  

  Terren,

  

  I know a good number of VC's and government and private funding decision
  makers... and believe me, **none** of them has remotely enough extra time to
  wade through the amount of text that flows on this list, to find the nuggets
  of real intellectual interest!!!

  

  -- Ben G 
  
  On Wed, Oct 15, 2008 at 12:07 PM, Terren Suydam [EMAIL PROTECTED]
  wrote: 
  
   



One other important point... if I were a potential venture capitalist or
some other sort of funding decision-maker, I would be on this list and
watching the debate. I'd be looking for intelligent defense of (hopefully)
intelligent criticism to increase my confidence about the decision to
fund.  This kind of forum also allows you to sort of advertise your
approach to those who are new to the game, particularly young folks who
might one day be valuable contributors, although I suppose that's possible
in the more tightly-focused forum as well.



--- On Wed, 10/15/08, Terren Suydam [EMAIL PROTECTED]
wrote: 
From: Terren
Suydam [EMAIL PROTECTED]

Subject: Re: [agi] META: A possible re-focusing of this list 



To: agi@v2.listbox.com 

Date:
Wednesday, October 15, 2008, 11:29 AM 


   


 
  
  

  Hi Ben,

  

  I think that the current focus has its pros and cons and the more
  narrowed focus you suggest would have *its* pros and cons. As you said,
  the con of the current focus is the boring repetition of various anti
  positions. But the pro of allowing that stuff is for those of us who use
  the conflict among competing viewpoints to clarify our own positions and
  gain insight. Since you seem to be fairly clear about your own viewpoint,
  it is for you a situation of diminishing returns (although I will point
  out that a recent blog post of yours on the subject of play was inspired,
  I think, by a point Mike Tintner made

Re: [agi] Updated AGI proposal (CMR v2.1)

2008-10-15 Thread Terren Suydam

The small point I was trying to make was that cognitive architecture is much 
more important to the realization of AGI than the amount of processing power 
you have at your disposal, or other such platform-related considerations. 

It doesn't seem like a very controversial point to me. Objecting to it on the 
basis of the difficulty/impossibility of measuring intelligence seems like a 
bit of a tangent. 

--- On Wed, 10/15/08, Charles Hixson [EMAIL PROTECTED] wrote:

 From: Charles Hixson [EMAIL PROTECTED]
 Subject: Re: [agi] Updated AGI proposal (CMR v2.1)
 To: agi@v2.listbox.com
 Date: Wednesday, October 15, 2008, 8:09 PM
 It doesn't need to satisfy everyone, it just has to be
 the definition 
 that you are using in your argument, and which you agree to
 stick to.
 
 E.g., if you define intelligence to be the resources used
 (given some 
 metric) in solving some particular selection of problems,
 then that is a 
 particular definition of intelligence.  It may not be a
 very good one, 
 though, as it looks like a system that knows the answers
 ahead of time 
 and responds quickly would win over one that understood the
 problems in 
 depth.  Rather like a multiple choice test rather than an
 essay.
 
 I'm sure that one could fudge the definition to skirt
 that particular 
 pothole, but it would be an ad hoc patch.  I don't
 trust that entire 
 mechanism of defining intelligence.  Still, if I know what
 you mean, I 
 don't have to accept your interpretations to understand
 your argument.  
 (You can't average across all domains, only across some
 pre-specified 
 set of domains.  Infinity doesn't exist in the
 implementable universe.)
 
 Personally, I'm not convinced by the entire process of
 measuring 
 intelligence.  I don't think that there *IS* any
 such thing.  If it 
 were a disease, I'd call intelligence a syndrome rather
 than a 
 diagnosis.  It's a collection of partially related
 capabilities given 
 one name to make them easy to think about, while ignoring
 details.  As 
 such it has many uses, but it's easy to mistake it for
 some genuine 
 thing, especially as it's an intangible.
 
 As an analogy consider the gene for blue eyes. 
 There is no such 
 gene.  There is a combination of genes that yields blue
 eyes, and it's 
 characterized by the lack of genes for other eye colors. 
 (It's more 
 complex than that, but that's enough.)
 
 E.g., there appears to be a particular gene which is
 present in almost 
 all people which enables them to parse grammatical
 sentences.  But there 
 have been found a few people in one family where this gene
 is damaged.  
 The result is that about half the members of that family
 can't speak or 
 understand language.  Are they unintelligent?  Well, they
 can't parse 
 grammatical sentences, and they can't learn language. 
 In most other 
 ways they appear as intelligent as anyone else.
 
 So I'm suspicious of ALL definitions of intelligence
 which treat it as 
 some kind of global thing.  But if you give me the
 definition that you 
 are using in an argument, then I can at least attempt to
 understand what 
 you are saying.
 
 
 Terren Suydam wrote:
  Charles,
 
  I'm not sure it's possible to nail down a
 measure of intelligence that's going to satisfy
 everyone. Presumably, it would be some measure of
 performance in problem solving across a wide variety of
 novel domains in complex (i.e. not toy) environments.
 
  Obviously among potential agents, some will do better
 in domain D1 than others, while doing worse in D2. But
 we're looking for an average across all domains. My
 task-specific examples may have confused the issue there,
 you were right to point that out.
 
  But if you give all agents identical processing power
 and storage space, then the winner will be the one that was
 able to assimilate and model each problem space the most
 efficiently, on average. Which ultimately means the one
 which used the *least* amount of overall computation.
 
  Terren
 
  --- On Tue, 10/14/08, Charles Hixson
 [EMAIL PROTECTED] wrote:
 

  From: Charles Hixson
 [EMAIL PROTECTED]
  Subject: Re: [agi] Updated AGI proposal (CMR v2.1)
  To: agi@v2.listbox.com
  Date: Tuesday, October 14, 2008, 2:12 PM
  If you want to argue this way (reasonable), then
 you need a
  specific 
  definition of intelligence.  One that allows it to
 be
  accurately 
  measured (and not just in principle). 
 IQ
  definitely won't serve.  
  Neither will G.  Neither will GPA (if you're
 discussing
  a student).
 
  Because of this, while I think your argument is
 generally
  reasonable, I 
 don't think it's useful.  Most of what you
 are
  discussing is task 
  specific, and as such I'm not sure that
  intelligence is a reasonable 
  term to use.  An expert engineer might be, e.g., a
 lousy
  bridge player.  
  Yet both are thought of as requiring intelligence.
  I would
  assert that 
  in both cases a lot of what's being measured
 is task
  specific 
  processing, i.e., narrow AI

Re: [agi] Advocacy Is no Excuse for Exaggeration

2008-10-14 Thread Terren Suydam

Hi Colin,

Are there other forums or email lists associated with some of the other AI 
communities you mention?  I've looked briefly but in vain ... would appreciate 
any helpful pointers.

Thanks,
Terren

--- On Tue, 10/14/08, Colin Hales [EMAIL PROTECTED] wrote:
From: Colin Hales [EMAIL PROTECTED]
Subject: Re: [agi] Advocacy Is no Excuse for Exaggeration
To: agi@v2.listbox.com
Date: Tuesday, October 14, 2008, 12:43 AM




  
Hi Matt,

... The Gamez paper situation is now...erm...resolved. You are right:
the paper doesn't argue that solving consciousness is necessary for
AGI. What has happened recently is a subtle shift  - those involved
simple fail to make claims about the consciousness or otherwise of the
machines! This does not entail that they are not actually working on
it. They are just being cautious...Also, you correctly observe that
solving AGI on a purely computational basis is not prohibited by the
workers involved in the GAMEZ paper.. indeed most of their work assumes
it!... I don't have a problem with this...However...'attributing'
consciousness to it based on its behavior is probably about as
unscientific as it gets. That outcome betrays no understanding whatever
of consciousness, its mechanism or its role, and merely assumes COMP
is true and creates an agreement based on ignorance. This is fatally
flawed non-science. 



[BTW: We need an objective test (I have one - I am waiting for it to
get published...). I'm going to try and see where it's at in that
process. If my test is acceptable then I predict all COMP entrants will
fail, but I'll accept whatever happens... - and external behaviour is
decisive. Bear with me a while till I get it sorted.]



I am still getting to know the folks [EMAIL PROTECTED] And the group may be
diverse, as you say ... but if they are all COMP, then that diversity
is like a group dedicated to an unresolved argument over the colour of
a fish's bicycle. If we can attract the attention of the likes of those
in the GAMEZ paper... and others such as Hynna and Boahen at Stanford,
who have an unusual hardware neural architecture...(Hynna,
K. M. and Boahen, K. 'Thermodynamically equivalent silicon models of
voltage-dependent ion channels', Neural
Computation vol. 19, no. 2, 2007. 327-350.) ...and others ...
then things will be diverse and authoritative. In particular, those who
have recently essentially squashed the computational theories of mind
from a neuroscience perspective - the 'integrative neuroscientists':

 
Poznanski, R. R., Biophysical Neural Networks: Foundations of Integrative Neuroscience, Mary Ann Liebert, Larchmont, NY, 2001, pp. viii, 503.
Pomerantz, J. R., Topics in Integrative Neuroscience: From Cells to Cognition, Cambridge University Press, Cambridge, UK; New York, 2008, pp. xix, 427.
Gordon, E., Ed. (2000). Integrative Neuroscience: Bringing Together Biological, Psychological and Clinical Models of the Human Brain. Amsterdam: Harwood Academic.
 The only working, known model of general
intelligence is the human. If we base AGI on anything that fails to
account scientifically and completely for all aspects of human
cognition, including consciousness, then we open ourselves to critical
inferiority... and the rest of science will simply find the group an
irrelevant cultish backwater. Strategically the group would do well to
make choices that attract the attention of the 'machine consciousness'
crowd - they are directly linked to neuroscience via cog sci. The
crowd that runs with JETAI (journal of theoretical and experimental
artificial intelligence) is also another relevant one. It'd be
nice if those people also saw the AGI journal as a viable repository
for their output. I for one will try and help in that regard. Time will
tell I suppose.

 

cheers,

colin hales





Matt Mahoney wrote:

  --- On Mon, 10/13/08, Colin Hales [EMAIL PROTECTED] wrote:

  
  
In the wider world of science it is the current state of play that the

  
  theoretical basis for real AGI is an open and multi-disciplinary
question.  From a forum that purports to be invested in the achievement of real
AGI as a target, one would expect a multidisciplinary
approach on many fronts, all competing scientifically for access to
real AGI. 

I think this group is pretty diverse. No two people here can agree on how to 
build AGI.

  
  
Gamez, D. 'Progress in machine consciousness', Consciousness and

  
  Cognition vol. 17, no. 3, 2008. 887-910.

$31.50 from Science Direct. I could not find a free version. I don't understand 
why an author would not at least post their published papers on their personal 
website. It greatly increases the chance that their paper is cited. I 
understand some publications require you to give up your copyright including 
your right to post your own paper. I refuse to publish with them.

(I don't know the copyright policy for Science Direct, but they are really 
milking the publish or perish mentality of academia. Apparently 

Re: [agi] Updated AGI proposal (CMR v2.1)

2008-10-14 Thread Terren Suydam

Hi Will,

I think humans provide ample evidence that intelligence is not necessarily 
correlated with processing power. The genius engineer in my example solves a 
given problem with *much less* overall processing than the ordinary engineer, 
so in this case intelligence is correlated with some measure of cognitive 
efficiency (which I will leave undefined). Likewise, a grandmaster chess 
player looks at a given position and can calculate a better move in one second 
than you or I could come up with if we studied the board for an hour. 
Grandmasters often do publicity events where they play dozens of people 
simultaneously, spending just a few seconds on each board, and winning most of 
the games.

Of course, you were referring to intelligence above a certain level, but if 
that level is high above human intelligence, there isn't much we can assume 
about that since it is by definition unknowable by humans.

Terren

--- On Tue, 10/14/08, William Pearson [EMAIL PROTECTED] wrote:
 The relationship between processing power and results is
 not
 necessarily linear or even positively  correlated. And as
 an increase
 in intelligence above a certain level requires increased
 processing
 power (or perhaps not? anyone disagree?).
 
 When the cost of adding more computational power, outweighs
 the amount
 of money or energy that you acquire from adding the power,
 there is
 not much point adding the computational power.  Apart from
 if you are
 in competition with other agents, that can out smart you.
 Some of the
 traditional views of RSI neglects this and thinks that
 increased
 intelligence is always a useful thing. It is not very
 
 There is a reason why lots of the planets biomass has
 stayed as
 bacteria. It does perfectly well like that. It survives.
 
 Too much processing power is a bad thing, it means less for
 self-preservation and affecting the world. Balancing them
 is a tricky
 proposition indeed.
 
   Will Pearson
 
 


Re: [agi] Updated AGI proposal (CMR v2.1)

2008-10-14 Thread Terren Suydam


--- On Tue, 10/14/08, Matt Mahoney [EMAIL PROTECTED] wrote:
 An AI that is twice as smart as a
 human can make no more progress than 2 humans. 

Spoken like someone who has never worked with engineers. A genius engineer can 
outproduce 20 ordinary engineers in the same timeframe. 

Do you really believe the relationship between intelligence and output is 
linear?

Terren


  




Re: [agi] Updated AGI proposal (CMR v2.1)

2008-10-14 Thread Terren Suydam

Matt,

Your measure of intelligence seems to be based on not much more than storage 
capacity, processing power, I/O, and accumulated knowledge. This has the 
advantage of being easily formalizable, but has the disadvantage of missing a 
necessary aspect of intelligence.

I have yet to see from you any acknowledgment that cognitive architecture is at 
all important to realized intelligence. Even your global brain requires an 
explanation of how cognition actually happens at each of the nodes, be they 
humans or AI. 

Cognitive architecture (whatever form that takes) determines the efficiency of 
an intelligence given the more external constraints, like processing power, etc.  I 
assume that it is this aspect that is the primary target of significant 
(disruptive) improvement in RSI schemes.

Terren

--- On Tue, 10/14/08, Matt Mahoney [EMAIL PROTECTED] wrote:
 Two brains have twice as much storage capacity, processing
 power, and I/O as one brain. They have less than twice as
 much knowledge because some of it is shared. They can do
 less than twice as much work because the brain has a fixed
 rate of long term learning (2 bits per second), and a
 portion of that must be devoted to communicating with the
 other brain.
 
 The intelligence of 2 brains is between 1 and 2 depending
 on the degree to which the intelligence test can be
 parallelized. The degree of parallelization is generally
 higher for humans than it is for dogs because humans can
 communicate more efficiently. Ants and bees communicate to
 some extent, so we observe that a colony is more intelligent
 (at finding food) than any individual.
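
To make the quoted "between 1 and 2" claim concrete, here is a minimal sketch 
assuming an Amdahl's-law-style split of an intelligence test into serial and 
parallelizable parts; the function name and the example fractions are 
illustrative assumptions, not something from the post above:

    # Illustrative sketch only: models the claim that two brains score between
    # 1x and 2x one brain, depending on how much of the test parallelizes.
    def effective_intelligence(n_brains, parallel_fraction):
        """Speedup of n brains over one when only part of the test parallelizes."""
        serial = 1.0 - parallel_fraction
        return 1.0 / (serial + parallel_fraction / n_brains)

    if __name__ == "__main__":
        for f in (0.0, 0.5, 0.9, 1.0):
            print("parallelizable fraction %.1f: 2 brains ~ %.2fx one brain"
                  % (f, effective_intelligence(2, f)))
        # Prints values from 1.00x (nothing parallelizes) to 2.00x (everything
        # does), matching the "between 1 and 2" range in the quoted post.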



  




Re: [agi] Updated AGI proposal (CMR v2.1)

2008-10-14 Thread Terren Suydam
Charles,

I'm not sure it's possible to nail down a measure of intelligence that's going 
to satisfy everyone. Presumably, it would be some measure of performance in 
problem solving across a wide variety of novel domains in complex (i.e. not 
toy) environments.

Obviously among potential agents, some will do better in domain D1 than others, 
while doing worse in D2. But we're looking for an average across all domains. 
My task-specific examples may have confused the issue there, you were right to 
point that out.

But if you give all agents identical processing power and storage space, then 
the winner will be the one that was able to assimilate and model each problem 
space the most efficiently, on average. Which ultimately means the one which 
used the *least* amount of overall computation.
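(A hedged way to write that down, purely as an illustration and not a measure 
anyone in this thread proposed: over a set of domains D, score each agent a by

    \mathrm{score}(a) = \frac{1}{|D|} \sum_{d \in D} 
    \frac{\mathrm{performance}_a(d)}{\mathrm{computation}_a(d)},

so that with identical hardware, equal average performance achieved with less 
total computation yields the higher score.)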

Terren

--- On Tue, 10/14/08, Charles Hixson [EMAIL PROTECTED] wrote:

 From: Charles Hixson [EMAIL PROTECTED]
 Subject: Re: [agi] Updated AGI proposal (CMR v2.1)
 To: agi@v2.listbox.com
 Date: Tuesday, October 14, 2008, 2:12 PM
 If you want to argue this way (reasonable), then you need a
 specific 
 definition of intelligence.  One that allows it to be
 accurately 
 measured (and not just in principle).  IQ
 definitely won't serve.  
 Neither will G.  Neither will GPA (if you're discussing
 a student).
 
 Because of this, while I think your argument is generally
 reasonable, I 
 don't think it's useful.  Most of what you are
 discussing is task 
 specific, and as such I'm not sure that
 intelligence is a reasonable 
 term to use.  An expert engineer might be, e.g., a lousy
 bridge player.  
 Yet both are thought of as requiring intelligence.  I would
 assert that 
 in both cases a lot of what's being measured is task
 specific 
 processing, i.e., narrow AI. 
 
 (Of course, I also believe that an AGI is impossible in the
 true sense 
 of general, and that an approximately AGI will largely act
 as a 
 coordinator between a bunch of narrow AI pieces of varying
 generality.  
 This seems to be a distinctly minority view.)
 
 Terren Suydam wrote:
  Hi Will,
 
  I think humans provide ample evidence that
 intelligence is not necessarily correlated with processing
 power. The genius engineer in my example solves a given
 problem with *much less* overall processing than the
 ordinary engineer, so in this case intelligence is
 correlated with some measure of cognitive
 efficiency (which I will leave undefined). Likewise, a
 grandmaster chess player looks at a given position and can
 calculate a better move in one second than you or me could
 come up with if we studied the board for an hour.
 Grandmasters often do publicity events where they play
 dozens of people simultaneously, spending just a few seconds
 on each board, and winning most of the games.
 
  Of course, you were referring to intelligence
 above a certain level, but if that level is high
 above human intelligence, there isn't much we can assume
 about that since it is by definition unknowable by humans.
 
  Terren
 
  --- On Tue, 10/14/08, William Pearson
 [EMAIL PROTECTED] wrote:

  The relationship between processing power and
 results is
  not
  necessarily linear or even positively  correlated.
 And as
  an increase
  in intelligence above a certain level requires
 increased
  processing
  power (or perhaps not? anyone disagree?).
 
  When the cost of adding more computational power,
 outweighs
  the amount
  of money or energy that you acquire from adding
 the power,
  there is
  not much point adding the computational power. 
 Apart from
  if you are
  in competition with other agents, that can out
 smart you.
  Some of the
 traditional views of RSI neglect this and think
 that
  increased
  intelligence is always a useful thing. It is not
 very
 
 There is a reason why lots of the planet's biomass
 has
  stayed as
  bacteria. It does perfectly well like that. It
 survives.
 
  Too much processing power is a bad thing, it means
 less for
  self-preservation and affecting the world.
 Balancing them
  is a tricky
  proposition indeed.
 
Will Pearson
 
  
 

 
 
 


  




Re: [agi] Updated AGI proposal (CMR v2.1)

2008-10-14 Thread Terren Suydam
--- On Tue, 10/14/08, William Pearson [EMAIL PROTECTED] wrote:
 There are things you can't model with limits of
 processing
 power/memory which restricts your ability to solve them.

Processing power, storage capacity, and so forth, are all important in the 
realization of an AI but I don't see how they limit your ability to model or 
solve problems except in terms of performance... i.e. can a problem be solved 
within time T. Those are factors outside of the black box of intelligence. 

Cognitive architecture is the guts of the black box. Any attempt to create AGI 
cannot be taken seriously if it doesn't explain what intelligence does, inside 
the black box, whether you're talking about an individual agent or a globally 
distributed one.

(By the way, it's worth noting that problem solving ability Y is uncomputable 
since it's basically just a twist on Kolmogorov Complexity. Which is to say, 
you can never prove that you have the perfect (un-improvable) cognitive 
architecture given finite resources.)
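(For reference, the standard definition being alluded to: for a fixed universal 
machine U, the Kolmogorov complexity of a string x is

    K_U(x) = \min \{\, |p| : U(p) = x \,\},

the length of the shortest program that outputs x, and K_U is provably not 
computable. The analogy to cognitive architectures, i.e. that you can never 
certify a given architecture as the most efficient one possible, is the post's 
gloss rather than part of the standard result.)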

With toy problems like chess, increasing computing power can compensate for 
what amounts to a wildly inefficient cognitive architecture. In the real world 
of AGI, you have to work on efficiency first because the complexity is just too 
high to manage. So while you can get linear improvement on Y by increasing 
out-of-the-black-box factors, it's inside the box you get the non-linear, 
punctuated gains that are in all likelihood necessary to create AGI.
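(One illustrative way to pin down the cognitive efficiency under discussion, 
consistent with the ratio framing in Will's message quoted below but not a 
definition from the thread: relative to the maximum problem-solving ability 
Y_max(Z) attainable at resource level Z, an agent's efficiency is

    E = \frac{Y}{Y_{\max}(Z)}, \qquad 0 < E \le 1,

so adding hardware raises Y_max(Z) by raising Z, while architectural 
improvements push E toward 1.)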

Terren

--- On Tue, 10/14/08, William Pearson [EMAIL PROTECTED] wrote:

 From: William Pearson [EMAIL PROTECTED]
 Subject: Re: [agi] Updated AGI proposal (CMR v2.1)
 To: agi@v2.listbox.com
 Date: Tuesday, October 14, 2008, 1:13 PM
 Hi Terren,
 
  I think humans provide ample evidence that
 intelligence is not necessarily correlated with processing
 power. The genius engineer in my example solves a given
 problem with *much less* overall processing than the
 ordinary engineer, so in this case intelligence is
 correlated with some measure of cognitive
 efficiency (which I will leave undefined). Likewise, a
 grandmaster chess player looks at a given position and can
 calculate a better move in one second than you or me could
 come up with if we studied the board for an hour.
 Grandmasters often do publicity events where they play
 dozens of people simultaneously, spending just a few seconds
 on each board, and winning most of the games.
 
 
 What I meant was at processing power/memory Z, there is an
 problem
 solving ability Y which is the maximum. To increase the
 problem
 solving ability above Y you would have to increase
 processing
 power/memory. That is when cognitive efficiency reaches
 one, in your
 terminology. Efficiency is normally measured in ratios so
 that seems
 natural.
 
 There are things you can't model with limits of
 processing
 power/memory which restricts your ability to solve them.
 
  Of course, you were referring to intelligence
 above a certain level, but if that level is high
 above human intelligence, there isn't much we can assume
 about that since it is by definition unknowable by humans.
 
 
 Not quite what I meant.
 
   Will
 
 


  




Re: [agi] open or closed source for AGI project?

2008-10-10 Thread Terren Suydam

Hi Ben,

I wonder if you've read Bohm's Thought as a System, or if you've been 
influenced by Niklas Luhmann on any level.

Terren

--- On Fri, 10/10/08, Ben Goertzel [EMAIL PROTECTED] wrote:
There is a sense in which social groups are mindplexes: they have
mind-ness on the collective level, as well as on the individual level.






  




Re: [agi] open or closed source for AGI project?

2008-10-10 Thread Terren Suydam

Yeah, that book is really good. Bohm was one of the great ones.

Luhmann may have been the first to seriously suggest/defend the idea that 
social systems are not just concepts but real ontological entities. Luhmann 
took Maturana/Varela's autopoiesis and extended that to social systems.

Which leads nicely into a question I've been meaning to ask... you're the only 
AI researcher I'm aware of (in the US anyway) who has talked about 
autopoiesis. I wonder what your thoughts are about it?  To what extent has 
that influenced your philosophy? Not looking for an essay here, but I'd be 
interested in your brief reflections on it.

Terren

--- On Fri, 10/10/08, Ben Goertzel [EMAIL PROTECTED] wrote:
From: Ben Goertzel [EMAIL PROTECTED]
Subject: Re: [agi] open or closed source for AGI project?
To: agi@v2.listbox.com
Date: Friday, October 10, 2008, 10:43 AM


Bohm: Yes ... a great book, though at the time I read it, I'd already 
encountered most of the same ideas elsewhere...

Luhmann: nope, never encountered his work...

ben



On Fri, Oct 10, 2008 at 10:26 AM, Terren Suydam [EMAIL PROTECTED] wrote:



Hi Ben,

I wonder if you've read Bohm's Thought as a System, or if you've been 
influenced by Niklas Luhmann on any level.

Terren

--- On Fri, 10/10/08, Ben Goertzel [EMAIL PROTECTED] wrote:

There is a sense in which social groups are mindplexes: they have
mind-ness on the collective level, as well as on the individual level.










  



  

  


  

  





-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

Nothing will ever be attempted if all possible objections must be first 
overcome   - Dr Samuel Johnson









  

  


  

  





  




Re: [agi] open or closed source for AGI project?

2008-10-10 Thread Terren Suydam
Mike,

Autopoiesis is a basic building block of my philosophy of life and of 
cognition as well. I see life as: doing work to maintain an internal 
self-organization. It requires a boundary in which the entropy inside the 
boundary is kept lower than the entropy outside. Cognition is autopoietic as 
well, although this is harder to see.

I have already shared my ideas on how to build a virtual intelligence that 
satisfies this definition. But in summary, you'd design a framework in which 
large numbers of interacting parts would evolve into an environment with 
emergent, persistent entities. Through a guided process you would make the 
environment more and more challenging, forcing the entities to solve harder and 
harder problems to stay alive, corresponding with ever increasing intelligence. 
At some distant point we may perhaps arrive at something with human-level 
intelligence or beyond. 
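For what it's worth, a deliberately toy sketch of that guided-escalation loop 
in Python (every detail here, the genome encoding, the matching task, and the 
survival threshold, is invented for illustration and is nowhere near a real 
implementation of the idea):

import random

POP_SIZE = 50
GENOME_LEN = 8

def random_genome():
    return [random.uniform(-1.0, 1.0) for _ in range(GENOME_LEN)]

def fitness(genome, difficulty):
    # Toy stand-in for "solving a problem": match a target pattern whose
    # shape changes as the environment gets harder.
    target = [((i * difficulty) % 7) / 7.0 for i in range(GENOME_LEN)]
    return -sum((g - t) ** 2 for g, t in zip(genome, target))

def mutate(genome, rate=0.1):
    return [g + random.gauss(0.0, rate) for g in genome]

population = [random_genome() for _ in range(POP_SIZE)]
difficulty = 1

for generation in range(200):
    ranked = sorted(population, key=lambda g: fitness(g, difficulty), reverse=True)
    survivors = ranked[: POP_SIZE // 2]            # the "staying alive" step
    offspring = [mutate(random.choice(survivors))
                 for _ in range(POP_SIZE - len(survivors))]
    population = survivors + offspring
    if fitness(ranked[0], difficulty) > -0.05:     # the population is coping,
        difficulty += 1                            # so make the world harder

The point of the sketch is only the shape of the loop: select for staying 
alive at the current difficulty, then raise the difficulty once the 
population copes.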

Terren

--- On Fri, 10/10/08, Mike Tintner [EMAIL PROTECTED] wrote:
From: Mike Tintner [EMAIL PROTECTED]
Subject: Re: [agi] open or closed source for AGI project?
To: agi@v2.listbox.com
Date: Friday, October 10, 2008, 11:30 AM



 
 

Terren: autopoiesis. I wonder what your thoughts are about 
it? 
 
Does anyone have any idea how to translate that 
biological principle into building a machine, or software? Do you or anyone 
else 
have any idea what it might entail? The only thing I can think of that comes 
anywhere close is the Carnegie Mellon starfish robot with its sense of 
self.



  

  


  

  





  




Re: [agi] open or closed source for AGI project?

2008-10-10 Thread Terren Suydam

Well, identity is not a great choice of word, because it implies a static 
nature. As far as I understand it, Maturana et al simply meant, that which 
distinguishes the thing from its environment, in terms of its 
self-organization. The nature of that self-organization is dynamic, always 
changing. When it stops changing, in fact, it loses its identity, it dies.

I think also that you're confusing some sort of teleological principle here 
with autopoieisis, as if there is a design involved. Life doesn't adhere to a 
flexible plan, it just goes, and it either works, or it doesn't. If it works, 
and it is able to reproduce itself, then the pattern becomes persistent.

Terren

--- On Fri, 10/10/08, Mike Tintner [EMAIL PROTECTED] wrote:
From: Mike Tintner [EMAIL PROTECTED]
Subject: Re: [agi] open or closed source for AGI project?
To: agi@v2.listbox.com
Date: Friday, October 10, 2008, 12:55 PM



 
Terren,
 
Thanks for reply. I think I have some idea, no 
doubt confused, about how you want to evolve a system. But the big deal re 
autopoiesis for me - correct me - is the capacity of a living system to 
*maintain its identity* despite considerable disturbances. That can be both in 
the embryonic/developmental stages and also later in life. A *simple* example 
of 
the latter is an experiment where they screwed around with the nerves to a 
monkey's hands, and nevertheless its brain maps rewired themselves, so to speak, 
to restore normal functioning within months. Neuroplasticity generally is an 
example - the brain's capacity, when parts are damaged, to get new parts to 
take 
on their functions.
 
How a system can be evolved - computationally, say, 
as you propose  - is, in my understanding, no longer quite such a 
problematic thing to understand or implement. But how a living system manages 
to 
adhere to a flexible plan of its identity despite disturbances, is, IMO, a much 
more problematic thing to understand and implement. And that, for me - again 
correct me - is the essence of autopoiesis (which BTW seems to me not the 
best explained of ideas - by Varela & co).

   
  




Re: [agi] open or closed source for AGI project?

2008-10-10 Thread Terren Suydam

Agreed. Yet, as far as I can tell, Novamente/OCP aren't designed to allow this 
autopoiesis to emerge. Although some emergence is implicit in the design, there 
is not a clear boundary between the internal organization and the external 
environment. For example, a truly autopoietic system would have to learn 
(self-organize) all language, yet Novamente/OCP will take advantage of external 
NLP modules. 

If you agree with this, then in what sense do you consider autopoiesis to be an 
important concept?

Terren

--- On Fri, 10/10/08, Ben Goertzel [EMAIL PROTECTED] wrote:
I think autopoiesis is an important concept, which has been underappreciated in 
AI because its original advocates (Varela especially) tied it in with 
anti-computationalism

Varela liked to contrast autopoietic systems with computational ones


OTOH, I think of autopoiesis as an emergent property that computational systems 
are capable of giving rise to...

-- Ben G



Re: [agi] Dangerous Knowledge

2008-09-30 Thread Terren Suydam

Hi Ben,

If Richard Loosemore is half-right, how is he half-wrong? 

Terren

--- On Mon, 9/29/08, Ben Goertzel [EMAIL PROTECTED] wrote:
From: Ben Goertzel [EMAIL PROTECTED]
Subject: Re: [agi] Dangerous Knowledge
To: agi@v2.listbox.com
Date: Monday, September 29, 2008, 6:50 PM






I mean that a more productive approach would be to try to understand why the 
problem is so hard. 

IMO Richard Loosemore is half-right ... the reason AGI is so hard has to do 
with Santa Fe Institute style

complexity ...

Intelligence is not fundamentally grounded in any particular mechanism but 
rather in emergent structures
and dynamics that arise in certain complex systems coupled with their 
environments ...


Characterizing what these emergent structures/dynamics are is hard, and then 
figuring out how to make these 
structures/dynamics emerge from computationally feasible knowledge 
representation and creation structures/

dynamics is hard ...

It's hard for much the reason that systems biology is hard: it rubs against the 
grain of the reductionist
approach to science that has become prevalent ... and there's insufficient data 
to do it fully rigorously so

you gotta cleverly and intuitively fill in some big gaps ... (until a few 
decades from now, when better bio
data may provide a lot more info for cog sci, AGI and systems biology...

-- Ben







  

  


Re: [agi] Dangerous Knowledge

2008-09-30 Thread Terren Suydam

Right, was just looking for exactly that kind of summary, not to rehash 
anything! Thanks.

Terren

--- On Tue, 9/30/08, Ben Goertzel [EMAIL PROTECTED] wrote:
From: Ben Goertzel [EMAIL PROTECTED]
Subject: Re: [agi] Dangerous Knowledge
To: agi@v2.listbox.com
Date: Tuesday, September 30, 2008, 12:42 PM


I don't want to recapitulate that whole long tedious thread again!!

However, a brief summary of my response to Loosemore's arguments is here:

http://opencog.org/wiki/OpenCogPrime:FAQ#What_about_the_.22Complex_Systems_Problem.3F.22


(that FAQ is very incomplete which is why it hasn't been publicized yet ... but 
it does already
address this particular issue...)

ben



Re: [agi] universal logical form for natural language

2008-09-29 Thread Terren Suydam

Interestingly, Helen Keller's story provides a compelling example of what it 
means for a symbol to go from ungrounded to grounded. Specifically, the moment 
at the water pump when she realized that the word water being spelled into 
her hand corresponded with her experience of water - that moment signified the 
transition from an ungrounded symbol to a grounded one. Until that moment those 
symbols were meaningless and they were nothing more than a boring game of rote 
repetition for HK the child. At that moment, her whole world changed and her 
development as a fully cognitive human being was underway in earnest. The 
symbols became far more than a game, they became tools for understanding, 
expression, and all the other things we do with language.

Terren

--- On Sun, 9/28/08, David Hart [EMAIL PROTECTED] wrote:
From: David Hart [EMAIL PROTECTED]
Subject: Re: [agi] universal logical form for natural language
To: agi@v2.listbox.com
Date: Sunday, September 28, 2008, 5:23 AM

On Sun, Sep 28, 2008 at 3:16 PM, YKY (Yan King Yin) [EMAIL PROTECTED] wrote:

I think I may be able to short-circuit the learning loop by using

minimal grounding.  The Helen Keller argument =)
Actually, it's been my hunch for some time that the richness and importance of 
Helen Keller's sensational environment is frequently grossly underestimated. 
The sensations of a deaf/blind person still include proprioception, vestibular 
senses, smell, touch, pressure, temperature, vibration, etc., easily enough 
rich sensory information to create an internal mental representation of a 
continuous external reality. ;-) 


-dave





  

  


Re: [agi] self organization

2008-09-17 Thread Terren Suydam

OK, how's that different from the collaboration inherent in any human project? 
Can you just explain your viewpoint?

--- On Tue, 9/16/08, Bryan Bishop [EMAIL PROTECTED] wrote:
 On Tuesday 16 September 2008, Terren Suydam wrote:
  Not really familiar with apt-get.  How is it a
 complex system?  It
  looks like it's just a software installation tool.
 
 How many people are writing the software?
 
 - Bryan
 
 http://heybryan.org/
 Engineers: http://heybryan.org/exp.html
 irc.freenode.net #hplusroadmap
 
 


Re: [agi] self organization

2008-09-17 Thread Terren Suydam

That is interesting. Sorry if I was short before, but I wish you had just 
explained that from the start. Few here are going to be familiar with 
Linux install tools or the communities around them.

I think a similar case could be made for a lot of large open source projects 
such as Linux itself. However, in this case and others, the software itself is 
the result of a high-level supergoal defined by one or more humans. Even if no 
single person is directing the subgoals, the supergoal is still well defined by 
the ostensible aim of the software. People who contribute align themselves with 
that supergoal, even if not directed explicitly to do so. So it's not exactly 
self-organized, since the supergoal was conceived when the software project was 
first instantiated and stays constant, for the most part.

As opposed to markets, which can emerge without anything to spawn it except for 
folks with different goals (one to buy, one to sell). Perhaps then in a roughly 
similar way the organization of the brain emerges as a result of certain 
regions of the brain having something to sell and others having something to 
buy.  I think Hebbian learning can be made to fit that model.
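(For concreteness, the textbook rule being invoked, with the buy/sell reading 
as my own gloss rather than anything standard: a synapse from unit i to unit j 
is strengthened in proportion to coincident activity,

    \Delta w_{ij} = \eta \, x_i \, y_j,

and normalized variants such as Oja's rule, 
\Delta w_{ij} = \eta \, y_j (x_i - y_j w_{ij}), keep the total synaptic weight 
bounded, which is roughly where a buyer's budget would enter the picture.)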

Terren

--- On Wed, 9/17/08, Bryan Bishop [EMAIL PROTECTED] wrote:

 From: Bryan Bishop [EMAIL PROTECTED]
 Subject: Re: [agi] self organization
 To: agi@v2.listbox.com
 Date: Wednesday, September 17, 2008, 3:23 PM
 On Wednesday 17 September 2008, Terren Suydam wrote:
  OK, how's that different from the collaboration
 inherent in any human
  project? Can you just explain your viewpoint?
 
 When you have something like 20,000+ contributors writing
 software that 
 can very, very easily break, I think it's an
 interesting feat to have 
 it managed effectively. There's no way that we top-down
 designed this 
 and gave every 20,000 of these people a separate job to do
 on a giant 
 todo list, it was self-organizing. So, you were mentioning
 the 
 applicability of such things to the design of intelligence
 ... just 
 thought it was relevant.
 
 - Bryan
 
 http://heybryan.org/
 Engineers: http://heybryan.org/exp.html
 irc.freenode.net #hplusroadmap
 
 


Re: [agi] self organization

2008-09-16 Thread Terren Suydam

Hi Will,

Such an interesting example in light of a recent paper, which deals with 
measuring the difference between activation of the visual cortex and blood flow 
to the area, depending on whether the stimulus was subjectively invisible. If 
the result can be trusted, it shows that blood flow to the cortex is correlated 
with whether the stimulus is being perceived or not, as opposed to the neural 
activity, which does not change... see a discussion here:

http://network.nature.com/groups/bpcc/forum/topics/2974

In this case then the reward that the cortex receives in the form of 
nutrients is based somehow on feedback from other parts of the brain involved 
with attention. It's like a heuristic that says, if we're paying attention to 
something, we're probably going to keep paying attention to it.


Maier A, Wilke M, Aura C, Zhu C, Ye FQ, Leopold DA.  Nat Neurosci. 2008 Aug 24. 
[Epub ahead of print], Divergence of fMRI and neural signals in V1 during 
perceptual suppression in the awake monkey.


--- On Tue, 9/16/08, William Pearson [EMAIL PROTECTED] wrote:
 However despite it being nothing to do with bayesian
 reasoning or
 rational decision making, if we didn't have a good way
 of allocating
 blood flow in our brains we really couldn't do very
 much of use at all
 (as blood would be directed to the wrong parts at the wrong
 times).



  




Re: [agi] self organization

2008-09-16 Thread Terren Suydam

Hey Bryan, 

Not really familiar with apt-get.  How is it a complex system?  It looks like 
it's just a software installation tool.

Terren

--- On Tue, 9/16/08, Bryan Bishop [EMAIL PROTECTED] wrote:
 Have you considered looking into the social dynamics
 allowed by apt-get 
 before? It's a complex system, people can fork it or
 patch it, and it's 
 resulted in the software running the backbone of the
 internet. On the 
 extropian mailing list the other day I mentioned I have a
 linux live cd 
 for building brains, I call it mind on a
 disc, but unfortunately 
 I'm strapped for time and can only give a partial
 download. It's quite 
 the alternative way of going about things, but people do
 seem to 
 generally understand (sometimes): http://p2pfoundation.net/
 
 - Bryan
 
 http://heybryan.org/
 Engineers: http://heybryan.org/exp.html
 irc.freenode.net #hplusroadmap
 
 


[agi] self organization

2008-09-15 Thread Terren Suydam

Hi all,

Came across this article called Pencils and Politics. Though a bit of a 
tangent, it's the clearest explanation of self-organization in economics I've 
encountered.

http://www.newsweek.com/id/158752

I send this along because it's a great example of how systems that 
self-organize can result in structures and dynamics that are more complex and 
efficient than anything we can purposefully design. The applicability to the 
realm of designed intelligence is obvious.

Terren


  




Re: [agi] self organization

2008-09-15 Thread Terren Suydam

Once again, I'm not saying that modeling an economy is all that's necessary to 
explain intelligence. I'm not even saying it's a necessary condition of it. 
What I am saying is that it looks very likely that the brain/mind is 
self-organized, and for those of us looking to biological intelligence for 
inspiration, this may be important.

There is a class of highly complex, unstable (in the thermodynamic sense) 
systems that self-organize in such a way as to most efficiently dissipate the 
imbalances inherent in the environment (hurricanes, tornadoes, watersheds, life 
itself, the economy).  And, perhaps, the brain/mind is such a system. If so, 
that description is obviously not enough to guess the password to the safe. 
But that doesn't mean that self-organization has no value at all. The value of 
it is to show that efficient design can emerge spontaneously, and perhaps we 
can take advantage of that.

By your argumentation, it would seem you won't find any argument about 
intelligence worth considering unless it explains everything. I've never understood the 
strong resistance of many in the AI community to the concepts involved with 
complexity theory, particularly as applied to intelligence. It would seem to me 
to be a promising frontier for exploration and gathering insight.

Terren

--- On Mon, 9/15/08, Vladimir Nesov [EMAIL PROTECTED] wrote:

 From: Vladimir Nesov [EMAIL PROTECTED]
 Subject: Re: [agi] self organization
 To: agi@v2.listbox.com
 Date: Monday, September 15, 2008, 6:06 PM
 I guess that intuitively, argument goes like this:
 1) economy is more powerful than individual agents, it
 allows to
 increase the power of intelligence in individual agents;
 2) therefore, economy has an intelligence-increasing
 potency;
 3) so, we can take stupid agents, apply the economy potion
 to them and
 get powerful intelligence as a result.
 
 But it's easy to see how this kind of argument may be
 invalid. Adding
 gasoline to the fire makes the fire stronger, more
 firey, therefore
 it contains fire-potency, therefore applying sufficient
 amount of
 gasoline to water, which is originally much less firey,
 will create as
 strong fire as necessary. Doesn't work? You didn't
 add enough
 gasoline, is all.
 
 When you consider a system as complex as a human economy,
 you can't
 just take one aspect apart from all other aspects, and
 declare it the
 essence of the process. There are too many alternatives,
 you can't win
 this lottery blindfolded. Some small number of aspects may
 in fact be
 the essence, but you can't find these elements before
 you factored out
 other functional parts of the process and showed that your
 model works
 without them. You can't ignore the spark, this
 *obviously*
 insignificant tiny fluke in the blazing river of fire, and
 accept only
 the gasoline into your model. Why are you *obviously*
 allowed to
 ignore human intelligence, the most powerful force in the
 known
 universe, in your model of what makes human economy
 intelligent? This
 argument is void, it must not move you, you must not
 rationalize your
 thinking by it. If you are to know the conclusion to be
 valid, there
 needs to be a valid force to convince you.
 
 Now, consider evolution. Evolution is understood, and
 technically so.
 It has *no* mind. It has no agents, no goals, no desires.
 It doesn't
 think its designs, it is a regularity in the way designs
 develop, a
 property of physics that explains why complicated
 functional systems
 such as eye are *likely* to develop. Its efficiency comes
 from
 incremental improvement and massively parallel exploration.
 It is a
 society of simple mechanisms, with no purposeful design.
 The
 evolutionary process is woven from the threads of
 individual
 replicators, an algorithm steadily converting these threads
 into the
 new forms. This process is blind to the structure of the
 threads, it
 sees not beauty or suffering, speed or strength, it remains
 the same
 irrespective of the vehicles weaving the evolutionary
 regularity,
 unless the rules of the game fundamentally change. It
 doesn't matter
 for evolution whether a rat is smarter than the butterfly.
 Intelligence is irrelevant for evolution, you can safely
 take it out
 of the picture as just another aspect of phenotype
 contributing to the
 rates of propagation of the genes.
 
 What about economy? Is it able to ignore intelligence like
 evolution
 does? Can you invent a dinosaur in a billion years with it,
 or is it
 faster? Why? Does it invent a dinosaur or a pencil? If the
 theory of
 economics doesn't give you a technical answer to it,
 not a description
 that fits the human society, but a separate, self-contained
 algorithm
 that has the required property, who is to say that theory
 found the
 target? You know that the password to the safe is more than
 zero but
 less than a million, and you have an experimentally
 confirmed theory
 that it's also less than 500 thousand. This theory
 doesn't allow you
 to find the key, even 

Re: [agi] self organization

2008-09-15 Thread Terren Suydam

Vlad,

At this point, we ought to acknowledge that we just have different approaches. 
You're trying to hit a very small target accurately and precisely. I'm not. 
It's not important to me the precise details of how a self-organizing system 
would actually self-organize, what form that would take or what goals would 
emerge for that system beyond persistence/replication. We've already gone over 
the Friendliness debate so I won't go any further with that here.

My approach is to try and recreate the processes that led to the emergence of 
life, and of intelligence. I see life and intelligence as strongly 
interrelated, yet I don't see either as dependent on our particular biological 
substrate. Life I define as a self-organizing process that does work (expends 
energy) to maintain its self-organization (which is to say it maintains a 
boundary between itself and the environment, in which the entropy inside is 
lower than the entropy outside). Life at the simplest possible level is 
therefore a kind of hard-coded intelligence. My hunch is that anything 
sufficiently advanced to be considered generally intelligent needs to be alive 
in the above sense. But suffice it to say, pursuit of AGI is not in my short 
term plans.
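(One compact way to state the thermodynamic side of that definition, standard 
second-law bookkeeping rather than anything original: for the organism plus 
its surroundings,

    \Delta S_{\mathrm{inside}} + \Delta S_{\mathrm{environment}} \ge 0,

so the entropy inside the boundary can only be held low by continually 
exporting at least that much entropy to the environment, which is what the 
"doing work" in the definition is buying.)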

Just as an aside, because sometimes this feels combative, or overly defensive: 
I have not come on to this list to try and persuade anyone to adopt my 
approach, or to dissuade others from theirs. Rather, I came here to gather 
feedback and criticism of my thoughts, to defend them when challenged, and to 
change my mind when it seems like my current ideas are inadequate. And of 
course, to provide the same kind of feedback for others when I have something 
to contribute. In that spirit, I'm grateful for your feedback. I'm also very 
curious to see the results of your approach, and those of others here... I may 
be critical of what you're trying to do, but that doesn't mean I think you 
shouldn't do it (in most cases anyway :-] ).

Terren



  




Re: [agi] A NewMetaphor for Intelligence - the Computer/Organiser

2008-09-07 Thread Terren Suydam

Hey Bryan,

To me, this is indistinguishable from the 1st option I laid out. Deterministic 
but impossible to predict.

Terren


--- On Sun, 9/7/08, Bryan Bishop [EMAIL PROTECTED] wrote:

 From: Bryan Bishop [EMAIL PROTECTED]
 Subject: Re: [agi] A NewMetaphor for Intelligence - the Computer/Organiser
 To: agi@v2.listbox.com
 Date: Sunday, September 7, 2008, 11:44 AM
 On Friday 05 September 2008, Terren Suydam wrote:
  So, Mike, is free will:
 
  1) an illusion based on some kind of unpredictable,
 complex but
  *deterministic* interaction of physical components 2)
 the result of
  probabilistic physics - a *non-deterministic*
 interaction described
  by something like quantum mechanics 3) the expression
 of our
  god-given spirit, or some other non-physical mover of
 physical things
 
 I've already mentioned an alternative on this mailing
 list that you 
 haven't included in your question, would you consider
 it?
 http://heybryan.org/free_will.html
 ^ Just so that I don't have to keep on rewriting it
 over and over again.
 
 - Bryan
 
 http://heybryan.org/
 Engineers: http://heybryan.org/exp.html
 irc.freenode.net #hplusroadmap
 
 
 ---
 agi
 Archives: https://www.listbox.com/member/archive/303/=now
 RSS Feed: https://www.listbox.com/member/archive/rss/303/
 Modify Your Subscription:
 https://www.listbox.com/member/?;
 Powered by Listbox: http://www.listbox.com


  


---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=111637683-c8fa51
Powered by Listbox: http://www.listbox.com


Re: [agi] Philosophy of General Intelligence

2008-09-07 Thread Terren Suydam

Hi Mike,

Good summary. I think your point of view is valuable in the sense of helping 
engineers in AGI to see what they may be missing. And your call for technical 
AI folks to take up the mantle of more artistic modes of intelligence is also 
important. 

But it's empty, for you've demonstrated no willingness to cross over to engage 
in technical arguments beyond a certain, quite limited, depth. Admitting your 
ignorance is one thing, and it's laudable, but it only goes so far. I think if 
you're serious about getting folks (like Pei Wang) to take you seriously, then 
you need to also demonstrate your willingness to get your hands dirty and do 
some programming, or in some other way abolish your ignorance about technical 
subjects - exactly what you're asking others to do. 

Otherwise, you have to admit the folly of trying to compel any such folks to 
move from their hard-earned perspectives, if you're not willing to do that 
yourself.

Terren


--- On Sun, 9/7/08, Mike Tintner [EMAIL PROTECTED] wrote:

 From: Mike Tintner [EMAIL PROTECTED]
 Subject: [agi] Philosophy of General Intelligence
 To: agi@v2.listbox.com
 Date: Sunday, September 7, 2008, 6:26 PM
 Jiri: Mike,
 
 If you think your AGI know-how is superior to the know-how
 of those
 who already built testable thinking machines then why
 don't you try to
 build one yourself?
 
 Jiri,
 
 I don't think I know much at all about machines or
 software and never claim 
 to. I think I know certain, only certain, things about the
 psychological and 
 philosophical aspects of general intelligence - esp. BTW
 about the things 
 you guys almost never discuss, the kinds of problems that a
 general 
 intelligence must solve.
 
 You may think that your objections to me are entirely
 personal and about my 
 manner. I suggest that there is also a v. deep difference
 of philosophy 
 involved here.
 
 I believe that GI really is about *general* intelligence -
 a GI, and the 
 only serious example we have is human, is, crucially, and
 must be, able to 
 cross domains - ANY domain. That means the whole of our
 culture and society. 
 It means every kind of representation, not just
 mathematical and logical and 
 linguistic, but everything - visual, aural, solid, models,
 embodied etc etc. 
 There is a vast range. That means also every subject domain
  - artistic, 
 historical, scientific, philosophical, technological,
 politics, business 
 etc. Yes, you have to start somewhere, but there should be
 no limit to how 
 you progress.
 
 And the subject of general intelligence is tberefore, in no
 way, just the 
 property of a small community of programmers, or
 roboticists - it's the 
 property of all the sciences, incl. neuroscience,
 psychology, semiology, 
 developmental psychology, AND the arts and philosophy etc.
 etc. And it can 
 only be a collaborative effort. Some robotics disciplines,
 I believe, do 
 think somewhat along those lines and align themselves with
 certain sciences. 
 Some AI-ers also align themselves broadly with scientists
 and philosophers.
 
 By definition, too, general intelligence should embrace
 every kind of 
 problem that humans have to deal with - again artistic,
 practical, 
 technological, political, marketing etc. etc.
 
 The idea that general intelligence really could be anything
 else but truly 
 general is, I suggest, if you really think about it,
 absurd. It's like 
 preaching universal brotherhood, and a global society, and
 then practising 
 severe racism.
 
 But that's exactly what's happening in current AGI.
 You're actually 
 practising a highly specialised approach to AGI - only
 certain kinds of 
 representation, only certain kinds of problems are
 considered - basically 
 the ones you were taught and are comfortable with - a very,
 very narrow 
 range - (to a great extent in line with the v. narrow
 definition of 
 intelligence involved in the IQ test).
 
 When I raised other kinds of problems, Pei considered it
 not constructive. 
 When I recently suggested an in fact brilliant game for
 producing creative 
 metaphors, DZ considered it childish,  because
 it was visual and 
 imaginative, and you guys don't do those things, or
 barely. (Far from being 
 childish, that game produced a rich series of visual/verbal
 metaphors, where 
 AGI has produced nothing).
 
 If you aren't prepared to use your imagination and
 recognize the other half 
 of the brain, you are, frankly, completely buggered as far
 as AGI is 
 concerned. In over 2000 years, logic and mathematics
 haven't produced a 
 single metaphor or analogy or crossed any domains.
 They're not meant to, 
 that's expressly forbidden. But the arts produce
 metaphors and analogies on 
 a daily basis by the thousands. The grand irony here is
 that creativity 
 really is - from a strictly technical pov -  largely what
 our culture has 
 always said it is - imaginative/artistic and not rational..
 (Many rational 
 thinkers are creative - but by using their imagination).
 AGI will in 

Re: [agi] Philosophy of General Intelligence

2008-09-07 Thread Terren Suydam

Hi Mike,

It's not so much the *kind* of programming that I or anyone else could 
recommend, it's just the general skill of programming - getting used to 
thinking in terms of, how exactly do I solve this problem - what model or 
procedure do I create? How do you specify something so completely and 
precisely that a mindless machine can execute it?

It's not just that, it's also understanding how the written specification (the 
program) translates into actions at the processor level. That's important too.

Obviously having these skills and knowledge is not the answer to creating AGI - 
if it was, it'd have been solved decades ago. But without understanding how 
computers work, and how we make them work for us, it is too easy to fall into 
the trap of mistaking a computer's operation in terms of some kind of 
homunculus, or that it has a will of its own, or some other kind of anthropic 
confusion. If you don't understand how to program a computer, you will be 
tempted to say that a chess program that can beat Garry Kasparov is intelligent.

Your repeated appeals to creating programs that can decide for themselves 
without specifying what they do underscore your technical weakness, because 
programs are nothing but exact specifications. 

You make good points about what General Intelligence entails, but if you had a 
solid grasp of the technical aspects of computing, you could develop your 
philosophy so much further. Matt Mahoney's suggestion of trying to create an 
Artificial Artist is a great example of a direction that is closed to you until 
you learn the things I'm talking about. 

Terren

in response to your PS: I'm not suggesting everyone be proficient at 
everything, although such folks are extremely valuable... why not become one?  
Anyway, sharing expertise is all well and good but in order to do so, you have 
to give ground to the experts - something I haven't seen you do. You seem (to 
me) to be quite attached to your viewpoint, even regarding topics that you 
admit ignorance to. Am I wrong?


--- On Sun, 9/7/08, Mike Tintner [EMAIL PROTECTED] wrote:
 Can you tell me which kind of programming is necessary for
 which 
 end-problem[s] that general intelligence must solve? Which
 kind of 
 programming, IOW, can you *guarantee* me  will definitely
 not be a waste of 
 my time (other than by way of general education) ?  Which
 kind are you 
 *sure* will help solve which unsolved problem of AGI?
 
 P.S. OTOH the idea that in the kind of general community
 I'm espousing, (and 
 is beginning to crop up in other areas), everyone must be
 proficient in 
 everyone else's speciality is actually a non-starter,
 Terren. It defeats the 
 object of the division of labour central to all parts of
 the economy. If you 
 had to spend as much time thinking about those end-problems
 as I have, I 
 suggest you'd have to drop everything. Let's just
 share expertise instead?
 
 

Re: [agi] A NewMetaphor for Intelligence - the Computer/Organiser

2008-09-05 Thread Terren Suydam

Hi Mike, comments below...

--- On Fri, 9/5/08, Mike Tintner [EMAIL PROTECTED] wrote:

 Again - v. briefly - it's a reality - nondeterministic
 programming is a 
 reality, so there's no material, mechanistic, software
 problem in getting a 
 machine to decide either way. 

It is inherently dualistic to say this. On the one hand you're calling it a 
'reality' and on the other you're denying the influence of material or 
mechanism. What exactly is deciding then, a soul?  How do you get one of those 
into an AI? 

 Yes, strictly, a nondeterministic *program* can be regarded
 as a 
 contradiction - i.e. a structured *series* of instructions
 to decide freely. 

At some point you will have to explain how this deciding freely works. As of 
now, all you have done is name it. 

 The way the human mind is programmed is that
 we are not only free, and 
 have to, *decide* either way about certain decisions, but
 we are also free 
 to *think* about it - i.e. to decide metacognitively
 whether and how we 
 decide at all - we continually decide. for
 example, to put off the 
 decision till later.

There is an entire school of thought, quite mainstream now, in cognitive 
science that says that what appears to be free will is an illusion. Of 
course, you can say that you are free to choose whatever you like, but that 
only speaks to the strength of the illusion - that in itself is not enough to 
disprove the claim. 

In fact, it is plain to see that if you do not commit yourself to this view 
(free will as illusion), you are either a dualist, or you must invoke some kind 
of probabilistic mechanism (as some like Penrose have done by saying that the 
free-will buck stops at the level of quantum mechanics). 

So, Mike, is free will:

1) an illusion based on some kind of unpredictable, complex but *deterministic* 
interaction of physical components
2) the result of probabilistic physics - a *non-deterministic* interaction 
described by something like quantum mechanics
3) the expression of our god-given spirit, or some other non-physical mover of 
physical things


 By contrast, all deterministic/programmed machines and
 computers are 
 guaranteed to complete any task they begin. (Zero
 procrastination or 
 deviation). Very different kinds of machines to us. Very
 different paradigm. 
 (No?)

I think the difference of paradigm between computers and humans is not that one 
is deterministic and one isn't, but rather that one is a paradigm of top-down, 
serialized control, and the other is bottom-up, massively parallel, and 
emergent. It comes down to design vs. emergence.

Terren


  




Re: [agi] draft for comment

2008-09-04 Thread Terren Suydam

Hi Ben,

You may have stated this explicitly in the past, but I just want to clarify - 
you seem to be suggesting that a phenomenological self is important if not 
critical to the actualization of general intelligence. Is this your belief, and 
if so, can you provide a brief justification of that?  (I happen to believe 
this myself.. just trying to understand your philosophy better.)

Terren

--- On Thu, 9/4/08, Ben Goertzel [EMAIL PROTECTED] wrote:
However, I think that not all psychologically-embodied systems possess a 
sufficiently rich psychological-embodiment to lead to significantly general 
intelligence  My suggestion is that a laptop w/o network connection or odd 
sensor-peripherals, probably does not have sufficiently rich correlations btw 
its I/O stream and its physical state, to allow it to develop a robust 
self-model of its physical self (which can then be used as a basis for a more 
general phenomenal self).  






  




Re: [agi] A NewMetaphor for Intelligence - the Computer/Organiser

2008-09-04 Thread Terren Suydam

Mike,

Thanks for the reference to Denis Noble; he sounds very interesting, and his 
views on Systems Biology as expressed on his Wikipedia page are perfectly in 
line with my own thoughts and biases.

I agree in spirit with your basic criticisms regarding current AI and 
creativity. However, it must be pointed out that if you abandon determinism, 
you find yourself in the world of dualism, or worse. There are several ways out 
of this conundrum. One involves complexity/emergence (global behavior cannot be 
understood in terms of reduction to local behavior); another involves 
algorithmic complexity, or sheer complicatedness (behavior cannot be predicted 
in practice because of the limits of our inborn ability to mentally model such 
complicatedness). In either case the behavior could still be predicted in 
principle, given sufficient computational resources. This is true of humans as 
well - and if you think it isn't, once again, you're committing yourself to 
some kind of dualistic position (e.g., we are motivated by our spirit).

If you accept the proposition that the appearance of free will in an agent 
comes down to one's ability to predict its behavior, then either of the schemes 
above serves to produce free will (or the illusion of it, if you prefer).
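
For concreteness, a minimal and purely illustrative sketch of the kind of 
dynamics both schemes rely on: a fully deterministic system whose behavior is 
unpredictable in practice, because tiny differences in initial conditions get 
amplified. The example (the logistic map, in Python) and every name in it are 
assumptions for illustration, not anything proposed in this thread:

    # Logistic map: x_{n+1} = r * x_n * (1 - x_n). Fully deterministic,
    # yet two trajectories starting 1e-12 apart diverge completely within
    # a few dozen steps, so prediction fails without perfect knowledge of x0.
    def trajectory(x0, r=3.99, steps=50):
        xs = [x0]
        for _ in range(steps):
            xs.append(r * xs[-1] * (1 - xs[-1]))
        return xs

    a = trajectory(0.400000000000)
    b = trajectory(0.400000000001)
    for n in (0, 10, 25, 50):
        print(n, round(a[n], 6), round(b[n], 6))

Determinism is preserved throughout; what is lost is only our practical 
ability to predict, which is all the illusion-of-free-will argument needs.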

Thus is creativity possible while preserving determinism. Of course, you still 
need to have an explanation for how creativity emerges in either case, but in 
contrast to what you said before, some AI folks have indeed worked on this 
issue. 

Terren

--- On Thu, 9/4/08, Mike Tintner [EMAIL PROTECTED] wrote:

 From: Mike Tintner [EMAIL PROTECTED]
 Subject: Re: [agi] A NewMetaphor for Intelligence - the Computer/Organiser
 To: agi@v2.listbox.com
 Date: Thursday, September 4, 2008, 12:47 AM
 Terren,
 
 If you think it's all been said, please point me to the
 philosophy of AI 
 that includes it.
 
 A programmed machine is an organized structure. A keyboard
 (and indeed a 
 computer with keyboard) are something very different -
 there is no 
 organization to those 26 letters etc.   They can be freely
 combined and 
 sequenced to create an infinity of texts. That is the very
 essence and 
 manifestly, the whole point, of a keyboard.
 
 Yes, the keyboard is only an instrument. But your body -
 and your brain - 
 which use it,  are themselves keyboards. They consist of
 parts which also 
 have no fundamental behavioural organization - that can be
 freely combined 
 and sequenced to create an infinity of sequences of
 movements and thought - 
 dances, texts, speeches, daydreams, postures etc.
 
 In abstract logical principle, it could all be
 preprogrammed. But I doubt 
 that it's possible mathematically - a program for
 selecting from an infinity 
 of possibilities? And it would be engineering madness -
 like trying to 
 preprogram a particular way of playing music, when an
 infinite repertoire is 
 possible and the environment, (in this case musical
 culture), is changing 
 and evolving with bewildering and unpredictable speed.
 
 To look at computers as what they are (are you disputing
 this?) - machines 
 for creating programs first, and following them second,  is
 a radically 
 different way of looking at computers. It also fits with
 radically different 
 approaches to DNA - moving away from the idea of DNA as
 coded program, to 
 something that can be, as it obviously can be, played like
 a keyboard  - see 
  Denis Noble, The Music of Life. It fits with the fact
 (otherwise 
 inexplicable) that all intelligences have both deliberate
 (creative) and 
 automatic (routine) levels - and are not just automatic,
 like purely 
 programmed computers. And it fits with the way computers
 are actually used 
 and programmed, rather than the essentially fictional
 notion of them as pure 
 turing machines.
 
 And how to produce creativity is the central problem of AGI
 - completely 
 unsolved.  So maybe a new approach/paradigm is worth at
 least considering 
 rather than more of the same? I'm not aware of a single
 idea from any AGI-er 
 past or present that directly addresses that problem - are
 you?
 
 
 
  Mike,
 
  There's nothing particularly creative about
 keyboards. The creativity 
  comes from what uses the keyboard. Maybe that was your
 point, but if so 
  the digression about a keyboard is just confusing.
 
  In terms of a metaphor, I'm not sure I understand
 your point about 
  organizers. It seems to me to refer simply
 to that which we humans do, 
  which in essence says general intelligence is
 what we humans do. 
  Unfortunately, I found this last email to be quite
 muddled. Actually, I am 
  sympathetic to a lot of your ideas, Mike, but I also
 have to say that your 
  tone is quite condescending. There are a lot of smart
 people on this list, 
  as one would expect, and a little humility and respect
 on your part would 
  go a long way. Saying things like You see,
 AI-ers simply don't understand 
  computers, or understand only half of them. 
 More often than not you 
  position 

Re: [agi] A NewMetaphor for Intelligence - the Computer/Organiser

2008-09-04 Thread Terren Suydam

OK, I'll bite: what's nondeterministic programming if not a contradiction?

--- On Thu, 9/4/08, Mike Tintner [EMAIL PROTECTED] wrote:
 Nah. One word (though it would take too long here to
 explain) ; 
 nondeterministic programming.
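
For what it's worth, "nondeterministic programming" does have a standard, 
non-contradictory reading in computer science: the programmer declares choice 
points and a success condition, and the runtime resolves the choices by 
backtracking, randomness, or parallel search. A minimal Python sketch of that 
reading (the function name and the toy constraint are made up for 
illustration; this is not a claim about what Mike has in mind):

    from itertools import product

    def nondet(choices, constraint):
        # Declare alternatives and a success test; *how* the alternatives are
        # resolved is left to the search strategy (here, exhaustive backtracking).
        for candidate in product(*choices):
            if constraint(*candidate):
                return candidate
        return None

    # Find x, y with x * y == 12 and x < y, without saying how to find them.
    print(nondet([range(1, 10), range(1, 10)], lambda x, y: x * y == 12 and x < y))

Whether that standard sense supports the larger claims about free decision is, 
of course, exactly what is in dispute in this thread.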



  




Re: [agi] Recursive self-change: some definitions

2008-09-03 Thread Terren Suydam

Hi Ben, 

My own feeling is that computation is just the latest in a series of technical 
metaphors that we apply in service of understanding how the universe works. 
Like the others before it, it captures some valuable aspects and leaves out 
others. It leaves me wondering: what future metaphors will we apply to the 
universe, ourselves, etc., that will make computation-as-metaphor seem as 
quaint as the old clockworks analogies?

I believe that computation is important in that it can help us simulate 
intelligence, but intelligence itself is not simply computation (or if it is, 
it's in a way that requires us to transcend our current notions of 
computation). Note that I'm not suggesting anything mystical or dualistic at 
all, just offering the possibility that we can find still greater metaphors for 
how intelligence works. 

Either way though, I'm very interested in the results of your work - at worst, 
it will shed some needed light on the subject. At best... well, you know that 
part. :-]

Terren

--- On Tue, 9/2/08, Ben Goertzel [EMAIL PROTECTED] wrote:
From: Ben Goertzel [EMAIL PROTECTED]
Subject: Re: [agi] Recursive self-change: some definitions
To: agi@v2.listbox.com
Date: Tuesday, September 2, 2008, 4:50 PM



On Tue, Sep 2, 2008 at 4:43 PM, Eric Burton [EMAIL PROTECTED] wrote:

I really see a number of algorithmic breakthroughs as necessary for

the development of strong general AI 

I hear that a lot, yet I never hear any convincing  arguments in that regard...

So, hypothetically (and I hope not insultingly),
 I tend to view this as a kind of unconscious overestimation of the awesomeness 
of our own

species ... we feel intuitively like we're doing SOMETHING so cool in our 
brains, it couldn't
possibly be emulated or superseded by mere algorithms like the ones computer 
scientists
have developed so far ;-)


ben







  

  


Re: [agi] What is Friendly AI?

2008-09-03 Thread Terren Suydam

Hi Vlad,

Thanks for the response. It seems that you're advocating an incremental 
approach *towards* FAI, the ultimate goal being full attainment of 
Friendliness... something you express as fraught with difficulty but not 
insurmountable. As you know, I disagree that it is attainable, because it is 
not possible in principle to know whether something that considers itself 
Friendly actually is. You have to break a few eggs to make an omelet, as the 
saying goes, and Friendliness depends on whether you're the egg or the cook.

Terren

--- On Sat, 8/30/08, Vladimir Nesov [EMAIL PROTECTED] wrote:

 From: Vladimir Nesov [EMAIL PROTECTED]
 Subject: [agi] What is Friendly AI?
 To: agi@v2.listbox.com
 Date: Saturday, August 30, 2008, 1:53 PM
 On Sat, Aug 30, 2008 at 8:54 PM, Terren Suydam
 [EMAIL PROTECTED] wrote:
  --- On Sat, 8/30/08, Vladimir Nesov
 [EMAIL PROTECTED] wrote:
 
  You start with what is right? and end
 with
  Friendly AI, you don't
  start with Friendly AI and close the
 circular
  argument. This doesn't
  answer the question, but it defines Friendly AI
 and thus
  Friendly AI
  (in terms of right).
 
  In your view, then, the AI never answers the question
 What is right?.
  The question has already been answered in terms of the
 algorithmic process
  that determines its subgoals in terms of Friendliness.
 
 There is a symbolic string what is right? and
 what it refers to, the
 thing that we are trying to instantiate in the world. The
 whole
 process of  answering the question is the meaning of life,
 it is what
 we want to do for the rest of eternity (it is roughly a
 definition of
 right rather than over-the-top extrapolation
 from it). It is an
 immensely huge object, and we know very little about it,
 like we know
 very little about the form of a Mandelbrot set from the
 formula that
 defines it, even though it entirely unfolds from this
 little formula.
 What's worse, we don't know how to safely establish
 the dynamics for
 answering this question, we don't know the formula, we
 only know the
 symbolic string, formula, that we assign some
 fuzzy meaning to.
 
 There is no final answer, and no formal question, so I use
 question-answer pairs to describe the dynamics of the
 process, which
 flows from question to answer, and the answer is the next
 question,
 which then follows to the next answer, and so on.
 
 With Friendly AI, the process begins with the question a
 human asks to
 himself, what is right?. From this question
 follows a technical
 solution, initial dynamics of Friendly AI, that is a device
 to make a
 next step, to initiate transferring the dynamics of
 right from human
 into a more reliable and powerful form. In this sense,
 Friendly AI
 answers the question of right, being the next
 step in the process.
 But initial FAI doesn't embody the whole dynamics, it
 only references
 it in the humans and learns to gradually transfer it, to
 embody it.
 Initial FAI doesn't contain the content of
 right, only the structure
  of absorbing it from humans.
 
 Of course, this is simplification, there are all kinds of
 difficulties. For example, this whole endeavor needs to be
 safeguarded
 against mistakes made along the way, including the mistakes
 made
 before the idea of implementing FAI appeared, mistakes in
 everyday
 design that went into FAI, mistakes in initial stages of
 training,
 mistakes in moral decisions made about what
 right means. Initial
 FAI, when it grows up sufficiently, needs to be able to
 look back and
 see why it turned out to be the way it did, was it because
 it was
 intended to have a property X, or was it because of some
 kind of
 arbitrary coincidence, was property X intended for valid
 reasons, or
 because programmer Z had a bad mood that morning, etc.
 Unfortunately,
 there is no objective morality, so FAI needs to be made
 good enough
 from the start to eventually be able to recognize what is
 valid and
 what is not, reflectively looking back at its origin, with
 all the
 depth of factual information and optimization power to run
 whatever
 factual queries it needs.
 
 I (vainly) hope this answered (at least some of the) other
 questions as well.
 
 -- 
 Vladimir Nesov
 [EMAIL PROTECTED]
 http://causalityrelay.wordpress.com/
 
 


  




Re: [agi] What is Friendly AI?

2008-09-03 Thread Terren Suydam

Hey Vlad - 

By "considers itself Friendly", I'm referring to an FAI that is renormalizing 
in the sense you suggest. It's an intentional-stance interpretation of what 
it's doing, regardless of whether the FAI is actually considering itself 
Friendly, whatever that would mean.

I'm asserting that if you had an FAI in the sense you've described, it wouldn't 
be possible in principle to distinguish it with 100% confidence from a rogue 
AI. There's no Turing Test for Friendliness.

Terren

--- On Wed, 9/3/08, Vladimir Nesov [EMAIL PROTECTED] wrote:

 From: Vladimir Nesov [EMAIL PROTECTED]
 Subject: Re: [agi] What is Friendly AI?
 To: agi@v2.listbox.com
 Date: Wednesday, September 3, 2008, 5:04 PM
 On Thu, Sep 4, 2008 at 12:46 AM, Terren Suydam
 [EMAIL PROTECTED] wrote:
 
  Hi Vlad,
 
  Thanks for the response. It seems that you're
 advocating an incremental
  approach *towards* FAI, the ultimate goal being full
 attainment of Friendliness...
  something you express as fraught with difficulty but
 not insurmountable.
  As you know, I disagree that it is attainable, because
 it is not possible in
  principle to know whether something that considers
 itself Friendly actually
  is. You have to break a few eggs to make an omelet, as
 the saying goes,
  and Friendliness depends on whether you're the egg
 or the cook.
 
 
 Sorry Terren, I don't understand what you are trying to
 say in the
 last two sentences. What does considering itself
 Friendly means and
 how it figures into FAI, as you use the phrase? What (I
 assume) kind
 of experiment or arbitrary decision are you talking about?
 
 -- 
 Vladimir Nesov
 [EMAIL PROTECTED]
 http://causalityrelay.wordpress.com/
 
 


  




Re: [agi] What is Friendly AI?

2008-09-03 Thread Terren Suydam

I'm talking about a situation where humans must interact with the FAI without 
knowledge in advance about whether it is Friendly or not. Is there a test we 
can devise to make certain that it is?

--- On Wed, 9/3/08, Vladimir Nesov [EMAIL PROTECTED] wrote:

 From: Vladimir Nesov [EMAIL PROTECTED]
 Subject: Re: [agi] What is Friendly AI?
 To: agi@v2.listbox.com
 Date: Wednesday, September 3, 2008, 6:11 PM
 On Thu, Sep 4, 2008 at 1:34 AM, Terren Suydam
 [EMAIL PROTECTED] wrote:
 
  I'm asserting that if you had an FAI in the sense
 you've described, it wouldn't
  be possible in principle to distinguish it with 100%
 confidence from a rogue AI.
  There's no Turing Test for
 Friendliness.
 
 
 You design it to be Friendly, you don't generate an
 arbitrary AI and
 then test it. The latter, if not outright fatal, might
 indeed prove
 impossible as you suggest, which is why there is little to
 be gained
 from AI-boxes.
 
 -- 
 Vladimir Nesov
 [EMAIL PROTECTED]
 http://causalityrelay.wordpress.com/
 
 


  




Re: [agi] Recursive self-change: some definitions

2008-09-03 Thread Terren Suydam

Hi Mike,

I see two ways to answer your question. One is along the lines that Jaron 
Lanier has proposed - the idea of software interfaces that are fuzzy. So rather 
than function calls that take a specific set of well defined arguments, 
software components talk somehow in 'patterns' such that small errors can be 
tolerated. While there would still be a kind of 'code' that executes, the 
process of translating it to processor instructions would be much more highly 
abstracted than any current high level language. I'm not sure I truly grokked 
Lanier's concept, but it's clear that for it to work, this high-level pattern 
idea would still need to somehow translate to instructions the processor can 
execute.
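
A crude sketch of the flavor of that idea (an assumption-laden illustration, 
not Lanier's actual proposal): a dispatcher that routes a request to the 
best-matching handler by similarity rather than demanding an exact signature, 
so small errors in the caller's vocabulary are tolerated.

    from difflib import SequenceMatcher

    # Hypothetical handler table; names and behavior are invented for illustration.
    HANDLERS = {
        "rotate image clockwise": lambda deg: f"rotated {deg} degrees",
        "resize image": lambda pct: f"resized to {pct}%",
    }

    def fuzzy_call(request, *args, threshold=0.6):
        # Score the request against each known handler name and pick the best;
        # small spelling or word-order errors still land on the right handler.
        best_name, best_score = None, 0.0
        for name in HANDLERS:
            score = SequenceMatcher(None, request.lower(), name).ratio()
            if score > best_score:
                best_name, best_score = name, score
        if best_score < threshold:
            raise LookupError(f"no handler close enough to '{request}'")
        return HANDLERS[best_name](*args)

    print(fuzzy_call("rotat image clock-wise", 90))  # typo tolerated

Real pattern-based interfaces would presumably operate on much richer 
representations than strings, but the error-tolerant dispatch is the point.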

The other way of answering this question is in terms of creating simulations 
of things like brains, which don't themselves execute code. You model the 
parallelism in code, and the structures of interest emerge from that model. 
This is the A-Life approach that I advocate.
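
A minimal sketch of what "modeling the parallelism in code" can look like: a 
one-dimensional cellular automaton in Python, where every cell is updated by 
the same local rule, conceptually in parallel (each new generation is computed 
from the previous one), and larger-scale structure emerges that was never 
written into the code. The rule choice and cell count are arbitrary 
illustration:

    import random

    RULE = 110  # an elementary CA rule: local, uniform, deterministic

    def step(cells):
        # Each new cell depends only on its old neighborhood; all cells update
        # "in parallel" even though we compute them in a serial loop.
        n = len(cells)
        return [
            (RULE >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
            for i in range(n)
        ]

    cells = [random.randint(0, 1) for _ in range(64)]
    for _ in range(20):
        print("".join(".#"[c] for c in cells))
        cells = step(cells)

Nothing in the code mentions the larger patterns that show up in the output; 
they are properties of the simulated dynamics rather than of the instructions, 
which is the sense in which the simulation is more than the code that runs it.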

But at bottom, a computer is a processor that executes instructions. Unless 
you're talking about a radically different kind of computer... if so, care to 
elaborate?

Terren

--- On Wed, 9/3/08, Mike Tintner [EMAIL PROTECTED] wrote:
From: Mike Tintner [EMAIL PROTECTED]
Subject: Re: [agi] Recursive self-change: some definitions
To: agi@v2.listbox.com
Date: Wednesday, September 3, 2008, 7:02 PM



 
 

Terren:My own 
feeling is that computation is just the latest in a series of technical 
metaphors that we apply in service of understanding how the universe works. 
Like 
the others before it, it captures some valuable aspects and leaves out others. 
It leaves me wondering: what future metaphors will we apply to the universe, 
ourselves, etc., that will make computation-as-metaphor seem as quaint as the 
old clockworks analogies?

I think this is a good important point. 
I've been groping confusedly here. It seems to me computation necessarily 
involves the idea of using a code (?). But the nervous system seems to me 
something capable of functioning without a code - directly being imprinted on 
by 
the world, and directly forming movements, (even if also involving complex 
hierarchical processes), without any code. I've been wondering whether 
computers 
couldn't also be designed to function without a code in somewhat similar 
fashion.  Any thoughts or ideas of your own?



  

  


Re: [agi] A NewMetaphor for Intelligence - the Computer/Organiser

2008-09-03 Thread Terren Suydam

Mike,

There's nothing particularly creative about keyboards. The creativity comes 
from what uses the keyboard. Maybe that was your point, but if so the 
digression about a keyboard is just confusing.

In terms of a metaphor, I'm not sure I understand your point about 
organizers. It seems to me to refer simply to that which we humans do, which 
in essence says general intelligence is what we humans do.  Unfortunately, I 
found this last email to be quite muddled. Actually, I am sympathetic to a lot 
of your ideas, Mike, but I also have to say that your tone is quite 
condescending. There are a lot of smart people on this list, as one would 
expect, and a little humility and respect on your part would go a long way. 
Saying things like You see, AI-ers simply don't understand computers, or 
understand only half of them.  More often than not you position yourself as 
the sole source of enlightened wisdom on AI and other subjects, and that does 
not make me want to get to know your ideas any better.  Sorry to veer off topic 
here, but I say these things because I think some of your ideas are valid and 
could really benefit from an adjustment in your
 presentation of them, and yourself.  If I didn't think you had anything 
worthwhile to say, I wouldn't bother.

Terren

--- On Wed, 9/3/08, Mike Tintner [EMAIL PROTECTED] wrote:

 From: Mike Tintner [EMAIL PROTECTED]
 Subject: [agi] A NewMetaphor for Intelligence - the Computer/Organiser
 To: agi@v2.listbox.com
 Date: Wednesday, September 3, 2008, 9:42 PM
 Terren's request for new metaphors/paradigms for
 intelligence threw me 
 temporarily off course.Why a new one - why not the old one?
 The computer. 
 But the whole computer.
 
 You see, AI-ers simply don't understand computers, or
 understand only half 
 of them
 
 What I'm doing here is what I said philosophers do -
 outline existing 
 paradigms and point out how they lack certain essential
 dimensions.
 
 When AI-ers look at a computer, the paradigm that they
 impose on it is that 
 of a Turing machine - a programmed machine, a device for
 following programs.
 
 But that is obviously only the half of it.Computers are
 obviously much more 
 than that - and  Turing machines. You just have to look at
 them. It's 
 staring you in the face. There's something they have
 that Turing machines 
 don't. See it? Terren?
 
 They have -   a keyboard.
 
 And as a matter of scientific, historical fact, computers
 are first and 
 foremost keyboards - i.e.devices for CREATING programs  on
 keyboards, - and 
 only then following them. [Remember how AI gets almost
 everything about 
 intelligence back to front?] There is not and never has
 been a program that 
 wasn't first created on a keyboard. Indisputable fact.
 Almost everything 
 that happens in computers happens via the keyboard.
 
 So what exactly is a keyboard? Well, like all keyboards
 whether of 
 computers, musical instruments or typewriters, it is a
 creative instrument. 
 And what makes it creative is that it is - you could say -
 an organiser.
 
 A device with certain organs (in this case
 keys) that are designed to be 
 creatively organised - arranged in creative, improvised
 (rather than 
 programmed) sequences of  action/ association./organ
 play.
 
 And an extension of the body. Of the organism. All
 organisms are 
 organisers - devices for creatively sequencing
 actions/ 
 associations./organs/ nervous systems first and developing
 fixed, orderly 
 sequences/ routines/ programs second.
 
 All organisers are manifestly capable of an infinity of
 creative, novel 
 sequences, both rational and organized, and crazy and
 disorganized.  The 
 idea that organisers (including computers) are only meant
 to follow 
 programs - to be straitjacketed in movement and thought - 
 is obviously 
 untrue. Touch the keyboard. Which key comes first?
 What's the program for 
 creating any program? And there lies the secret of AGI.
 
 
 
 
 


  




Re: [agi] The Necessity of Embodiment

2008-08-30 Thread Terren Suydam
--- On Sat, 8/30/08, Vladimir Nesov [EMAIL PROTECTED] wrote:

 You start with what is right? and end with
 Friendly AI, you don't
 start with Friendly AI and close the circular
 argument. This doesn't
 answer the question, but it defines Friendly AI and thus
 Friendly AI
 (in terms of right).

In your view, then, the AI never answers the question What is right?.  The 
question has already been answered in terms of the algorithmic process that 
determines its subgoals in terms of Friendliness.

Therefore it comes down to whether an algorithmic process can provide the basis 
of action in a way that what is right? could always be answered correctly in 
advance. You say yes it can, but we don't know how, yet. I say no it can't, 
because Friendliness can't be formalized.  
 
 All things change through time, which doesn't make them
 cease to exist.

Friendliness changes through time in the same sense that our morals change 
through time. I didn't say it ceases to exist. The implication here is that 
Friendliness is impossible to specify in any static sense - it must be updated 
as it changes. 
 
 Maybe it
 complicates the
 procedure a little, making the decision procedure
 conditional, if(A)
 press 1, else press 1, or maybe it complicates it
 much more, but it
 doesn't make the challenge ill-defined.

Ah, the expert-systems approach to morality. If I have a rule-book large 
enough, I can always act rightly in the world, is that the idea?  If the 
challenge is not ill-defined, that should be possible, correct?

But the challenge *is* ill-defined, because the kinds of contextual 
distinctions that make a difference in moral evaluation can be so minor or 
seemingly arbitrary. 

For example, it's easy to contrive a scenario in which the situation's moral 
evaluation changes based on what kind of shoes someone is wearing: an endorsed 
athlete willingly wears a competitor's brand in public. 

To make matters more difficult, moral valuations depend as often as not on 
intention. If the athlete in the above example knowingly wears a competitor's 
brand, it suggests a different moral valuation than if he mistakenly wears it. 
That suggests that an AGI will require a theory of mind that can make judgments 
about the intentions of fellow agents/humans, and that these judgments of 
intentionality are fed in to the Friendliness algorithm. 

On top of all that, novel situations with novel combinations of moral concerns 
occur constantly - witness humanity's struggle to understand the moral 
implications of cloning - an algorithm is going to answer that one for us?  

This is not an argument from ignorance. I leave open the possibility that I am 
too limited to see how one could resolve those objections. But it doesn't 
matter whether I can see it or not, it's still true that whoever hopes to 
design a Friendliness algorithm must deal with these objections head-on. And 
you haven't dealt with them, yet, you've merely dismissed them. Which is 
especially worrying coming from someone who essentially tethers the fate of 
humanity to our ability to solve this problem.

Let me put it this way: I would think anyone in a position to offer funding for 
this kind of work would require good answers to the above.
 
Terren



  




Re: [agi] The Necessity of Embodiment

2008-08-30 Thread Terren Suydam

--- On Sat, 8/30/08, Vladimir Nesov [EMAIL PROTECTED] wrote:
 Won't work, Moore's law is ticking, and one day a
 morally arbitrary
 self-improving optimization will go FOOM. We have to try.

I wish I had a response to that. I wish I could believe it was even possible. 
To me, this is like saying we have to try to build an anti-H-bomb device 
before someone builds an H-bomb.  Oh, and the anti-H-bomb looks exactly like 
the H-bomb. It just behaves differently. We have to try, right?

 Given the psychological unity of humankind, giving the
 focus of
 right to George W. Bush personally will be
 enormously better for
 everyone than going in any direction assumed by AI without
 the part of
 Friendliness structure that makes it absorb the goals from
 humanity.
 CEV is an attempt to describe how to focus AI on humanity
 as a whole,
 rather than on a specific human.

Psychological unity of humankind?!  What of suicide bombers and biological 
weapons and all the other charming ways we humans have of killing one another?  
If giving an FAI to George Bush, or Barack Obama, or any other political 
leader, is your idea of Friendliness, then I have to wonder about your grasp of 
human nature. It is impossible to see how that technology would not be used as 
a weapon. 
 
 And you are assembling the H-bomb (err, evolved
 intelligence) in the
 garage just out of curiosity, and occasionally to use it as
 a tea
 table, all the while advocating global disarmament.

That's why I advocate limiting the scope and power of any such creation, which 
is possible because it's simulated, and not RSI.  
 
  The question is whether it's possible to know in advance that a modification
  won't be unstable, within the finite computational resources available to an AGI.
 
 If you write something redundantly 10^6 times, it won't
 all just
 spontaneously *change*, in the lifetime of the universe. In
 the worst
 case, it'll all just be destroyed by some catastrophe
 or another, but
 it won't change in any interesting way.

You lost me there - not sure how that relates to whether it's possible to know 
in advance that a modification won't be unstable, within the finite 
computational resources available to an AGI.
 
  With the kind of recursive scenarios we're talking
 about, simulation is the only
  way to guarantee that a modification is an
 improvement, and an AGI simulating
  its own modified operation requires exponentially
 increasing resources, particularly
  as it simulates itself simulating itself simulating
 itself, and so on for N future
  modifications.
 
 Again, you are imagining an impossible or faulty strategy,
 pointing to
 this image, and saying don't do that!.
 Doesn't mean there is no good
 strategy.

What was faulty or impossible about what I wrote? 

Terren


  




Re: [agi] The Necessity of Embodiment

2008-08-30 Thread Terren Suydam

I agree with that to the extent that theoretical advances could address the 
philosophical objections I am making. But until those are dealt with, 
experimentation is a waste of time and money.

If I was talking about how to build faster-than-lightspeed travel, you would 
want to know how I plan to overcome theoretical limitations. You wouldn't fund 
experimentation on that until those objections on principle had been dealt with.

--- On Sat, 8/30/08, Ben Goertzel [EMAIL PROTECTED] wrote:
About Friendly AI..





Let me put it this way: I would think anyone in a position to offer funding for 
this kind of work would require good answers to the above.



Terren

My view is a little different.  I think these answers are going to come out of 
a combination of theoretical advances with lessons learned via experimenting 
with early-stage AGI systems, rather than being arrived at in-advance based on 
pure armchair theorization...


-- Ben G
 






  

  


Re: [agi] The Necessity of Embodiment

2008-08-30 Thread Terren Suydam
comments below...

[BG]
Hi,

Your philosophical objections aren't really objections to my perspective, so 
far as I have understood so far...

[TS]
Agreed. They're to the Eliezer perspective that Vlad is arguing for.

[BG]
I don't plan to hardwire beneficialness (by which I may not mean precisely the 
same thing as Friendliness in Eliezer's vernacular), I plan to teach it ... 
to an AGI with an architecture that's well-suited to learn it, by design...


[TS]
This is essentially what we do with our kids, so no objections to the 
methodology here. But from the "you have to guarantee it or we're doomed" 
perspective, that's not good enough.

[BG]
I do however plan to hardwire **a powerful, super-human capability for 
empathy** ... and a goal-maintenance system hardwired toward **stability of 
top-level goals under self-modification**.   But I agree this is different from 
hardwiring specific goal content ... though it strongly *biases* the system 
toward learning certain goals.

[TS]
Hardwired empathy strikes me as a basic oxymoron. Empathy must involve embodied 
experience and the ability to imagine the embodied experience of another. When 
we have an empathic experience, it's because we see ourselves in another's 
situation - it's hard to understand what empathy could mean without that basic  
subjective aspect.




  




Re: [agi] How Would You Design a Play Machine?

2008-08-29 Thread Terren Suydam

--- On Fri, 8/29/08, Jiri Jelinek [EMAIL PROTECTED] wrote:
 I don't see why an un-embodied system couldn't
 successfully use the
 concept of self in its models. It's just another
 concept, except that
 it's linked to real features of the system.

To an unembodied agent, the concept of "self" is indistinguishable from any 
other concept it works with. I use "concept" in quotes because to the 
unembodied agent, it is not a concept at all, but merely a symbol with no 
semantic context attached. All such an agent can do is perform operations on 
ungrounded symbols 
- at best, the result of which can appear to be intelligent within some domain 
(e.g., a chess program).

 Even though this particular
 AGI never
 heard about any of those other tools being used for cutting
 bread (and
 is not self-aware in any sense), it still can (when asked
 for advice)
 make a reasonable suggestion to try the T2
 (because of the
 similarity) = coming up with a novel idea 
 demonstrating general
 intelligence.

Sounds like magic to me. You're taking something that we humans can do and 
sticking it in as a black box into a hugely simplified agent in a way that 
imparts no understanding about how we do it.  Maybe you left that part out for 
brevity - care to elaborate?

Terren


  




Re: AGI goals (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment))

2008-08-29 Thread Terren Suydam

--- On Fri, 8/29/08, Mark Waser [EMAIL PROTECTED] wrote:
 Saying that ethics is entirely driven by evolution is NOT
 the same as saying 
 that evolution always results in ethics.  Ethics is 
 computationally/cognitively expensive to successfully
 implement (because a 
 stupid implementation gets exploited to death).  There are
 many evolutionary 
 niches that won't support that expense and the
 successful entities in those 
 niches won't be ethical.  Parasites are a
 prototypical/archetypal example of 
 such a niche since they tend to degeneratively streamlined
 to the point of 
 being stripped down to virtually nothing except that which
 is necessary for 
 their parasitism.  Effectively, they are single goal
 entities -- the single 
 most dangerous type of entity possible.

Works for me. Just wanted to point out that saying ethics is entirely driven 
by evolution is not enough to communicate with precision what you mean by that.
 
 OK.  How about this . . . . Ethics is that behavior that,
 when shown by you, 
 makes me believe that I should facilitate your survival. 
 Obviously, it is 
 then to your (evolutionary) benefit to behave ethically.

Ethics can't be explained simply by examining interactions between individuals. 
It's an emergent dynamic that requires explanation at the group level. It's a 
set of culture-wide rules and taboos - how did they get there?
 
 Matt is decades out of date and needs to catch up on his
 reading.

Really? I must be out of date too then, since I agree with his explanation of 
ethics. I haven't read Hauser yet though, so maybe you're right.
 
 Ethics is *NOT* the result of group selection.  The
 *ethical evaluation of a 
 given action* is a meme and driven by the same social/group
 forces as any 
 other meme.  Rational memes when adopted by a group can
 enhance group 
 survival but . . . . there are also mechanisms by which
 seemingly irrational 
 memes can also enhance survival indirectly in *exactly* the
 same fashion as 
 the seemingly irrational tail displays of
 peacocks facilitates their group 
 survival by identifying the fittest individuals.  Note that
 it all depends 
 upon circumstances . . . .
 
 Ethics is first and foremost what society wants you to do. 
 But, society 
 can't be too pushy in it's demands or individuals
 will defect and society 
 will break down.  So, ethics turns into a matter of
 determining what is the 
 behavior that is best for society (and thus the individual)
 without unduly 
 burdening the individual (which would promote defection,
 cheating, etc.). 
 This behavior clearly differs based upon circumstances but,
 equally clearly, 
 should be able to be derived from a reasonably small set of
 rules that 
 *will* be context dependent.  Marc Hauser has done a lot of
 research and 
 human morality seems to be designed exactly that way (in
 terms of how it 
 varies across societies as if it is based upon fairly
 simple rules with a 
 small number of variables/variable settings.  I highly
 recommend his 
 writings (and being familiar with them is pretty much a
 necessity if you 
 want to have a decent advanced/current scientific
 discussion of ethics and 
 morals).
 
 Mark

I fail to see how your above explanation is anything but an elaboration of the 
idea that ethics is due to group selection. The following statements all 
support it: 
 - memes [rational or otherwise] when adopted by a group can enhance group 
survival
 - Ethics is first and foremost what society wants you to do.
 - ethics turns into a matter of determining what is the behavior that is best 
for society

Terren


  




Re: [agi] The Necessity of Embodiment

2008-08-28 Thread Terren Suydam

  It doesn't matter what I do with the question. It
 only matters what an AGI does with it.
 
 AGI doesn't do anything with the question, you do. You
 answer the
 question by implementing Friendly AI. FAI is the answer to
 the
 question.

The question is: how could one specify Friendliness in such a way that an AI 
will be guaranteed-Friendly? Is your answer to that really just you build a 
Friendly AI?  Why do I feel like a dog chasing my own tail?

I've been saying that Friendliness is impossible to implement because 1) it's a 
moving target (as in, changes through time), since 2) its definition is 
dependent on context (situational context, cultural context, etc).  In other 
words, Friendliness is not something that can be hardwired. It can't be 
formalized, coded, designed, implemented, or proved. It is an invention of the 
collective psychology of humankind, and every bit as fuzzy as that sounds. At 
best, it can be approximated. 

I'll put a challenge out to demonstrate my claim. I challenge anyone who 
believes that Friendliness is attainable in principle to construct a scenario 
in which there is a clear right action that does not depend on cultural or 
situational context. If you say, an AGI is alone in a room with a human. That 
AGI should not kill the human. I say, what if the human in the room has just 
killed a hundred people in cold blood, and will certainly kill more?  OK, you 
up the ante: it's a child who hasn't killed anyone. I say: yet. The child is 
contagious with an extremely deadly airborne pathogen.  So you say, ok, fine, 
the child is healthy. I say: what if the child has asked the AI to assist in 
her suicide? Let's say the child's father has dishonored the family and in this 
child's culture, whenever a father does a terrible thing, the family is 
expected to commit suicide. If this child does not commit suicide, it will 
bring even greater dishonor to the extended
 family, who will all be ritually massacred.

You see where I'm going, I hope. You can always construct increasingly 
elaborate scenarios based on nothing but human culture and the valuations that 
go with it.  Friendliness *must* take these cultural considerations seriously, 
because that's what a particular culture's morality is based on. And if you 
accept this, you have to see that these valuations change through time, that 
they are essentially invented. From an objective standpoint, the best you can 
do is to show that the morality of a particular culture lends stability to that 
collective. But cultural stability does not imply preservation of individual 
life, or human rights in general - they are separate concepts.

The only out is if there is such a thing as objective morality... if you can 
specify right from wrong without any reference to a particular set of cultural 
valuations. 

  If you can't guarantee Friendliness, then
 self-modifying approaches to
  AGI should just be abandoned. Do we agree on that?
 
 More or less, but keeping in mind that
 guarantee doesn't need to be
 a formal proof of absolute certainty. If you can't show
 that a design
 implements Friendliness, you shouldn't implement it.

What does guarantee mean if not absolute certainty?  

Terren


  




Re: [agi] The Necessity of Embodiment

2008-08-28 Thread Terren Suydam
--- On Wed, 8/27/08, Vladimir Nesov [EMAIL PROTECTED] wrote:
 One of the main motivations for the fast development of
 Friendly AI is
 that it can be allowed to develop superintelligence to
 police the
 human space from global catastrophes like Unfriendly AI,
 which
 includes as a special case a hacked design of Friendly AI
 made
 Unfriendly.

That is certainly the most compelling reason to do this kind of research. And I 
wish I had something more than "disallow self-modifying approaches", as if that 
would be enforceable. But I just don't see Friendliness as attainable, in 
principle, so I think we treat this like nuclear weaponry - we do our best to 
prevent it.
 
 If we can understand it and know that it does what we want,
 we don't
 need to limit its power, because it becomes our power. 

Whose power? Who is referred to by "our"? More importantly, whose agenda is 
served by this power? Power corrupts. One culture's good is another's evil. 
What we call Friendly, our political enemies might call Unfriendly. If you 
think no agenda would be served, you're naive. And if you think the AGI would 
somehow know not to serve its masters, in the name of Friendliness to humanity 
as a whole, then you believe in an objective morality... in a universally 
compelling argument.

 With simulated
 intelligence, understanding might prove as difficult as in
 neuroscience, studying resulting design that is unstable
 and thus in
 long term Unfriendly. Hacking it to a point of Friendliness
 would be
 equivalent to solving the original question of
 Friendliness,
 understanding what you want, and would in fact involve
 something close
 to hands-on design, so it's unclear how much help
 experiments can
 provide in this regard relative to default approach.

Agreed, although I would not advocate hacking Friendliness. I'd advocate 
limiting the simulated environment in which the agent exists. The point of this 
line of reasoning is to avoid the Singularity, period. Perhaps that's every bit 
as unrealistic as I believe Friendliness to be.
 
 It's self-improvement, not self-retardation. If
 modification is
 expected to make you unstable and crazy, don't do that
 modification,
 add some redundancy instead and think again.

The question is whether it's possible to know in advance that a modification 
won't be unstable, within the finite computational resources available to an 
AGI. With the kind of recursive scenarios we're talking about, simulation is 
the only way to guarantee that a modification is an improvement, and an AGI 
simulating its own modified operation requires exponentially increasing 
resources, particularly as it simulates itself simulating itself simulating 
itself, and so on for N future modifications.
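
To put a rough number on that, assume (purely for illustration) that 
faithfully simulating one modified copy of yourself costs some constant factor 
c times whatever that copy itself costs to run. Then vetting N future 
modifications by simulation-within-simulation compounds to roughly c^N of the 
base runtime:

    def nested_simulation_cost(n_mods, c=2.0):
        # Assumed model: each level of simulation costs c times the thing it
        # simulates, so n_mods nested levels cost c**n_mods of the base cost.
        return c ** n_mods

    for n in (1, 5, 10, 20, 30):
        print(f"{n:>2} nested modifications -> ~{nested_simulation_cost(n):,.0f}x base cost")

Even a modest c makes deep self-simulation astronomically expensive, which is 
the sense in which a guarantee-by-simulation strategy runs out of resources.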

  What does it compare *against*?
 
 Originally, it compares against humans, later
 on it improves on the
 information about the initial conditions, renormalizing the
 concept
 against itself.

For it to compare against humans suggests that it's possible for humans to 
specify Friendliness to an AGI, and I have dealt with that elsewhere. 

I was expecting you to say that renormalizing continues to occur *against 
humans*, not itself. How would it account for the possibility that what humans 
consider Friendly changes through time? 
 
Terren


  




Re: AGI goals (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment))

2008-08-28 Thread Terren Suydam

--- On Thu, 8/28/08, Mark Waser [EMAIL PROTECTED] wrote:
 Actually, I *do* define good and ethics not only in
 evolutionary terms but 
 as being driven by evolution.  Unlike most people, I
 believe that ethics is 
 *entirely* driven by what is best evolutionarily while not
 believing at all 
 in red in tooth and claw.  I can give you a
 reading list that shows that 
 the latter view is horribly outdated among people who keep
 up with the 
 research rather than just rehashing tired old ideas.

I think it's a stretch to derive ethical ideas from what you refer to as best 
evolutionarily.  Parasites are pretty freaking successful, from an 
evolutionary point of view, but nobody would say parasitism is ethical.

Terren


  




Re: [agi] How Would You Design a Play Machine?

2008-08-28 Thread Terren Suydam

Hi Jiri, 

Comments below...

--- On Thu, 8/28/08, Jiri Jelinek [EMAIL PROTECTED] wrote:
 That's difficult to reconcile if you don't
 believe embodiment is all that important.
 
 Not really. We might be qualia-driven, but for our AGIs
 it's perfectly
 ok (and only natural) to be driven by given
 goals.

I've argued elsewhere that goals that are not grounded in an AGI's experience 
impart no meaning. Either an agent has some kind of embodied experience, in 
which case a goal specified from the outside is not grounded in anything the 
agent can relate to, or it is not embodied at all, in which case it is a 
mindless automaton.
 
 question I would pose to you non-embodied advocates is:
 how in the world will you motivate your creation? I suppose
 that you won't. You'll just tell it what to do
 (specify its goals) and it will do it..
 
 Correct. AGIs driven by human-like-qualia would be less
 safe  harder
 to control. Human-like-qualia are too high-level to be
 safe. When
  implementing qualia (not that we know how to do it ;-))
  increasing
 granularity for safety, you would IMO end up with basically
 giving
 the goals - which is of course easier without messing
 with qualia
 implementation. Forget qualia as a motivation for our AGIs.
 Our AGIs
 are supposed to work for us, not for themselves.
 
So much talk about Friendliness implies that the AGI will have no ability to 
choose its own goals. It seems that AGI researchers are usually looking to 
create clever slaves. That may fit your notion of general intelligence, but not 
mine.

Terren


  




Re: AGI goals (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment))

2008-08-28 Thread Terren Suydam

Hi Mark,

Obviously you need to complicate your original statement "I believe that 
ethics is *entirely* driven by what is best evolutionarily..." in such a way 
that we don't derive ethics from parasites. You did that by invoking social 
behavior - parasites are not social beings.

So from there you need to identify how evolution operates in social groups in 
such a way that you can derive ethics. As Matt alluded to before, would you 
agree that ethics is the result of group selection? In other words, that human 
collectives with certain taboos make the group as a whole more likely to 
persist?

Terren


--- On Thu, 8/28/08, Mark Waser [EMAIL PROTECTED] wrote:

 From: Mark Waser [EMAIL PROTECTED]
 Subject: Re: AGI goals (was Re: Information theoretic approaches to AGI (was 
 Re: [agi] The Necessity of Embodiment))
 To: agi@v2.listbox.com
 Date: Thursday, August 28, 2008, 9:21 PM
 Parasites are very successful at surviving but they
 don't have other 
 goals.  Try being parasitic *and* succeeding at goals other
 than survival. 
 I think you'll find that your parasitic ways will
 rapidly get in the way of 
 your other goals the second that you need help (or even
 non-interference) 
 from others.
 
 - Original Message - 
 From: Terren Suydam [EMAIL PROTECTED]
 To: agi@v2.listbox.com
 Sent: Thursday, August 28, 2008 5:03 PM
 Subject: Re: AGI goals (was Re: Information theoretic
 approaches to AGI (was 
 Re: [agi] The Necessity of Embodiment))
 
 
 
  --- On Thu, 8/28/08, Mark Waser
 [EMAIL PROTECTED] wrote:
  Actually, I *do* define good and ethics not only
 in
  evolutionary terms but
  as being driven by evolution.  Unlike most people,
 I
  believe that ethics is
  *entirely* driven by what is best evolutionarily
 while not
  believing at all
  in red in tooth and claw.  I can give
 you a
  reading list that shows that
  the latter view is horribly outdated among people
 who keep
  up with the
  research rather than just rehashing tired old
 ideas.
 
  I think it's a stretch to derive ethical ideas
 from what you refer to as 
  best evolutionarily.  Parasites are pretty
 freaking successful, from an 
  evolutionary point of view, but nobody would say
 parasitism is ethical.
 
  Terren
 
 
 
 
 
  
 
 
 
 


  




Re: [agi] How Would You Design a Play Machine?

2008-08-28 Thread Terren Suydam

Jiri,

I think where you're coming from is a perspective that doesn't consider or 
doesn't care about the prospect of a conscious intelligence, an awake being 
capable of self reflection and free will (or at least the illusion of it).

I don't think any kind of algorithmic approach, which is to say, un-embodied, 
will ever result in conscious intelligence. But an embodied agent that is able 
to construct ever-deepening models of its experience such that it eventually 
includes itself in its models, well, that is another story. I think btw that is 
a valid description of humans.

We may argue about whether consciousness (mindfulness) is necessary for general 
intelligence. I think it is, and that informs much of my perspective. When I 
say something like mindless automaton, I'm implicitly suggesting that it 
won't be intelligent in a general sense, although it could be in a narrow sense 
(like a chess program).

Terren


--- On Thu, 8/28/08, Jiri Jelinek [EMAIL PROTECTED] wrote:

 From: Jiri Jelinek [EMAIL PROTECTED]
 Subject: Re: [agi] How Would You Design a Play Machine?
 To: agi@v2.listbox.com
 Date: Thursday, August 28, 2008, 10:39 PM
 Terren,
 
 is not embodied at all, in which case it is a mindless
 automaton
 
 Researchers and philosophers define mind and intelligence
 in many
 different ways = their classifications of particular AI
 systems
 differ. What really counts though are problem solving
 abilities of the
 system. Not how it's labeled according to a particular
 definition of
 mind.
 
  So much talk about Friendliness implies that the AGI
 will have no ability to choose its own goals.
 
 Developer's choice.. My approach:
 Main goals - definitely not;
 Sub goals - sure, with restrictions though.
 
 It seems that AGI researchers are usually looking to
 create clever slaves.
 
 We are talking about our machines.
 What else are they supposed to be?
 
 clever slaves. That may fit your notion of general
 intelligence, but not mine.
 
 To me, general intelligence is a cross-domain ability to
 gain
 knowledge in one context and correctly apply it in another
 [in terms
 of problem solving]. The source of the primary goal(s)
 (/problem(s) to
 solve) doesn't (from my perspective) have anything to
 do with the
 level of system's intelligence. It doesn't make it
 more or less
 intelligent. It's just a separate thing. The system
 gets the initial
 goal [from whatever source] and *then* it's time to
 apply its
 intelligence.
 
 Regards,
 Jiri Jelinek
 
 


Re: [agi] How Would You Design a Play Machine?

2008-08-26 Thread Terren Suydam

That's a fair criticism. I did explain what I mean by embodiment in a previous 
post, and what I mean by autonomy in the article of mine I referenced. But I do 
recognize that in both cases there is still some ambiguity, so I will withdraw 
the question until I can formulate it in more concise terms. 

Terren

--- On Tue, 8/26/08, Vladimir Nesov [EMAIL PROTECTED] wrote:
 
 This is fuzzy, mysterious and frustrating. Unless you
 *functionally*
 explain what you mean by autonomy and embodiment, the
 conversation
 degrades to a kind of meaningless philosophy that occupied
 some smart
 people for thousands of years without any results.
 
 -- 
 Vladimir Nesov
 [EMAIL PROTECTED]
 http://causalityrelay.wordpress.com/
 
 


Re: [agi] The Necessity of Embodiment

2008-08-26 Thread Terren Suydam

Are you saying Friendliness is not context-dependent?  I guess I'm struggling 
to understand what a conceptual dynamics would mean that isn't dependent on 
context. The AGI has to act, and at the end of the day, its actions are our 
only true measure of its Friendliness. So I'm not sure what it could mean to 
say that Friendliness isn't expressed in individual decisions.

--- On Tue, 8/26/08, Vladimir Nesov [EMAIL PROTECTED] wrote:
 
 Friendliness is not object-level output of AI, not
 individual
 decisions that it makes in certain contexts. Friendliness
 is a
 conceptual dynamics that is embodied by AI, underlying any
 specific
 decisions. And likewise Friendliness is derived not from
 individual
 actions of humans, but from underlying dynamics imperfectly
 implemented in humans, which in turn doesn't equate
 with
 implementation of humans, but is an aspect of this
 implementation
 which we can roughly refer to.
 
 -- 
 Vladimir Nesov
 [EMAIL PROTECTED]
 http://causalityrelay.wordpress.com/
 
 


Re: [agi] How Would You Design a Play Machine?

2008-08-26 Thread Terren Suydam

I don't think it's necessary to be self-aware to do self-modifications. 
Self-awareness implies that the entity has a model of the world that separates 
self from other, but this kind of distinction is not necessary to do 
self-modifications. It could act on itself without the awareness that it was 
acting on itself.  (Goedelian machines would qualify, imo).

The reverse is true, as well. Humans are self-aware but we cannot improve 
ourselves in the dangerous ways we talk about with the hard-takeoff scenarios 
of the Singularity. We ought to be worried about self-modifying agents, yes, 
but self-aware agents that can't modify themselves are much less worrying. 
They're all around us.

--- On Tue, 8/26/08, David Hart [EMAIL PROTECTED] wrote:
From: David Hart [EMAIL PROTECTED]
Subject: Re: [agi] How Would You Design a Play Machine?
To: agi@v2.listbox.com
Date: Tuesday, August 26, 2008, 8:20 AM

On 8/26/08, Mike Tintner [EMAIL PROTECTED] wrote:
Is anyone trying to design a self-exploring robot or computer? Does this 
principle have a name?
Interestingly, some views on AI advocate specifically prohibiting 
self-awareness and self-exploration as a precaution against the development of 
unfriendly AI. In my opinion, these views erroneously transfer familiar human 
motives onto 'alien' AGI cognitive architectures - there's a history of 
discussing this topic  on SL4 and other places.


I believe however that most approaches to designing AGI (those that do not 
specifically prohibit self-aware and self-explorative behaviors) take for 
granted, and indeed intentionally promote, self-awareness and self-exploration 
at most stages of AGI development. In other words, efficient and effective 
recursive self-improvement (RSI) requires self-awareness and self-exploration. 
If any term exists to describe a 'self-exploring robot or computer', that term 
is RSI. Coining a lesser term for 'self-exploring AI' may be useful in some 
proto-AGI contexts, but I suspect that 'RSI' is ultimately a more useful and 
meaningful term.


-dave





  

  
  agi | Archives

 | Modify
 Your Subscription


  

  





  


---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=111637683-c8fa51
Powered by Listbox: http://www.listbox.com


Re: [agi] The Necessity of Embodiment

2008-08-26 Thread Terren Suydam

If Friendliness is an algorithm, it ought to be a simple matter to express what 
the goal of the algorithm is. How would you define Friendliness, Vlad?

--- On Tue, 8/26/08, Vladimir Nesov [EMAIL PROTECTED] wrote:
 It is expressed in individual decisions, but it isn't
 these decisions
 themselves. If a decision is context-dependent, it
 doesn't translate
 into Friendliness being context-dependent (what would it
 even mean?).
 Friendliness is an algorithm implemented in a calculator
 (or an
 algorithm for assembling a calculator), it is not the
 digits that show
 on its display depending on what buttons were pressed. On
 the other
 hand, the initial implementation of Friendliness leads to
 very
 different dynamics, depending on what sort of morality it
 is referred
 to (see
 http://www.overcomingbias.com/2008/08/mirrors-and-pai.html
 ).
 
 -- 
 Vladimir Nesov
 [EMAIL PROTECTED]
 http://causalityrelay.wordpress.com/
 
 


Re: [agi] The Necessity of Embodiment

2008-08-26 Thread Terren Suydam

I didn't say the algorithm needs to be simple, I said the goal of the algorithm 
ought to be simple. What are you trying to compute? 

Your answer is, what is the right thing to do?

The obvious next question is, what does the right thing mean?  The only way 
that the answer to that is not context-dependent is if there's such a thing as 
objective morality, something you've already dismissed by referring to the 
"no universally compelling arguments" post on the Overcoming Bias blog.

You have to concede here that Friendliness is not objective. Therefore, it 
cannot be expressed formally. It can only be approximated, with error. 


--- On Tue, 8/26/08, Vladimir Nesov [EMAIL PROTECTED] wrote:

 From: Vladimir Nesov [EMAIL PROTECTED]
 Subject: Re: [agi] The Necessity of Embodiment
 To: agi@v2.listbox.com
 Date: Tuesday, August 26, 2008, 1:21 PM
 On Tue, Aug 26, 2008 at 8:54 PM, Terren Suydam
 [EMAIL PROTECTED] wrote:
 
  If Friendliness is an algorithm, it ought to be a
 simple matter to express
  what the goal of the algorithm is. How would you
 define Friendliness, Vlad?
 
 
 Algorithm doesn't need to be simple. The actual
 Friendly AI that
 started to incorporate properties of human morality in it
 is a very
 complex algorithm, and so is the human morality itself.
 Original
 implementation of Friendly AI won't be too complex
 though, it'll only
 need to refer to the complexity outside in a right way, so
 that it'll
 converge on dynamic with the right properties. Figuring out
 what this
 original algorithm needs to be, not to count the technical
 difficulties of implementing it, is very tricky though. You
 start from
 the question what is the right thing to do?
 applied in the context
 of unlimited optimization power, and work on extracting a
 technical
 answer, surfacing the layers of hidden machinery that
 underlie this
 question when *you* think about it, translating the
 question into a
 piece of engineering that answers it, and this is Friendly
 AI.
 
 -- 
 Vladimir Nesov
 [EMAIL PROTECTED]
 http://causalityrelay.wordpress.com/
 
 


Re: [agi] The Necessity of Embodiment

2008-08-26 Thread Terren Suydam

It doesn't matter what I do with the question. It only matters what an AGI does 
with it. 

I'm challenging you to demonstrate how Friendliness could possibly be specified 
in the formal manner that is required to *guarantee* that an AI whose goals 
derive from that specification would actually do the right thing.

If you can't guarantee Friendliness, then self-modifying approaches to AGI 
should just be abandoned. Do we agree on that?

Terren

--- On Tue, 8/26/08, Vladimir Nesov [EMAIL PROTECTED] wrote:
 The question itself doesn't exist in vacuum. When
 *you*, as a human,
 ask it, there is a very specific meaning associated with
 it. You don't
 search for the meaning that the utterance would
 call in a
 mind-in-general, you search for meaning that *you* give to
 it. Or, to
 make the it more reliable, for the meaning given by the
 idealized
 dynamics implemented in you (
 http://www.overcomingbias.com/2008/08/computations.html ).
 
 -- 
 Vladimir Nesov
 [EMAIL PROTECTED]
 http://causalityrelay.wordpress.com/
 
 


Re: [agi] The Necessity of Embodiment

2008-08-26 Thread Terren Suydam

--- On Tue, 8/26/08, Vladimir Nesov [EMAIL PROTECTED] wrote:
 But what is safe, and how to improve safety? This is a
 complex goal
 for complex environment, and naturally any solution to this
 goal is
 going to be very intelligent. Arbitrary intelligence is not
 safe
 (fatal, really), but what is safe is also intelligent.

Look, the bottom line is that even if you could somehow build a self-modifying 
AI that was provably Friendly, some evil hacker could come along and modify the 
code. One way or another, we have to treat all smarter-than-us intelligences as 
inherently risky.  

So safe, for me, refers instead to the process of creating the intelligence. 
Can we stop it? Can we understand it?  Can we limit its scope, its power?  With 
simulated intelligences, the answer to all of the above is yes. Pinning your 
hopes of safe AI on the Friendliness of the AI is the mother of all gambles, 
one that in a well-informed democratic process would surely not be undertaken.

 There is no law that makes large computations less lawful
 than small
 computations, if it is in the nature of computation to
 preserve
 certain invariants. A computation that multiplies two huge
 numbers
 isn't inherently more unpredictable than computation
 that multiplies
 two small numbers. 

I'm not talking about straight-forward, linear computation. Since we're talking 
about self-modification, the computation is necessarily recursive and 
iterative. Recursive computation can easily lead to chaos (as in chaos theory, 
not disorder).

The archetypal example of this is the logistic map from population dynamics, 
x_{n+1} = r*x_n*(1 - x_n), which is applied recursively at each time interval. 
For values of r above roughly 3.57, the behavior is chaotic and thus 
unpredictable, which is a surprising result for such a simple equation.
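
To make that concrete, here is a minimal Python sketch (purely illustrative; 
the parameter values are standard textbook choices, nothing to do with any 
particular AGI design). Two trajectories that start a billionth apart end up 
completely different:

# logistic map x_{n+1} = r*x_n*(1 - x_n); chaotic for r around 3.9
def logistic_trajectory(r, x0, steps):
    xs = []
    x = x0
    for _ in range(steps):
        x = r * x * (1 - x)
        xs.append(x)
    return xs

r = 3.9
a = logistic_trajectory(r, 0.200000000, 60)
b = logistic_trajectory(r, 0.200000001, 60)   # starting value differs by 1e-9
for n in (10, 30, 50):
    print(n, round(a[n], 6), round(b[n], 6),
          "difference:", round(abs(a[n] - b[n]), 6))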

I'm making a rather broad analogy here by comparing the above example to a 
self-modifying AGI, but the principle holds. An AGI with present goal system G 
computes the Friendliness of a modification M, based on G. It decides to go 
ahead with the modification. This next iteration results in goal system G'. And 
so on, performing Friendliness computations against the resulting goal systems. 
In what sense could one guarantee that this process would not lead to chaos?  
I'm not sure you could even guarantee it would continue self-modifying.
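
As a toy numerical illustration of that G -> G' drift (the "goal system" here 
is just a vector and the "Friendliness check" just a similarity threshold, so 
this proves nothing about any real architecture, but it shows how a check made 
only against the *current* goal system permits cumulative drift):

import math, random
random.seed(1)

def unit(v):
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def cosine(a, b):
    return sum(x * y for x, y in zip(a, b))

original = unit([random.gauss(0, 1) for _ in range(8)])   # the goals we started with
current = original[:]                                     # G, then G', G'', ...

for step in range(1, 201):
    proposal = unit([c + random.gauss(0, 0.05) for c in current])
    # each modification is vetted against the CURRENT goal system, not the original
    if cosine(proposal, current) > 0.99:
        current = proposal
    if step % 50 == 0:
        print(step, "similarity to original goals:",
              round(cosine(current, original), 3))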

 You have intuitive
 expectation that making Z will make AI uncontrollable,
 which will lead
 to a bad outcome, and so you point out that this design
 that suggests
 doing Z will turn out bad. But the answer is that AI itself
 will check
 whether Z is expected to lead to a good outcome before
 making a
 decision to implement Z.

As has been pointed out before, by others, the goal system can drift as the 
modifications are applied. The question once again is, in what *objective 
sense* can the AI validate that its Friendliness algorithm corresponds to what 
humans actually consider to be Friendly?  What does it compare *against*?
 
 This remark makes my note that the field of AI actually did
 something
 for the last 50 years not that minor. Again you make an
 argument from
 ignorance: I do not know how to do it, nobody knows how to
 do it,
 therefore it can not be done. Argue from knowledge, not
 from
 ignorance. If you know the path, follow it, describe it. If
 you know
 that the path has a certain property, show it. If you know
 that a
 class of algorithms doesn't find a path, say that these
 algorithms
 won't give the answer. But if you are lost, if your map
 is blank,
 don't assert that the territory is blank also, for you
 don't know.

You can do better than that, I hope. I'm not saying it can't be done just 
because I don't know how to do it. I'm giving you epistemological objections 
for why Friendliness can't be specified. It's an argument from principle. If 
those objections are valid, the fanciest algorithm in the world won't solve the 
problem (assuming finite resources, of course). Address those objections first 
before you pick on my ignorance about Friendliness algorithms.
 
 Causal models are not perfect, you say. But perfection is
 causal,
 physical laws are the most causal phenomenon. All the
 causal rules
 that we employ in our approximate models of environment are
 not
 strictly causal, they have exceptions. Evolution has the
 advantage of
 optimizing with the whole flow of environment, but
 evolution doesn't
 have any model of this environment, the counterpart of
 human models in
 evolution is absent. What it has is a simple regularity in
 the
 environment, natural selection. With all the imperfections,
 human
 models of environment are immensely more precise than this
 regularity
 that relies on natural repetition of context. Evolution
 doesn't have a
 perfect model, it has an exceedingly simplistic model, so
 simple in
 fact that it managed to *emerge* by chance. Humans with
 their
 admittedly limited intelligence, on the other hand, already
 manage to
 

Re: [agi] How Would You Design a Play Machine?

2008-08-25 Thread Terren Suydam

Actually, kittens play because it's fun. Evolution has equipped them with the 
rewarding sense of fun because it optimizes their fitness as hunters. But 
kittens are adaptation executors, evolution is the fitness optimizer. It's a 
subtle but important distinction.

See http://www.overcomingbias.com/2007/11/adaptation-exec.html

Terren

They're adaptation executors, not fitness optimizers. 

--- On Mon, 8/25/08, Matt Mahoney [EMAIL PROTECTED] wrote:
 Kittens play with small moving objects because it teaches
 them to be better hunters. Play is not a goal in itself, but
 a subgoal that may or may not be a useful part of a
 successful AGI design.
 
  -- Matt Mahoney, [EMAIL PROTECTED]
 
 
 
 - Original Message 
 From: Mike Tintner [EMAIL PROTECTED]
 To: agi@v2.listbox.com
 Sent: Monday, August 25, 2008 8:59:06 AM
 Subject: Re: [agi] How Would You Design a Play Machine?
 
 Brad,
 
 That's sad.  The suggestion is for a mental exercise,
 not a full-scale 
 project. And play is fundamental to the human mind-and-body
 - it 
 characterises our more mental as well as more physical
 activities - 
 drawing, designing, scripting, humming and singing scat in
 the bath, 
 dreaming/daydreaming and much more. It is generally 
 acknowledged by 
 psychologists to be an essential dimension of creativity -
 which is the goal 
 of AGI. It is also an essential dimension of animal
 behaviour and animal 
 evolution.  Many of the smartest companies have their play
 areas.
 
 But I'm not aware of any program or computer design for
 play - as distinct 
 from elaborating systematically and methodically or
 genetically on 
 themes - are you? In which case it would be good to think
 about one - it'll 
 open your mind and give you new perspectives.
 
 This should be a group where people are not too frightened
 to play around 
 with ideas.
 
 Brad: Mike Tintner wrote: ...how would you design
 a play machine - a 
 machine
  that can play around as a child does?
 
  I wouldn't.  IMHO that's just another waste of
 time and effort (unless 
  it's being done purely for research purposes). 
 It's a diversion of 
  intellectual and financial resources that those
 serious about building an 
  AGI any time in this century cannot afford.  I firmly
 believe if we had 
  not set ourselves the goal of developing human-style
 intelligence 
  (embodied or not) fifty years ago, we would already
 have a working, 
  non-embodied AGI.
 
  Turing was wrong (or at least he was wrongly
 interpreted).  Those who 
  extended his imitation test to humanoid, embodied AI
 were even more wrong. 
  We *do not need embodiment* to be able to build a
 powerful AGI that can be 
  of immense utility to humanity while also surpassing
 human intelligence in 
  many ways.  To be sure, we want that AGI to be
 empathetic with human 
  intelligence, but we do not need to make it equivalent
 (i.e., just like 
  us).
 
  I don't want to give the impression that a
 non-Turing intelligence will be 
  easy to design and build.  It will probably require at
 least another 
  twenty years of two steps forward, one step
 back effort.  So, if we are 
  going to develop a non-human-like, non-embodied AGI
 within the first 
  quarter of this century, we are going to have to
 just say no to Turing 
  and start to use human intelligence as an inspiration,
 not a destination.
 
  Cheers,
 
  Brad
 
 
 
  Mike Tintner wrote:
  Just a v. rough, first thought. An essential
 requirement of  an AGI is 
  surely that it must be able to play - so how would
 you design a play 
  machine - a machine that can play around as a
 child does?
 
  You can rewrite the brief as you choose, but my
 first thoughts are - it 
  should be able to play with
  a) bricks
  b)plasticine
  c) handkerchiefs/ shawls
  d) toys [whose function it doesn't know]
  and
  e) draw.
 
  Something that should be soon obvious is that a
 robot will be vastly more 
  flexible than a computer, but if you want to do it
 all on computer, fine.
 
  How will it play - manipulate things every which
 way?
  What will be the criteria of learning - of having
 done something 
  interesting?
  How do infants, IOW, play?
 
 
 


Re: [agi] The Necessity of Embodiment

2008-08-25 Thread Terren Suydam

Hi Vlad,

Thanks for taking the time to read my article and pose excellent questions. My 
attempts at answers below.

--- On Sun, 8/24/08, Vladimir Nesov [EMAIL PROTECTED] wrote:
 On Sun, Aug 24, 2008 at 5:51 PM, Terren Suydam
 What is the point of building general intelligence if all
 it does is
 takes the future from us and wastes it on whatever happens
 to act as
 its goal?

Indeed. Personally, I have no desire to build anything smarter than humans. 
That's a deal with the devil, so to speak, and one I believe most ordinary 
folks would be afraid to endorse, especially if they were made aware of the 
risks. The Singularity is not an inevitability, if we demand approaches that 
are safe in principle. And self-modifying approaches are not safe, assuming 
that they could work.

I do however revel in the possibility of creating something that we must admit 
is intelligent in a general sense. Achieving such a goal would go a long way 
towards understanding our own abilities. So for me it's about research and 
understanding, with applications towards improving the quality of life. I 
advocate the slow and steady evolutionary approach because we can control the 
process (if not the agent) at each step of the way. We can stop the process at 
any point, study it, and make decisions about when and how to proceed.

I'm all for limiting the intelligence of our creations before they ever get to 
the point that they can build their own or modify themselves. I'm against 
self-modifying approaches, largely because I don't believe it's possible to 
constrain their actions in the way Eliezer hopes. Iterative, recursive 
processes are generally emergent and unpredictable (the interesting ones, 
anyway). Not sure what kind of guarantees you could make for such systems in 
light of such emergent unpredictability.
 
 The problem with powerful AIs is that they could get their
 goals wrong
 and never get us the chance to fix that. And thus one of
 the
 fundamental problems that Friendliness theory needs to
 solve is giving
 us a second chance, building in deep down in the AI process
 the
 dynamic that will make it change itself to be what it was
 supposed to
 be. All the specific choices and accidental outcomes need
 to descend
 from the initial conditions, be insensitive to what went
 horribly
 wrong. This ability might be an end in itself, the whole
 point of
 building an AI, when considered as applying to the dynamics
 of the
 world as a whole and not just AI aspect of it. After all,
 we may make
 mistakes or be swayed by unlucky happenstance in all
 matters, not just
 in a particular self-vacuous matter of building AI.

I don't deny the possibility of disaster. But my stance is, if the only 
approach you have to mitigate disaster is being able to control the AI itself, 
well, the game is over before you even start it. It seems profoundly naive to 
me that anyone could, even in principle, guarantee that a super-intelligent AI 
would renormalize, in whatever sense that means. Then you have the difference 
between theory and practice... just forget it. Why would anyone want to gamble 
on that?

  Right, in a way that suggests you didn't grasp
 what I was saying,
  and that may be a failure on my part.
 
 That's why I was exploring -- I didn't
 get what you meant, and I
 hypothesized a coherent concept that seemed to fit what you
 said. I
 still don't understand that concept.

Maybe I'll try again some other time if I can increase my own clarity on the 
concept. 

 http://machineslikeus.com/news/design-bad-or-why-artificial-intelligence-needs-artificial-life
 
 
 (answering to the article)
 
 Creating *an* intelligence might be good in itself, but not
 good
 enough and too likely with negative side effects like
 wiping out the
 humanity to sum out positive in the end. It is a tasty
 cookie with
 black death in it.

With the evolutionary approach, there is no self-modification. The agent never 
has access to its own code, because it's a simulation, not a program. So you 
don't have these hard take-off scenarios. However, it is very slow and that 
isn't appealing. AI folks want intelligence and they want it now. If the 
Singularity occurs to the detriment of the human race, it will be because of 
this rush to be the first to build something intelligent. I take some comfort 
in my belief that quick approaches simply won't succeed, but I admit I'm not 
100% confident in that belief.

 You can't assert that we are not closer to AI than 50
 years ago --
 it's just unclear how closer we are. Great many
 techniques were
 developed in these years, and some good lessons learned the
 wrong way.
 Is it useful? Most certainly some of it, but how can we
 tell...

Fair enough. It's a minor point though.

 Intelligence was created by a blind idiot evolutionary
 process that
 has no foresight and no intelligence. Of course it can be
 designed.
 Intelligence is all that evolution is, but immensely
 faster, better
 and flexible.

In certain domains

Re: [agi] How Would You Design a Play Machine?

2008-08-25 Thread Terren Suydam

I'm not saying play isn't adaptive. I'm saying that kittens play not because 
they're optimizing their fitness, but because they're intrinsically motivated 
to (it feels good). The reason it feels good has nothing to do with the kitten, 
but with the evolutionary process that designed that adaptation.

It may seem like a minor distinction, but it helps to understand why, for 
example, people have sex with birth control. We don't have sex to maximize our 
genetic fitness, but because it feels good (or a thousand other reasons). We 
are adaptation executors, not fitness optimizers.

--- On Mon, 8/25/08, Vladimir Nesov [EMAIL PROTECTED] wrote:
 [EMAIL PROTECTED] wrote:
 
  Actually, kittens play because it's fun. Evolution
 has equipped them with the rewarding sense of fun because it
 optimizes their fitness as hunters. But kittens are
 adaptation executors, evolution is the fitness optimizer.
 It's a subtle but important distinction.
 
  See
 http://www.overcomingbias.com/2007/11/adaptation-exec.html
 
 
 Saying that play is not adaptive requires some backing (I
 expect it
 plays some role, so you need to be more convincing).
 
 -- 
 Vladimir Nesov
 [EMAIL PROTECTED]
 http://causalityrelay.wordpress.com/
 
 


Re: [agi] How Would You Design a Play Machine?

2008-08-25 Thread Terren Suydam

Hi Mike,

As may be obvious by now, I'm not that interested in designing cognition. I'm 
interested in designing simulations in which intelligent behavior emerges.

But the way you're using the word 'adapt', in a cognitive sense of playing with 
goals, is different from the way I was using 'adaptation', which is the result 
of an evolutionary process. 

Terren

--- On Mon, 8/25/08, Mike Tintner [EMAIL PROTECTED] wrote:

 From: Mike Tintner [EMAIL PROTECTED]
 Subject: Re: [agi] How Would You Design a Play Machine?
 To: agi@v2.listbox.com
 Date: Monday, August 25, 2008, 3:41 PM
 Terren,
 
 Your broad distinctions are fine, but I feel you are not
 emphasizing the 
 area of most interest for AGI, which is *how* we adapt
 rather than why. 
 Interestingly, your blog uses the example of a screwdriver
 - Kauffman uses 
 the same in Chap 12 of Reinventing the Sacred as an example
 of human 
 creativity/divergence - i.e. our capacity to find infinite
 uses for a 
 screwdriver.
 
 Do we think we could write an algorithm, an effective
 procedure, to 
 generate a possibly infinite list of all possible uses of
 screwdrivers in 
 all possible circumstances, some of which do not yet exist?
 I don't think we 
 could get started.
 
 What emerges here, v. usefully, is that the
 capacity for play overlaps 
 with classically-defined, and a shade more rigorous and
 targeted,  divergent 
 thinking, e.g. find as many uses as you can for a
 screwdriver, rubber teat, 
 needle etc.
 
 ...How would you design a divergent (as well as play)
 machine that can deal 
 with the above open-ended problems? (Again surely essential
 for an AGI)
 
 With full general intelligence, the problem more typically
 starts with the 
 function-to-be-fulfilled - e.g. how do you open this paint
 can? - and only 
 then do you search for a novel tool, like a screwdriver or
 another can lid.
 
 
 
 Terren: Actually, kittens play because it's fun.
 Evolution has equipped 
 them with the rewarding sense of fun because it optimizes
 their fitness as 
 hunters. But kittens are adaptation executors, evolution is
 the fitness 
 optimizer. It's a subtle but important distinction.
 
  See
 http://www.overcomingbias.com/2007/11/adaptation-exec.html
 
  Terren
 
  They're adaptation executors, not fitness
 optimizers.
 
  --- On Mon, 8/25/08, Matt Mahoney
 [EMAIL PROTECTED] wrote:
  Kittens play with small moving objects because it
 teaches
  them to be better hunters. Play is not a goal in
 itself, but
  a subgoal that may or may not be a useful part of
 a
  successful AGI design.
 
   -- Matt Mahoney, [EMAIL PROTECTED]
 
 
 
  - Original Message 
  From: Mike Tintner
 [EMAIL PROTECTED]
  To: agi@v2.listbox.com
  Sent: Monday, August 25, 2008 8:59:06 AM
  Subject: Re: [agi] How Would You Design a Play
 Machine?
 
  Brad,
 
  That's sad.  The suggestion is for a mental
 exercise,
  not a full-scale
  project. And play is fundamental to the human
 mind-and-body
  - it
  characterises our more mental as well as more
 physical
  activities -
  drawing, designing, scripting, humming and singing
 scat in
  the bath,
  dreaming/daydreaming  much more. It is
 generally
  acknowledged by
  psychologists to be an essential dimension of
 creativity -
  which is the goal
  of AGI. It is also an essential dimension of
 animal
  behaviour and animal
  evolution.  Many of the smartest companies have
 their play
  areas.
 
  But I'm not aware of any program or computer
 design for
  play - as distinct
  from elaborating systematically and methodically
 or
  genetically on
  themes - are you? In which case it would be good
 to think
  about one - it'll
  open your mind and give you new perspectives.
 
  This should be a group where people are not too
 frightened
  to play around
  with ideas.
 
  Brad: Mike Tintner wrote: ...how would
 you design
  a play machine - a
  machine
   that can play around as a child does?
  
   I wouldn't.  IMHO that's just another
 waste of
  time and effort (unless
   it's being done purely for research
 purposes).
  It's a diversion of
   intellectual and financial resources that
 those
  serious about building an
   AGI any time in this century cannot afford. 
 I firmly
  believe if we had
   not set ourselves the goal of developing
 human-style
  intelligence
   (embodied or not) fifty years ago, we would
 already
  have a working,
   non-embodied AGI.
  
   Turing was wrong (or at least he was wrongly
  interpreted).  Those who
   extended his imitation test to humanoid,
 embodied AI
  were even more wrong.
   We *do not need embodiment* to be able to
 build a
  powerful AGI that can be
   of immense utility to humanity while also
 surpassing
  human intelligence in
   many ways.  To be sure, we want that AGI to
 be
  empathetic with human
   intelligence, but we do not need to make it
 equivalent
  (i.e., just like
   us).
  
   I don't want to give the impression that
 a
  non-Turing intelligence will be
   easy to design and build.  It will 

Re: [agi] How Would You Design a Play Machine?

2008-08-25 Thread Terren Suydam

 Saying
 that a particular cat instance hunts because it feels good
 is not very explanatory

Even if I granted that, saying that a particular cat plays to increase its 
hunting skills is incorrect. It's an important distinction because by analogy 
we must talk about particular AGI instances. When we talk about, for instance, 
whether an AGI will play, will it play because it's trying to optimize its 
fitness, or because it is motivated in some other way?  We have to be that 
precise if we're talking about design.

Terren

--- On Mon, 8/25/08, Vladimir Nesov [EMAIL PROTECTED] wrote:
 The word because was misplaced. Cats hunt mice
 because they were
 designed to, and they were designed to, because it's
 adaptive. Saying
 that a particular cat instance hunts because it feels good
 is not very
 explanatory, like saying that it hunts because such is its
 nature or
 because the laws of physics drive the cat physical
 configuration
 through the hunting dynamics. Evolutionary design, on the
 other hand,
 is the point of explanation for the complex adaptation, the
 simple
 regularity in the Nature that causally produced the
 phenomenon we are
 explaining.
 
 -- 
 Vladimir Nesov
 [EMAIL PROTECTED]
 http://causalityrelay.wordpress.com/
 
 


Re: [agi] The Necessity of Embodiment

2008-08-25 Thread Terren Suydam

Hi Will,

I don't doubt that provable-friendliness is possible within limited, 
well-defined domains that can be explicitly defined and hard-coded. I know 
chess programs will never try to kill me.

I don't believe however that you can prove friendliness within a framework that 
has the robustness required to make sense of a dynamic, unstable world. The 
basic problem, as I see it, is that Friendliness is a moving target, and 
context dependent. It cannot be defined within the kind of rigorous logical 
frameworks required to prove such a concept.

Terren

--- On Mon, 8/25/08, William Pearson [EMAIL PROTECTED] wrote:
 You may be interested in goedel machines. I think this
 roughly fits
 the template that Eliezer is looking for, something that
 reliably self
 modifies to be better.
 
 http://www.idsia.ch/~juergen/goedelmachine.html
 
 Although he doesn't like explicit utility functions,
 the provably
 better is something he wants. Although what you would accept
 as axioms
 for the proofs upon which humanity's fate rests I really
 don't know.
 
 Personally I think strong self-modification is not going to
 be useful,
 the very act of trying to understand the way the code for
 an
 intelligence is assembled will change the way that some of
 that code
 is assembled. That is I think that intelligences have to be
 weakly
 self modifying, in the same way bits of the brain rewire
 themselves
 locally and subconsciously; so too, AI will need to have
 the same sort
 of changes in order to keep up with humans. Computers at
 the moment
 can do lots of things better that humans (logic, bayesian
 stats), but
 are really lousy at adapting and managing themselves so the
 blind
 spots of infallible computers are always exploited by slow
 and error
 prone, but changeable, humans.
 
   Will Pearson
 
 


Re: [agi] How Would You Design a Play Machine?

2008-08-25 Thread Terren Suydam

If an AGI played because it recognized that it would improve its skills in some 
domain, then I wouldn't call that play, I'd call it practice. Those are 
overlapping but distinct concepts. 

Play, as distinct from practice, is its own reward - the reward felt by a 
kitten. The spirit of Mike's question, I think, was about identifying the 
essential goalless-ness of play, the sense in which playing fosters adaptivity 
of goals. If you really want to interpret goal-satisfaction in play, it must be 
a meta-goal of mastering one's environment - and that is such a broadly defined 
goal that I don't see how one could specify it to a seed AI. I believe that's 
why evolution used the trick of making it fun.

Terren

--- On Mon, 8/25/08, Vladimir Nesov [EMAIL PROTECTED] wrote:
 Of course. Different optimization processes at work,
 different causes.
 Let's say (ignoring if it's actually so for the
 sake of illustration)
 that cat plays because it provides it with developmental
 advantage
 through training its nervous system, giving it better
 hunting skills,
 and so an adaptation that drives cat to play was chosen *by
 evolution*. Cat doesn't play because *it* reasons that
 it would give
 it superior hunting skills, cat plays because of the
 emotional drive
 installed by evolution (or a more general drive inherent in
 its
 cognitive dynamics). When AGI plays to get better at some
 skill, it
 may be either a result of programmer's advice, in which
 case play
 happens because *programmer* says so, or as a result of its
 own
 conclusion that play helps with skills, and if skills are
 desirable,
 play inherits the desirability. In the last case, play
 happens because
 AGI decides so, which in turn happens because there is a
 causal link
 from play to a desirable state of having superior skills.
 
 -- 
 Vladimir Nesov
 [EMAIL PROTECTED]
 http://causalityrelay.wordpress.com/
 
 


Re: [agi] How Would You Design a Play Machine?

2008-08-25 Thread Terren Suydam

Hi Mike,

Comments below...

--- On Mon, 8/25/08, Mike Tintner [EMAIL PROTECTED] wrote:
 Two questions: 1)  how do you propose that your simulations
 will avoid the 
 kind of criticisms you've been making of other systems
 of being too guided 
 by programmers' intentions? How can you set up a
 simulation without making 
 massive, possibly false assumptions about the nature of
 evolution?

Because I don't care about individual agents. Agents that fail to meet the 
requirements the environment demands, die. There's going to be a lot of death 
in my simulations. The risk I take is that nothing ever survives and I fail to 
demonstrate the feasibility of the approach.
 
 2) Have you thought about the evolution of play in animals?
 
 (We play BTW with just about every dimension of
 activities - goals, rules, 
 tools, actions, movements.. ).

Not much. Play is such an advanced concept in intelligence, and my aims are far 
lower than that.  I don't realistically expect to survive to see the evolution 
of human intelligence using the evolutionary approach I'm talking about.

Terren


  




Re: [agi] The Necessity of Embodiment

2008-08-25 Thread Terren Suydam

Eric,

We're talking Friendliness (capital F), a convention suggested by Eliezer 
Yudkowsky, that signifies the sense in which an AI does no harm to humans.

Yes, it's context-dependent. "Do no harm" is the mantra within the medical 
community, but clearly there are circumstances in which you do a little harm to 
achieve greater health in the long run. Chemotherapy is a perfect example. 
Would we trust an AI if it proposed something like chemotherapy? Before we 
understood that to be a valid treatment, would we really believe it was being 
Friendly?  You want me to drink *what*?

Or take any number of ethical dilemmas, in which it's ok to steal food if it's 
to feed your kids. Or killing ten people to save twenty. etc. How do you define 
Friendliness in these circumstances? Depends on the context.

Terren

--- On Mon, 8/25/08, Eric Burton [EMAIL PROTECTED] wrote:
 Is friendliness really so context-dependent? Do you have to
 be human
 to act friendly at the exception of acting busy, greedy,
 angry, etc? I
 think friendliness is a trait we project onto things pretty
 readily
 implying it's wired at some fundamental level. It comes
 from the
 social circuits, it's about being considerate or
 inocuous. But I don't
 know
 
 On 8/25/08, Terren Suydam [EMAIL PROTECTED]
 wrote:
 
  Hi Will,
 
  I don't doubt that provable-friendliness is
 possible within limited,
  well-defined domains that can be explicitly defined
 and hard-coded. I know
  chess programs will never try to kill me.
 
  I don't believe however that you can prove
 friendliness within a framework
  that has the robustness required to make sense of a
 dynamic, unstable world.
  The basic problem, as I see it, is that
 Friendliness is a moving target,
  and context dependent. It cannot be defined within the
 kind of rigorous
  logical frameworks required to prove such a concept.
 
  Terren
 
  --- On Mon, 8/25/08, William Pearson
 [EMAIL PROTECTED] wrote:
  You may be interested in goedel machines. I think
 this
  roughly fits
  the template that Eliezer is looking for,
 something that
  reliably self
  modifies to be better.
 
  http://www.idsia.ch/~juergen/goedelmachine.html
 
  Although he doesn't like explicit utility
 functions,
  the provably
  better is something he want. Although what you
 would accept
  as axioms
  for the proofs upon which humanity fate rests I
 really
  don't know.
 
  Personally I think strong self-modification is not
 going to
  be useful,
  the very act of trying to understand the way the
 code for
  an
  intelligence is assembled will change the way that
 some of
  that code
  is assembled. That is I think that intelligences
 have to be
  weakly
  self modifying, in the same way bits of the brain
 rewire
  themselves
  locally and subconciously, so to, AI  will  need
 to have
  the same sort
  of changes in order to keep up with humans.
 Computers at
  the moment
  can do lots of things better that humans (logic,
 bayesian
  stats), but
  are really lousy at adapting and managing
 themselves so the
  blind
  spots of infallible computers are always exploited
 by slow
  and error
  prone, but changeable, humans.
 
Will Pearson
 
 


Re: [agi] How Would You Design a Play Machine?

2008-08-25 Thread Terren Suydam

Hi Jonathan,

I disagree, play without rules can certainly be fun. Running just to run, 
jumping just to jump. Play doesn't have to be a game, per se. It's simply a 
purposeless expression of the joy of being alive. It turns out of course that 
play is helpful for achieving certain goals that we interpret as being 
installed by evolution. But we don't play to achieve goals, we do it because 
it's fun. As Mike said, this very discussion is a kind of play, and while we 
can certainly identify goals that we try to accomplish in the course of hashing 
these things out, there's an element in it, for me anyway, of just doing it 
because I love doing it. I suspect that's true for others here. I hope so, 
anyway.

Of course, those that are dogmatically functionalist will view such language as 
'fun' as totally irrelevant. That's ok. The cool thing about AI is that 
eventually, it will shed light on whether subjective experience (to 
functionalists, an inconvenience to be done away with) is critical to 
intelligence.

To address your second question, the implicit goal is always reproduction. If 
there is one basic reductionist element to all of life, it is that. Making play 
fun is a way of getting us to play at all, so that we are more likely to 
reproduce. There's a limit however to the usefulness and accuracy of reducing 
everything to reproduction. 

Terren

--- On Mon, 8/25/08, Jonathan El-Bizri [EMAIL PROTECTED] wrote:
Part of play is the specification of arbitrary goals and limitations within the 
overlying process. Games without rules aren't 'fun' to people or kittens. 

 


Play, as distinct from practice, is its own reward - the reward felt by a 
kitten. The spirit of Mike's question, I think, was about identifying the 
essential goalless-ness of play, the sense in which playing fosters adaptivity 
of goals. If you really want to interpret goal-satisfaction in play, it must be 
a meta-goal of mastering one's environment - and that is such a broadly defined 
goal that I don't see how one could specify it to a seed AI. I believe that's 
why evolution used the trick of making it fun.



But making it 'fun' doesn't answer the question of what the implicit goals are. 
Piaget's theories of assimilation can bring us closer to this, I am of the mind 
that they encompass at least part of the intellectual drive toward play and 
investigation.


Jonathan El-Bizri






  

  


Re: [agi] How Would You Design a Play Machine?

2008-08-25 Thread Terren Suydam

Hi David,

 Any amount of guidance in such a simulation (e.g. to help avoid so many of
 the useless eddies in a fully open-ended simulation) amounts to designed
 cognition.


No, it amounts to guided evolution. The difference between a designed 
simulation and a designed cognition is the focus on the agent itself. In the 
latter, you design the agent and turn it loose, testing it to see if it does 
what you want it to. In the former (the simulation), you turn a bunch of 
candidate agents loose and let them compete to do what you want them to. The 
ones that don't, die. You're specifying the environment, not the agent. If you 
do it right, you don't even have to specify the goals.  With designed 
cognition, you must specify the goals, either directly (un-embodied), or in 
some meta-fashion (embodied). 
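
A minimal sketch of that "specify the environment, not the agent" loop, in 
Python (illustrative only -- the survival test, the numbers, and the names 
here are made up for the example, not a description of my actual simulations):

import random
random.seed(0)

def environment_demands(agent):
    # We design the environment: here, surviving means landing near a target
    # set of parameters. We never hand the agents a goal or any cognition.
    target = [0.3, -0.7, 0.5]
    error = sum((a - t) ** 2 for a, t in zip(agent, target))
    return error < 0.5                       # meet the demand or die

def mutate(agent):
    return [g + random.gauss(0, 0.1) for g in agent]

population = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(50)]

for generation in range(100):
    survivors = [a for a in population if environment_demands(a)]
    if not survivors:                        # the risk: everything dies out
        survivors = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(5)]
    # survivors reproduce with variation to refill the population
    population = [mutate(random.choice(survivors)) for _ in range(50)]

print("agents meeting the demand at the end:",
      sum(environment_demands(a) for a in population))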

Terren

--- On Mon, 8/25/08, David Hart [EMAIL PROTECTED] wrote:
From: David Hart [EMAIL PROTECTED]
Subject: Re: [agi] How Would You Design a Play Machine?
To: agi@v2.listbox.com
Date: Monday, August 25, 2008, 6:04 PM

Where is the hard dividing line between designed cognition and designed 
simulation (where intelligent behavior is intended to be emergent in both 
cases)? Even if an approach is taken where everything possible is done to allow a 
'natural' type evolution of behavior, the simulation design and parameters will 
still influence the outcome, sometimes in unknown and unknowable ways. Any 
amount of guidance in such a simulation (e.g. to help avoid so many of the 
useless eddies in a fully open-ended simulation) amounts to designed cognition.


That being said, I'm particularly interested in the OCF being used as a 
platform for 'pure simulation' (Alife and more sophisticated game theoretical 
simulations), and finding ways to work the resulting experience and methods 
into the OCP design, which is itself a hybrid approach (designed cognition + 
designed simulation) intended to take advantage of the benefits of both.


-dave

On 8/26/08, Mike Tintner [EMAIL PROTECTED] wrote:
Terren: As may be obvious by now, I'm not that interested in designing 
cognition. I'm interested in designing simulations in which intelligent 
behavior emerges. But the way you're using the word 'adapt', in a cognitive 
sense of playing with goals, is different from the way I was using 
'adaptation', which is the result of an evolutionary process.




Two questions: 1)  how do you propose that your simulations will avoid the kind 
of criticisms you've been making of other systems of being too guided by 
programmers' intentions? How can you set up a simulation without making 
massive, possibly false assumptions about the nature of evolution?




2) Have you thought about the evolution of play in animals?



(We play BTW with just about every dimension of activities - goals, rules, 
tools, actions, movements.. ).













Re: [agi] The Necessity of Embodiment

2008-08-24 Thread Terren Suydam

--- On Sun, 8/24/08, Vladimir Nesov [EMAIL PROTECTED] wrote:
 What do you mean by does not structure? What do
 you mean by fully or
 not fully embodied?

I've already discussed what I mean by embodiment in a previous post, the one 
that immediately preceded the post you initially responded to. When I say the 
agent does not structure the goals given to it by a boot-strapping mechanism, I 
mean that the content of those goals - the way they are structured - has 
already been created by something outside of the agent.
 
 Did you read CFAI? At least it dispels the mystique and
 ridicule of
 provable Friendliness and shows what kind of
 things are relevant for
 its implementation. You don't really want to fill the
 universe with
 paperclips, do you? The problem is that you can't take
 a wrong route
 just because it's easier, it is an illusion born of
 insufficient
 understanding of the issue that it might be OK anyway.

I'm not taking the easy way out here, I'm talking about what I see as the only 
possible path to general intelligence. I could be wrong of course, but that's 
why we're here, to talk about our differences.

I've read parts of the CFAI but like most of Eliezer's writings, if I had time 
to read every word he writes I'd have no life at all. The crux of his argument 
seems to come down to what he calls renormalization, in which the AI corrects 
its goals as it goes. But that begs the question of what the AI is comparing 
its behavior against - some supergoal or meta-ethics or whatever you want to 
call it - and the answer must ultimately come from us, pre-structured. 
Non-embodied.
 

 I was exploring the notion of nonembodied interaction that
 you talkied about.

Right, in a way that suggests you didn't grasp what I was saying, and that may 
be a failure on my part.  
 
  I'm saying that we don't specify that process.
 We let it emerge through
  large numbers of generations of simulated evolution.
 Now that's going
  to be a very unpopular idea in this forum, but it
 comes out of what I think
  are valid philosophical criticisms of designed (or
 metacognitive/metamoral
  if you wish) intelligence.
 
 Name them.

I refer you to my article Design is bad -- or why artificial intelligence 
needs artificial life:

http://machineslikeus.com/news/design-bad-or-why-artificial-intelligence-needs-artificial-life

Terren


  




Re: [agi] The Necessity of Embodiment

2008-08-23 Thread Terren Suydam

Yeah, that's where the misunderstanding is... low level input is too fuzzy a 
concept.

I don't know if this is the accepted mainstream definition of embodiment, but 
this is how I see it. The thing that distinguishes an embodied agent from an 
unembodied one is whether the agent is given pre-structured input - that is, 
whether information outside the agent is directly available to the agent. A 
fully embodied agent does not have any direct access to its environment. It 
only has access to the outputs of its sensory apparatus.

Obviously animal nervous systems are the inspiration here. For example, we have 
thermo-receptors in our skin that fire at different rates depending on the 
temperature. The interesting thing to note is that these receptors can be 
stimulated by things other than temperature, like the capsaicin in hot peppers. 
The reason that's important is because our experience of hotness is present 
only to the extent that our thermo-receptors fire, without regard to how 
they're stimulated. Likewise for the patterns we see when we rub our eyes for 
long enough - we're using physical pressure to stimulate photo-receptors.

What all that reveals is that there is a boundary between the environment and 
the agent, and at that boundary, information does not cross. The interaction 
between the environment and sensory apparatus results in *perturbations* in the 
agent. The agent constructs its models based solely on the perturbations on its 
sensory apparatus. It doesn't know what the environment is and in fact has no 
access to it whatsoever.
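
A minimal sketch of that boundary in Python, with all names hypothetical: the 
agent's update step receives only what its sensors deliver, never the 
environment object itself, and builds its model from those perturbations alone.

import random

class Environment:
    """Rich internal state the agent never sees directly."""
    def __init__(self):
        self.temperature = 30.0
        self.light_level = 0.4

    def step(self):
        self.temperature += random.uniform(-1.0, 1.0)
        delta = random.uniform(-0.1, 0.1)
        self.light_level = min(max(self.light_level + delta, 0.0), 1.0)

def sense(env):
    """Sensory apparatus: reduces the environment to perturbations."""
    return [min(max(env.temperature / 50.0, 0.0), 1.0), env.light_level]

class Agent:
    """Constructs its model solely from perturbations at its boundary."""
    def __init__(self):
        self.model = [0.0, 0.0]

    def update(self, perturbations):
        # Running average of what arrives at the sensory surface;
        # note the agent is never handed the Environment object.
        self.model = [0.9 * m + 0.1 * p
                      for m, p in zip(self.model, perturbations)]

env, agent = Environment(), Agent()
for _ in range(100):
    env.step()
    agent.update(sense(env))
print(agent.model)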

This is a key idea behind autopoiesis 
(http://en.wikipedia.org/wiki/Autopoiesis), which is a way to characterize the 
difference between living and non-living systems.

So all text-based I/O fails this test of embodiment because the agent is not 
structuring the input. That modality is based on the premise that you can 
directly import knowledge into the agent, and that is an unembodied approach.

Terren

--- On Fri, 8/22/08, Vladimir Nesov [EMAIL PROTECTED] wrote:

 On Fri, Aug 22, 2008 at 5:49 PM, Vladimir Nesov
 [EMAIL PROTECTED] wrote:
  On Fri, Aug 22, 2008 at 5:35 PM, Terren Suydam
 [EMAIL PROTECTED] wrote:
 
  She's not asking about the kind of embodiment,
 she's asking what's the use of a non-embodied AGI.
 Your quotation, dealing as it does with low-level input, is
 about embodied AGI.
 
 
  I believe non-embodied meant to refer to
 I/O fundamentally different
  from our own (especially considering a context of
 previous message in
  this conversation). What is a non-embodied AGI? AGI
 that doesn't
  exist?
 
 
 On second thought, maybe the term low-level
 input was confusing. I
 include things like text-only terminal or 3D vector
 graphics input or
 Internet connection or whatever other kind of interaction
 with the
 world in this concept. Low-level is relative to a model in
 the mind,
 it is a point where non-mind environment directly interacts
 with the
 model, on which additional levels of representation are
 grown within
 the mind, making that transition point the lowest level. I
 didn't mean
 to imply that input needs to be something like a noisy
 video stream or
 sense of touch (although I suspect it'll be helpful
 developmentally).
 
 -- 
 Vladimir Nesov
 [EMAIL PROTECTED]
 http://causalityrelay.wordpress.com/
 
 


  




Re: [agi] I Made a Mistake

2008-08-23 Thread Terren Suydam

Eric,

You lower the quality of this list with comments like that. It's the kind of 
thing that got people wondering a month ago whether moderation is necessary on 
this list. If we're all adults, moderation shouldn't be necessary.

Jim, do us all a favor and don't respond to that, as tempting as it may be.

Terren

--- On Sat, 8/23/08, Eric Burton [EMAIL PROTECTED] wrote:
 Stupid fundamentalist troll garbage
 
 On 8/22/08, Jim Bromer [EMAIL PROTECTED] wrote:
  I just discovered that I made a very obvious blunder
 on my theory
  about Logical Satisfiability last November.  It was a,
 what was I
  thinking, kind of error.  No sooner did I
 discover this error a
  couple of days ago, but I went and made a corollary
 blunder just as
  surprising.  The Lord's logic is awesome but I am
 definitely not one
  of his most gifted students.  In one sense I am
 starting over so I
  still don't know if I will be able to figure it
 out or not, but from
  my vantage point right now, the problem looks a whole
 lot easier.
  Jim Bromer
 
 
 
 
 


  




Re: [agi] I Made a Mistake

2008-08-23 Thread Terren Suydam

No worries, that's why I heartily advocate doing exactly what you did, but not 
sending it.  It's a lesson I've learned the hard way more times than I care to 
admit.

--- On Sat, 8/23/08, Eric Burton [EMAIL PROTECTED] wrote:
 Thanks Terren, I shouldn't have got angry so fast. One
 thing I worry
 about constantly when going places or discussing anything
 is the
 quality of discourse.
 
 On 8/23/08, Terren Suydam [EMAIL PROTECTED]
 wrote:
 
  Eric,
 
  You lower the quality of this list with comments like
 that. It's the kind of
  thing that got people wondering a month ago whether
 moderation is necessary
  on this list. If we're all adults, moderation
 shouldn't be necessary.
 
  Jim, do us all a favor and don't respond to that,
 as tempting as it may be.
 
  Terren
 
  --- On Sat, 8/23/08, Eric Burton
 [EMAIL PROTECTED] wrote:
  Stupid fundamentalist troll garbage
 
  On 8/22/08, Jim Bromer [EMAIL PROTECTED]
 wrote:
   I just discovered that I made a very obvious
 blunder
  on my theory
   about Logical Satisfiability last November. 
 It was a,
  what was I
   thinking, kind of error.  No sooner did
 I
  discover this error a
   couple of days ago, but I went and made a
 corollary
  blunder just as
   surprising.  The Lord's logic is awesome
 but I am
  definitely not one
   of his most gifted students.  In one sense I
 am
  starting over so I
   still don't know if I will be able to
 figure it
  out or not, but from
   my vantage point right now, the problem looks
 a whole
  lot easier.
   Jim Bromer
  
  
  
 
 
 
 
 
 
 
 
 
 


  




Re: [agi] The Necessity of Embodiment

2008-08-23 Thread Terren Suydam

Just wanted to add something, to bring this back to the feasibility of 
embodied/unembodied approaches. Using the definition of embodiment I described, 
it needs to be said that it is impossible to specify the goals of the agent, 
because in so doing you'd be passing it information in an unembodied way. In 
other words, a fully embodied agent must completely structure its model of the 
world internally (self-organize it), such as it is. Goals must be structured 
the same way. Evolutionary approaches are the only means at our disposal for 
shaping the goal systems of fully embodied agents: we provide in-built biases 
towards modeling the world in a way that aligns with our goals. It follows that 
Friendly AI is impossible to guarantee for fully embodied agents.

The question then becomes: is it necessary to implement full embodiment, in 
the sense I have described, to arrive at AGI? I think most in this forum will 
say it's not - that embodiment (at least partial embodiment) would be useful 
but not necessary.

OpenCog involves a partially embodied approach, for example, which I suppose is 
an attempt to get the best of both worlds - the experiential aspect of embodied 
senses combined with the precise specification of goals and knowledge, not to 
mention additional components that aim to provide things like natural language 
processing. 

The part I have difficulty understanding is how a system like OpenCog could 
hope to marry the information from each domain - the self-organized, emergent 
domain of embodied knowledge, and the externally-organized, given domain of 
specified knowledge. These two domains must necessarily involve different 
knowledge representations, since one emerges (self-organizes) at runtime. How 
does the cognitive architecture that processes the specified goals and 
knowledge dovetail with the constructions that emerge from the embodied senses? 
 Ben, any thoughts on that?

Terren

--- On Sat, 8/23/08, Terren Suydam [EMAIL PROTECTED] wrote:

 Yeah, that's where the misunderstanding is... low
 level input is too fuzzy a concept.
 
 I don't know if this is the accepted mainstream
 definition of embodiment, but this is how I see it. The
 thing that distinguishes an embodied agent from an
 unembodied one is whether the agent is given pre-structured
 input - that is, whether information outside the agent is
 directly available to the agent. A fully embodied agent does
 not have any access at all to its environment. It only has
 access to the outputs of its sensory apparatus.
 
 Obviously animal nervous systems are the inspiration here.
 For example, we have thermo-receptors in our skin that fire
 at different rates depending on the temperature. The
 interesting thing to note is that these receptors can be
 stimulated by things other than temperature, like the
 capsaicin in hot peppers. The reason that's important is
 because our experience of hotness is present only to the
 extent that our thermo-receptors fire, without regard to how
 they're stimulated. Likewise for the patterns we see
 when we rub our eyes for long enough - we're using
 physical pressure to stimulate photo-receptors.
 
 What all that reveals is that there is a boundary between
 the environment and the agent, and at that boundary,
 information does not cross. The interaction between the
 environment and sensory apparatus results in *perturbations*
 in the agent. The agent constructs its models based solely
 on the perturbations on its sensory apparatus. It
 doesn't know what the environment is and in fact has no
 access to it whatsoever.
 
 This is a key idea behind autopoiesis
 (http://en.wikipedia.org/wiki/Autopoiesis), which is a way
 to characterize the difference between living and non-living
 systems.
 
 So all text-based I/O fails this test of embodiment because
 the agent is not structuring the input. That modality is
 based on the premise that you can directly import knowledge
 into the agent, and that is an unembodied approach.
 
 Terren
 
 --- On Fri, 8/22/08, Vladimir Nesov
 [EMAIL PROTECTED] wrote:
 
  On Fri, Aug 22, 2008 at 5:49 PM, Vladimir Nesov
  [EMAIL PROTECTED] wrote:
   On Fri, Aug 22, 2008 at 5:35 PM, Terren Suydam
  [EMAIL PROTECTED] wrote:
  
   She's not asking about the kind of
 embodiment,
  she's asking what's the use of a non-embodied
 AGI.
  Your quotation, dealing as it does with low-level
 input, is
  about embodied AGI.
  
  
   I believe non-embodied meant to refer
 to
  I/O fundamentally different
   from our own (especially considering a context of
  previous message in
   this conversation). What is a non-embodied AGI?
 AGI
  that doesn't
   exist?
  
  
  On second thought, maybe the term low-level
  input was confusing. I
  include things like text-only terminal or 3D vector
  graphics input or
  Internet connection or whatever other kind of
 interaction
  with the
  world in this concept. Low-level is relative to a
 model in
  the mind,
  it is a point where non-mind

Re: [agi] The Necessity of Embodiment

2008-08-23 Thread Terren Suydam

comments below...

--- On Sat, 8/23/08, Vladimir Nesov [EMAIL PROTECTED] wrote:
 The last post by Eliezer provides handy imagery for this
 point (
 http://www.overcomingbias.com/2008/08/mirrors-and-pai.html
 ). You
 can't have an AI of perfect emptiness, without any
 goals at all,
 because it won't start doing *anything*, or anything
 right, unless the
 urge is already there (
 http://www.overcomingbias.com/2008/06/no-universally.html
 ). 

Of course, that's what the evolutionary process is for. You use selective 
pressure to shape the behavior of the agents. The way it works in my mind is 
that you start out with very primitive intelligence and increase the difficulty 
of the simulation to get increasingly intelligent behavior.
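
Something like the following toy schedule (thresholds and per-generation 
scores are made up) captures the "increase the difficulty" part: the population 
only graduates to a harder simulation once it copes with the current one.

def next_level(mean_fitness, level, threshold=0.8):
    """Promote the population to a harder tier once it masters this one."""
    return level + 1 if mean_fitness >= threshold else level

level = 0
for mean_fitness in [0.3, 0.6, 0.85, 0.5, 0.9]:   # made-up generation scores
    level = next_level(mean_fitness, level)
print(level)   # 2: promoted twice as the population coped with each tier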

 But you
 can have an AI that has a bootstrapping mechanism that
 tells it where
 to look for goal content, tells it to absorb it and embrace
 it.

Yes, but in this scenario, the AI does not structure the goals itself. It is 
not fully embodied. Of course, we will probably argue about how important that 
is. 

 Evolution has nothing to do with it, except in the sense
 that it was
 one process that implemented the bedrock of goal system,
 making a
 first step that initiated any kind of moral progress. But
 evolution
 certainly isn't an adequate way to proceed from now on.

I assume you make this assertion based on how much time/computation would be 
required, and the lack of control we have over the process. In other words, at 
the end of this process we can never have a provably friendly AI. We cannot 
dictate its morals, any more than we can dictate morals to our fellow humans.  

However, going down the path of provably friendly AI is fraught with its own 
concerns. Going into what those concerns are is a whole different topic, but 
for me that road is a dead end.

 Basically, non-embodied interaction as you described it is
 extracognitive interaction, workaround that doesn't
 comply with a
 protocol established by cognitive algorithm. If you can do
 that, fine,
 but cognitive algorithm is there precisely because we
 can't build a
 mature AI by hand, by directly reaching into the AGI's
 mind, it needs
 a subcognitive process that will assemble its cognition for
 us. It is
 basically the same problem with general intelligence and
 with
 Friendliness: you can neither assemble an AGI that already
 knows all
 the stuff and possesses human-level skills, nor an AGI that
 has proper
 humane goals. You can only create a metacognitive metamoral
 process
 that will collect both from the environment.

I'm not trying to wave a magic wand and pretend we can just create something 
out of thin air that will be intelligent. Of course there needs to be some 
underlying cognitive process... did something I say lead you to believe I 
thought otherwise?

I'm saying that we don't specify that process. We let it emerge through large 
numbers of generations of simulated evolution. Now that's going to be a very 
unpopular idea in this forum, but it comes out of what I think are valid 
philosophical criticisms of designed (or metacognitive/metamoral if you wish) 
intelligence.

Terren


  




Re: [agi] The Necessity of Embodiment

2008-08-22 Thread Terren Suydam

She's not asking about the kind of embodiment, she's asking what's the use of a 
non-embodied AGI. Your quotation, dealing as it does with low-level input, is 
about embodied AGI.

--- On Fri, 8/22/08, Vladimir Nesov [EMAIL PROTECTED] wrote:
  Thanks Vlad, I read all that stuff plus other Eliezer
 papers. They don't
  answer my question: I am asking what is the use of a
 non-embodied AGI, given
  it would necessarily have a different goal system from
 that of humans, I'm
  not asking how to make any AGI friendly - that is
 extremely difficult.
 
 
 If AGI you are talking about is not going to be powerful
 enough, it is
 a completely different question. Weak optimization
 processes are the
 stuff we build our instrumental actions on, by creating
 contexts from
 which these processes can't (aren't supposed to)
 break out, and so
 exercise their local optimization in a way that is bound to
 lead to a
 different end.
 
 I don't see what relevance does the choice of
 embodiment has to
 anything, except for practical considerations during the
 earliest
 stages of development, when AGI is not yet able to model
 sufficiently
 high-level events in the environment. In today blog post, I
 concluded
 the long arc of posts that describes a high-level
 perspective on
 operation of AGI agent (holistic control), that
 emphasizes how a
 particular way in which input and output are organized
 (particular
 embodiment) is insignificant. A quote (
 http://causalityrelay.wordpress.com/2008/08/22/holistic-control/
 ):
 
  The operation of control algorithm is focused on the
 support of model of
  environment, not on action and perception. Action and
 perception are only
  peripheral (although indispensable) aspects of
 control, with low-level
  input binding the model of environment to reality at
 one tiny point,
  supplying new facts and showing the mistakes, and
 low-level output giving
  the model ability to participate in the causal web of
 environment.
 
 --
 Vladimir Nesov
 [EMAIL PROTECTED]
 http://causalityrelay.wordpress.com/
 
 


  




Re: [agi] Groundless reasoning -- Chinese Room

2008-08-06 Thread Terren Suydam

Harry,

--- On Wed, 8/6/08, Harry Chesley [EMAIL PROTECTED] wrote:
 I'll take a stab at both of these...
 
 The Chinese Room to me simply states that understanding
 cannot be 
 decomposed into sub-understanding pieces. I don't see
 it as addressing 
 grounding, unless you believe that understanding can only
 come from the 
 outside world, and must become part of the system as atomic
 pieces of 
 understanding. I don't see any reason to think that,
 but proving it is 
 another matter -- proving negatives is always difficult.

The argument is only implicitly about the nature of understanding. It is 
explicit about the agent of understanding. It says that something that moves 
symbols around according to predetermined rules - if that's all it's doing - 
has no understanding. Implicitly, the assumption is that understanding must be 
grounded in experience, and a computer cannot be said to be experiencing 
anything.

It really helps here to understand what a computer is doing when it executes 
code. The Chinese Room is an analogy that makes a computer's operation 
expressible in terms of human experience - specifically, the experience of 
manipulating incomprehensible symbols like Chinese ideograms. All a computer 
really does is apply rules determined in advance to manipulate patterns of 1's 
and 0's. No comprehension is necessary anywhere in that process, and invoking 
comprehension at any point is a mistake.
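
For illustration, here is a toy Python version of that rule-following. The 
rulebook below is an invented lookup table, and the Chinese strings are just 
opaque tokens as far as the code is concerned; it can produce a fluent-looking 
reply with no comprehension anywhere in the loop.

# Rules fixed in advance by whoever wrote the rulebook.
RULEBOOK = {
    "你好": "你好，很高兴认识你",
    "你会说中文吗": "会的",
}

def chinese_room(symbols):
    """Return whatever the rulebook dictates; nothing here understands."""
    return RULEBOOK.get(symbols, "请再说一遍")

print(chinese_room("你好"))   # a plausible reply, produced by pure lookup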

Fortunately, that does not rule out embodied AI designs in which the agent is 
simulated. The processor still has no understanding - it just facilitates the 
simulation.

 As to philosophy, I tend to think of it's relationship
 to AI as somewhat 
 the same as alchemy's relationship to chemistry. That
 is, it's one of 
 the origins of the field, and has some valid ideas, but it
 lacks the 
 hard science and engineering needed to get things actually
 working. This 
 is admittedly perhaps a naive view, and reflects the
 traditional 
 engineering distrust of the humanities. I state it not to
 be critical of 
 philosophy, but to give you an idea how some of us think of
 the area.

As an engineer who builds things everyday (in software), I can appreciate the 
*limits* of philosophy. Spending too much time in that domain can lead to all 
sorts of excesses of thought, castles in the sky, etc. However, any good 
engineer will tell you how important theory is in the sense of creating and 
validating design. And while the theory behind rocket science involves physics, 
chemistry, and fluid dynamics (and others no doubt), the theory of AI involves 
information theory, computer science, and philosophy of mind and knowledge, 
like it or not. If you want to be a good AI engineer, you had better be 
comfortable with all of the above.

Terren

 Terren Suydam wrote:
  Abram,
 
  If that's your response then we don't actually
 agree. 
 
  I agree that the Chinese Room does not disprove strong
 AI, but I think it is a valid critique for purely logical or
 non-grounded approaches. Why do you think the critique fails
 on that level?  Anyone else who rejects the Chinese Room
 care to explain why?
 
  (I know this has been discussed ad nauseum, but that
 should only make it easier to point to references that
 clearly demolish the arguments. It should be noted however
 that relatively recent advances regarding complexity and
 emergence aren't quite as well hashed out with respect
 to the Chinese Room. In the document you linked to, mention
 of emergence didn't come until a 2002 reference
 attributed to Kurzweil.)
 
  If you can't explain your dismissal of the Chinese
 Room, it only reinforces my earlier point that some of you
 who are working on AI aren't doing your homework with
 the philosophy. It's ok to reject the Chinese Room, so
 long as you have arguments to do it (and if you do, I'm
 all ears!) But if you don't think the philosophy is
 important, then you're more than likely doing Cargo Cult
 AI.
 
  (http://en.wikipedia.org/wiki/Cargo_cult)
 
  Terren
 
  --- On Tue, 8/5/08, Abram Demski
 [EMAIL PROTECTED] wrote:
 

  From: Abram Demski [EMAIL PROTECTED]
  Subject: Re: [agi] Groundless reasoning --
 Chinese Room
  To: agi@v2.listbox.com
  Date: Tuesday, August 5, 2008, 9:49 PM
  Terren,
  I agree. Searle's responses are inadequate,
 and the
  whole thought
  experiment fails to prove his point. I think it
 also fails
  to prove
  your point, for the same reason.
 
  --Abram
 
  
 
 
 

 
 
 

 
 
