[agi] The same old explosions?

2007-12-11 Thread Mike Tintner
Essentially, Richard & others are replaying the same old problems of 
computational explosions - see computational complexity in this history of 
cog. sci. review - no?


Mechanical Mind
Gilbert Harman
Mind as Machine: A History of Cognitive Science. Margaret A. Boden. Two 
volumes, xlviii + 1631 pp. Oxford University Press, 2006. $225.


The term "cognitive science," which gained currency in the last half of the 
20th century, is used to refer to the study of cognition - cognitive 
structures and processes in the mind or brain, mostly in people rather than, 
say, rats or insects. Cognitive science in this sense has reflected a 
growing rejection of behaviorism in favor of the study of mind and human 
information processing. The field includes the study of thinking, 
perception, emotion, creativity, language, consciousness and learning. 
Sometimes it has involved writing (or at least thinking about) computer 
programs that attempt to model mental processes or that provide tools such 
as spreadsheets, theorem provers, mathematical-equation solvers and engines 
for searching the Web. The programs might involve rules of inference or 
productions, mental models, connectionist neural networks or other 
sorts of parallel constraint satisfaction approaches. Cognitive science so 
understood includes cognitive neuroscience, artificial intelligence (AI), 
robotics and artificial life; conceptual, linguistic and moral development; 
and learning in humans, other animals and machines.




Among those sometimes identifying themselves as cognitive scientists are 
philosophers, computer scientists, psychologists, linguists, engineers, 
biologists, medical researchers and mathematicians. Some individual 
contributors to the field have had expertise in several of these more 
traditional disciplines. An excellent example is the philosopher, 
psychologist and computer scientist Margaret Boden, who founded the School 
of Cognitive and Computing Sciences at the University of Sussex and is the 
author of a number of books, including Artificial Intelligence and Natural 
Man (1977) and The Creative Mind (1990). Boden has been active in cognitive 
science pretty much from the start and has known many of the other central 
participants.


In her latest book, the lively and interesting Mind as Machine: A History of 
Cognitive Science, the relevant machine is usually a computer, and the 
cognitive science is usually concerned with the sort of cognition that can 
be exhibited by a computer. Boden does not discuss other aspects of the 
subject, broadly conceived, such as the principles and parameters approach 
in contemporary linguistics or the psychology of heuristics and biases. 
Furthermore, she also puts to one side such mainstream developments in 
computer science as data mining and statistical learning theory. In the 
preface she characterizes the book as an essay expressing her view of 
cognitive science as a whole, a thumbnail sketch meant to be read entire 
rather than dipped into.


It is fortunate that Mind as Machine is highly readable, particularly 
because it contains 1,452 pages of text, divided into two very large 
volumes. Because the references and indices (which fill an additional 179 
pages) are at the end of the second volume, readers will need to have it on 
hand as they make their way through the first. Given that together these 
tomes weigh more than 7 pounds, this is not light reading!


Boden's goal, she says, is to show how cognitive scientists have tried to 
find computational or informational answers to frequently asked questions 
about the mind - what it is, what it does, how it works, how it evolved, and 
how it's even possible. How do our brains generate consciousness? Are 
animals or newborn babies conscious? Can machines be conscious? If not, why 
not? How is free will possible, or creativity? How are the brain and mind 
different? What counts as a language?


The first five chapters present the historical background of the field, 
delving into such topics as cybernetics and feedback, and discussing 
important figures such as René Descartes, Immanuel Kant, Charles Babbage, 
Alan Turing and John von Neumann, as well as Warren McCulloch and Walter 
Pitts, who in 1943 cowrote a paper on propositional calculus, Turing 
machines and neuronal synapses. Boden also goes into some detail about the 
situation in psychology and biology during the transition from behaviorism 
to cognitive science, which she characterizes as a revolution. The metaphor 
she employs is that of cognitive scientists entering the house of 
Psychology, whose lodgers at the time included behaviorists, Freudians, 
Gestalt psychologists, Piagetians, ethologists and personality theorists.


Chapter 6 introduces the founding personalities of cognitive science from 
the 1950s. George A. Miller, the first information-theoretic psychologist, 
wrote the widely cited paper "The Magical Number Seven, Plus or Minus Two," 
in 

Re: [agi] The same old explosions?

2007-12-11 Thread Richard Loosemore

Mike Tintner wrote:
Essentially, Richard & others are replaying the same old problems of 
computational explosions - see computational complexity in this 
history of cog. sci. review - no?


No:  this is a misunderstanding of complexity, unfortunately (cf. the 
footnote on p. 1 of my AGIRI paper):  computational complexity refers to 
how computations scale up, which is not at all the same as the 
"complexity" issue, which is about whether or not a particular system 
can be explained.


To see the difference, imagine an algorithm that was good enough to be 
intelligent, but scaling it up to the size necessary for human-level 
intelligence would require a computer the size of a galaxy.  Nothing 
wrong with the algorithm, and maybe with a quantum computer it would 
actually work.  This algorithm would be suffering from a computational 
complexity problem.


By contrast, there might be proposed algorithms for implementing a 
human-level intelligence which will never work, no matter how much they 
are scaled up (indeed, they may actually deteriorate as they are scaled 
up).  If this was happening because the designers were not appreciating 
that they needed to make subtle and completely non-obvious changes in 
the algorithm, to get its high-level behavior to be what they wanted it 
to be, and if this were because intelligence requires 
complexity-generating processes inside the system, then this would be a 
complex systems problem.


Two completely different issues.
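
A toy illustration of the first, scaling sense of complexity, in Python; the 
two functions and the numbers are invented purely for illustration (both 
algorithms are assumed correct, one simply cannot be scaled up):

# Both functions count the steps of a hypothetical algorithm on a problem
# of size n.  Neither algorithm is "wrong"; only their scaling differs.

def steps_polynomial(n):
    # e.g. an algorithm that compares every pair of items
    return n ** 2

def steps_exhaustive(n):
    # e.g. a brute-force search over all subsets of n items
    return 2 ** n

if __name__ == "__main__":
    for n in (10, 30, 60, 100):
        print(n, steps_polynomial(n), steps_exhaustive(n))
    # At n = 100 the exhaustive search needs roughly 1.3e30 steps: nothing
    # is wrong with it as an algorithm, it simply cannot be scaled up.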


Richard Loosemore







RE: [agi] AGI and Deity

2007-12-11 Thread Ed Porter
Mike:

MIKE TINTNER# Science's autistic, emotionally deprived, insanely
rational nature in front of the supernatural (if it exists), and indeed the
whole world,  needs analysing just as much as the overemotional,
underrational fantasies of the religious about the supernatural.

ED PORTER# I like the metaphor of Science as Autistic.  It
emphasizes the emotional disconnect from human feeling that science can have.

I feel that rationality has no purpose other than to serve human values and
feelings (once truly intelligent machines arrive on the scene, that statement
might have to be modified).  As I think I have said on this list before,
without values to guide your thoughts, the chance that you would think
anything that has anything to do with maintaining your own existence
approaches zero as a limit, because of the combinatorial explosion of
possible thoughts that would arise if they were not constrained by emotional
guidance.

Therefore, from the human standpoint, the main use of science should be to
help serve our physical, emotional, and intellectual needs.

I agree that science will increasingly encroach upon many areas previously
considered the realm of the philosopher and priest.  It has been doing so
since at least the Age of Enlightenment, and it is continuing to do so, with
advances in cosmology, theoretical physics, bioscience, brain science, and
AGI.

With the latter two we should pretty much understand the human soul within
several decades.

I hope we have the wisdom to use that new knowledge well.

Ed Porter


-Original Message-
From: Mike Tintner [mailto:[EMAIL PROTECTED] 
Sent: Monday, December 10, 2007 11:07 PM
To: agi@v2.listbox.com
Subject: Re: [agi] AGI and Deity

Ed: I would add that there probably is something to the phenomenon that John
Rose is referring to, i.e., that faith seems to be valuable to many people.
Perhaps it is somewhat like owning a lottery ticket before its drawing.  It
can offer desired hope, even if the hope might be unrealistic.  But whatever
you think of the odds, it is relatively clear that religion does make some
people's lives seem more meaningful to them.

You realise of course that what you're seeing on this and the 
singularitarian board, over and over, is basically the same old religious 
fantasies - the same yearning for the Second Coming - the same old search 
for salvation - only in a modern, postreligious form?

Everyone has the same basic questions about the nature of the world - 
everyone finds their own answers - which always in every case involve a 
mixture of faith and scepticism in the face of enormous mystery.

The business of science in the face of these questions is not to ignore 
them, nor to try to psychoanalyse away people's attempts at answers as a 
priori weird or linked to a deficiency of this or that faculty.

The business of science is to start dealing with these questions - to find 
out if there is a God and what the hell that entails - and not leave it 
up to philosophy.

Science's autistic, emotionally deprived, insanely rational nature in front 
of the supernatural (if it exists), and indeed the whole world,  needs 
analysing just as much as the overemotional, underrational fantasies of the 
religious about the supernatural.

Science has fled from the question of God just as it has fled from the 
"soul" - in plain parlance, the self deliberating all the time in you and 
me, producing these posts and all our dialogues - only that self, for sure, 
exists, and there is no excuse whatsoever for science's refusal to study it 
in action.

The religious 'see' too much; science is too heavily blinkered. But the walls 
between them - between their metaphysical worldviews - are starting to 
crumble...




RE: [agi] news bit: DRAM Appliance 10TB extended memory in one rack

2007-12-11 Thread Ed Porter
Dave,

Such large memories are cool, but of course AGI requires a lot of processing
power to go with the memory.  The price cited at the bottom of the article
below for 120GB wasn't that much less than the price for which you could
buy 128GB with 8 quad-core Opterons in one of the links you sent me
last week.

Ed Porter

-Original Message-
From: David Hart [mailto:[EMAIL PROTECTED] 
Sent: Monday, December 10, 2007 11:40 PM
To: agi@v2.listbox.com
Subject: [agi] news bit: DRAM Appliance 10TB extended
memory in one rack

Hi,

Some news with interesting implications for future AGI
development, from http://www.theregister.co.uk/2007/12/10/amd_violin_memory/
- more at http://www.violin-memory.com/
10TB of DRAM? Why not?
By Ashlee Vance in Mountain View
Published Monday 10th December 2007 17:06 GMT
AMD and Violin Memory have ignited a love affair
around Hypertransport that should result in what the industry technically
refers to as huge DRAM appliances being connected to Opteron-based servers.
Violin Memory Inc. had eluded us before today's
announcement, which is either the fault of the company's PR staff or our
lack of attention to e-mail. No matter. We've spotted this start-up now and
don't plan to let go because it's banging away at one of the more intriguing
bits of the server/storage game - MAS or memory attached storage.

The company sells a Violin 1010 unit that holds up
to 504GB of DRAM in a 2U box. Fill a rack, and you're looking at 10TB of
DRAM.
It should be noted that each appliance can support
up to 84 virtual modules as well. Customers can create 6GB modules and add
RAID-like functions between modules.
The DRAM approach to storage is, of course, very
expensive when compared to spinning disks, but does offer benefits such as
lower power consumption and higher performance. Most of the start-ups
dabbling in the MAS space - like Gear6
http://www.theregister.co.uk/2007/08/28/memory_appliance_gear6_cachefx/  -
zero in on the performance gains and aim their gear at any company with a
massive database. 
Now Violin plans to tap right into AMD's
Hypertransport technology to link these memory appliances with servers. The
cache coherency protocol of Hypertransport technology will enable several
processors to share extensive memory resources from one or more Violin
Memory Appliances. This extended memory model will enable these servers to
support much larger datasets, the companies said.
An AMD Opteron processor-based server connected to
a HyperTransport technology-enabled Violin Memory Appliance will have both
directly connected memory and Extended Memory resources. Directly connected
memory can be selected for bandwidth and latency while the Extended Memory
can be much larger and located in the Memory Appliance. Applications such as
large databases will benefit from the large-scale memory footprints enabled
through Extended Memory.
The two companies expect these new systems to arrive
by the second half of 2008.
Those of you who want to try Violin's gear now can
get a 120GB starter kit for $50,000. (r)


Re: [agi] AGI and Deity

2007-12-11 Thread Mark Waser

Hey Ben,

   Any chance of instituting some sort of moderation on this list?


- Original Message - 
From: Ed Porter [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Tuesday, December 11, 2007 10:18 AM
Subject: RE: [agi] AGI and Deity




Re: [agi] The same old explosions?

2007-12-11 Thread Benjamin Goertzel
Self-organizing complexity and computational complexity
are quite separate technical uses of the word "complexity", though I
do think there are subtle relationships.

As an example of a relationship btw the two kinds of complexity, look
at Crutchfield's
work on using formal languages to model the symbolic dynamics generated by
dynamical systems as they approach chaos.  He shows that as the parameter
values of a dynamical system approach those that induce a chaotic regime in
the system, the formal languages implicit in the symbolic-dynamics
representation
of the system's dynamics pass through more and more complex language classes.

And of course, recognizing a grammar in a more complex language class has
a higher computational complexity.

So, Crutchfield's work shows a connection btw self-organizing complexity and
computational complexity, via the medium of formal languages and symbolic
dynamics.
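
A rough sketch of the kind of experiment this refers to, assuming the standard
logistic map and using a crude count of distinct symbol words as a stand-in
for the complexity of the implied language (Python; illustrative only, not
Crutchfield's actual machinery):

# Symbolic dynamics of the logistic map x -> r*x*(1-x).  Each orbit is
# turned into a 0/1 string (x < 0.5 -> '0', else '1'), and the number of
# distinct length-k words is used as a crude proxy for the complexity of
# the implied formal language.  As r approaches the onset of chaos
# (~3.5699) and moves beyond it, the count grows.

def symbolic_orbit(r, n=20000, burn=1000, x=0.4):
    symbols = []
    for i in range(n + burn):
        x = r * x * (1.0 - x)
        if i >= burn:
            symbols.append('0' if x < 0.5 else '1')
    return ''.join(symbols)

def word_count(s, k=8):
    # number of distinct k-symbol words occurring in the sequence
    return len({s[i:i + k] for i in range(len(s) - k)})

if __name__ == "__main__":
    for r in (3.2, 3.5, 3.56, 3.5699, 3.8, 4.0):
        print(r, word_count(symbolic_orbit(r)))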

As another, more pertinent example, the Novamente design seeks to avoid
the combinatorial explosions implicit in each of its individual AI
learning/reasoning components, via integrating these components together
in an appropriate way.  This integration, via its impact on the overall
system dynamics, leads to a certain degree of complexity in the
self-organizing-systems sense.

-- Ben G


Re: [agi] None of you seem to be able ...

2007-12-11 Thread James Ratcliff
*I just want to jump in here and say I appreciate the content of this post as 
opposed to many of the posts of late which were just name calling and 
bickering... hope to see more content instead.*

Richard Loosemore [EMAIL PROTECTED] wrote: Ed Porter wrote:
 Jean-Paul,
 
 Although complexity is one of the areas associated with AI where I have less
 knowledge than many on the list, I was aware of the general distinction you
 are making.  
 
 What I was pointing out in my email to Richard Loosemore was that the
 definitions in his paper Complex Systems, Artificial Intelligence and
 Theoretical Psychology, for irreducible computability and global-local
 interconnect themselves are not totally clear about this distinction, and
 as a result, when Richard says that those two issues are an unavoidable part
 of AGI design that must be much more deeply understood before AGI can
 advance, by the more loose definitions which would cover the types of
 complexity involved in large matrix calculations and the design of a massive
 supercomputer, of course those issues would arise in AGI design, but it's no
 big deal because we have a long history of dealing with them.
 
 But in my email to Richard I said I was assuming he was not using the looser
 definitions of these words, because if he were, they would not present
 the unexpected difficulties of the type he has been predicting.  I said I
 thought he was dealing more with the potentially unruly type of complexity that I
 assume you were talking about.
 
 I am aware of that type of complexity being a potential problem, but I have
 designed my system to hopefully control it.  A modern-day well functioning
 economy is complex (people at the Santa Fe Institute often cite economies as
 examples of complex systems), but it is often amazingly unchaotic
 considering how loosely it is organized and how many individual entities it
 has in it, and how many transitions it is constantly undergoing.  Usually,
 unless something bangs on it hard (such as having the price of a major
 commodity all of a sudden triple), it has a fair amount of stability, while
 constantly creating new winners and losers (which is a productive form of
 mini-chaos).  Of course in the absence of regulation it is naturally prone
 to boom and bust cycles.  

Ed,

I now understand that you have indeed heard of complex systems before, 
but I must insist that in your summary above you have summarized what 
they are in such a way that completely contradicts what they are!

A complex system such as the economy can and does have modes in 
which it appears to be stable.  This does not contradict the complexity 
at all.  A system is not complex because it is unstable.

I am struggling here, Ed.  I want to go on to explain exactly what I 
mean (and what complex systems theorists mean) but I cannot see a way to 
do it without writing half a book this afternoon.

Okay, let me try this.

Imagine that we got a bunch of computers and connected them with a 
network that allowed each one to talk to (say) the ten nearest machines.

Imagine that each one is running a very simple program:  it keeps a 
handful of local parameters (U, V, W, X, Y) and it updates the values of 
its own parameters according to what the neighboring machines are doing 
with their parameters.

How does it do the updating?  Well, imagine some really messy and 
bizarre algorithm that involves looking at the neighbors' values, then 
using them to cross reference each other, and introduce delays and 
gradients and stuff.

On the face of it, you might think that the result will be that the U V 
W X Y values just show a random sequence of fluctuations.

Well, we know two things about such a system.

1) Experience tells us that even though some systems like that are just 
random mush, there are some (a noticeably large number in fact) that 
have overall behavior that shows 'regularities'.  For example, much to 
our surprise we might see waves in the U values.  And every time two 
waves hit each other, a vortex is created for exactly 20 minutes, then 
it stops.  I am making this up, but that is the kind of thing that could 
happen.

2) The algorithm is so messy that we cannot do any math to analyse and 
predict the behavior of the system.  All we can do is say that we have 
absolutely no techniques that will allow us to make mathematical progress on 
the problem today, and we do not know if at ANY time in future history 
there will be a mathematics that will cope with this system.

What this means is that the waves and vortices we observed cannot be 
explained in the normal way.  We see them happening, but we do not 
know why they do.  The bizarre algorithm is the low level mechanism 
and the waves and vortices are the high level behavior, and when I say 
there is a Global-Local Disconnect in this system, all I mean is that 
we are completely stuck when it comes to explaining the high level in 
terms of the low level.
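
A minimal sketch of this kind of toy system, assuming a ring of simulated
machines, each holding parameters U, V, W, X, Y and updating them from its ten
nearest neighbours with a deliberately messy, made-up rule (Python; the point
is only that global regularities can appear which the local rule does not
obviously explain):

import random

# Each "machine" holds parameters U, V, W, X, Y and updates them from its
# ten nearest neighbours on a ring, using an arbitrary, messy rule.  The
# rule is invented; the interest is in whether structure (waves, clusters)
# shows up in the printed rows of U values.

N = 200          # number of machines
K = 5            # neighbours on each side (10 nearest in total)
PARAMS = "UVWXY"

random.seed(1)
state = [{p: random.random() for p in PARAMS} for _ in range(N)]

def step(state):
    new = []
    for i, cell in enumerate(state):
        neigh = [state[(i + d) % N] for d in range(-K, K + 1) if d != 0]
        updated = {}
        for p in PARAMS:
            avg = sum(n[p] for n in neigh) / len(neigh)
            # messy cross-referencing of the cell's other parameters
            cross = cell["U"] * cell["Y"] - 0.5 * cell["W"]
            val = avg + 0.3 * cross + 0.05 * (random.random() - 0.5)
            updated[p] = val % 1.0   # keep values in [0, 1)
        new.append(updated)
    return new

if __name__ == "__main__":
    for t in range(200):
        state = step(state)
        if t % 50 == 0:
            # crude picture of the U values across the ring
            print("".join("#" if c["U"] > 0.5 else "." for c in state))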

Believe me, it is childishly easy 

Re: [agi] The same old explosions?

2007-12-11 Thread Richard Loosemore


Mike,

You are talking about two different occurrences of a computational 
explosion here, so we need to distinguish them.


One is a computational explosion that occurs at design time:   this is 
when a researcher gets an algorithm to do something on a toy problem, 
but then they figure out how the algorithm will scale when it is applied 
to a full-size problem and discover that it will just need too much 
computing power.  This explosion doesn't happen in the AGI, it happens 
in the calculations done by the AGI designer.


The second type of explosion might occur in an actual working system 
(although strictly speaking this would not be called a computational 
explosion so much as a screw up).  If some AGI designer inserts an 
algorithm that, say, requires the system to engage in an (almost) 
infinitely long calculation to make a decision at some point, and if the 
programmer allows the system to start this calculation and then wait for 
it to end, then the system will hang.


AI and Cog Sci have not been obsessed with computational explosions: 
it is just a fact that any model that suffers from one is dumb, and 
there are many that do.


They have no connection to rational algorithms.  Can happen in any 
kind of system.  (Happens in Microsoft Windows all the time, and if 
that's rational I'll eat the entire town of Redmond, WA.)


It is certainly true that some styles of computation are more prone to 
hanging than others.  But really it is a pretty straightforward matter to 
write algorithms in such a way that this is not a problem:  it may slow 
some algorithms down a bit, but that is not a fundamental issue.
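
A small sketch of that straightforward fix, assuming an anytime style of
computation that checks a deadline and returns its best answer so far instead
of running open-endedly (Python; illustrative only):

import time
import itertools

# A potentially open-ended computation is given a deadline; when the
# deadline passes it stops and submits the best answer found so far,
# rather than hanging the rest of the system.

def best_tour_so_far(points, deadline_s=0.05):
    best, best_len = None, float("inf")
    start = time.monotonic()
    for perm in itertools.permutations(points):
        if time.monotonic() - start > deadline_s:
            break                      # time is up: submit best guess
        length = sum(abs(perm[i] - perm[i + 1]) for i in range(len(perm) - 1))
        if length < best_len:
            best, best_len = perm, length
    return best, best_len

if __name__ == "__main__":
    points = [3, 9, 1, 7, 5, 8, 2, 6, 4, 0, 10, 12]
    tour, length = best_tour_so_far(points)
    print(length, tour)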


For what it's worth, my system does indeed stay well away from 
situations in which it might get locked up.  It is always happy to stop 
what it's doing and go for a drink.


But remember, all this is about hanging or livelock, not about the 
design problem.



Richard Loosemore





Mike Tintner wrote:
Thanks. But one way and another, although there are different 
variations, cog sci and AI have been obsessed with computational 
explosions? Ultimately, it seems to me, these are all the problems of 
algorithms - of a rigid, rational approach and system - which inevitably 
get stuck in dealing with real world situations, that don't fit or are 
too computationally demanding for their models. (And can you *guarantee* 
that your particular complex approach isn't going to run into its own 
explosions?)


These explosions never occur, surely, in the human brain. For at least 
two reasons.


Crucially, the brain has a self which can stop any computation or train 
of thought and say:  bugger this - what's the point? -  I'm off for a 
drink. An essential function. In all seriousness.


Secondly, the brain doesn't follow closed algorithms, anyway, as we were 
discussing. And it doesn't have a single model but rather always has 
conflicting models. (I can't remember whether it was John or someone else 
recently who said "I've learned that I can live with conflicting 
models/worldviews".)





Re: [agi] None of you seem to be able ...

2007-12-11 Thread James Ratcliff
However, part of the key to intelligence is **self-tuning**.

I believe that if an AGI system is built the right way, it can effectively
tune its own parameters, hence adaptively managing its own complexity.

I agree with Ben here; isn't one of the core concepts of AGI the ability to 
modify its behavior and to learn?

This will have to be done with a large amount of self-tuning, as we will not be 
changing parameters for every action; that wouldn't be efficient.  (This part 
does not require actual self-code writing just yet.)

It's more a matter of finding a way to guide the AGI in changing the 
parameters, checking the changes, and reflecting back over the changes to see if 
they are effective for future events.

What is needed at some point is being able to converse at a high level with the 
AGI and correct its behaviour, such as "Don't touch that, because it will 
have a bad effect", and having the AGI do all of the parameter changing and link 
building and strengthening/weakening necessary in its memory.  It may do this 
in a very complex way and may affect many parts of its systems, but by multiple 
reinforcement we should be able to guide the overall behaviour, if not all of 
the parameters directly.

James Ratcliff


Benjamin Goertzel [EMAIL PROTECTED] wrote:  Conclusion:  there is a danger 
that the complexity that even Ben agrees
 must be present in AGI systems will have a significant impact on our
 efforts to build them.  But the only response to this danger at the
 moment is the bare statement made by people like Ben that I do not
 think that the danger is significant.  No reason given, no explicit
 attack on any component of the argument I have given, only a statement
 of intuition, even though I have argued that intuition cannot in
 principle be a trustworthy guide here.

But Richard, your argument ALSO depends on intuitions ...

I'll try, though, to more concisely frame the reason I think your argument
is wrong.

I agree that AGI systems contain a lot of complexity in the dynamical-
systems-theory sense.

And I agree that tuning all the parameters of an AGI system externally
is likely to be intractable, due to this complexity.

However, part of the key to intelligence is **self-tuning**.

I believe that if an AGI system is built the right way, it can effectively
tune its own parameters, hence adaptively managing its own complexity.

Now you may say there's a problem here: If AGI component A2 is to
tune the parameters of AGI component A1, and A1 is complex, then
A2 has got to also be complex ... and who's gonna tune its parameters?

So the answer has got to be that: To effectively tune the parameters
of an AGI component of complexity X, requires an AGI component of
complexity a bit less than X.  Then one can build a self-tuning AGI system,
if one does the job right.

Now, I'm not saying that Novamente (for instance) is explicitly built
according to this architecture: it doesn't have N components wherein
component A_N tunes the parameters of component A_(N+1).

But in many ways, throughout the architecture, it relies on this sort of
fundamental logic.

Obviously it is not the case that every system of complexity X can
be parameter-tuned by a system of complexity less than X.  The question
however is whether an AGI system can be built of such components.
I suggest the answer is yes -- and furthermore suggest that this is
pretty much the ONLY way to do it...

Your intuition is that this is not possible, but you don't have a proof
of this...

And yes, I realize the above argument of mine is conceptual only -- I haven't
given a formal definition of complexity.  There are many, but that would
lead into a mess of math that I don't have time to deal with right now,
in the context of answering an email...

-- Ben G
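
A toy sketch of the "simpler component tunes the more complex component" idea
described above: a noisy hill-climber (A1) has a step-size parameter, and a
much simpler controller (A2) adjusts that parameter by watching whether
recent progress has stalled (Python; the design is invented for illustration
and is not a description of Novamente):

import random

random.seed(0)

def objective(x):
    return -(x - 3.7) ** 2             # best value at x = 3.7

class HillClimber:                     # "A1": the more complex component
    def __init__(self, step=1.0):
        self.x, self.step = 0.0, step
    def iterate(self):
        cand = self.x + random.uniform(-self.step, self.step)
        if objective(cand) > objective(self.x):
            self.x = cand
        return objective(self.x)

class StepTuner:                       # "A2": strictly simpler than A1
    def __init__(self):
        self.last = float("-inf")
    def tune(self, climber, score):
        # if progress has stalled, shrink the step; otherwise leave it alone
        if score <= self.last + 1e-6:
            climber.step *= 0.9
        self.last = score

if __name__ == "__main__":
    a1, a2 = HillClimber(), StepTuner()
    for t in range(200):
        score = a1.iterate()
        a2.tune(a1, score)
    print(round(a1.x, 3), round(a1.step, 4))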


Re: [agi] The same old explosions?

2007-12-11 Thread Richard Loosemore

Benjamin Goertzel wrote:

Self-organizing complexity and computational complexity
are quite separate technical uses of the word complexity, though I
do think there
are subtle relationships.

As an example of a relationship btw the two kinds of complexity, look
at Crutchfield's
work on using formal languages to model the symbolic dynamics generated by
dynamical systems as they approach chaos.  He shows that as the parameter
values of a dynamical system approach those that induce a chaotic regime in
the system, the formal languages implicit in the symbolic-dynamics
representation
of the system's dynamics pass through more and more complex language classes.


This is true:  you can find connections between the two usages, even 
though they start out being in principle different.


The Crutchfield work sounds like a good illustration of the "complexity 
= edge of chaos" idea.




And of course, recognizing a grammar in a more complex language class has
a higher computational complexity.

So, Crutchfield's work shows a connection btw self-organizing complexity and
computational complexity, via the medium of formal languages and symbolic
dynamics.

As another, more pertinent example, the Novamente design seeks to avoid
the combinatorial explosions implicit in each of its individual AI
learning/reasoning
components, via integrating these components together in an appropriate way.
This integration, via its impact on the overall system dynamics,
leads to a certain degree of complexity in the self-organizing-systems sense


Indeed:  that being one of the ways that complexity creeps in.  All AI 
systems have to allow for the fact that some mechanisms have to be told 
to time out and submit their best guess, for example, and when that 
happens the overall behavior of the system becomes a good deal more 
subtly related to its design spec.




Richard Loosemore






RE: Hacker intelligence level [WAS Re: [agi] Funding AGI research]

2007-12-11 Thread James Ratcliff
Here's a basic abstract I did last year I think:

http://www.falazar.com/AI/AAAI05_Student_Abtract_James_Ratcliff.pdf

I would like to work with others on a full-fledged Representation system that 
could use these kinds of techniques. 
I hacked this together by myself, so I know a real team could put this kind of 
stuff to much better use.

James


Ed Porter [EMAIL PROTECTED] wrote:  James,
   
  Do you have any description or examples of your results?  
   
  This is something I have been telling people for years: that you should be 
able to extract a significant amount of (but probably far from all) world 
knowledge by scanning large corpora of text.  I would love to see how well it 
actually works for a given size of corpus, and for a given level of 
algorithmic sophistication.
   
  Ed Porter
   
  -Original Message-
 From: James Ratcliff [mailto:[EMAIL PROTECTED] 
 Sent: Thursday, December 06, 2007 4:51 PM
 To: agi@v2.listbox.com
 Subject: RE: Hacker intelligence level [WAS Re: [agi] Funding AGI research]
   
  Richard,
   What is your specific complaint about the 'viability of the framework'?
 
 
 Ed,
   This line of data gathering is very interesting to me as well, though I 
quickly found that using all web sources devolved into insanity.
 By using scanned text novels, I was able to extract lots of relational 
information on a range of topics. 
With a well-defined ontology system, and some human overview, a large 
amount of information can be extracted and many probabilities learned.
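
As a very rough sketch of the pattern-based end of this, here is what a
minimal extractor over raw text might look like, assuming nothing more than a
few hand-written lexical patterns (Python; this is not the system described
in the linked abstract):

import re
from collections import Counter

# Crude illustration of pulling relational "world knowledge" out of raw
# text with hand-written patterns.  A sketch of the general idea only.

PATTERNS = [
    (r"(\w+) is a (\w+)", "isa"),
    (r"(\w+) has a (\w+)", "has"),
    (r"(\w+) lives in (\w+)", "lives_in"),
]

def extract(text):
    found = Counter()
    for pattern, rel in PATTERNS:
        for a, b in re.findall(pattern, text, flags=re.IGNORECASE):
            found[(a.lower(), rel, b.lower())] += 1
    return found

if __name__ == "__main__":
    corpus = ("A dog is a mammal. The dog has a tail. "
              "John lives in Paris. A cat is a mammal.")
    for triple, count in extract(corpus).items():
        print(triple, count)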
 
 James
 
 
 Ed Porter [EMAIL PROTECTED] wrote:
  
 RICHARD LOOSEMORE=
 You are implicitly assuming a certain framework for solving the problem of 
representing knowledge ... and then all your discussion is about whether or not 
it is feasible to implement that framework (to overcome various issues to do 
with searches that have to be done within that framework).
 
 But I am not challenging the implementation issues, I am challenging the 
viability of the framework itself.
 
 JAMES--- What e
 
 
 ED PORTER= So what is wrong with my framework? What is wrong with a
 system of recording patterns, and a method for developing compositions and
 generalities from those patterns, in multiple hierarchical levels, and for
 indicating the probabilities of certain patterns given certain other patterns,
 etc.? 
 
 I know it doesn't genuflect before the altar of complexity. But what is
 wrong with the framework other than the fact that it is at a high level and
 thus does not explain every little detail of how to actually make an AGI
 work?
 
 
 
 RICHARD LOOSEMORE= These models you are talking about are trivial
 exercises in public 
 relations, designed to look really impressive, and filled with hype 
 designed to attract funding, which actually accomplish very little.
 
 Please, Ed, don't do this to me. Please don't try to imply that I need 
 to open my mind any more. The implication seems to be that I do not 
 understand the issues in enough depth, and need to do some more work to 
 understand you points. I can assure you this is not the case.
 
 
 
 ED PORTER= Shastri's Shruti is a major piece of work. Although it is
 a highly simplified system, for its degree of simplification it is amazingly
 powerful. It has been very helpful to my thinking about AGI. Please give
 me some excuse for calling it a trivial exercise in public relations. I
 certainly have not published anything as important. Have you?
 
 The same goes for Mike Collins's parsers, which, at least several years ago, I was
 told by multiple people at MIT were considered among the most accurate NL
 parsers around. Is that just a trivial exercise in public relations? 
 
 With regard to Hecht-Nielsen's work, if it does half of what he says it does
 it is pretty damned impressive. It is also a work I think about often when
 thinking how to deal with certain AI problems. 
 
 Richard, if you insultingly dismiss such valid work as trivial exercises in
 public relations it sure as hell seems as if either you are quite lacking
 in certain important understandings -- or you have a closed mind -- or both.
 
 
 
 Ed Porter
 
  
 
 
 ___
 James Ratcliff - http://falazar.com
 Looking for something...



RE: [agi] AGI and Deity

2007-12-11 Thread John G. Rose
From: Joshua Cowan [mailto:[EMAIL PROTECTED]
 
 It's interesting that the field of memetics is moribund (e.g., the
 Journal
 of Memetics hasn't published in two years) but the meme of memetics is
 alive
 and well. I wonder, do any of the AGI researchers find the concept of
 Memes
 useful in describing how their proposed AGIs would acquire or transfer
 information?
 

Not sure if it is moribund. Maybe they have discovered that Cultural
Informational Transfer may have more non-genetically aligned properties than
was originally claimed? There is overlap with other fields as well, so
contention exists. But the concepts are there and they are not going away; the
medium is much different now, enhanced by computers and the internet.

John



Re: [agi] None of you seem to be able ...

2007-12-11 Thread Richard Loosemore

James Ratcliff wrote:

 However, part of the key to intelligence is **self-tuning**.

 I believe that if an AGI system is built the right way, it can effectively
 tune its own parameters, hence adaptively managing its own complexity.

I agree with Ben here; isn't one of the core concepts of AGI the ability 
to modify its behavior and to learn?


That might sound like a good way to proceed, but now consider this.

Suppose that the AGI is designed with a symbol system in which the 
symbols are very much mainstream-style symbols, and one aspect of them 
is that there are truth-values associated with the statements that use 
those symbols (as in "I like cats", t=0.9).


Now suppose that the very fact that truth values were being *explicitly* 
represented and manipulated by the system was causing it to run smack 
bang into the Complex Systems Problem.


In other words, suppose that you cannot get that kind of design to work 
because when it scales up the whole truth-value maintenance mechanism 
just comes apart.


Suppose, further, that the only AGI systems that really do work are ones 
in which the symbols never use truth values but use other stuff (for 
which there is no interpretation) and that the thing we call a truth 
value is actually the result of an operator that can be applied to a 
bunch of connected symbols.  This [truth-value = external operator] idea 
is fundamentally different from the [truth-value = internal parameter] idea, 
obviously.
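
A minimal sketch of the contrast in code, with both representations invented
as toy versions just to show the structural difference between a stored
t-value and a t-value computed by an operator over connected symbols (Python):

# (1) truth-value = internal parameter: the t-value is stored explicitly
#     on the statement itself.
class Statement:
    def __init__(self, text, tv):
        self.text, self.tv = text, tv

like_cats = Statement("I like cats", tv=0.9)

# (2) truth-value = external operator: symbols carry only uninterpreted
#     connection weights; a t-value exists only as the result of an
#     operator applied from outside to a bunch of connected symbols.
class Symbol:
    def __init__(self, name):
        self.name = name
        self.links = {}                # uninterpreted connection weights
    def connect(self, other, weight):
        self.links[other.name] = weight

def truth_operator(symbols):
    # arbitrary choice of operator: normalised sum of the connection
    # weights among the given symbols
    total, count = 0.0, 0
    for s in symbols:
        for other in symbols:
            if other.name in s.links:
                total += s.links[other.name]
                count += 1
    return total / count if count else 0.0

i, like, cats = Symbol("I"), Symbol("like"), Symbol("cats")
i.connect(like, 0.95)
like.connect(cats, 0.85)

print(like_cats.tv)                      # 0.9, read off a stored parameter
print(truth_operator([i, like, cats]))   # 0.9, computed by an operator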


Now here is my problem:  how would parameter-tuning ever cause that 
first AGI design to realise that it had to abandon one bit of its 
architecture and redesign itself?


Surely this is more than parameter tuning?  There is no way it could 
simply stop working and completely redesign all of its internal 
architecture to not use the t-values, and make the operators etc etc.!


So here is the rub:  if the CSP does cause this kind of issue (and that 
is why I invented the CSP idea in the first place, because it was 
precisely those kinds of architectural issues that seemed wrong), then 
parameter tuning will never be good enough; it will take a huge 
and very serious new approach to making our AGI designs flexible at the 
design level.



Does that make sense?




Richard Loosemore









Re: Human Irrationality [WAS Re: [agi] None of you seem to be able ...]

2007-12-11 Thread James Ratcliff
"Irrationality" is used to describe thinking and actions which are, or appear 
to be, less useful or logical than the other alternatives, 
and "rational" would be the opposite of that.

This line of thinking is more concerned with the behaviour of the entities, 
which requires goal orientation and other things.

An irrational being is NOT working effectively towards the goal, according to 
this.  This may be necessary in order to determine new routes and unique solutions 
to a problem, and according to the description it will be included in most AGIs I 
have heard described so far.

The other definition which seems to be in the air around here is 
"irrational" - acting without reason or logic.

An entity that acts entirely without reason or logic is a totally random being: 
it will choose to do something for no reason, and will not ever find any goals or 
solutions without accidentally hitting them.

In AGI terms, any entity given multiple equally rewarding alternative paths to 
a goal may randomly select any of them.
This may be considered acting without reason, as there was no real basis for 
choosing path 1 as opposed to path 2, but it may also be very reasonable: given 
any situation where either path can be chosen, choosing one is reasonable.  
(Choosing no path at that point would indeed be irrational and pointless.)

I haven't seen any solutions proposed that require any real level of acting 
without reason, and neural nets and the rest are all reasonable, though the 
reasoning may be complex and hidden from us, or hard to understand.

The example given previously, of a computer system that changes its 
thinking in the middle of discovering a solution, is not irrational: it is 
just continuing to follow its rules, it can still change those rules where it 
allows, and it may have very good reason for doing so.

James Ratcliff

Mike Tintner [EMAIL PROTECTED] wrote:  Richard: Mike,
 I think you are going to have to be specific about what you mean by 
 "irrational" because you mostly just say that all the processes that could 
 possibly exist in computers are rational, and I am wondering what else is 
 there that "irrational" could possibly mean.  I have named many processes 
 that seem to me to fit the "irrational" definition, but without being too 
 clear about it you have declared them all to be just "rational", so now I 
 have no idea what you can be meaning by the word.

Richard,

Er, it helps to read my posts. From my penultimate post to you:

If a system can change its approach and rules of reasoning at literally any 
step of
problem-solving, then it is truly crazy/irrational (think of a crazy
path). And it will be capable of producing all the human irrationalities
that I listed previously - like not even defining or answering the problem.
It will by the same token have the capacity to be truly creative, because it
will ipso facto be capable of lateral thinking at any step of
problem-solving. Is your system capable of that? Or anything close? Somehow
I doubt it, or you'd already be claiming the solution to both AGI and
computational creativity.

A rational system follows a set of rules in solving a problem (which can 
include rules that self-modify according to metarules); a creative, 
irrational system can change/break/create any and all rules (including 
metarules) at any point of solving a problem - the ultimate, by 
definition, in adaptivity. (Much like you, and indeed all of us, change the 
rules of engagement much of the time in our discussions here.)

Listen, no need to reply - because you're obviously not really interested. 
To me that's ironic, though, because this is absolutely the most central 
issue there is in AGI. But no matter.


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?;


   

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=74598181-2b0ae5

Re: [agi] None of you seem to be able ...

2007-12-11 Thread Mike Tintner


Richard: Suppose, further, that the only AGI systems that really do work 
are ones in which the symbols never use truth values but use "other stuff" 
(for which there is no interpretation), and that the thing we call a "truth 
value" is actually the result of an operator that can be applied to a 
bunch of connected symbols.  This [truth-value = external operator] idea 
is fundamentally different from the [truth-value = internal parameter] idea, 
obviously.


I almost added to my last post that another reason the brain never seizes up 
is that its concepts (and its entire representational operations) are 
open-ended trees, relatively ill-defined and ill-structured, and therefore 
endlessly open to reinterpretation.  Super-general concepts like "go away", 
"come here", "put this over there", or indeed "is that true?" enable it to 
be flexible and creatively adaptive, especially if it gets stuck - and to find 
other ways, for example, to "go", "come", "put" or deem as true, etc.


Is this something like what you are on about? 



-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=74601069-e39ad4


RE: Hacker intelligence level [WAS Re: [agi] Funding AGI research]

2007-12-11 Thread Ed Porter
James,

 

I read your paper.  Your project seems right on the mark.  It provides a
domain-limited example of the general type of learning algorithm that will
probably be the central learning algorithm of AGI, i.e., finding patterns,
and hierarchies of patterns in the AGI's experience in a largely
unsupervised manner.

 

The application of this type of learning algorithm to text makes sense
because, with the web, text is one of the easiest types of experience to get
in large volumes.  It is very much the type of project I have been
advocating for years.  When I first heard of the Google project to put
millions of books into digital form, I assumed it was for exactly such
purposes, and told multiple people so.  (Ditto for the CMU million book
project.)  It seems to be the conventional wisdom that Google is not using
its vast resources for such an obvious purpose, but I wouldn't be so sure.

 

It seems to me that fiction books, at an estimated average length of 300
pages at 300 words/page, would only have about 100K words each, so that 600
of them would only be about 60 million words, which is amazingly small by the
standards of learning-from-corpora studies.  That you were able to learn so
much from so little is encouraging, but it would be really interesting to see
such a project done on very large corpora, of tens or hundreds of billions of
words.  It would be interesting to see how much of human common sense (and
expertise) it could, and could not, derive.
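
A quick back-of-envelope check of that estimate (illustrative arithmetic
only):

words_per_book = 300 * 300     # ~300 pages at ~300 words/page, roughly 100K words
corpus_words = 600 * words_per_book
print(corpus_words)            # 54,000,000 -- on the order of 60 million words, as stated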

 

Ed Porter

-Original Message-
From: James Ratcliff [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, December 11, 2007 11:26 AM
To: agi@v2.listbox.com
Subject: RE: Hacker intelligence level [WAS Re: [agi] Funding AGI research]

 

Here's a basic abstract I did last year I think:

http://www.falazar.com/AI/AAAI05_Student_Abtract_James_Ratcliff.pdf

I would like to work with others on a full-fledged representation system that
could use these kinds of techniques.
I hacked this together by myself, so I know a real team could put this kind
of stuff to much better use.

James


Ed Porter [EMAIL PROTECTED] wrote:

James,

 

Do you have any description or examples of your results?

 

This is something I have been telling people for years: that you should be
able to extract a significant amount of (but probably far from all) world
knowledge by scanning large corpora of text.  I would love to see how well
it actually works for a given corpus size and a given level of
algorithmic sophistication.

 

Ed Porter

 

-Original Message-
From: James Ratcliff [mailto:[EMAIL PROTECTED] 
Sent: Thursday, December 06, 2007 4:51 PM
To: agi@v2.listbox.com
Subject: RE: Hacker intelligence level [WAS Re: [agi] Funding AGI research]

 

Richard,
  What is your specific complaint about the 'viability of the framework'?


Ed,
  This line of data gathering is very interesting to me as well, though I
quickly found that using all web sources devolved into insanity.
By using scanned text novels, I was able to extract lots of relational
information on a range of topics.
   With a well-defined ontology system, and some human overview, a large
amount of information can be extracted and many probabilities learned.

James


Ed Porter [EMAIL PROTECTED] wrote:


RICHARD LOOSEMORE=
You are implicitly assuming a certain framework for solving the problem of
representing knowledge ... and then all your discussion is about whether or
not it is feasible to implement that framework (to overcome various issues
to do with searches that have to be done within that framework).

But I am not challenging the implementation issues, I am challenging the
viability of the framework itself.

JAMES--- What e


ED PORTER= So what is wrong with my framework?  What is wrong with a
system of recording patterns, and a method for developing compositions and
generalities from those patterns, in multiple hierarchical levels, and for
indicating the probabilities of certain patterns given certain other
patterns, etc.?
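
As a bare-bones reading of that framework (an illustrative Python sketch
only, with made-up names - record patterns, compose adjacent ones into
higher-level patterns, and keep counts so the probability of one pattern
given another can be estimated):

from collections import defaultdict

class PatternMemory:
    def __init__(self):
        self.count = defaultdict(int)    # pattern -> occurrences
        self.joint = defaultdict(int)    # (context, pattern) -> co-occurrences

    def observe(self, sequence, order=2):
        # level 1: single elements; level 2: adjacent compositions;
        # higher levels would be built analogously (not implemented here)
        for i, a in enumerate(sequence):
            self.count[a] += 1
            for b in sequence[i + 1:i + order]:
                self.count[(a, b)] += 1
                self.joint[(a, (a, b))] += 1

    def prob(self, pattern, given):
        # P(pattern | given), estimated from the recorded counts
        return self.joint[(given, pattern)] / max(1, self.count[given])

m = PatternMemory()
m.observe(["the", "cat", "sat"])
m.observe(["the", "cat", "ran"])
print(m.prob(("the", "cat"), given="the"))   # 1.0: "cat" followed "the" in every case seen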

I know it doesn't genuflect before the altar of complexity.  But what is
wrong with the framework other than the fact that it is at a high level and
thus does not explain every little detail of how to actually make an AGI
work?



RICHARD LOOSEMORE= These models you are talking about are trivial
exercises in public 
relations, designed to look really impressive, and filled with hype 
designed to attract funding, which actually accomplish very little.

Please, Ed, don't do this to me. Please don't try to imply that I need 
to open my mind any more. The implication seems to be that I do not 
understand the issues in enough depth, and need to do some more work to 
understand your points. I can assure you this is not the case.



ED PORTER= Shastri's Shruti is a major piece of work.  Although it is
a highly simplified system, for its degree of simplification it is amazingly
powerful.  It has been very helpful to my thinking about AGI.  Please give
me some excuse for calling it trivial 

Re: [agi] None of you seem to be able ...

2007-12-11 Thread James Ratcliff
James: Either of these systems described will have a Complexity Problem; any 
AGI will, because it is a very complex system.
System 1 I don't believe is strictly practical, as few truth values can be 
stored locally, directly on the frame.  More realistically, there may be a 
temporary value such as:
  "I like cats"  t=0.9
which is calculated from some other backing facts, such as
"I said I like cats."  t=1.0
"I like Rosemary (a cat)"  t=0.8

then parameter tuning will never be good enough, it will have to be a huge and 
very serious new approach to making our AGI designs flexible at the design 
level.
System 2, though it uses unnamed parameters, would still need to determine 
these temporary values.  Any representation system must have parameter tuning 
in some form.
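
A minimal sketch of that kind of derived, temporary truth value
(illustrative Python only; the combining rule is just one arbitrary choice,
and the numbers are the ones from the example above):

# Derive "I like cats" t=0.9 from backing facts instead of storing it on the frame.
backing_facts = {
    "I said I like cats.":     1.0,
    "I like Rosemary (a cat)": 0.8,
}

def derived_truth(facts):
    # one simple choice of rule: average the truth values of the backing facts
    return sum(facts.values()) / len(facts)

print(derived_truth(backing_facts))   # 0.9 -- recomputed whenever new facts arrive,
                                      # e.g. "I don't like Ganji (a cat)"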

Either of these systems has the same problem, though, of updating the 
information: given something newly seen such as "I don't like Ganji (a cat)", 
both systems must update their representation with this new information.

Neither a symbol system nor a neural network (the closest thing to what you 
mean by System 2?) has been shown able to scale up to the larger system needed 
for an AGI, but I don't believe either has been shown ineffective.

Whether a system explicitly or implicitly stores the information, I believe you 
must be able to ask it for the reasoning behind any thought process.  This can 
be done with either system, and may give a very long answer, but once you get a 
system that makes decisions and cannot explain its reasoning, that is a very 
scary thought, and it is truly acting irrationally as I see it.

While you can't extract a small portion of the representation from System 1 or 
System 2 outside of the whole, you must be able to print out the calculated 
values that a frame-type system shows.

James

Richard Loosemore [EMAIL PROTECTED] wrote: James Ratcliff wrote:
  However, part of the key to intelligence is **self-tuning**.
 
  I believe that if an AGI system is built the right way, it can effectively
  tune its own parameters, hence adaptively managing its own complexity.
 
 I agree with Ben here; isn't one of the core concepts of AGI the ability 
 to modify its behavior and to learn?

That might sound like a good way to proceed, but now consider this.

System 1: Suppose that the AGI is designed with a symbol system in which the 
symbols are very much mainstream-style symbols, and one aspect of them is that 
there are truth-values associated with the statements that use those symbols 
(as in "I like cats", t=0.9).

Now suppose that the very fact that truth values were being *explicitly* 
represented and manipulated by the system was causing it to run smack bang into 
the Complex Systems Problem.

In other words, suppose that you cannot get that kind of design to work because 
when it scales up the whole truth-value maintenance mechanism just comes apart.


System 2: Suppose, further, that the only AGI systems that really do work are 
ones in which the symbols never use truth values but use "other stuff" (for 
which there is no interpretation), and that the thing we call a "truth value" 
is actually the result of an operator that can be applied to a bunch of 
connected symbols.  This [truth-value = external operator] idea is fundamentally 
different from the [truth-value = internal parameter] idea, obviously.
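
Here is a purely illustrative way to picture the contrast (toy Python with
invented names; not a claim about how either kind of system would really be
built):

# System 1: the truth value is an internal parameter stored on the proposition.
class Proposition:
    def __init__(self, text, t):
        self.text = text
        self.t = t                      # explicitly represented and maintained

system1_cats = Proposition("I like cats", t=0.9)

# System 2: symbols carry uninterpreted "other stuff"; what we call a truth
# value is only the result of an operator applied to a cluster of connected symbols.
class Symbol:
    def __init__(self, stuff):
        self.stuff = stuff              # no interpretation attached
        self.links = []

def truth_operator(symbol):
    # one arbitrary choice of operator over the connected cluster
    cluster = [symbol] + symbol.links
    return sum(s.stuff for s in cluster) / len(cluster)

system2_cats = Symbol(0.95)
system2_cats.links.append(Symbol(0.85))
print(truth_operator(system2_cats))     # 0.9 -- computed on demand, never stored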

Now here is my problem:  how would parameter-tuning ever cause that first AGI 
design to realise that it had to abandon one bit of its architecture and 
redesign itself?

Surely this is more than parameter tuning?  There is no way it could simply stop 
working and completely redesign all of its internal 
architecture to not use the t-values, and make the operators, etc. etc.!

So here is the rub:  if the CSP does cause this kind of issue (and that is why 
I invented the CSP idea in the first place, because it was precisely those 
kinds of architectural issues that seemed wrong), then parameter tuning will 
never be good enough, it will have to be a huge and very serious new approach 
to making our AGI designs flexible at the design level.


Does that make sense?




Richard Loosemore








 This will have to be done with a large amount of self-tuning, as we will 
 not be changing parameters for every action, that wouldnt be efficient.  
 (this part does not require actual self-code writing just yet)
 
 Its more a matter of finding out a way to guide the AGI in changing the 
 parameters, checking the changes and reflecting back over the changes to 
 see if they are effective for future events.
 
 What is needed at some point is being able to converse at a high level 
 with the AGI, and correcting their behaviour, such as Dont touch that, 
 cause it will have a bad effect and having the AGI do all of the 
 parameter changing and link building and strengthening/weakening 
 necessary in its memory.  It may do this in a very complex way and may 
 effect many parts of its systems, but by multiple reinforcement we 
 should be able to guide the overall behaviour if not all of the 

RE: [agi] AGI and Deity

2007-12-11 Thread John G. Rose
 From: Charles D Hixson [mailto:[EMAIL PROTECTED]
 The evidence in favor of an external god of any traditional form is,
 frankly, a bit worse than unimpressive. It's lots worse. This doesn't
 mean that gods don't exist, merely that they (probably) don't exist in
 the hardware of the universe. I see them as a function of the software
 of the entities that use language. Possibly they exist in a muted form
 in most pack animals, or most animals that have protective adults when
 they are infants.
 
 To me it appears that people believe in gods for the same reasons that
 they believe in telepathy. I.e., evidence back before they could speak
 clearly indicated that the adults could transfer thoughts from one to
 another. This shaped a basic layer of beliefs that was later buried
 under later additions, but never refuted. When one learned language, one
 learned how to transfer thoughts ... but it was never tied back into the
 original belief, because what was learned didn't match closely enough to
 the original model of what was happening. Analogously, when one is an
 infant the adult that cares for one is seen as the all powerful
 protector. Pieces of this image become detached memories within the
 mind, and are not refuted when a more accurate and developed model of
 the actual parents is created. These hidden memories are the basis
 around which the idea of a god is created.
 
 Naturally, this is just my model of what is happening. Other
 possibilities exist. But if I am to consider them seriously, they need
 to match the way the world operates as I understand it. They don't need
 to predict the same mechanism, but they need to predict the same events.
 
 E.g., I consider Big Bang cosmology a failed explanation. It's got too
 many ad hoc pieces. But it successfully explains most things that are
 observed, and is consistent with relativity and quantum theory.
 (Naturally, as they were used in developing it...but nevertheless
 important.) And relativity and quantum theory themselves are failures,
 because both are needed to explain that which is observable, but they
 contradict each other in certain details. But they are successful
 failures! Similar commentary applies to string theory, but with
 differences. (Too many ad hoc parameters!)
 
 Any god that is proposed must be shown to be consistent with the
 observed phenomena. The Deists managed to come up with one that would do
 the job, but he never became very popular. Few others have even tried,
 except with absurdly evident special pleading. Generally I'd be more
 willing to accept Chariots of the Gods as a true account.
 
 And as for moral principles... I've READ the Bible. The basic moral
 principle that it pushes is "We are the chosen people. Kill the
 stranger, steal his property, and enslave his servants!"  It requires
 selective reading to come up with anything else, though I admit that
 other messages are also in there, if you read selectively. Especially
 during the periods when the Jews were in one captivity or another.
 (I.e., if you are weak, preach mercy, but if you are strong show none.)
 During the later times the Jews were generally under the thumb of one
 foreign power or another, so they started preaching mercy.
 

One of the things about gods is that they are representations for what the
believers don't know and understand. Gods change over time as our knowledge
changes over time. That is ONE of the properties of them. The move from
polytheistic to monotheistic beliefs is a way to centralize these unknowns
for efficiency.

You could build an AGI and label the unknowns with gods. You honestly could.
"Magic happens here" regions and combinatorial-explosion regions could be
labeled as gods. Most people on this email list would frown at doing that, but
I say it is totally possible and might be a very extremely efficient way of
conquering certain cognitive engineering issues. And I'm sure some on this
list have already thought about doing that.

John


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=74607326-c9be15


RE: [agi] AGI and Deity

2007-12-11 Thread Ed Porter
John,

You implied there might be "a very extremely efficient way of conquering
certain cognitive engineering issues" by using religion in AGIs.

Obviously any powerful AGI that deals with a complex and uncertain world
like ours would have to have belief systems, but it is not clear to me there
would be any benefit in their being religious in any sense that Dawkins is
not.

So, since you are a smart guy, perhaps you are seeing something I do not.
Could you please fill me in?

Ed Porter

-Original Message-
From: John G. Rose [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, December 11, 2007 12:43 PM
To: agi@v2.listbox.com
Subject: RE: [agi] AGI and Deity

 From: Charles D Hixson [mailto:[EMAIL PROTECTED]
 The evidence in favor of an external god of any traditional form is,
 frankly, a bit worse than unimpressive. It's lots worse. This doesn't
 mean that gods don't exist, merely that they (probably) don't exist in
 the hardware of the universe. I see them as a function of the software
 of the entities that use language. Possibly they exist in a muted form
 in most pack animals, or most animals that have protective adults when
 they are infants.
 
 To me it appears that people believe in gods for the same reasons that
 they believe in telepathy. I.e., evidence back before they could speak
 clearly indicated that the adults could transfer thoughts from one to
 another. This shaped a basic layer of beliefs that was later buried
 under later additions, but never refuted. When one learned language, one
 learned how to transfer thoughts ... but it was never tied back into the
 original belief, because what was learned didn't match closely enough to
 the original model of what was happening. Analogously, when one is an
 infant the adult that cares for one is seen as the all powerful
 protector. Pieces of this image become detached memories within the
 mind, and are not refuted when a more accurate and developed model of
 the actual parents is created. These hidden memories are the basis
 around which the idea of a god is created.
 
 Naturally, this is just my model of what is happening. Other
 possibilities exist. But if I am to consider them seriously, they need
 to match the way the world operates as I understand it. They don't need
 to predict the same mechanism, but they need to predict the same events.
 
 E.g., I consider Big Bang cosmology a failed explanation. It's got too
 many ad hoc pieces. But it successfully explains most things that are
 observed, and is consistent with relativity and quantum theory.
 (Naturally, as they were used in developing it...but nevertheless
 important.) And relativity and quantum theory themselves are failures,
 because both are needed to explain that which is observable, but they
 contradict each other in certain details. But they are successful
 failures! Similar commentary applies to string theory, but with
 differences. (Too many ad hoc parameters!)
 
 Any god that is proposed must be shown to be consistent with the
 observed phenomena. The Deists managed to come up with one that would do
 the job, but he never became very popular. Few others have even tried,
 except with absurdly evident special pleading. Generally I'd be more
 willing to accept Chariots of the Gods as a true account.
 
 And as for moral principles... I've READ the Bible. The basic moral
 principle that it pushes is We are the chosen people. Kill the
 stranger, steal his property, and enslave his servants! It requires
 selective reading to come up with anything else, though I admit that
 other messages are also in there, if you read selectively. Especially
 during the periods when the Jews were in one captivity or another.
 (I.e., if you are weak, preach mercy, but if you are strong show none.)
 During the later times the Jews were generally under the thumb of one
 foreign power or another, so they started preaching mercy.
 

One of the things about gods is that they are representations for what the
believers don't know and understand. Gods change over time as our knowledge
changes over time. That is ONE of the properties of them. The move from
polytheistic to monotheistic beliefs is a way to centralize these unknowns
for efficiency.

You could build AGI and label the unknowns with gods. You honestly could.
Magic happens here and combinatorial explosion regions could be labeled as
gods. Most people on this email list would frown at doing that but I say it
is totally possible and might be a very extremely efficient way of
conquering certain cognitive engineering issues. And I'm sure some on this
list have already thought about doing that.

John


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?;

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=74654897-672ca9

RE: Distributed search (was RE: Hacker intelligence level [WAS Re: [agi] Funding AGI research])

2007-12-11 Thread Matt Mahoney
--- Jean-Paul Van Belle [EMAIL PROTECTED] wrote:

 Hi Matt, wonderful idea - now it will even show the typical human trait of
 lying... When I ask it "do you still love me?" most answers in its database
 will have "yes" as an answer, but when I ask it "what's my name?" it'll call
 me John?

My proposed message posting service allows anyone to contribute to its
knowledge base, just like Wikipedia, so it could certainly contain some false
or useless information.  However, the number of peers that keep a copy of a
message will depend on the number of peers that accept it according to the
peers' policies, which are set individually by their owners.  The network
provides an incentive for peers to produce useful information so that other
peers will accept it.  Thus, useful and truthful information is more likely to
be propagated.


-- Matt Mahoney, [EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=74671775-73001c


Re: [agi] None of you seem to be able ...

2007-12-11 Thread Richard Loosemore


Well, this wasn't quite what I was pointing to:  there will always be a 
need for parameter tuning.  That goes without saying.


The point was that if an AGI developer were to commit to System 1, they 
would never get to the (hypothetical) System 2 by anything as 
trivial as parameter tuning.  Therefore parameter tuning is useless for 
curing the complex systems problem.


That is why I do not accept that parameter tuning is an adequate 
response to the problem.




Richard Loosemore



James Ratcliff wrote:
James: Either of these systems described will have a Complexity Problem, 
any AGI will because it is a very complex system. 
System 1  I dont believe is strictly practical, as few Truth values can 
be stored locally directly to the frame.  More realistic is there may be 
a temporary value such as:

  I like cats  t=0.9
Which is calculated from some other backing facts, such as
I said I like cats. t=1.0
I like Rosemary (a cat) t=0.8

 then parameter tuning will never be good enough, it will have to be a 
huge and very serious new approach to making our AGI designs flexible 
at the design level.
System 2, though it uses unnamed parameters, would still need to 
determine these temporary values.  Any representation system must have 
parameter tuning in some form.


Either of these systems has the same problem though, of updating the 
information, such as
Seen: I dont like Ganji (a cat) both systems must update their 
representation to update with this new information.


Neither a symbol-system nor a neural network (closest you mean by system 
2?) has been shown able to scale up to a larger system needed for an 
AGI, but neither has been shown ineffective I dont believe either.


Whether a system explicity or implicitly stores the information I 
believe you must be able to ask it the reasoning behind any thought 
process.  This can be done with either system, and may give a very long 
answer, but once you get a system that makes decicions and cannot 
explain its reasoning, that is a very scary thought, and it is truly 
acting irrationaly as I see it.


While you cant extract a small portion of the representation from system 
1 or two outside of the whole, you must be able to print out the 
calculated values that a Frame type system shows.


James

*/Richard Loosemore [EMAIL PROTECTED]/* wrote:

James Ratcliff wrote:
  However, part of the key to intelligence is **self-tuning**.
 
  I believe that if an AGI system is built the right way, it can
effectively
  tune its own parameters, hence adaptively managing its own
complexity.
 
  I agree with Ben here, isnt one of the core concepts of AGI the
ability
  to modify its behavior and to learn?

That might sound like a good way to proceed, but now consider this.

System 1: Suppose that the AGI is designed with a symbol system in
which the symbols are very much mainstream-style symbols, and one
aspect of them is that there are truth-values associated with the
statements that use those symbols (as in I like cats, t=0.9).

Now suppose that the very fact that truth values were being
*explicitly* represented and manipulated by the system was causing
it to run smack bang into the Complex Systems Problem.

In other words, suppose that you cannot get that kind of design to
work because when it scales up the whole truth-value maintenance
mechanism just comes apart.


System 2: Suppose, further, that the only AGI systems that really do
work are ones in which the symbols never use truth values but use
other stuff (for which there is no interpretation) and that the
thing we call a truth value is actually the result of an operator
that can be applied to a bunch of connected symbols. This
[truth-value = external operator] idea is fundamentally different
from [truth-value = internal parameter] idea, obviously.

Now here is my problem: how would parameter-tuning ever cause that
first AGI design to realise that it had to abandon one bit of its
architecture and redesign itself?

Surely this is more than parameter tuning? There is no way it could
imply stop working and completely redesign all of its internal
architecture to not use the t-values, and make the operators etc etc.!

So here is the rub: if the CSP does cause this kind of issue (and
that is why I invented the CSP idea in the first place, because it
was precisely those kinds of architectural issues that seemed
wrong), then parameter tuning will never be good enough, it will
have to be a huge and very serious new approach to making our AGI
designs flexible at the design level.


Does that make sense?




Richard Loosemore








  This will have to be done with a large amount of self-tuning, as
we will
  not be changing parameters for every action, that wouldnt be
efficient.
  (this part does not require actual self-code 

An information theoretic measure of reinforcement (was RE: [agi] AGI and Deity)

2007-12-11 Thread Matt Mahoney
--- John G. Rose [EMAIL PROTECTED] wrote:
 Is an AGI really going to feel pain or is it just going to be some numbers?
 I guess that doesn't have a simple answer. The pain has to be engineered
 well for it to REALLY understand it. 

An agent capable of reinforcement learning has an upper bound on the amount of
pleasure or pain it can experience in a lifetime, in an information theoretic
sense.  If an agent responds to input X with output Y, followed by
reinforcement R, then we say that R is a positive reinforcement (pleasure,
R > 0) if it increases the probability P(Y|X), and a negative reinforcement
(pain, R < 0) if it decreases P(Y|X).  Let S1 be the state of the agent before
R, and S2 be the state afterwards.  We may define the bound:

  |R| <= K(S2|S1)

where K is Kolmogorov complexity, the length of the shortest program that
outputs an encoding of S2 given S1 as input.  This definition is intuitive in
that the greater the reinforcement, the greater the change in behavior of the
agent.  Also, it is consistent with the belief that higher animals (like
humans) have greater capacity to feel pleasure and pain than lower animals
(like insects) that have simpler mental states.

We must use the absolute value of R because the behavior X -> Y could be
learned using either positive reinforcement (rewarding X -> Y), negative
reinforcement (penalizing X -> not Y), or by neutral methods such as classical
conditioning (presenting X and Y together).

If you accept this definition, then an agent cannot feel more accumulated
pleasure or pain in its lifetime than K(S(death)|S(birth)).  A simple program
like autobliss ( http://www.mattmahoney.net/autobliss.txt ) could not
experience more than 256 bits of reinforcement, whereas a human could
experience 10^9 bits according to cognitive models of long term memory.
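
K is uncomputable, but it can be bounded from above by an ordinary
compressor.  A minimal sketch (Python, with zlib standing in for K and the
agent states reduced to byte strings, so this is only meant to make the
definition concrete):

import zlib

def c(x: bytes) -> int:
    """Compressed length in bits: a crude computable upper bound on K(x)."""
    return 8 * len(zlib.compress(x, 9))

def reinforcement_bound(s1: bytes, s2: bytes) -> int:
    """Approximate |R| <= K(S2|S1) using the usual C(S1+S2) - C(S1) surrogate."""
    return max(0, c(s1 + s2) - c(s1))

before = b"policy: if X then Y with p=0.50"
after  = b"policy: if X then Y with p=0.55"   # a small behavioural change
print(reinforcement_bound(before, after))      # a small number of bits for a small
                                               # change of state (zlib overhead makes
                                               # this a crude estimate)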


-- Matt Mahoney, [EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=74724148-5841d4


Re: An information theoretic measure of reinforcement (was RE: [agi] AGI and Deity)

2007-12-11 Thread Richard Loosemore

Matt Mahoney wrote:

--- John G. Rose [EMAIL PROTECTED] wrote:

Is an AGI really going to feel pain or is it just going to be some numbers?
I guess that doesn't have a simple answer. The pain has to be engineered
well for it to REALLY understand it. 


An agent capable of reinforcement learning has an upper bound on the amount of
pleasure or pain it can experience in a lifetime, in an information theoretic
sense.  If an agent responds to input X with output Y, followed by
reinforcement R, then we say that R is a positive reinforcement (pleasure,
R > 0) if it increases the probability P(Y|X), and a negative reinforcement
(pain, R < 0) if it decreases P(Y|X).  Let S1 be the state of the agent before
R, and S2 be the state afterwards.  We may define the bound:

  |R| <= K(S2|S1)

where K is Kolmogorov complexity, the length of the shortest program that
outputs an encoding of S2 given S1 as input.  This definition is intuitive in
that the greater the reinforcement, the greater the change in behavior of the
agent.  Also, it is consistent with the belief that higher animals (like
humans) have greater capacity to feel pleasure and pain than lower animals
(like insects) that have simpler mental states.

We must use the absolute value of R because the behavior X -> Y could be
learned using either positive reinforcement (rewarding X -> Y), negative
reinforcement (penalizing X -> not Y), or by neutral methods such as classical
conditioning (presenting X and Y together).

If you accept this definition, then an agent cannot feel more accumulated
pleasure or pain in its lifetime than K(S(death)|S(birth)).  A simple program
like autobliss ( http://www.mattmahoney.net/autobliss.txt ) could not
experience more than 256 bits of reinforcement, whereas a human could
experience 10^9 bits according to cognitive models of long term memory.



I have to say that this is only one interpretation of what it would mean 
for an AGI to experience something, and I for one believe it has no 
validity at all.  It is purely a numeric calculation that makes no 
reference to what pain (or any other kind of subjective experience) 
actually is.


Sorry, but this is such a strong point of disagreement that I have to go 
on record.




Richard Loosemore

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=74764705-216ca2


RE: [agi] AGI and Deity

2007-12-11 Thread Matt Mahoney
What do you call the computer that simulates what you perceive to be the
universe?


-- Matt Mahoney, [EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=74807053-cea06f


RE: [agi] AGI and Deity

2007-12-11 Thread John G. Rose
Ed,

It's a very complicated subject and requires a certain theoretical mental
background and a somewhat unbiased mindset. Though someone with a biased
mindset - for example, a person who is religious - could use the theory to
propel their religion into post-humanity; maybe that's a good idea to help
preserve humanity, or should that be left up to atheists?  Who knows.

By "conquering cognitive engineering issues" I mean that I'm just looking for
parallels in the development and evolution of human intelligence and its
symbiotic relationship with religion and deities. You have to understand
what cognitive functions deities contribute and facilitate in the human mind
and the civilized set of minds (and perhaps proto and pre human as well as
non-human cognition - which is highly speculative and relatively unknown).
What are the deitical and religious contributions to cognition and knowledge
and how do they facilitate and enable intelligence? Are they actually
REQUIRED in some form or another? Again - Are they required for the
evolution of human intelligence and for engineering general artificial
intelligence? Wouldn't demonstrating that make a guy like Dawkins do some
SERIOUS backpedaling :-) 

The viewpoint of gods representing unknowns is just one aspect of the thing.
Keep in mind that there are other aspects. But from the informational
perspective, a god function - as a concept and system of concepts, aggregated
and representing a highly adaptive and communal entity, incorporated within
a knowledge and perceptual framework, with inference weighting spread across
informational density, adding open-endedness as a crutch, functioning as an
altruistic confidence assistor, blah blah - a god(s) function modeled on
its loosely isomorphic systems representation in human deities might be used
to accomplish the same cognitive things (as well as others), especially
representing the unknown in a systematic, controllable way that is actually,
in its own right, distributed and intelligent. There are benefits.

Also a major benefit is that it would be a common channel of unknown
operative substrate that hooks into human belief networks. 

John

_
From: Ed Porter [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, December 11, 2007 11:25 AM
To: agi@v2.listbox.com
Subject: RE: [agi] AGI and Deity


John,

You implied there might be a very extremely efficient way
of conquering certain cognitive engineering issues by using religion in
AGIs.

Obviously any powerful AGI that deals with a complex and
uncertain world like ours would have to have belief systems, but it is not
clear to me their would be any benefit in them being religious in any
sense that Dawkins is not.

So, since you are a smart guy, perhaps you are seeing
something I do not.  Could you please fill me in?

Ed Porter

-Original Message-
From: John G. Rose [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, December 11, 2007 12:43 PM
To: agi@v2.listbox.com
Subject: RE: [agi] AGI and Deity

 From: Charles D Hixson [mailto:[EMAIL PROTECTED]
 The evidence in favor of an external god of any
traditional form is,
 frankly, a bit worse than unimpressive. It's lots worse.
This doesn't
 mean that gods don't exist, merely that they (probably)
don't exist in
 the hardware of the universe. I see them as a function of
the software
 of the entities that use language. Possibly they exist in
a muted form
 in most pack animals, or most animals that have protective
adults when
 they are infants.
 
 To me it appears that people believe in gods for the same
reasons that
 they believe in telepathy. I.e., evidence back before they
could speak
 clearly indicated that the adults could transfer thoughts
from one to
 another. This shaped a basic layer of beliefs that was
later buried
 under later additions, but never refuted. When one learned
language, one
 learned how to transfer thoughts ... but it was never tied
back into the
 original belief, because what was learned didn't match
closely enough to
 the original model of what was happening. Analogously,
when one is an
 infant the adult that cares for one is seen as the all
powerful
 protector. Pieces of this image become detached memories
within the
 mind, and are not refuted when a more accurate and
developed model of
 the actual parents is created. These hidden memories are
the basis
 around which the idea of a god is created.

Re: An information theoretic measure of reinforcement (was RE: [agi] AGI and Deity)

2007-12-11 Thread Matt Mahoney
--- Richard Loosemore [EMAIL PROTECTED] wrote:
 I have to say that this is only one interpretation of what it would mean 
 for an AGI to experience something, and I for one believe it has no 
 validity at all.  It is purely a numeric calculation that makes no 
 reference to what pain (or any other kind of subjective experience) 
 actually is.

I would like to hear your definition of pain and/or negative reinforcement. 
Can you answer the question of whether a machine (say, an AGI or an uploaded
human brain) can feel pain?


-- Matt Mahoney, [EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=74819484-690b4f


Re: [agi] Worst case scenario

2007-12-11 Thread Matt Mahoney
--- Bryan Bishop [EMAIL PROTECTED] wrote:

 On Monday 10 December 2007, Matt Mahoney wrote:
  The worst case scenario is that AI wipes out all life on earth, and
  then itself, although I believe at least the AI is likely to survive.
 
 http://lifeboat.com/ex/ai.shield

SIAI has not yet solved the friendliness problem.  I posted my views earlier
at  http://www.mattmahoney.net/singularity.html  To summarize, friendliness is
not a stable goal once computers start creating smarter versions of
themselves.  Recursive self improvement is an experimental, competitive,
evolutionary process that favors rapid reproduction and acquisition of
computing resources, not service to humans.

 Re: how much computing power is needed for ai. My worst-case scenario 
 accounts for nearly any finite computing power, via the production of 
 semiconductant silicon wafer tech. Now, if the dx on the number of 
 nodes is too low, we may have to start making factories that build 
 factories that build factories that build factories, etc. etc., which 
 would exponentially increase the rate of production of computational 
 nodes, and supposedly there is in fact some finite limit of 
 computational bruteforce required, yes?

A human brain sized neural network requires about 10^15 bits of memory and
10^16 operations per second.  The Internet already has enough computing power
to simulate a few thousand brains.  The threshold for a singularity is to
surpass the collective intelligence of all 10^10 human brains on Earth.

Moore's law allows you to estimate when this will happen, but keep in mind
that to double the number of components in a computer, you must also double
their reliability.  In a fault tolerant network, the second requirement is
dropped, so the process is faster.
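
Purely illustrative arithmetic from the figures above, assuming "a few
thousand" means about 3,000 brains and an 18-month doubling time (not a
prediction):

import math

brains_now  = 3e3        # assumed reading of "a few thousand"
brains_goal = 1e10       # collective capacity of all human brains
doublings = math.log2(brains_goal / brains_now)   # about 21.7 doublings
years_per_doubling = 1.5                          # assumed Moore's-law doubling time
print(doublings * years_per_doubling)             # roughly 32-33 years on these assumptions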


-- Matt Mahoney, [EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=74837064-aa09b8


RE: [agi] AGI and Deity

2007-12-11 Thread Ed Porter
John,

For a reply of its short length, given the subject, it was quite helpful in
letting me know the type of things you were talking about.

Thank you.

Ed Porter

-Original Message-
From: John G. Rose [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, December 11, 2007 2:41 PM
To: agi@v2.listbox.com
Subject: RE: [agi] AGI and Deity

Ed,

It's a very complicated subject and requires a certain theoretical mental
background and somewhat unbiased mindset. Though a biased mindset, for
example a person, who is religious, could use the theory to propel their
religion into post humanity - maybe a good idea to help preserve humanity -
or should that be left up to atheists, who knows.

What I mean by conquering cognitive engineering issues I'm just looking for
parallels in the development and evolution of human intelligence and its
symbiotic relationship with religion and deities. You have to understand
what cognitive functions deities contribute and facilitate in the human mind
and the civilized set of minds (and perhaps proto and pre human as well as
non-human cognition - which is highly speculative and relatively unknown).
What are the deitical and religious contributions to cognition and knowledge
and how do they facilitate and enable intelligence? Are they actually
REQUIRED in some form or another? Again - Are they required for the
evolution of human intelligence and for engineering general artificial
intelligence? Wouldn't demonstrating that make a guy like Dawkins do some
SERIOUS backpedaling :-) 

The viewpoint of gods representing unknowns is just one aspect of the thing.
Keep in mind that there are other aspects. But from the informational
perspective a god function as a concept and system of concepts aggregated
and representing a highly adaptive and communal entity, incorporated within
a knowledge and perceptual framework, with inference weighting spread across
informational density, adding open endedness as a crutch, functioning as an
altruistic confidence assistor, blah blah, a god(s) function modeled from
its loosly isomorphic systems representation in human deities might be used
to accomplish the same cognitive things(as well as others), especially
representing unknown in a systematic, controllable and actually in its own
distributed and intelligent way. There are benefits.

Also a major benefit is that it would be a common channel of unknown
operative substrate that hooks into human belief networks. 

John

_
From: Ed Porter [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, December 11, 2007 11:25 AM
To: agi@v2.listbox.com
Subject: RE: [agi] AGI and Deity


John,

You implied there might be a very extremely efficient way
of conquering certain cognitive engineering issues by using religion in
AGIs.

Obviously any powerful AGI that deals with a complex and
uncertain world like ours would have to have belief systems, but it is not
clear to me their would be any benefit in them being religious in any
sense that Dawkins is not.

So, since you are a smart guy, perhaps you are seeing
something I do not.  Could you please fill me in?

Ed Porter

-Original Message-
From: John G. Rose [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, December 11, 2007 12:43 PM
To: agi@v2.listbox.com
Subject: RE: [agi] AGI and Deity

 From: Charles D Hixson [mailto:[EMAIL PROTECTED]
 The evidence in favor of an external god of any
traditional form is,
 frankly, a bit worse than unimpressive. It's lots worse.
This doesn't
 mean that gods don't exist, merely that they (probably)
don't exist in
 the hardware of the universe. I see them as a function of
the software
 of the entities that use language. Possibly they exist in
a muted form
 in most pack animals, or most animals that have protective
adults when
 they are infants.
 
 To me it appears that people believe in gods for the same
reasons that
 they believe in telepathy. I.e., evidence back before they
could speak
 clearly indicated that the adults could transfer thoughts
from one to
 another. This shaped a basic layer of beliefs that was
later buried
 under later additions, but never refuted. When one learned
language, one
 learned how to transfer thoughts ... but it was never tied
back into the
 original belief, because what was learned didn't match
closely enough to
 the original model of what was happening. Analogously,
when one is an
 infant the adult that cares for one is seen as the 

Re: [agi] Worst case scenario

2007-12-11 Thread Bob Mottram
On 11/12/2007, Matt Mahoney [EMAIL PROTECTED] wrote:
  http://lifeboat.com/ex/ai.shield

That's quite amusing.  Safeguarding humanity against dancing robots.

I don't believe that technology is something you can run away from, in
a space lifeboat or any other sort of refuge.  You just have to try to
get along with it and perhaps shape its course if you can.

 SIAI has not yet solved the friendliness problem.

I've always had problems with the concept of friendliness spoken
about by folks from SIAI.  It seems like a very ill-defined concept.
What does "friendly to humanity" really mean?  It seems to mean a lot
of different things to a lot of different people (it is observer-relative).


 A human brain sized neural network requires about 10^15 bits of memory and
 10^16 operations per second.

Direct comparisons between computing speed and brain activity I also
find problematic.  People often quote numbers like this without having
any idea how they were arrived at.  As far as I can discern all roads
lead back to Moravec, who based his figures upon the retina observing
a TV screen, and admitted that this was a very wobbly estimate
potentially subject to a wide margin of error.  I think until the
essential function of a neuron is known it's really hard to make
direct comparisons between what computers do and what brains do.

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=74974493-d340fd


[agi] AGI-08 - Call for Participation

2007-12-11 Thread Bruce Klein

The First Conference on Artificial General Intelligence (AGI-08)
March 1-3, 2008 at Memphis, Tennessee, USA
Early Registration Deadline: January 31, 2008
Conference Website: http://www.agi-08.org

Artificial General Intelligence (AGI) research focuses on the original 
and ultimate goal of AI --- to create intelligence as a whole. AGI seeks 
to create software or hardware systems that are generally intelligent 
in roughly the same sense that humans are, rather than being specialized 
problem-solvers such as most of the systems currently studied in the AI 
field.


Current research in the AGI field is vigorous and diverse, exploring a 
wide range of possible paths, including theoretical and experimental 
computer science, cognitive science, neuroscience, and innovative 
interdisciplinary methodologies.


AGI-08 is the very first international conference in this emerging field 
of science and engineering. The conference is organized with the 
cooperation of AAAI, and welcomes researchers and students in all 
relevant disciplines.


Different from conventional conferences, AGI-08 is planned to be 
intensively discussion oriented. All the research papers accepted for 
publication in the Proceedings (49 papers total) will be available in 
advance online, so that attendees may arrive prepared to discuss the 
relevant issues with the authors and each other. The sessions of the 
conference will be organized to facilitate open and informed 
intellectual exchange on themes of common interest.


Besides the technical discussions, time will also be scheduled at AGI-08 
for an exploratory discussion of possible ways to work toward the 
formation of a more cohesive AGI research community -- including future 
conferences, publications, organizations, etc.


After the two-and-half day conference, there will be a half day workshop 
on the broader implications of AGI technology, including ethical, 
sociological and futurological considerations.


Yours,

Organizing Committee, AGI-08
http://www.agi-08.org/organizing.php

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=75008043-801947


Re: [agi] Worst case scenario

2007-12-11 Thread Bryan Bishop
On Tuesday 11 December 2007, Matt Mahoney wrote:
 --- Bryan Bishop [EMAIL PROTECTED] wrote:
  Re: how much computing power is needed for ai. My worst-case
  scenario accounts for nearly any finite computing power, via the
  production of semiconductant silicon wafer tech.

 A human brain sized neural network requires about 10^15 bits of
 memory and 10^16 operations per second.  The Internet already has
 enough computing power to simulate a few thousand brains.  The

Yes, but how much of that computing power is accessible to you? Probably 
very little at the moment, and even if you had the penetration of the 
likes of YouTube and other massive websites, you're still only getting 
a fraction of the computational power of the internet. Again, 
worst-case: we have to make our own factories. 

 threshold for a singularity is to surpass the collective intelligence
 of all 10^10 human brains on Earth.

I am not so sure that the goal of making ai is the same as making a 
singularity. But this is probably less relevant.

 Moore's law allows you to estimate when this will happen, but keep in

Or you can make it happen yourself. Make your own fabs. Get the computer 
nodes you need. Write the software to take advantage of millions of 
nodes all at once. etc. 

- Bryan

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=75034625-49cfcc


Re: Hacker intelligence level [WAS Re: [agi] Funding AGI research]

2007-12-11 Thread Vladimir Nesov
On Dec 11, 2007 7:26 PM, James Ratcliff [EMAIL PROTECTED] wrote:
 Here's a basic abstract I did last year I think:

 http://www.falazar.com/AI/AAAI05_Student_Abtract_James_Ratcliff.pdf

 Would like to work with others on a full fledged Reprensentation system that
 could use these kind of techniques
 I hacked this together by myself, so I know a real team could put this kind
 of stuff to much better use.



 James

Do you have any particular path in mind to put this kind of thing to
work? Finding patterns is fine, and somewhat inevitable, but what are
those ontologies good for, and why?

-- 
Vladimir Nesov  mailto:[EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=75044005-87874a


Re: An information theoretic measure of reinforcement (was RE: [agi] AGI and Deity)

2007-12-11 Thread Richard Loosemore

Matt Mahoney wrote:

--- Richard Loosemore [EMAIL PROTECTED] wrote:
I have to say that this is only one interpretation of what it would mean 
for an AGI to experience something, and I for one believe it has no 
validity at all.  It is purely a numeric calculation that makes no 
reference to what pain (or any other kind of subjective experience) 
actually is.


I would like to hear your definition of pain and/or negative reinforcement. 
Can you answer the question of whether a machine (say, an AGI or an uploaded

human brain) can feel pain?


I will answer that when I get a chance to finish my consciousness paper.  The 
question of what pain actually is is quite complex.  I'll get back to this later.


But most people agree that just having an algorithm avoid a state 
is not equivalent to pain.



Richard Loosemore

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=75048359-9a2e59


Re: [agi] None of you seem to be able ...

2007-12-11 Thread Richard Loosemore

James Ratcliff wrote:
What I don't see, then, is where System 2 (a neural net?) is 
better than System 1, or where it avoids the complexity issues.


I was just giving an example of the degree of flexibility required - the 
exact details of this example are not important.


My point was that dealing with the complex systems problem requires you 
to explore an extremely large range of *architectural* choices, and 
there is no way that these could be explored by parameter tuning (at 
least the way that this phrase is being used here).


What I am devising is a systematic way to parameterize those 
architectural choices, but that is orders of magnitude more 
sophisticated than the kind of parameter tuning that Ben (and others) 
would talk about.



I dont have a goal of system 2 from system one yet.


And I can't parse this sentence.




Richard Loosemore




James

*/Richard Loosemore [EMAIL PROTECTED]/* wrote:


Well, this wasn't quite what I was pointing to: there will always be a
need for parameter tuning. That goes without saying.

The point was that if an AGI developer were to commit to system 1, they
are never going to get to the (hypothetical) system 2 by anything as
trivial as parameter tuning. Therefore parameter tuning is useless for
curing the complex systems problem.

That is why I do not accept that parameter tuning is an adequate
response to the problem.



Richard Loosemore



James Ratcliff wrote:
  James: Either of these systems described will have a Complexity
Problem,
  any AGI will because it is a very complex system.
  System 1 I dont believe is strictly practical, as few Truth
values can
  be stored locally directly to the frame. More realistic is there
may be
  a temporary value such as:
  I like cats t=0.9
  Which is calculated from some other backing facts, such as
  I said I like cats. t=1.0
  I like Rosemary (a cat) t=0.8
 
  then parameter tuning will never be good enough, it will have to
be a
  huge and very serious new approach to making our AGI designs
flexible
  at the design level.
  System 2, though it uses unnamed parameters, would still need to
  determine these temporary values. Any representation system must
have
  parameter tuning in some form.
 
  Either of these systems has the same problem though, of updating the
  information, such as
  Seen: I dont like Ganji (a cat) both systems must update their
  representation to update with this new information.
 
  Neither a symbol system nor a neural network (the closest to what
  you mean by System 2?) has been shown able to scale up to the larger
  system needed for an AGI, but I don't believe either has been shown
  ineffective.
 
  Whether a system explicitly or implicitly stores the information, I
  believe you must be able to ask it the reasoning behind any thought
  process. This can be done with either system, and may give a very
  long answer, but once you get a system that makes decisions and
  cannot explain its reasoning, that is a very scary thought, and it is
  truly acting irrationally as I see it.
 
  While you can't extract a small portion of the representation from
  System 1 or 2 outside of the whole, you must be able to print out the
  calculated values that a Frame-type system shows.
 
  James
 
  Richard Loosemore wrote:
 
  James Ratcliff wrote:
   However, part of the key to intelligence is **self-tuning**.
  
   I believe that if an AGI system is built the right way, it can
  effectively
   tune its own parameters, hence adaptively managing its own
  complexity.
  
   I agree with Ben here, isn't one of the core concepts of AGI the
   ability to modify its behavior and to learn?
 
  That might sound like a good way to proceed, but now consider this.
 
  System 1: Suppose that the AGI is designed with a symbol system in
  which the symbols are very much mainstream-style symbols, and one
  aspect of them is that there are truth-values associated with the
  statements that use those symbols (as in I like cats, t=0.9).
 
  Now suppose that the very fact that truth values were being
  *explicitly* represented and manipulated by the system was causing
  it to run smack bang into the Complex Systems Problem.
 
  In other words, suppose that you cannot get that kind of design to
  work because when it scales up the whole truth-value maintenance
  mechanism just comes apart.
 
 
  System 2: Suppose, further, that the only AGI systems that really do
  work are ones in which the symbols never use truth values but use
  other stuff (for which there is no interpretation) and that the
  thing we call a truth value is actually the result of an 

Re: [agi] Worst case scenario

2007-12-11 Thread Matt Mahoney
--- Bob Mottram [EMAIL PROTECTED] wrote:
  SIAI has not yet solved the friendliness problem.
 
 I've always had problems with the concept of friendliness spoken
 about by folks from SIAI.  It seems like a very ill-defined concept.
 What does friendly to humanity really mean?  It seems to mean a lot
 of different things to a lot of different people (observer relative).

Eliezer S. Yudkowsky has written a fairly precise definition, but again this
is not a solution.  http://www.singinst.org/upload/CEV.html

We should not ignore the problem, but that is precisely what we are doing. 
Once machines are smarter than us, there will be an intelligence explosion and
humans will have no control over it.  It will be directed by evolution.

  A human brain sized neural network requires about 10^15 bits of memory and
  10^16 operations per second.
 
 Direct comparisons between computing speed and brain activity I also
 find problematic.  People often quote numbers like this without having
 any idea how they were arrived at.  As far as I can discern all roads
 lead back to Moravec, who based his figures upon the retina observing
 a TV screen, and admitted that this was a very wobbly estimate
 potentially subject to a wide margin of error.  I think until the
 essential function of a neuron is known it's really hard to make
 direct comparisons between what computers do and what brains do.

My estimate is based on 10^11 neurons, 10^15 synapses, and an information rate
of 10 bits per second per axon.  The number of synapses per neuron is based on
studies by the IBM Blue Brain project (8000 synapses per neuron in mouse
cortex).  In most neural models, the information-carrying signal is the firing
rate, not the individual pulses.
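
As a rough sketch of the arithmetic (the one-bit-of-storage-per-synapse
assumption below is only for illustration, not part of the estimate itself):

# Back-of-the-envelope version of the estimate above (Python).
neurons = 1e11               # neurons in a human brain, order of magnitude
synapses_per_neuron = 1e4    # Blue Brain reports ~8000 in mouse cortex; rounded
bits_per_second_per_axon = 10

synapses = neurons * synapses_per_neuron               # ~1e15
memory_bits = synapses * 1                             # ~1e15 bits at ~1 bit/synapse
ops_per_second = synapses * bits_per_second_per_axon   # ~1e16 operations/s

print(f"memory ~ {memory_bits:.0e} bits, throughput ~ {ops_per_second:.0e} ops/s")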

But you are right that it is a crude estimate.  Cognitive studies of long-term
memory put its capacity at about 10^9 bits.  That is also about the quantity of
language input absorbed by an average adult since birth.

One reason for the wide uncertainty (a factor of about 10^6) is that we don't really understand
how the brain works.  Another is that machines are going to be doing different
tasks.  Their purpose is not to behave like humans, but to serve humans (at
least initially).  Many of those tasks (like arithmetic) don't take a lot of
computing power.

The message posting service I have proposed does not address friendliness at
all.  It should be benign as long as it can't reprogram the peers.  I can't
guarantee that won't happen because peers can be arbitrarily configured by
their owners.


-- Matt Mahoney, [EMAIL PROTECTED]



Re: [agi] AGI and Deity

2007-12-11 Thread Charles D Hixson

John G. Rose wrote:

From: Charles D Hixson [mailto:[EMAIL PROTECTED]
The evidence in favor of an external god of any traditional form is,
frankly, a bit worse than unimpressive. It's lots worse. This doesn't
mean that gods don't exist, merely that they (probably) don't exist in
the hardware of the universe. I see them as a function of the software
of the entities that use language. Possibly they exist in a muted form
in most pack animals, or most animals that have protective adults when
they are infants.

To me it appears that people believe in gods for the same reasons that
they believe in telepathy. That is, back before they could speak, the
evidence clearly indicated that the adults could transfer thoughts from one
to another. This shaped a basic layer of beliefs that was later buried
under later additions, but never refuted. When one learned language, one
learned how to transfer thoughts ... but it was never tied back into the
original belief, because what was learned didn't match closely enough to
the original model of what was happening. Analogously, when one is an
infant the adult that cares for one is seen as the all powerful
protector. Pieces of this image become detached memories within the
mind, and are not refuted when a more accurate and developed model of
the actual parents is created. These hidden memories are the basis
around which the idea of a god is created.

Naturally, this is just my model of what is happening. Other
possibilities exist. But if I am to consider them seriously, they need
to match the way the world operates as I understand it. They don't need
to predict the same mechanism, but they need to predict the same events.

E.g., I consider Big Bang cosmology a failed explanation. It's got too
many ad hoc pieces. But it successfully explains most things that are
observed, and is consistent with relativity and quantum theory.
(Naturally, as they were used in developing it...but nevertheless
important.) And relativity and quantum theory themselves are failures,
because both are needed to explain that which is observable, but they
contradict each other in certain details. But they are successful
failures! Similar commentary applies to string theory, but with
differences. (Too many ad hoc parameters!)

Any god that is proposed must be shown to be consistent with the
observed phenomena. The Deists managed to come up with one that would do
the job, but he never became very popular. Few others have even tried,
except with absurdly evident special pleading. Generally I'd be more
willing to accept Chariots of the Gods as a true account.

And as for moral principles... I've READ the Bible. The basic moral
principle that it pushes is We are the chosen people. Kill the
stranger, steal his property, and enslave his servants! It requires
selective reading to come up with anything else, though I admit that
other messages are also in there, if you read selectively. Especially
during the periods when the Jews were in one captivity or another.
(I.e., if you are weak, preach mercy, but if you are strong show none.)
During the later times the Jews were generally under the thumb of one
foreign power or another, so they started preaching mercy.




One of the things about gods is that they are representations of what the
believers don't know or understand. Gods change over time as our knowledge
changes over time. That is ONE of the properties of them. The move from
polytheistic to monotheistic beliefs is a way to centralize these unknowns
for efficiency.

You could build an AGI and label the unknowns with gods. You honestly could.
The "magic happens here" and combinatorial-explosion regions could be labeled
as gods. Most people on this email list would frown at doing that, but I say it
is totally possible and might be an extremely efficient way of conquering
certain cognitive engineering issues. And I'm sure some on this list have
already thought about doing that.

John

  
But the traditional gods didn't represent the unknowns, but rather the 
knowns.  A sun god rose every day and set every night in a regular 
pattern.  Other things which also happened in this same regular pattern 
were adjunct characteristics of the sun god.  Or look at some of their 
names, carefully:  Aphrodite, she who fucks.  I.e., the characteristic 
of all Woman that is embodied in eros.  (Usually the name isn't quite 
that blatant.)


Gods represent the regularities of nature, as embodied in our mental 
processes without the understanding of how those processes operated.  
(Once the processes started being understood, the gods became less 
significant.)


Sometimes there were chance associations...and these could lead to 
strange transformations of myth when things became more understood.  In 
Sumeria the goddess of love was associated with (identified with) the 
evening star and the god of war was associated with (identified with) 
the morning star.  When knowledge of astronomy advanced it was realized 
that those two were 

Re: [agi] Worst case scenario

2007-12-11 Thread Matt Mahoney

--- Bryan Bishop [EMAIL PROTECTED] wrote:

 On Tuesday 11 December 2007, Matt Mahoney wrote:
  --- Bryan Bishop [EMAIL PROTECTED] wrote:
   Re: how much computing power is needed for ai. My worst-case
   scenario accounts for nearly any finite computing power, via the
   production of semiconductant silicon wafer tech.
 
  A human brain sized neural network requires about 10^15 bits of
  memory and 10^16 operations per second.  The Internet already has
  enough computing power to simulate a few thousand brains.  The
 
 Yes, but how much of that computing power is accessible to you? Probably 
 very little at the moment, 

As you read this message, your retina is compressing 10^10 bits per second
down to about 10^7.  Then your visual cortex and hippocampus are cutting it
down to about 10 bits per second.  All that massive computing power is being
used to pick out the tiny bit of useful information from all the clutter.
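
In round numbers, the ratios implied by those figures (crude, orders of
magnitude only):

# Compression ratios implied by the figures above (Python).
retina_input = 1e10   # bits/s arriving at the retina
optic_nerve = 1e7     # bits/s after retinal compression
attended = 10.0       # bits/s surviving cortical/hippocampal filtering

print(f"retina:  {retina_input / optic_nerve:.0e}x reduction")  # ~1e3
print(f"cortex:  {optic_nerve / attended:.0e}x reduction")      # ~1e6
print(f"overall: {retina_input / attended:.0e}x reduction")     # ~1e9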

 and even if you had the penetration of the 
 likes of YouTube and other massive websites, you're still only getting 
 a fraction of the computational power of the internet. Again, 
 worst-case: we have to make our own factories. 

Worst case is self-replicating factories. 
http://en.wikipedia.org/wiki/Grey_goo

  threshold for a singularity is to surpass the collective intelligence
  of all 10^10 human brains on Earth.
 
 I am not so sure that the goal of making ai is the same as making a 
 singularity. But this is probably less relevant.

It's not.  The singularity is a side effect of AI.  I really don't think the
extinction of the human race is something we are striving for.  But it may be
for some, because it will be replaced with something better, for some meanings
of better.  The question boils down to whether by copying your memories you
become the godlike intelligence that replaces humanity.  That question hinges
on the existence of consciousness.  Logically it does not, but belief in
consciousness and fear of death is hardwired into every human brain by
evolution.

  Moore's law allows you to estimate when this will happen, but keep in
 
 Or you can make it happen yourself. Make your own fabs. Get the computer 
 nodes you need. Write the software to take advantage of millions of 
 nodes all at once. etc. 

I have described how the software would work.  It is well within our
technology to write it.  The hardware will follow.


-- Matt Mahoney, [EMAIL PROTECTED]



Re: An information theoretic measure of reinforcement (was RE: [agi] AGI and Deity)

2007-12-11 Thread Matt Mahoney
--- Richard Loosemore [EMAIL PROTECTED] wrote:

 Matt Mahoney wrote:
  --- Richard Loosemore [EMAIL PROTECTED] wrote:
  I have to say that this is only one interpretation of what it would mean 
  for an AGI to experience something, and I for one believe it has no 
  validity at all.  It is purely a numeric calculation that makes no 
  reference to what pain (or any other kind of subjective experience) 
  actually is.
  
  I would like to hear your definition of pain and/or negative
 reinforcement. 
  Can you answer the question of whether a machine (say, an AGI or an
 uploaded
  human brain) can feel pain?
 
 When I get a chance to finish my consciousness paper.  The question of 
 what it is is quite complex.  I'll get back to this later.
 
 But most people are agreed that just having an algorithm avoid a state 
 is not equivalent to pain.

Call it utility if you like, but it is clearly a numeric quantity.  If you
prefer A to B and B to C, then clearly you will prefer A to C.  You can make
rational choices between, say, 2 of A or 1 of B.
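
A minimal sketch of that point (the particular utility numbers below are
arbitrary; any consistent assignment behaves the same way):

# Once preference is numeric, transitivity and bundle comparisons are
# just arithmetic (Python; illustrative values only).
utility = {"A": 3.0, "B": 2.0, "C": 1.0}

def prefers(x, y):
    """True if x is preferred to y under the numeric utility."""
    return utility[x] > utility[y]

assert prefers("A", "B") and prefers("B", "C")
assert prefers("A", "C")                      # transitivity falls out of the ordering
print(2 * utility["A"] > 1 * utility["B"])    # 2 of A vs 1 of B -> True here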

You could relate utility to money, but money is a nonlinear scale.  A dollar
will make some people happier than others, and a million dollars will not make
you a million times happier than one dollar.  Money also has no utility to
babies, animals, and machines, all of which can be trained through
reinforcement learning.  So if you can propose an alternative to bits as a
measure of utility, I am interested to hear about it.

I don't believe that the ability to feel pleasure and pain depends on
consciousness.  That is just a circular definition. 
http://en.wikipedia.org/wiki/Philosophical_zombie


-- Matt Mahoney, [EMAIL PROTECTED]



[agi] CyberLover passing Turing Test

2007-12-11 Thread Dennis Gorelik
http://blog.pmarca.com/2007/12/checking-in-on.html
===
If CyberLover works as described, it will qualify as one of the first
computer programs ever written that is actually passing the Turing Test. 
===




Re: [agi] CyberLover passing Turing Test

2007-12-11 Thread Bryan Bishop
On Tuesday 11 December 2007, Dennis Gorelik wrote:
 If CyberLover works as described, it will qualify as one of the first
 computer programs ever written that is actually passing the Turing
 Test.

I thought the Turing Test involved fooling/convincing judges, not 
clueless men hoping to get some action?

- Bryan



RE: [agi] AGI-08 - Call for Participation

2007-12-11 Thread Ed Porter
Bruce,

The following is a good idea: "Different from conventional conferences,
AGI-08 is planned to be intensively discussion oriented. All the research
papers accepted for publication in the Proceedings (49 papers total) will be
available in advance online, so that attendees may arrive prepared to discuss
the relevant issues with the authors and each other."

Ed Porter

-Original Message-
From: Bruce Klein [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, December 11, 2007 4:51 PM
To: agi@v2.listbox.com
Subject: [agi] AGI-08 - Call for Participation 

The First Conference on Artificial General Intelligence (AGI-08)
March 1-3, 2008 at Memphis, Tennessee, USA
Early Registration Deadline: January 31, 2008
Conference Website: http://www.agi-08.org

Artificial General Intelligence (AGI) research focuses on the original 
and ultimate goal of AI --- to create intelligence as a whole. AGI seeks 
to create software or hardware systems that are generally intelligent 
in roughly the same sense that humans are, rather than being specialized 
problem-solvers such as most of the systems currently studied in the AI 
field.

Current research in the AGI field is vigorous and diverse, exploring a 
wide range of possible paths, including theoretical and experimental 
computer science, cognitive science, neuroscience, and innovative 
interdisciplinary methodologies.

AGI-08 is the very first international conference in this emerging field 
of science and engineering. The conference is organized with the 
cooperation of AAAI, and welcomes researchers and students in all 
relevant disciplines.

Different from conventional conferences, AGI-08 is planned to be 
intensively discussion oriented. All the research papers accepted for 
publication in the Proceedings (49 papers total) will be available in 
advance online, so that attendees may arrive prepared to discuss the 
relevant issues with the authors and each other. The sessions of the 
conference will be organized to facilitate open and informed 
intellectual exchange on themes of common interest.

Besides the technical discussions, time will also be scheduled at AGI-08 
for an exploratory discussion of possible ways to work toward the 
formation of a more cohesive AGI research community -- including future 
conferences, publications, organizations, etc.

After the two-and-a-half-day conference, there will be a half-day workshop 
on the broader implications of AGI technology, including ethical, 
sociological and futurological considerations.

Yours,

Organizing Committee, AGI-08
http://www.agi-08.org/organizing.php



Re[4]: [agi] Do we need massive computational capabilities?

2007-12-11 Thread Dennis Gorelik
Matt,

 You can feed it with text. Then AGI would simply parse text [and
 optionally - Google it].
 
 No need for massive computational capabilities.

 Not when you can just use Google's 10^6 CPU cluster and its database with 10^9
 human contributors.

That's one of my points: our current civilization gives an AGI researcher
the ability to build an AGI prototype on a single PC using the existing
civilization's achievements.

A human being cannot be intelligent without a surrounding society anyway.
We would all lose our minds in less than 10 years if we were totally
separated from other intelligent systems.

Intelligence simply cannot function fully independently.


Bottom line: when building AGI, we should focus on building a member
of our current civilization, not a fully independent intelligent
system.



Re[2]: [agi] CyberLover passing Turing Test

2007-12-11 Thread Dennis Gorelik
Bryan,

 If CyberLover works as described, it will qualify as one of the first
 computer programs ever written that is actually passing the Turing
 Test.

 I thought the Turing Test involved fooling/convincing judges, not 
 clueless men hoping to get some action?

To my taste, testing with clueless judges is a more appropriate
approach. It makes the test less biased.

