Re: [agi] Fwd: Numenta Newsletter: March 20, 2007

2007-03-22 Thread Eugen Leitl
On Wed, Mar 21, 2007 at 08:57:01PM -0700, David Clark wrote:

 I put up with 1 person, out of all the thousands of emails I get, who insisted
 on sending standard text messages as an attachment.  Because of virus

Um, no. Mine are standard http://rfc.net/rfc2015.html digitally signed
messages. If your MUA displays them as an attachment, then it is buggy.
A (somewhat inflammatory) FAQ is here:
http://kmself.home.netcom.com/Rants/gpg-signed-mail.html

 infections, I normally set all emails with attachments to be put in the
 garbage can automatically.  I had to stop that so I could read your emails
 for the past 2 years.

Thanks for doing that.
 
 You have a lot of nerve, indeed.  I made a number of arguments in my email
 about your conclusions (supported, I might add, by no arguments) and you

Most of my conclusions are rather speculative, but I do have arguments for
some of them. I'm quite ready to offer them. However, nontrivial (and
anything less wouldn't do) mails take a lot of time which I currently do
not have. Because of this I tend to postpone such difficult mails, and deal
with easier mails (such as basic quoting netiquette) out of sequence.
All too frequently, however, such things get postponed until they fall off
the stack. Sorry for that, but my time is not infinite.

 respond by pointing me to how to post email URLs.  Your arrogance surely
 exceeds your intelligence.

I'm quite sure of that.
 
 -- David Clark
 
 - Original Message - 
 From: Eugen Leitl [EMAIL PROTECTED]
 To: agi@v2.listbox.com
 Sent: Wednesday, March 21, 2007 2:04 PM
 Subject: Re: [agi] Fwd: Numenta Newsletter: March 20, 2007
 
 
  On Wed, Mar 21, 2007 at 10:47:35AM -0700, David Clark wrote:
 
   In my previous email, I mistakenly edited out the part from Yan King Yin
   and it looks like the "We know that logic is easy" passage was attributed
   to him when it was actually a quote of Eugen Leitl.
  
   Sorry for my mistake.
 
  It's not your mistake. It's the mistake of those who choose to ignore
 
  http://www.netmeister.org/news/learn2quote.html
 
  It is really a great idea to use plaintext posting and set standard
  quoting in your MUA. For those with braindamaged MUAs there are
  workarounds like
 
  http://home.in.tum.de/~jain/software/outlook-quotefix/
 
  -- 
  Eugen* Leitl <a href="http://leitl.org">leitl</a> http://leitl.org
  ______________________________________________________________
  ICBM: 48.07100, 11.36820            http://www.ativel.com
  8B29F6BE: 099D 78BA 2FD3 B014 B08A  7779 75B0 2443 8B29 F6BE
 
 
 
-- 
Eugen* Leitl <a href="http://leitl.org">leitl</a> http://leitl.org
______________________________________________________________
ICBM: 48.07100, 11.36820            http://www.ativel.com
8B29F6BE: 099D 78BA 2FD3 B014 B08A  7779 75B0 2443 8B29 F6BE



Re: [agi] Fwd: Numenta Newsletter: March 20, 2007

2007-03-22 Thread rooftop8000

 
  You're just hoping you only have to do one thing so you can forget about
  all the other stuff that is required. 
 
 No. I don't think that "other stuff that is required" can be done. This is
 the same reason I don't subscribe to SENS. I thought this was unlikely
 when I was a 15-year-old, and I still think it's unlikely as a 40-year-old.
  
  And if I could pick things that wouldn't be needed in a seed AI, it would
  be real-world vision and motor skills. I agree that understanding movement and 
 
 Learning from the environment takes navigation in and active manipulation 
 of the environment. The G in AGI doesn't stand for domain-specific.

Yes, but software doesn't need to see or walk around because it lives
inside a computer. Why aren't you putting in echo-location or knowing
how to flap wings? (I think those things can be useful
to have, but I don't see how they are crucial in your seed AI.)


  
  I still think most of this AGI will have to be coded by hand, and it will
 
 I don't think this is doable by mere humans. The maximum complexity ceiling
 is a few orders of magnitude below what's required (tools only take you so
 far). If AI is numerics, Fortran+MPI would be enough. C would be arguably
 less painful. If AI is not numerics, you're equally screwed, whether it's
 Lisp, Erlang, Ruby or Fortran.
 

I like to think it's possible.



 




RE: [agi] Fwd: Numenta Newsletter: March 20, 2007

2007-03-22 Thread DEREK ZAHN

Rooftop8000 writes:

Yes, but software doesn't need to see or walk around because it lives 
inside a computer. Why aren't you putting in echo-location or knowing how 
to flap wings?


In my opinion, those would be viable things to include in a proto-AGI.  They 
don't lead as directly to conceptualizations that help in communicating with 
human beings, but they could be used for conceptual bootstrapping.


It seems that humans (and other animals, to the extent you believe they 
think) are made by building a general structure capable of learning the 
nature of the universe, with evolutionarily discovered optimizations that 
reduce the learning time.  It is not hard to see how grounding that 
process in perception provides a natural sequence of concepts -- spatial 
regularities leading to spatial relationships and simple mathematics; 
temporal regularities leading to causality, and so on.  From there, 
analogical reasoning mechanisms have a broad base of material to work from.


If such a learner tries to start with less primitive input -- for example, 
feeding it text from the web -- it's not as clear (to me at least) how it can 
grab on to the primitive elements and build on them.  Is there a path from 
letter sequences to their meaning that can reasonably be learned?

I have been following your discussions on this list for some time and 
finally decided to say something.  Although I am not an AGI professional at 
this time, I like to study the issues in my spare time and think about how 
smart machines could be built.





Re: [agi] Fwd: Numenta Newsletter: March 20, 2007

2007-03-21 Thread Eugen Leitl
On Wed, Mar 21, 2007 at 06:12:45PM +0800, YKY (Yan King Yin) wrote:

 Hi Eugen,  This opinion is *biased* by placing too much emphasis on
 sensory / vision.  I tried to build such a vision-centric AGI a couple

We know that logic is easy. People only learned to deal with
it evolutionarily recently, and computers can do serial symbol 
string transformations quite rapidly. Already, computer-assisted
proofs have transformed a branch of mathematics into an empirical
science.

Building world/self models in realtime from noisy, incomplete and inconsistent
data takes a lot of processing, and parallel processing. For some reason
traditional AI considered the logic/mathematics/formal domain hard,
and vision easy. It has turned out exactly the other way round.
Minsky thought porting SHRDLU to the real world was a minor task.
Navigation and realtime control, especially done cooperatively, are hard.
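To make concrete what even the most trivial case involves, here is a minimal
Python sketch (all parameters illustrative, one state dimension only, my
example rather than anything from this thread) of a Kalman filter, the
textbook building block for fusing noisy measurements into a running
estimate. A realtime world/self model needs enormous numbers of such updates
running in parallel:

import random

def kalman_1d(measurements, q=1e-3, r=0.25):
    # q: process noise variance, r: measurement noise variance
    x, p = 0.0, 1.0              # state estimate and its uncertainty
    estimates = []
    for z in measurements:
        p = p + q                # predict: uncertainty grows over time
        k = p / (p + r)          # Kalman gain: how much to trust the sensor
        x = x + k * (z - x)      # update: blend prediction and measurement
        p = (1.0 - k) * p
        estimates.append(x)
    return estimates

true_value = 1.0
noisy = [true_value + random.gauss(0, 0.5) for _ in range(50)]
print(kalman_1d(noisy)[-1])      # settles near 1.0 despite the noise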

We've disintegrated into discussing minutiae (which programming language,
etc.) but the implicit plan is to build a minimal seed that can bootstrap by
extracting knowledge from its environment. The seed must be open-ended,
as in adapting itself to the problem domain. I think vision is a reasonable
first problem domain, because insects can do it quite well. You can presume
that a machine which has bootstrapped to master vision will find logic a
piece of cake. Not necessarily the other way round. I understand some
consider self-modification a specific problem domain, so that a system
capable of targeted self-inspection and self-modification can self-modify
adaptively to a given task, any given task. I think there is absolutely
no evidence this is doable, and in fact there is some evidence this is
a Damn Hard problem.

Do you think this is arbitrary and unreasonable?

 of years back, and found that it has severe deficiencies when it comes
 to *symbolic* and logical aspects of cognition.  If you spend some
 time thinking about the latter domains, you'd likely change your
 mind.  But the current status of neuroscience is such that vision is
 the most understood aspect of the brain, so the vision-centric view of
 AGI is prevalent among people with a strong neuroscience background.

I think there's merit in recapitulating the capabilities as they arose
evolutionarily. We're arguably below insect level now, both in capabilities
and in the computational potential of the current hardware. 

It's best to learn to walk before trying to win the sprinter Olympics, no?

-- 
Eugen* Leitl <a href="http://leitl.org">leitl</a> http://leitl.org
______________________________________________________________
ICBM: 48.07100, 11.36820            http://www.ativel.com
8B29F6BE: 099D 78BA 2FD3 B014 B08A  7779 75B0 2443 8B29 F6BE



Re: [agi] Fwd: Numenta Newsletter: March 20, 2007

2007-03-21 Thread Ben Goertzel

Chuck Esterbrook wrote:

On 3/20/07, Ben Goertzel [EMAIL PROTECTED] wrote:

I would certainly expect that a mature Novamente system would be able to
easily solve this kind of
invariant recognition problem.  However, just because a human toddler
can solve this sort of problem easily, doesn't
mean a toddler level AGI should be able to solve it equally easily.
Different specific modalities will
come more naturally to different intelligences, and humans are
particularly visual in focus...


I generally agree, but wanted to ask this: Shouldn't AGIs be visual in
focus because we are? We want AGIs to help us with various tasks, many
of which will require looking at diagrams, illustrations and pictures.
And that's just the static material.


Eventually, yeah, a useful AGI should be able to process visual info,
just like it should be able to understand human language.

But I feel that the strong focus on vision that characterizes much
AI work today (especially AI work with a neuroscience foundation)
generally tends to lead in the wrong direction, because vision
processing in humans is carried out largely by fairly specialized
structures and processes (albeit in combination with more general-
purpose structures and processes).  So, one can easily progress
incrementally toward better and better vision processing systems, via
better and better emulating the specialized component of human vision
processing, without touching the general-understanding-based component...

Of course, the same dynamic happens across all areas of AI
(creating specialized rather than general methods being a better
way to get impressive, demonstrable incremental progress), but it happens
particularly acutely with vision.

Gary Lynch, in the late 80's, made some strong arguments as to why
olfaction might in some ways be a better avenue to cognition than vision.
Walter Freeman's work on the neuroscience of olfaction is inspired by
this same idea.

One point is that vision processing has an atypically hierarchical
structure in the human brain.  Olfaction OTOH seems to work more based on
attractors and nonlinear dynamics (cf Freeman's work), sorta like a fancier
Hopfield net (w/asymmetric weights, thus leading to non-fixed-point
attractors).  The focus on vision has led many researchers to overly focus
on the hierarchical aspect rather than the attractor aspect, whereas both
aspects obviously play a big role in cognition.
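To make the asymmetry point concrete, here is a toy Python sketch (my
illustration, with random weights, not code from any real model): with
asymmetric weights the dynamics need not settle into a fixed point, so the
sketch finds the attractor of a random asymmetric sign network and reports
its period (a fixed point would have period 1):

import numpy as np

rng = np.random.default_rng(0)
n = 8
W = rng.normal(size=(n, n))      # asymmetric in general: W != W.T
np.fill_diagonal(W, 0.0)

state = np.where(rng.normal(size=n) >= 0, 1.0, -1.0)
seen = {}
for t in range(2**n + 1):        # finite state space: a repeat is guaranteed
    key = tuple(state)
    if key in seen:              # state revisited: we are on an attractor
        period = t - seen[key]
        print("attractor period:", period, "(1 would be a fixed point)")
        break
    seen[key] = t
    state = np.where(W @ state >= 0, 1.0, -1.0)   # synchronous sign update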




I guess I worry about the applicability... Would a blind AGI really be
able to find more effective treatments for heart disease, cancer and
aging?

IMO vision is basically irrelevant to these biomedical research tasks.

Direct sensory connections to biomedical lab equipment would
be more useful ;-)




Regarding Numenta, they tout "irrespective of scale, distortion and
noise" and they chose a visual demonstration, so it seems that at
least their AGI work is deserving of Kevin's criticism. 


I agree.  Poggio's recent work on vision processing using brain models
currently seems more impressive than Numenta's, in terms of combining:

-- far greater fidelity as a brain simulation
-- far better performance as an image processing system

But the Numenta architecture is more general and may be used very
interestingly in the future, who knows...

-- Ben





Re: [agi] Fwd: Numenta Newsletter: March 20, 2007

2007-03-21 Thread Kingma, D.P.

Kevin Cramer wrote:

 FYI... After reading Hawkins' book, I actually believe that his ideas may
 indeed underlie a future AGI system... but they need to be fleshed out in
 much greater detail...

 Cheers,
 K



Their current implementation has not changed substantially since their
first proof-of-concept implementation two years ago. Their solution to
invariance is extracting groups of spatial and temporal patterns, but it
still only works for very, very trivial problems.

Instead of pushing on with this needless platform, they should return to
theory. Currently, it's simply unimpressive.



Re: [agi] Fwd: Numenta Newsletter: March 20, 2007

2007-03-21 Thread Eugen Leitl
On Wed, Mar 21, 2007 at 08:21:57AM -0400, Ben Goertzel wrote:

 Eventually, yeah, a useful AGI should be able to process visual info,
 just like it should be able to understand human language.

Being able to learn to see and to learn to hear, yes? How much
of it do you expect to be hardwired?

E.g. part of what the cochlea does directly in hardware is a Fourier
transform. Do you expect to start with that as a prepositioned
building block, or let the system figure out the appropriate
transformation on its own?
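For concreteness, the "prepositioned building block" option could look like
the following Python sketch (mine; the frame size, hop and sample rate are
arbitrary): a hard-coded short-time Fourier transform standing in for the
cochlea, so the learning system only ever sees the spectrogram:

import numpy as np

def spectrogram(signal, frame=256, hop=128, rate=8000):
    window = np.hanning(frame)
    frames = [signal[i:i + frame] * window
              for i in range(0, len(signal) - frame, hop)]
    spec = np.abs(np.fft.rfft(frames, axis=1))   # magnitude per frequency bin
    freqs = np.fft.rfftfreq(frame, d=1.0 / rate)
    return freqs, spec

t = np.arange(8000) / 8000.0
tone = np.sin(2 * np.pi * 440 * t) + 0.1 * np.random.randn(8000)  # noisy 440 Hz tone
freqs, spec = spectrogram(tone)
print(freqs[spec.mean(axis=0).argmax()])   # ~440: the dominant frequency bin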

 But I feel that the strong focus on vision that characterizes much
 AI work today (especially AI work with a neuroscience foundation)
 generally tends to lead in the wrong direction, because vision
 processing in humans is carried out largely by fairly specialized
 structures and processes (albeit in combination with more general-

There's a reason for that: it's in the laws of optics and the
kind of structures a camera sees in the world. Vision (or an equivalent
high-bandwidth channel, direct depth perception by TOF, or LIDAR,
whatever) is a basic instrument for knowledge extraction.

You can skip that by making the agent directly sense the simulation
grid (externalising the representation), but you'd have to abandon that
once your system takes its first steps in the real world.

 purpose structures and processes).  So, one can easily progress 
 incrementally
 toward better and better vision processing systems, via better and
 better emulating the specialized component of human vision processing,
 without touching the general-understanding-based component...

Only parts of the visual processing pathway are hardwired (not really,
and it's not a linear thing at all), and of course the upper stages use
every trick the neocortex can muster. So, no, I don't think you can
trivialize vision, or postpone it.
 
 Of course, the same dynamic happens across all areas of AI
 (creating specialized rather than general methods being a better
 way to get impressive, demonstrable incremental progress), but it happens
 particularly acutely with vision.
 
 Gary Lynch, in the late 80's, made some strong arguments as to why
 olfaction might in some ways be a better avenue to cognition than vision.
 Walter Freeman's work on the neuroscience of olfaction is inspired by
 this same idea.

The bit rate you get from olfaction is really low. Yes, you can sense
gradients, and if you code everything with volatile carriers you can
recognize just about everything. What I don't like about olfaction is that
it's evolutionarily even older than vision, and directly wired to attention
allocation (emotion) processes. It's like vision, only without the
advantages, and even more hardwired.
 
 One point is that vision processing has an atypically hierarchical
 structure in the human brain.  Olfaction OTOH seems to work more based on
 attractors and nonlinear dynamics (cf Freeman's work), sorta like a fancier
 Hopfield net (w/asymmetric weights, thus leading to non-fixed-point
 attractors).  The focus on vision has led many researchers to overly focus
 on the hierarchical aspect rather than the attractor aspect, whereas both
 aspects obviously play a big role in cognition.

Makes sense.
 
 Direct sensory connections to biomedical lab equipment would
 be more useful ;-)

It would be interesting to see which sensory modalities are optimal
e.g. for medical voxelsets. One could wire a 3D retina onto the voxelset,
or let the system look at the space/time domains directly, and let
it build its own processing and representation.
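One naive reading of "wiring a 3D retina onto the voxelset" (my
interpretation, not a worked-out proposal) is sweeping a small 3-D kernel
over the volume, the volumetric analogue of a retinal receptive field.
A Python sketch, illustrative only and far too slow for real data:

import numpy as np

def conv3d(volume, kernel):
    # naive valid-mode 3-D convolution; fine for a sketch
    kz, ky, kx = kernel.shape
    shape = tuple(v - k + 1 for v, k in zip(volume.shape, kernel.shape))
    out = np.zeros(shape)
    for z in range(shape[0]):
        for y in range(shape[1]):
            for x in range(shape[2]):
                out[z, y, x] = np.sum(volume[z:z+kz, y:y+ky, x:x+kx] * kernel)
    return out

volume = np.random.rand(16, 16, 16)       # stand-in for a medical voxelset
kernel = np.zeros((3, 3, 3))
kernel[0, :, :] = -1.0                    # crude gradient detector along z
kernel[2, :, :] = 1.0
features = conv3d(volume, kernel)
print(features.shape)                     # (14, 14, 14) feature map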
 
-- 
Eugen* Leitl <a href="http://leitl.org">leitl</a> http://leitl.org
______________________________________________________________
ICBM: 48.07100, 11.36820            http://www.ativel.com
8B29F6BE: 099D 78BA 2FD3 B014 B08A  7779 75B0 2443 8B29 F6BE



Re: [agi] Fwd: Numenta Newsletter: March 20, 2007

2007-03-21 Thread Bob Mottram

I think making direct comparisons between computational power and the animal
kingdom has always been a questionable exercise, but I generally agree with
trying to tackle problems in a similar order to the one evolution did, because
evolution needed to find incremental solutions.

I've long wanted to build an intelligent robot, and by that I mean *really*
intelligent, not just faking it for a few minutes with some Eliza-like
program.  I would like to develop systems for abstract reasoning and concept
formation, but unless the robot can actually see and experience its world
these reasoning skills are totally useless (unless of course it were a
super-arrogant robot: "I'm so smart I don't even need to be able to see,
because in my state of singularitarian enlightenment I know the complete
state of the universe!").  So I got stuck on the vision problem, and there I
have remained to this day.



On 21/03/07, Eugen Leitl [EMAIL PROTECTED] wrote:


I think there's merit in recapitulating the capabilities as they arose
evolutionarily. We're arguably below insect level now, both in capabilities
and in the computational potential of the current hardware.





Re: [agi] Fwd: Numenta Newsletter: March 20, 2007

2007-03-21 Thread Bob Mottram

On 21/03/07, Ben Goertzel [EMAIL PROTECTED] wrote:


* use a combination of lidar and camera input
* write code that took this combined input to make a 3D contour map of
the perceived surfaces in the world
* use standard math transforms to triangulate this contour map
* use some AI heuristics (with feedback from the more general AI
routines) to approximate sets of these little
triangles by larger polygons
* finally, feed these larger polygons into the polygon vision module
we have designed for NM in a sim-world
context




This is very much the traditional machine vision approach, described by
Moravec and others and used with some success recently in the DARPA Grand
Challenge.  I'm also following the same approach, which is a very
straightforward application of standard engineering techniques.  The
logistics of doing this are quite complicated, involving camera calibration,
correspondence matching and probabilistic spatial modelling, and I think the
sheer complexity (and drudgery) of the programming task is the reason why
few people have attempted it so far.  Being able to create large-scale
voxel models which can be maintained in a computationally efficient manner
suitable for real-time use also involves some fancy algorithms.

I would agree that where things start to become interesting is at the
polygon level, but you still need to maintain an underlying voxel model of
space, because you can't calculate probability distributions accurately using
polygons alone.
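As a concrete sketch of why the voxel layer carries the probabilities (my
own toy Python version, with made-up log-odds increments): each cell
accumulates evidence from noisy range readings in log-odds form, which
polygons alone cannot represent.

import numpy as np

LO_HIT, LO_MISS = 0.85, -0.4                 # log-odds increments (illustrative)

def update_ray(grid, cells):
    # cells: voxel indices the sensor ray passes through; the last one is the hit
    for c in cells[:-1]:
        grid[c] += LO_MISS                   # evidence for free space
    grid[cells[-1]] += LO_HIT                # evidence for occupancy

def probability(grid):
    return 1.0 / (1.0 + np.exp(-grid))       # log-odds -> P(occupied)

grid = np.zeros((10, 10, 10))                # unknown everywhere: P = 0.5
ray = [(0, 0, z) for z in range(6)]          # a ray hitting a surface at z = 5
for _ in range(3):                           # three consistent scans
    update_ray(grid, ray)
print(probability(grid)[0, 0, 5])            # ~0.93: probably occupied
print(probability(grid)[0, 0, 0])            # ~0.23: probably free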



Re: [agi] Fwd: Numenta Newsletter: March 20, 2007

2007-03-21 Thread Rafael C.P.

I've read the whitepaper on HTMs and NuPIC. It seems more of a
marketing strategy to attract laypeople, since I can't see anything it
can solve that an NN (a recurrent and well-designed/evolved one with a
little preprocessing of the input) can't.

On 3/21/07, Chuck Esterbrook [EMAIL PROTECTED] wrote:

"recognition ... irrespective of scale, distortion and noise" sounds
pretty interesting. Are these capabilities outside of current NNs? I'm
familiar with NNs ignoring noise, but not scale. But my NN
investigations are several years old...

I wonder if "distortion" includes any degree of rotation. I don't have
time for the demo tonight.





--
=
Rafael C.P.
=



Re: [agi] Fwd: Numenta Newsletter: March 20, 2007

2007-03-21 Thread David Clark
- Original Message - 
From: Eugen Leitl [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Wednesday, March 21, 2007 4:32 AM
Subject: Re: [agi] Fwd: Numenta Newsletter: March 20, 2007


 On Wed, Mar 21, 2007 at 06:12:45PM +0800, YKY (Yan King Yin) wrote:

  We know that logic is easy. People only learned to deal with
  it evolutionarily recently, and computers can do serial symbol
  string transformations quite rapidly. Already, computer-assisted
  proofs have transformed a branch of mathematics into an empirical
  science.

Your first 3 points aren't without debate.  I have seen no general/good
logic-based system yet, have you?  Computer-assisted proofs are a
specialized domain and don't make a dent in AGI IMO.  Another example
might be chess programs, but few would say that is AGI.

 Building world/self models in realtime from noisy, incomplete and
 inconsistent data takes a lot of processing, and parallel processing.
 For some reason traditional AI considered the logic/mathematics/formal
 domain hard, and vision easy. It has turned out exactly the other way
 round. Minsky thought porting SHRDLU to the real world was a minor task.
 Navigation and realtime control, especially done cooperatively, are hard.

Regardless of how easy Minsky or others thought porting from small
artificial domains to the real thing would be, that is hardly the point.  I
agree 100% that navigation and realtime control is a hard problem, but so
are higher-level systems like language and semantic learning.  AGI needs
both, and needs them to communicate intelligently.

 We've disintegrated into discussing minutiae (which programming language,
 etc.) but the implicit plan is to build a minimal seed that can bootstrap
 by extracting knowledge from its environment. The seed must be open-ended,
 as in adapting itself to the problem domain. I think vision is a
 reasonable first problem domain, because insects can do it quite well. You
 can presume that a machine which has bootstrapped to master vision will
 find logic a piece of cake. Not necessarily the other way round. I
 understand some consider self-modification a specific problem domain, so
 that a system capable of targeted self-inspection and self-modification
 can self-modify adaptively to a given task, any given task. I think there
 is absolutely no evidence this is doable, and in fact there is some
 evidence this is a Damn Hard problem.

It's funny how many times you have mentioned parallel hardware and other
esoteric minutiae of a very low level, but I for one have enjoyed that info
very much.  I too have worked with robotics, sonar range finders,
microcontrollers etc.  I appreciate how hard it is to deal with real world
data, but that is hardly all that an AGI should be capable of.  What
evidence or experiments do you have to substantiate that "a machine which
has bootstrapped to master vision will find logic a piece of cake"?  To
dismiss whole quadrants of AI development with just a conclusion doesn't
seem very tolerant or warranted from the current data.

The ability of self-inspection and self-modification is a prerequisite IMO
to creating a system that doesn't need to be hand coded in its entirety by
human programmers.  It is possible that some relatively small amount of
code could be used and all the smarts could be in the data, but history has
shown IMO that as the complexity of the data goes up, sub-languages are
created to make sense of the data.  As the level of interpretation goes up,
the speed falls precipitously.  Data and programs have been shown in many
ways to be interchangeable, with some problems going one way and others
going the other.  Do you believe that some tiny (relative to the size of
the AGI) algorithm can be found that will magically create all the systems
required for AGI intelligence?

I am not sure what you mean exactly by "no evidence".  I have made programs
that create other programs (on the fly) for almost 20 years.  It is highly
unlikely that any two of these generated programs would be the same.  I
don't presume to stretch that into implying I have created a program to
produce programs for any given task, but I don't think your (or my) brain
can do that either.

 Do you think this is arbitrary and unreasonable?

I would say your conclusions are arbitrary and unreasonable, even though I
am happy that you are thinking hard about areas of the AGI problem that
desperately need attention.

 I think there's merit in recapitulating the capabilities as they arose
 evolutionarily. We're arguably below insect level now, both in
 capabilities and in the computational potential of the current hardware.

Why does our silicon-based hardware always have to be compared with
carbon-based units?  Computers don't have the requirement that they
contain all the information to reproduce themselves, as humans do.  Our
AGIs are not limited to a single building block, like human DNA and the
brain synapse.  A single AGI could contain any combination of Von

Re: [agi] Fwd: Numenta Newsletter: March 20, 2007

2007-03-21 Thread David Clark
In my previous email, I mistakenly edited out the part from Yan King Yin,
and it looks like the "We know that logic is easy" passage was attributed
to him when it was actually a quote of Eugen Leitl.

Sorry for my mistake.

-- David Clark

- Original Message - 
From: David Clark [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Wednesday, March 21, 2007 10:33 AM
Subject: Re: [agi] Fwd: Numenta Newsletter: March 20, 2007


 - Original Message - 
 From: Eugen Leitl [EMAIL PROTECTED]
 To: agi@v2.listbox.com
 Sent: Wednesday, March 21, 2007 4:32 AM
 Subject: Re: [agi] Fwd: Numenta Newsletter: March 20, 2007


  On Wed, Mar 21, 2007 at 06:12:45PM +0800, YKY (Yan King Yin) wrote:
 
   We know that logic is easy. People only learned to deal with
   it evolutionarily recently, and computers can do serial symbol
   string transformations quite rapidly. Already, computer-assisted
   proofs have transformed a branch of mathematics into an empirical
   science.




Re: [agi] Fwd: Numenta Newsletter: March 20, 2007

2007-03-21 Thread rooftop8000

 
 We've disintegrated into discussing minutiae (which programming language,
 etc.) but the implicit plan is to build a minimal seed that can bootstrap by
 extracting knowledge from its environment. The seed must be open-ended,
 as in adapting itself to the problem domain. I think vision is a reasonable
 first problem domain, because insects can do it quite well. You can presume
 that a machine which has bootstrapped to master vision will find logic a
 piece of cake. 

Trying to make a seed AI is the same as hoping to win the lottery. 
You're just hoping you only have to do one thing so you can forget about
all the other stuff that is required. 

And if I could pick things that wouldn't be needed in a seed AI, it would be 
real-world vision and motor skills. I agree that understanding movement and 
diagrams and figures is essential to thought, but why would a computer program 
need to recognize a picture of a chair or a picture of a horse or be able
to track a flying bird in the sky? I don't think that's required
for most problems. I also don't see how you get to all other thoughts from
there. (Not that it can't be useful to have in your system.)

 Not necessarily the other way round. I understand some
 consider self-modification a specific problem domain, so that a system
 capable of targeted self-inspection and self-modification can self-modify
 adaptively to a given task, any given task. I think there is absolutely
 no evidence this is doable, and in fact there is some evidence this is
 a Damn Hard problem.

I agree. You can only do some minor self-modification if you don't fully
understand your inner workings/code.

I still think most of this AGI will have to be coded by hand, and it will
be a lot of software engineering and not the romantic seed AI or minimal
subset of 10 perfect algorithms... It seems people don't want to put in
all the energy and keep looking for a quick solution.




 




Re: [agi] Fwd: Numenta Newsletter: March 20, 2007

2007-03-21 Thread Eugen Leitl
On Wed, Mar 21, 2007 at 12:00:09PM -0700, rooftop8000 wrote:

 Trying to make a seed AI is the same as hoping to win the lottery. 

Winning the lottery is an unbiased stochastic process. Darwinian
co-evolution is a highly biased stochastic process. Seeds are one-way
hashes: morphogenetic code expands the small seed into a structure
which is appropriately positioned (environment-shaped) to extract
knowledge from the environment (aka doting parents). Such seeds
can be relatively tiny, see fertilized human eggs (the womb
does not seem to contribute noticeable amounts of complexity). Hence they
contain far less complexity than an adult; winning an adult's complexity
by a stochastic process, unbiased or otherwise, takes terrible odds.
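A toy rendering of the one-way-hash metaphor (my gloss, nothing more): a
two-rule rewrite system in the spirit of an L-system expands a one-symbol
seed into a structure four orders of magnitude larger. Running it forward
is trivial; recovering the rules from the output is not.

RULES = {"A": "AB", "B": "A"}       # the entire "genome": two rewrite rules

def expand(seed, generations):
    s = seed
    for _ in range(generations):
        s = "".join(RULES.get(ch, ch) for ch in s)
    return s

grown = expand("A", 20)
print(len(grown))                   # 17711 symbols from a 1-symbol seed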

 You're just hoping you only have to do one thing so you can forget about
 all the other stuff that is required. 

No. I don't think that "other stuff that is required" can be done. This is
the same reason I don't subscribe to SENS. I thought this was unlikely
when I was a 15-year-old, and I still think it's unlikely as a 40-year-old.
 
 And if I could pick things that wouldn't be needed in a seed AI, it would be 
 real-world vision and motor skills. I agree that understanding movement and 

Learning from the environment takes navigation in and active manipulation 
of the environment. The G in AGI doesn't stand for domain-specific.

 diagrams and figures is essential to thought, but why would a computer
 program need to recognize a picture of a chair or a picture of a horse
 or be able to track a flying bird in the sky? I don't think that's
 required for most problems. I also don't see how you get to all other
 thoughts from there. (Not that it can't be useful to have in your system.)
 
 Not necessarily the other way round. I understand some
  consider self-modification a specific problem domain, so that a system
  capable of targeted self-inspection and self-modification can self-modify
  adaptively to a given task, any given task. I think there is absolutely
  no evidence this is doable, and in fact there is some evidence this is
  a Damn Hard problem.
 
 I agree. You can only do some minor self-modification if you don't fully
 understand your inner workings/code. 

I have reasons to suspect that a system can't understand its inner workings
well enough to do radical tweaks. Well, we can (in theory) mushroom our
cortex by a minimal genetic tweak. That's a trivial modification, which
doesn't reengineer the microarchitecture. Live brain surgery on self or a
single copy doesn't strike me as a particularly robust approach. Add a
population of copies, and a voting selection or an external unbiased
evaluator, and you're already in Darwin/Lamarck country.
 
 I still think most of this AGI will have to be coded by hand, and it will

I don't think this is doable by mere humans. The maximum complexity ceiling
is a few orders of magnitude below what's required (tools only take you so
far). If AI is numerics, Fortran+MPI would be enough. C would be arguably
less painful. If AI is not numerics, you're equally screwed, whether it's
Lisp, Erlang, Ruby or Fortran.

 be a lot of software engineering and not the romantic seed AI or minimal

To clarify, I'm only interested in ~human-equivalent general AI, and only in
co-evolution from a reasonable seed pool in a superrealtime virtual
environment heavily skewed towards problem-solution as the fitness function
as a design principle. The only reason for this is that it looks as if
all other approaches are sterile. You're of course quite welcome to prove
me wrong by delivering a working product.
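For the record, the design principle in skeletal form as a Python sketch
(the pool size, mutation rate and the toy bit-matching "problem domain" are
all stand-ins of mine): a pool of seeds, problem-solution as the fitness
function, and selection plus mutation in a cheap simulated environment.

import random

TARGET = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]     # stand-in for the problem domain

def fitness(genome):                         # problem-solution as fitness
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.1):
    return [1 - g if random.random() < rate else g for g in genome]

pool = [[random.randint(0, 1) for _ in TARGET] for _ in range(30)]  # seed pool
for gen in range(200):                       # "superrealtime": iterations are cheap
    pool.sort(key=fitness, reverse=True)
    if fitness(pool[0]) == len(TARGET):
        print("solved at generation", gen)
        break
    survivors = pool[:10]                    # selection pressure
    pool = survivors + [mutate(random.choice(survivors)) for _ in range(20)]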

 subset of 10 perfect algorithms... It seems people don't want to put in
 all the energy and keep looking for a quick solution.

My estimate is several % of yearly GNP for several decades for a likely success
by the above design mechanism. If you call that a quick solution, many will
disagree.

-- 
Eugen* Leitl <a href="http://leitl.org">leitl</a> http://leitl.org
______________________________________________________________
ICBM: 48.07100, 11.36820            http://www.ativel.com
8B29F6BE: 099D 78BA 2FD3 B014 B08A  7779 75B0 2443 8B29 F6BE



Re: [agi] Fwd: Numenta Newsletter: March 20, 2007

2007-03-21 Thread Eugen Leitl
On Wed, Mar 21, 2007 at 10:47:35AM -0700, David Clark wrote:

 In my previous email, I mistakenly edited out the part from Yan King Yin and
 it looks like the "We know that logic is easy" passage was attributed to him
 when it was actually a quote of Eugen Leitl.
 
 Sorry for my mistake.

It's not your mistake. It's the mistake of those who choose to ignore

http://www.netmeister.org/news/learn2quote.html

It is really a great idea to use plaintext posting and set standard
quoting in your MUA. For those with braindamaged MUAs there are
workarounds like

http://home.in.tum.de/~jain/software/outlook-quotefix/

-- 
Eugen* Leitl <a href="http://leitl.org">leitl</a> http://leitl.org
______________________________________________________________
ICBM: 48.07100, 11.36820            http://www.ativel.com
8B29F6BE: 099D 78BA 2FD3 B014 B08A  7779 75B0 2443 8B29 F6BE



Re: [agi] Fwd: Numenta Newsletter: March 20, 2007

2007-03-21 Thread David Clark
I put up with 1 person, out of all the thousands of emails I get, who insisted
on sending standard text messages as an attachment.  Because of virus
infections, I normally set all emails with attachments to be put in the
garbage can automatically.  I had to stop that so I could read your emails
for the past 2 years.

You have a lot of nerve, indeed.  I made a number of arguments in my email
about your conclusions (supported, I might add, by no arguments) and you
respond by pointing me to how to post email URLs.  Your arrogance surely
exceeds your intelligence.

-- David Clark

- Original Message - 
From: Eugen Leitl [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Wednesday, March 21, 2007 2:04 PM
Subject: Re: [agi] Fwd: Numenta Newsletter: March 20, 2007


 On Wed, Mar 21, 2007 at 10:47:35AM -0700, David Clark wrote:

  In my previous email, I mistakenly edited out the part from Yan King Yin
  and it looks like the "We know that logic is easy" passage was attributed
  to him when it was actually a quote of Eugen Leitl.
 
  Sorry for my mistake.

 It's not your mistake. It's the mistake of those who choose to ignore

 http://www.netmeister.org/news/learn2quote.html

 It is really a great idea to use plaintext posting and set standard
 quoting in your MUA. For those with braindamaged MUAs there are
 workarounds like

 http://home.in.tum.de/~jain/software/outlook-quotefix/

 -- 
 Eugen* Leitl <a href="http://leitl.org">leitl</a> http://leitl.org
 ______________________________________________________________
 ICBM: 48.07100, 11.36820            http://www.ativel.com
 8B29F6BE: 099D 78BA 2FD3 B014 B08A  7779 75B0 2443 8B29 F6BE





RE: [agi] Fwd: Numenta Newsletter: March 20, 2007

2007-03-20 Thread Kevin Cramer
I tested this and it is very, very poor at invariant recognition.  I am
surprised they released this given how bad it actually is.  As an example I
drew a small "A" in the bottom-left corner of their drawing area.  The program
returns its top 5 guesses at what you drew.  The letter "A" was not even in
the top 5, much less the first, best guess...

Back to the drawing board for this fundamental problem that no one has
solved...including anyone on this list.  And I can say with certainty that
until it is solved, AGI will not come to pass. 

Cheers,
Kevin



Sapio Sciences, LLC 
Innovative Solutions for Complex Genetics 
2391 Mayfield Street, Suite 201 
York, PA 17402 

Direct: 717.870.7928 
Main: 301.576.2729 
Fax: 301.576.4155 
http://www.sapiosciences.com 

 


-Original Message-
From: Chuck Esterbrook [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, March 20, 2007 11:17 PM
To: agi@v2.listbox.com
Subject: [agi] Fwd: Numenta Newsletter: March 20, 2007

"recognition ... irrespective of scale, distortion and noise" sounds pretty
interesting. Are these capabilities outside of current NNs? I'm familiar
with NNs ignoring noise, but not scale. But my NN investigations are several
years old...

I wonder if "distortion" includes any degree of rotation. I don't have time
for the demo tonight.

-- Forwarded message --
From: Numenta, Inc. [EMAIL PROTECTED]
Date: Mar 20, 2007 6:30 PM
Subject: Numenta Newsletter: March 20, 2007
To: [EMAIL PROTECTED]




Numenta, Inc. Newsletter
March 20, 2007

Dear newsletter subscriber:

I am pleased to make several announcements regarding the Numenta Platform
for Intelligent Computing (NuPIC).

First, we now have available on our web site a demonstration program using
the Pictures example which enables recognition of simple line drawings
irrespective of scale, distortion and noise. This demo is a
Windows(r) client application, and is very easy to use and install.
The Windows program accesses NuPIC running on the Numenta Linux cluster in
California to perform the recognition task, so you do not need to have Linux
installed on your local machine. However, you must be connected live to the
Internet in order to use the demo. You can find the demo here:
http://www.numenta.com/about-numenta/technology/pictures-demo.php

The purpose of this demo is to give you a sense of the capabilities of the
platform without having to install the platform itself. And, the demo is a
lot of fun to use! (Note that the complete platform only is available on
Linux and MacOS today - see below for more information about NuPIC and
Windows.) The demo is particularly well-suited for non-engineers, as well as
being a good starting point for those engineers who want to get a feel for
the platform before going through the full installation process.

Second, several developers have commented in our forums expressing interest
in a Windows development environment. I'd like to let you know that we are
working on this request, and you will see us respond in two phases. We
shortly will have information on better packaging of a virtual Linux machine
running under Windows. In addition, we plan to add the ability to run NuPIC
on native Windows, but that will take longer. Those of you who have
subscribed to the developer newsletter will receive further details shortly.

Finally, I wanted to let you know that we are pleased with the response to
our launch. We've had nearly 2000 people download the platform, and have
been excited to watch the forum and wiki sections become active. We're just
at the beginning of an exciting new era, and want to thank you for your
early and enthusiastic participation.

Donna Dubinsky
CEO, Numenta













Numenta, Inc. | 1010 El Camino Real | Suite 380 | Menlo Park | CA | 94052




Re: [agi] Fwd: Numenta Newsletter: March 20, 2007

2007-03-20 Thread Ben Goertzel

Kevin Cramer wrote:

I tested this and it is very, very poor at invariant recognition.  I am
surprised they released this given how bad it actually is.  As an example I
drew a small "A" in the bottom-left corner of their drawing area.  The program
returns its top 5 guesses at what you drew.  The letter "A" was not even in
the top 5, much less the first, best guess...

Back to the drawing board for this fundamental problem that no one has
solved...including anyone on this list.  And I can say with certainty that
until it is solved, AGI will not come to pass. 
  


I agree that any reasonably powerful AGI that has been given visual
sensors since its childhood will be able to solve this kind of visual
invariant recognition problem easily.

However, I wouldn't say that this is a prerequisite for human-level AGI:
some AGIs could simply not be aware of visual stimuli, existing e.g. in a
world of mathematics or quantum-level data, etc.

Novamente, for example, doesn't deal with low-level vision...

I would certainly expect that a mature Novamente system would be able to
easily solve this kind of invariant recognition problem.  However, just
because a human toddler can solve this sort of problem easily, doesn't
mean a toddler level AGI should be able to solve it equally easily.
Different specific modalities will come more naturally to different
intelligences, and humans are particularly visual in focus...


-- Ben



Re: [agi] Fwd: Numenta Newsletter: March 20, 2007

2007-03-20 Thread Chuck Esterbrook

On 3/20/07, Ben Goertzel [EMAIL PROTECTED] wrote:

I would certainly expect that a mature Novamente system would be able to
easily solve this kind of
invariant recognition problem.  However, just because a human toddler
can solve this sort of problem easily, doesn't
mean a toddler level AGI should be able to solve it equally easily.
Different specific modalities will
come more naturally to different intelligences, and humans are
particularly visual in focus...


I generally agree, but wanted to ask this: Shouldn't AGIs be visual in
focus because we are? We want AGIs to help us with various tasks, many
of which will require looking at diagrams, illustrations and pictures.
And that's just the static material.

I guess I worry about the applicability... Would a blind AGI really be
able to find more effective treatments for heart disease, cancer and
aging?

Regarding Numenta, they tout "irrespective of scale, distortion and
noise" and they chose a visual demonstration, so it seems that at
least their AGI work is deserving of Kevin's criticism. Still, I give
them credit for breaking ground and pushing forward. It may turn into
something yet. And we need more AGI efforts, not fewer.

-Chuck



RE: [agi] Fwd: Numenta Newsletter: March 20, 2007

2007-03-20 Thread Kevin Cramer
Chuck,

I did not mean to pooh-pooh their efforts... only that they should not make
such grand claims that they aren't even close to achieving.  

FYI... After reading Hawkins' book, I actually believe that his ideas may
indeed underlie a future AGI system... but they need to be fleshed out in
much greater detail...

Cheers,
K 



Sapio Sciences, LLC 
Innovative Solutions for Complex Genetics 
2391 Mayfield Street, Suite 201 
York, PA 17402 

Direct: 717.870.7928 
Main: 301.576.2729 
Fax: 301.576.4155 
http://www.sapiosciences.com 

 


-Original Message-
From: Chuck Esterbrook [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, March 21, 2007 12:20 AM
To: agi@v2.listbox.com
Subject: Re: [agi] Fwd: Numenta Newsletter: March 20, 2007

On 3/20/07, Ben Goertzel [EMAIL PROTECTED] wrote:
 I would certainly expect that a mature Novamente system would be able 
 to easily solve this kind of invariant recognition problem.  However, 
 just because a human toddler can solve this sort of problem easily, 
 doesn't mean a toddler level AGI should be able to solve it equally 
 easily.
 Different specific modalities will
 come more naturally to different intelligences, and humans are 
 particularly visual in focus...

I generally agree, but wanted to ask this: Shouldn't AGIs be visual in focus
because we are? We want AGIs to help us with various tasks, many of which
will require looking at diagrams, illustrations and pictures.
And that's just the static material.

I guess I worry about the applicability... Would a blind AGI really be able
to find more effective treatments for heart disease, cancer and aging?

Regarding Numenta, they tout "irrespective of scale, distortion and noise"
and they chose a visual demonstration, so it seems that at least their AGI
work is deserving of Kevin's criticism. Still, I give them credit for
breaking ground and pushing forward. It may turn into something yet. And we
need more AGI efforts, not fewer.

-Chuck

