Re: [agi] Anyone going to the Singularity Summit?

2010-08-11 Thread Steve Richfield
Bryan,

*I'm interested!*

Continuing...

On Tue, Aug 10, 2010 at 11:27 AM, Bryan Bishop kanz...@gmail.com wrote:

 On Tue, Aug 10, 2010 at 6:25 AM, Steve Richfield wrote:

 Note my prior posting explaining my inability even to find a source of
 used mice for kids to use in high-school anti-aging experiments, all while
 university labs are now killing their vast numbers of such mice. So long as
 things remain THIS broken, anything that isn't part of the solution simply
 becomes a part of the very big problem, AIs included.


 You might be interested in this -- I've been putting together an
 adopt-a-lab-rat program that is actually an adoption program for lab mice.


... then it is an adopt-a-mouse program?

I don't know if you are a *Pinky and the Brain* fan, but calling your
project something like *The Pinky Project* would be catchy.

In some cases, mice that are used as a control group in experiments are then
 discarded at the end of the program because, honestly, their lifetime is
 more or less over, so the idea is that some people might be interested in
 adopting these mice.


I had several discussions with the folks at the U of W whose job it was to
euthanize those mice. Their worries seemed to center on two areas:
1.  Financial liability, e.g. a mouse bites a kid, whose finger becomes
infected and...
2.  Social liability, e.g. kids torture them and post the videos on the
Internet.

Of course, you can also just pony up the $15 and get one from Jackson Labs.


Not the last time I checked. They are very careful NOT to sell them to
exactly the same population that I intend to supply - high-school
kids. I expect that if I became a middleman, they would simply stop
selling to me. Even I would have a hard time purchasing them, because they
only sell to genuine LABS.

I haven't fully launched adopt-a-lab-rat yet because I am still trying to
 figure out how to avoid ending up in a situation where I have hundreds of
 rats and rodents running around my apartment and I get the short end of the
 stick (oops).


*What is your present situation and projections? How big a volume could you
supply? What are their approximate ages? Do they have really good
documentation? Were they used in any way that might compromise anti-aging
experiments, e.g. raised in a nicer-than-usual-laboratory environment? Do
you have any liability concerns, as discussed above?*

Mice in the wild live ~4 years. Lab mice live ~2 years. If you take a young
lab mouse and do everything you can to extend its life, you can approach 4
years. If you take an older lab mouse and do everything you can, you double
the REMAINDER of its life, e.g. starting with a one-year-old mouse, you
could get it to live ~3 years. How much better (or worse) than this you do
is the basis the Methuselah Mouse people use for judging.
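
In rough code form (a sketch of the rule of thumb above - the numbers are my
approximations, nothing more):

    MAX_LIFESPAN = 4.0  # ~wild / fully-optimized mouse lifespan, in years
    LAB_LIFESPAN = 2.0  # ~ordinary lab mouse lifespan, in years

    def projected_lifespan(age_at_start):
        # Double the REMAINDER of a lab mouse's life, measured from the age
        # at which the anti-aging intervention starts.
        remainder = LAB_LIFESPAN - age_at_start
        return age_at_start + 2.0 * remainder

    print(projected_lifespan(0.0))  # ~4 years for a young mouse
    print(projected_lifespan(1.0))  # ~3 years for a one-year-old mouse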

Hence, really good documentation is needed to establish when they were born
and when they left the laboratory environment. Tattoos or tags link the mouse
to the paperwork. If I/you/we are to get kids to compete to develop better
anti-aging methods, the mice need to be documented well enough to PROVE
beyond a shadow of a doubt that the kids did what they claim to have done.

Steve





Re: [agi] Re: Compressed Cross-Indexed Concepts

2010-08-11 Thread David Jones
This seems to be an overly simplistic view of AGI from a mathematician. It's
kind of funny how people overemphasize what they know, or depend too much on
their current expertise, when trying to solve new problems.

I don't think it makes sense to apply sanitized and formal mathematical
solutions to AGI. What reason do we have to believe that the problems we
face when developing AGI are solvable by such formal representations? What
reason do we have to think we can represent the problems as an instance of
such mathematical problems?

We have to start with the specific problems we are trying to solve, analyze
what it takes to solve them, and then look for and design a solution.
Starting with the solution and trying to hack the problem to fit it is not
going to work for AGI, in my opinion. I could be wrong, but I would need
some evidence to think otherwise.

Dave

On Wed, Aug 11, 2010 at 10:39 AM, Jim Bromer jimbro...@gmail.com wrote:

 You probably could show that a sophisticated mathematical structure would
 produce a scalable AGI program, if it is true, using contemporary mathematical
 models to simulate it.  However, if scalability was completely dependent on
 some as yet undiscovered mathemagical principle, then you couldn't.

 For example, I think polynomial time SAT would solve a lot of problems with
 contemporary AGI, and I believe this could be demonstrated in a simulation.
 That means that I could demonstrate effective AGI that works so long as the
 SAT problems are easily solved.  If the program reported that a complicated
 logical problem could not be solved, the user could provide his insight at
 those times to help with the problem.  This would not work
 exactly as hoped, but by working from there, I believe that I would be able
 to determine better ways to develop such a program so it would work better -
 if my conjecture about the potential efficacy of polynomial time SAT for AGI
 were true.

 Jim Bromer

 On Mon, Aug 9, 2010 at 6:11 PM, Jim Bromer jimbro...@gmail.com wrote:

 On Mon, Aug 9, 2010 at 4:57 PM, John G. Rose johnr...@polyplexic.com wrote:

  -Original Message-
  From: Jim Bromer [mailto:jimbro...@gmail.com]
 
   how would these diverse examples
  be woven into highly compressed and heavily cross-indexed pieces of
  knowledge that could be accessed quickly and reliably, especially for the
  most common examples that the person is familiar with.

 This is a big part of it and for me the most exciting. And I don't think
 that this subsystem would take up millions of lines of code either.  It's
 just that it is a *very* sophisticated and dynamic mathematical structure
 IMO.

 John



 Well, if it was a mathematical structure then we could start developing
 prototypes using familiar mathematical structures.  I think the structure
 has to involve more ideological relationships than mathematical.  For
 instance, you can apply an idea to your own thinking in such a way that you
 are capable of (gradually) changing how you think about something.  This
 means that an idea can be a compression of some greater change in your own
 programming.  While the idea in this example would be associated with a
 fairly strong notion of meaning, since you cannot accurately understand the
 full consequences of the change it would be somewhat vague at first.  (It
 could be a very precise idea capable of having a strong effect, but the
 details of those effects would not be known until the change had
 progressed.)

 I think the more important question is how a general concept can be
 interpreted across a range of different kinds of ideas.  Actually this is
 not so difficult, but what I am getting at is: how are sophisticated
 conceptual interrelations integrated and resolved?
 Jim






Re: [agi] Anyone going to the Singularity Summit?

2010-08-11 Thread Steve Richfield
Ben,

Genescient has NOT paralleled the human mating habits that would predictably
shorten life. They have only started from a point well beyond anything
achievable in the human population, and gone on from there. Hence, while
their approach may find some interesting things, it is unlikely to find the
things that are now killing our elderly population.

Continuing...

On Tue, Aug 10, 2010 at 11:59 AM, Ben Goertzel b...@goertzel.org wrote:




 I should dredge up and forward past threads with them. There are some
 flaws in their chain of reasoning, so that it won't be all that simple to
 sort the few relevant from the many irrelevant mutations. There is both a
 huge amount of noise and a host of irrelevant adaptations to their
 environment and their treatment.


 They have evolved many different populations in parallel, using the same
 fitness criterion.  This provides powerful noise filtering.


Multiple measurements improve the S/N ratio by the square root of the number
of measurements. Hence, if they were to develop 100 parallel populations,
they could expect to improve their S/N ratio by 10:1. They haven't developed
100 parallel populations, and they need much better than a 10:1 improvement
in the S/N ratio.
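
The arithmetic, for anyone who wants to check it (this assumes independent,
identically distributed noise across populations, which is itself generous):

    import math

    def sn_improvement(n_populations):
        # S/N gain from averaging n independent, equally noisy measurements
        return math.sqrt(n_populations)

    print(sn_improvement(100))  # 10.0 - so even 100 populations buy only 10:1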

Of course, this is all aside from the fact that their signal is wrong
because of the different mating habits.


 Even when the relevant mutations are eventually identified, it isn't clear
 how that will map to usable therapies for the existing population.


 yes, that's a complex matter



 Further, most of the things that kill us operate WAY too slowly to affect
 fruit flies, though there are some interesting dual-affecting problems.


 Fruit flies get all the major ailments that frequently kill people, except
 cancer: heart disease, neurodegenerative disease, respiratory problems,
 immune problems, etc.


Curiously, the list of conditions that they DO exhibit appears to be the
SAME list as people with reduced body temperatures exhibit. This suggests
simply correcting elderly people's body temperatures as they crash. Then,
where do we go from there?

Note that as you get older, your risk of contracting cancer rises
dramatically - SO dramatically that the odds of your eventually contracting
it are ~100%. Meanwhile, the risks of the other diseases DECREASE as you get
older past a certain age, so if you haven't contracted them by ~80, then you
probably never will.

Scientific American had an article a while back about people in Israel who
are 100 years old. At ~100, your risk of dying during each following year
DECREASES with further advancing age!!! This strongly suggests some early
killers that, if you somehow escape them, leave you able to live for quite a
while. Our breeding practices would certainly invite early killers. Of
course, only a very tiny segment of the population lives to be 100.


 As I have posted in the past, what we have here in the present human
 population is about the equivalent of a fruit fly population that was bred
 for the shortest possible lifespan.


 Certainly not.


??? Not what?


 We have those fruit fly populations also, and analysis of their genetics
 refutes your claim ;p ...


Where? References? The last I looked, all they had in addition to their
long-lived groups were uncontrolled control groups, and no groups bred only
from young flies.

In any case, the sociology of humans is SO much different from that of
fruit flies, and breeding practices interact strongly with sociology, e.g.
the bright coloring of birds, beards (which I have commented on before),
etc. In short, I would expect LOTS of mutations in young-bred groups, but
entirely different mutations in people than in fruit flies.

I suspect that there is LOTS more information in the DNA of healthy people
over 100 than there is in any population of fruit flies. Perhaps data from
fruit flies could then be used to reduce the noise from the limited human
population that lives to be 100? Anyway, if someone has thought this whole
thing out, I sure haven't seen it. Sure, there is probably lots to be learned
from genetic approaches, but Genescient's approach seems flawed by its
simplicity.

The challenge here is, as always, money. The value of such research to us is
VERY high, yet there is no meaningful funding. If/when an early AI becomes
available to help in such efforts, there simply won't be any money available
to divert it away from defense (read that: offense) work.

Steve





Re: [agi] Anyone going to the Singularity Summit?

2010-08-11 Thread Ben Goertzel
 We have those fruit fly populations also, and analysis of their genetics
 refutes your claim ;p ...


 Where? References? The last I looked, all they had in addition to their
 long-lived groups were uncontrolled control groups, and no groups bred only
 from young flies.



Michael Rose's UCI lab has evolved flies specifically for short lifespan,
but the results may not be published yet...





Re: [agi] Re: Compressed Cross-Indexed Concepts

2010-08-11 Thread Jim Bromer
David,
I am not a mathematician, although I do a lot
of computer-related mathematical work, of course.  My remark was directed
toward John, who had suggested that he thought that there is some
sophisticated mathematical subsystem that would (using my words here)
provide such a substantial benefit to AGI that its lack may be at the core
of the contemporary problem.  I was saying that unless this required
mathemagic, then a scalable AGI system demonstrating the effectiveness of
this kind of mathematical advancement could probably be simulated using
contemporary mathematics.  This is not the same as saying that AGI is
solvable by sanitized formal representations, any more than saying that your
message is a sanitized formal statement because it depended on a lot of
computer mathematics in order to be sent.  In other words, I was challenging
John at that point to provide some kind of evidence for his view.

I then went on to say that, for example, I think that fast SAT solutions
would make scalable AGI possible (that is, scalable up to a point that is
way beyond where we are now), and therefore I believe that I could create a
simulation of an AGI program to demonstrate what I am talking about.  (A
simulation is not the same as the actual thing.)
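
To make the simulation idea concrete, here is a minimal sketch (a toy
illustration, not an actual design): a brute-force solver stands in for the
hypothetical polynomial-time SAT oracle, and the program defers to the user
whenever an instance exceeds its budget.

    from itertools import product

    def solve_sat(clauses, n_vars, budget=1 << 20):
        # Clauses use DIMACS-style integers: 3 means x3, -3 means NOT x3.
        # Returns a satisfying assignment, "unsat", or "unknown" (over budget).
        for i, bits in enumerate(product([False, True], repeat=n_vars)):
            if i >= budget:
                return "unknown"  # too hard: here the user's insight comes in
            if all(any(bits[abs(lit) - 1] == (lit > 0) for lit in clause)
                   for clause in clauses):
                return bits
        return "unsat"

    # (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
    print(solve_sat([[1, 2], [-1, 3], [-2, -3]], n_vars=3))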

I didn't say, nor did I imply, that the mathematics would be all there is to
it.  I have spent a long time thinking about the problems of applying formal
and informal systems to 'real world' (or other world) problems and the
application of methods is a major part of my AGI theories.  I don't expect
you to know all of my views on the subject but I hope you will keep this in
mind for future discussions.
Jim Bromer

On Wed, Aug 11, 2010 at 10:53 AM, David Jones davidher...@gmail.com wrote:

 This seems to be an overly simplistic view of AGI from a mathematician.
 It's kind of funny how people overemphasize what they know, or depend too
 much on their current expertise, when trying to solve new problems.

 I don't think it makes sense to apply sanitized and formal mathematical
 solutions to AGI. What reason do we have to believe that the problems we
 face when developing AGI are solvable by such formal representations? What
 reason do we have to think we can represent the problems as an instance of
 such mathematical problems?

 We have to start with the specific problems we are trying to solve, analyze
 what it takes to solve them, and then look for and design a solution.
 Starting with the solution and trying to hack the problem to fit it is not
 going to work for AGI, in my opinion. I could be wrong, but I would need
 some evidence to think otherwise.

 Dave


Re: [agi] Re: Compressed Cross-Indexed Concepts

2010-08-11 Thread David Jones
Jim,

Fair enough. My apologies then. I just often see your posts on SAT and other
very formal math problems, and got the impression that you thought this was
at the core of AGI's problems and that pursuing a fast solution to
NP-complete problems is the best way to solve it. At least, that was my
impression. So, my thought was that such formal methods don't seem to be a
complete solution at all, and that other factors, such as uncertainty, could
make such formal solutions ineffective or unusable. That is why I said it's
important to analyze the requirements of the problem and then apply a
solution.

Dave

On Wed, Aug 11, 2010 at 1:02 PM, Jim Bromer jimbro...@gmail.com wrote:

 David,
 I am not a mathematician, although I do a lot
 of computer-related mathematical work, of course.  My remark was directed
 toward John, who had suggested that he thought that there is some
 sophisticated mathematical subsystem that would (using my words here)
 provide such a substantial benefit to AGI that its lack may be at the core
 of the contemporary problem.  I was saying that unless this required
 mathemagic, then a scalable AGI system demonstrating the effectiveness of
 this kind of mathematical advancement could probably be simulated using
 contemporary mathematics.  This is not the same as saying that AGI is
 solvable by sanitized formal representations, any more than saying that your
 message is a sanitized formal statement because it depended on a lot of
 computer mathematics in order to be sent.  In other words, I was challenging
 John at that point to provide some kind of evidence for his view.

 I then went on to say that, for example, I think that fast SAT solutions
 would make scalable AGI possible (that is, scalable up to a point that is
 way beyond where we are now), and therefore I believe that I could create a
 simulation of an AGI program to demonstrate what I am talking about.  (A
 simulation is not the same as the actual thing.)

 I didn't say, nor did I imply, that the mathematics would be all there is
 to it.  I have spent a long time thinking about the problems of applying
 formal and informal systems to 'real world' (or other world) problems and
 the application of methods is a major part of my AGI theories.  I don't
 expect you to know all of my views on the subject but I hope you will keep
 this in mind for future discussions.
 Jim Bromer


Re: [agi] Re: Compressed Cross-Indexed Concepts

2010-08-11 Thread Jim Bromer
On Wed, Aug 11, 2010 at 10:53 AM, David Jones davidher...@gmail.com wrote:

 I don't think it makes sense to apply sanitized and formal mathematical
 solutions to AGI. What reason do we have to believe that the problems we
 face when developing AGI are solvable by such formal representations? What
 reason do we have to think we can represent the problems as an instance of
 such mathematical problems?

 We have to start with the specific problems we are trying to solve, analyze
 what it takes to solve them, and then look for and design a solution.
 Starting with the solution and trying to hack the problem to fit it is not
 going to work for AGI, in my opinion. I could be wrong, but I would need
 some evidence to think otherwise.



I agree that disassociated theories have not proved to be very successful at
AGI, but then again what has?

I would use a mathematical method that gave me the number or percentage of
True cases that satisfy a propositional formula as a way to check the
internal logic of different combinations of logic-based conjectures.  Since
methods that can do this for any logical system that goes (a little) past 32
variables are feasible, the potential of this method should be easy to check
(although it would hit a rather low ceiling of scalability).  So I do think
that logic and other mathematical methods would help in true AGI programs.
However, the other major problem, as I see it, is one of application. And
strangely enough, this application problem is so pervasive that it means that
you cannot even develop artificial opinions!  You can program the computer to
jump on things that you expect it to see, and you can program it to create
theories about random combinations of objects, but how could you have a true
opinion without child-level judgement?
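
For concreteness, a brute-force version of the counting method I mean (a
sketch only - real model counters are cleverer, but this shows why exhaustive
enumeration tops out around 32 variables):

    from itertools import product

    def count_models(formula, n_vars):
        # Count the assignments (out of 2**n_vars) that make `formula` true.
        # `formula` takes a tuple of booleans and returns a boolean.
        return sum(1 for bits in product([False, True], repeat=n_vars)
                   if formula(bits))

    # Example: what fraction of assignments satisfy (a AND b) OR c?
    n = 3
    hits = count_models(lambda v: (v[0] and v[1]) or v[2], n)
    print(hits, "of", 2 ** n, "=", 100.0 * hits / 2 ** n, "%")  # 5 of 8 = 62.5 %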

This may sound like frivolous philosophy but I think it really shows that
the starting point isn't totally beyond us.

Jim Bromer



[agi] Scalable vs Diversifiable

2010-08-11 Thread Mike Tintner
Isn't it time that people started adopting true AGI criteria?

The universal, endlessly repeated criterion here - that a system must be
capable of being scaled up - is a narrow AI criterion.

The proper criterion is diversifiable. If your system can, say, navigate a
DARPA car through a grid of city streets, it's AGI if it's diversifiable - or
rather, can diversify itself - if it can then navigate its way through a
forest, or a strange maze - without being programmed anew. A system is AGI if
it can diversify from one kind of task/activity to another, different kind -
as humans and animals do - without being additionally programmed. Scale is
irrelevant and deflects attention from the real problem.




Re: [agi] Scalable vs Diversifiable

2010-08-11 Thread Jim Bromer
I don't feel that a non-programmer can actually define what true AGI
criteria would be.  The problem is not just oriented around a consumer
definition of a goal, because it involves a fundamental comprehension of the
tools available to achieve that goal.  I appreciate your idea that AGI has
to be diversifiable but your inability to understand certain things that are
said about computer programming makes your proclamation look odd.
Jim Bromer

On Wed, Aug 11, 2010 at 2:26 PM, Mike Tintner tint...@blueyonder.co.uk wrote:

  Isn't it time that people started adopting true AGI criteria?

 The universal, endlessly repeated criterion here - that a system must be
 capable of being scaled up - is a narrow AI criterion.

 The proper criterion is diversifiable. If your system can, say, navigate a
 DARPA car through a grid of city streets, it's AGI if it's diversifiable -
 or rather, can diversify itself - if it can then navigate its way through a
 forest, or a strange maze - without being programmed anew. A system is AGI
 if it can diversify from one kind of task/activity to another, different
 kind - as humans and animals do - without being additionally programmed.
 Scale is irrelevant and deflects attention from the real problem.






Re: [agi] Scalable vs Diversifiable

2010-08-11 Thread Jim Bromer
I think I may understand where the miscommunication occurred.  When we talk
about scaling up an AGI program we are - of course - referring to improving
on an AGI program that can work effectively with a very limited amount of
referential knowledge so that it would be able to handle a much greater
diversification of referential knowledge.  You might say that is what
scalability means.
Jim Bromer

On Wed, Aug 11, 2010 at 2:43 PM, Jim Bromer jimbro...@gmail.com wrote:

 I don't feel that a non-programmer can actually define what true AGI
 criteria would be.  The problem is not just oriented around a consumer
 definition of a goal, because it involves a fundamental comprehension of the
 tools available to achieve that goal.  I appreciate your idea that AGI has
 to be diversifiable but your inability to understand certain things that are
 said about computer programming makes your proclamation look odd.
 Jim Bromer









Re: [agi] Scalable vs Diversifiable

2010-08-11 Thread Mike Tintner
To respond in kind, you, along with virtually all AGI-ers, show an inability
to understand or define the problems of AGI - i.e. the end-problems that an
AGI must face, the problems of creativity vs rationality. You only actually
deal in standard, narrow AI problems.

If you don't understand what a new machine must do, all your technical
knowledge of machines to date may be irrelevant. And in your case, I can't
think of any concerns of yours, like complexity, that have anything to do
with AGI problems at all - nor have you ever tried to relate them to any
actual AGI problems.

So we're well-matched in inability - except that in creative matters,
knowledge of the problems-to-be-solved always takes priority over knowledge
of entirely irrelevant solutions.



From: Jim Bromer 
Sent: Wednesday, August 11, 2010 7:43 PM
To: agi 
Subject: Re: [agi] Scalable vs Diversifiable


I don't feel that a non-programmer can actually define what true AGI criteria 
would be.  The problem is not just oriented around a consumer definition of a 
goal, because it involves a fundamental comprehension of the tools available to 
achieve that goal.  I appreciate your idea that AGI has to be diversifiable but 
your inability to understand certain things that are said about computer 
programming makes your proclamation look odd.
Jim Bromer







Re: [agi] Re: Compressed Cross-Indexed Concepts

2010-08-11 Thread Jim Bromer
I've made two ultra-brilliant statements in the past few days.  One is that
a concept can simultaneously be both precise and vague.  And the other is
that without judgement even opinions are impossible.  (Ok, those two
statements may not be ultra-brilliant, but they are brilliant, right?  Ok,
maybe not truly brilliant, but highly insightful and
perspicuously intelligent... Or at least interesting to the cognoscenti
maybe? ... Well, they were interesting to me at least.)

Ok, these two interesting-to-me comments made by me are interesting because
they suggest that we do not know how to program a computer even to create
opinions.  Or if we do, there is a big untapped difference between those
programs that show nascent judgement (perhaps only at levels relative to the
domain of their capabilities) and those that don't.

This is the AGI programmer's utopia.  (Or at least my utopia.)  Because I
need to find something that is simple enough for me to start with and which
can lend itself to developing and testing theories of AGI judgement and
scalability.  By allowing an AGI program to participate more in the selection
of its own primitive 'interests', we will be able to interact with it, both
as programmer and as user, to guide it toward selecting those interests which
we can understand and which seem interesting to us.  By creating an AGI
program that has a faculty for primitive judgement (as we might envision such
an ability), and then testing its capabilities in areas where the program
seems to work more effectively, we might be better able to develop more
powerful AGI theories that show greater scalability, so long as we are able
to understand what interests the program is pursuing.

Jim Bromer

On Wed, Aug 11, 2010 at 1:40 PM, Jim Bromer jimbro...@gmail.com wrote:

 On Wed, Aug 11, 2010 at 10:53 AM, David Jones davidher...@gmail.com wrote:

 I don't think it makes sense to apply sanitized and formal mathematical
 solutions to AGI. What reason do we have to believe that the problems we
 face when developing AGI are solvable by such formal representations? What
 reason do we have to think we can represent the problems as an instance of
 such mathematical problems?

 We have to start with the specific problems we are trying to solve,
 analyze what it takes to solve them, and then look for and design a
 solution. Starting with the solution and trying to hack the problem to fit
 it is not going to work for AGI, in my opinion. I could be wrong, but I
 would need some evidence to think otherwise.



 I agree that disassociated theories have not proved to be very successful
 at AGI, but then again what has?

 I would use a mathematical method that gave me the number or percentage of
 True cases that satisfy a propositional formula as a way to check the
 internal logic of different combinations of logic-based conjectures.  Since
 methods that can do this for any logical system that goes (a little) past 32
 variables are feasible, the potential of this method should be easy to check
 (although it would hit a rather low ceiling of scalability).  So I do think
 that logic and other mathematical methods would help in true AGI programs.
 However, the other major problem, as I see it, is one of application. And
 strangely enough, this application problem is so pervasive that it means
 that you cannot even develop artificial opinions!  You can program the
 computer to jump on things that you expect it to see, and you can program it
 to create theories about random combinations of objects, but how could you
 have a true opinion without child-level judgement?

 This may sound like frivolous philosophy but I think it really shows that
 the starting point isn't totally beyond us.

 Jim Bromer



Re: [agi] Re: Compressed Cross-Indexed Concepts

2010-08-11 Thread David Jones
Slightly off the topic of your last email. But all this discussion has made
me realize how to phrase something... namely, that solving AGI requires
understanding the constraints that problems impose on a solution. So, it's
sort of an unbelievably complex constraint satisfaction problem. What we've
been talking about is how we come up with solutions to these problems when we
sometimes aren't actually trying to solve any of the real problems. As I've
been trying to articulate lately, in order to satisfy the constraints
of the problems AGI imposes, we must really understand the problems we want
to solve and how they can be solved (their constraints). I think that most of
us do not do this because the problem is so complex that we refuse to
attempt to understand all of its constraints. Instead we focus on something
very small and manageable with fewer constraints. But that's what creates
narrow AI, because the constraints you have developed the solution for only
apply to a narrow set of problems. Once you try to apply it to a different
problem that imposes new, incompatible constraints, the solution fails.
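
As a toy illustration of the framing (hypothetical constraints, nothing like
the real constraints AGI imposes): backtracking search succeeds only when
every constraint is satisfied at once, and one new, incompatible constraint
makes the whole thing unsolvable.

    def backtrack(assignment, variables, domains, constraints):
        # Depth-first search for an assignment satisfying every constraint.
        if len(assignment) == len(variables):
            return assignment
        var = variables[len(assignment)]
        for value in domains[var]:
            candidate = {**assignment, var: value}
            if all(ok(candidate) for ok in constraints):
                result = backtrack(candidate, variables, domains, constraints)
                if result is not None:
                    return result
        return None

    def differ(u, v):
        # Constraint: u and v must get different values (vacuous until both set).
        return lambda a: u not in a or v not in a or a[u] != a[v]

    variables = ["x", "y", "z"]
    domains = {v: [1, 2] for v in variables}
    print(backtrack({}, variables, domains, [differ("x", "y"), differ("y", "z")]))
    # {'x': 1, 'y': 2, 'z': 1} - but add differ("x", "z") and no solution exists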

So, lately I've been pushing for people to truly analyze the problems
involved in AGI, step by step, to understand what the constraints are. I
think this is the only way we will develop a solution that is guaranteed to
work without wasting undue time in trial and error. I don't think trial and
error approaches will work. We must know what the constraints are, instead
of guessing at what solutions might approximate the constraints. I think the
problem space is too large to guess.

Of course, I think acquisition of knowledge through automated means is the
first step in understanding these constraints. But, unfortunately, few agree
with me.

Dave

On Wed, Aug 11, 2010 at 3:44 PM, Jim Bromer jimbro...@gmail.com wrote:

 I've made two ultra-brilliant statements in the past few days.  One is
 that a concept can simultaneously be both precise and vague.  And the other
 is that without judgement even opinions are impossible.  (Ok, those two
 statements may not be ultra-brilliant, but they are brilliant, right?  Ok,
 maybe not truly brilliant, but highly insightful and
 perspicuously intelligent... Or at least interesting to the cognoscenti
 maybe? ... Well, they were interesting to me at least.)

 Ok, these two interesting-to-me comments made by me are interesting because
 they suggest that we do not know how to program a computer even to create
 opinions.  Or if we do, there is a big untapped difference between those
 programs that show nascent judgement (perhaps only at levels relative to the
 domain of their capabilities) and those that don't.

 This is the AGI programmer's utopia.  (Or at least my utopia.)  Because I
 need to find something that is simple enough for me to start with and which
 can lend itself to developing and testing theories of AGI judgement and
 scalability.  By allowing an AGI program to participate more in the selection
 of its own primitive 'interests', we will be able to interact with it, both
 as programmer and as user, to guide it toward selecting those interests
 which we can understand and which seem interesting to us.  By creating an
 AGI program that has a faculty for primitive judgement (as we might envision
 such an ability), and then testing its capabilities in areas where the
 program seems to work more effectively, we might be better able to develop
 more powerful AGI theories that show greater scalability, so long as we are
 able to understand what interests the program is pursuing.

 Jim Bromer


Re: [agi] Re: Compressed Cross-Indexed Concepts

2010-08-11 Thread Jim Bromer
I guess what I was saying was that I can test my mathematical theory and my
theories about primitive judgement at the same time, by trying to find
those areas where the program seems to be good at something.  For example, I
found that it was easy to write a program that found outlines where there
was some contrast between a solid object and whatever was in the background
or foreground.  Now I, as an artist, could use that to
create interesting abstractions.  However, that does not mean that an AGI
program that was supposed to learn and acquire greater judgement based on my
ideas for primitive judgement would be able to do that.  Instead, I would
let it do what it seemed good at, so long as I was able to appreciate what
it was doing.  Since this would lead to something - a next step at least - I
could use this to test my theory that a good, more general SAT solution would
be useful as well.
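
Something along these lines (a minimal sketch of the contrast idea, not my
actual program - thresholded gradient magnitude on a toy image):

    import numpy as np

    def outline(image, threshold):
        # Mark pixels where brightness jumps sharply against a neighbor.
        gy, gx = np.gradient(image.astype(float))
        return np.hypot(gx, gy) > threshold

    img = np.zeros((8, 8))
    img[2:6, 2:6] = 1.0  # a solid bright object against a dark background
    print(outline(img, 0.25).astype(int))  # 1s trace the object's outline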
Jim Bromer

On Wed, Aug 11, 2010 at 3:57 PM, David Jones davidher...@gmail.com wrote:

 Slightly off the topic of your last email. But all this discussion has
 made me realize how to phrase something... namely, that solving AGI requires
 understanding the constraints that problems impose on a solution. So, it's
 sort of an unbelievably complex constraint satisfaction problem. What we've
 been talking about is how we come up with solutions to these problems when
 we sometimes aren't actually trying to solve any of the real problems. As
 I've been trying to articulate lately, in order to satisfy the constraints
 of the problems AGI imposes, we must really understand the problems we want
 to solve and how they can be solved (their constraints). I think that most
 of us do not do this because the problem is so complex that we refuse to
 attempt to understand all of its constraints. Instead we focus on something
 very small and manageable with fewer constraints. But that's what creates
 narrow AI, because the constraints you have developed the solution for only
 apply to a narrow set of problems. Once you try to apply it to a different
 problem that imposes new, incompatible constraints, the solution fails.

 So, lately I've been pushing for people to truly analyze the problems
 involved in AGI, step by step, to understand what the constraints are. I
 think this is the only way we will develop a solution that is guaranteed to
 work without wasting undue time in trial and error. I don't think trial and
 error approaches will work. We must know what the constraints are, instead
 of guessing at what solutions might approximate the constraints. I think the
 problem space is too large to guess.

 Of course, I think acquisition of knowledge through automated means is the
 first step in understanding these constraints. But, unfortunately, few agree
 with me.

 Dave


Re: [agi] Anyone going to the Singularity Summit?

2010-08-11 Thread Steve Richfield
Ben,

It seems COMPLETELY obvious (to me) that almost any mutation would shorten
lifespan, so we shouldn't expect to learn much from it. The particular
lifespan-shortening mutations in the human genome wouldn't be expected
to be the same, or even the same across separated human populations. Hmmm, an
interesting thought: I wonder if certain racially mixed people have shorter
lifespans because they have several disjoint sets of such mutations?!!! Any
idea where to find such data?

It has long been noticed that some racial subgroups do NOT have certain
age-related illnesses, e.g. Japanese don't have clogged arteries, but they
DO have lots of cancer. So far everyone has been blindly presuming diet, but
seeking a particular level of genetic disaster could also explain it.

Any thoughts?

Steve

On Wed, Aug 11, 2010 at 8:06 AM, Ben Goertzel b...@goertzel.org wrote:


 We have those fruit fly populations also, and analysis of their genetics
 refutes your claim ;p ...


 Where? References? The last I looked, all they had in addition to their
 long-lived groups were uncontrolled control groups, and no groups bred only
 from young flies.



 Michael rose's UCI lab has evolved flies specifically for short lifespan,
 but the results may not be published yet...







RE: [agi] Nao Nao

2010-08-11 Thread John G. Rose
Well both. Though much of the control could be remote depending on
bandwidth. 

 

Also, one robot could benefit from the eyes of many as they would all be
internetworked to a degree.

 

John

 

From: Ian Parker [mailto:ianpark...@gmail.com] 



Your remarks about WiFi echo my own view. Should a robot rely on an external
connection (WiFi), or should it have complex processing itself?

 

In general we try to keep real-time response information local, although
local may be viewed in terms of c, the speed of light. If a PC is 150 m away
from a robot, that is a 300 m double journey, which will take about a
microsecond. Accessing the Web for a program will, of course, take
considerably longer.
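
The arithmetic (propagation delay only - ignoring processing and
network-stack overhead, which in practice dominate):

    SPEED_OF_LIGHT = 3.0e8   # m/s
    distance_m = 150.0       # PC-to-robot distance
    round_trip_s = 2 * distance_m / SPEED_OF_LIGHT
    print(round_trip_s)      # 1e-06 s, i.e. about a microsecond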

 

A μsec is nothing, even when we are considering time-critical functions like
balance. However, for balance it might be a good idea either to have the
robot do its own balancing, or else to have a card inserted into the PC.

 

This is one topic for which I have not been able to have a satisfactory
discussion or answer. People who build robots tend to think in terms of
having the processing power on the robot. This I believe is wrong.

 

 

  - Ian Parker

On 10 August 2010 00:06, John G. Rose johnr...@polyplexic.com wrote:

Aww, so cute.

 

I wonder if it has a Wi-Fi connection, DHCP's an IP address, and relays
sensory information back to the main servers with all the other Nao's all
collecting personal data in a massive multi-agent geo-distributed
robo-network.

 

So cuddly!

 

And I wonder if it receives and executes commands, commands that come in
over the network from whatever interested corporation or government pays the
most for access.

 

Such a sweet little friendly Nao. Everyone should get one :)

 

John

 

From: Mike Tintner [mailto:tint...@blueyonder.co.uk] 

 

An unusually sophisticated (and somewhat expensive) promotional robot vid:

 

 
http://www.telegraph.co.uk/technology/news/7934318/Nao-the-robot-that-expresses-and-detects-emotions.html








RE: [agi] Compressed Cross-Indexed Concepts

2010-08-11 Thread John G. Rose
 -Original Message-
 From: Jim Bromer [mailto:jimbro...@gmail.com]
 
 
 Well, if it was a mathematical structure then we could start developing
 prototypes using familiar mathematical structures.  I think the structure
 has to involve more ideological relationships than mathematical.

The ideological would still need to be expressed mathematically.

 For instance,
 you can apply an idea to your own thinking in such a way that you are
 capable of (gradually) changing how you think about something.  This means
 that an idea can be a compression of some greater change in your own
 programming.

Mmm yes or like a key.

 While the idea in this example would be associated with a
 fairly strong notion of meaning, since you cannot accurately understand the
 full consequences of the change it would be somewhat vague at first.  (It
 could be a very precise idea capable of having a strong effect, but the
 details of those effects would not be known until the change had
 progressed.)
 

Yes. It would need to have receptors, an affinity, something like that, or
somehow enable an efficiency change.

 I think the more important question is how a general concept can be
 interpreted across a range of different kinds of ideas.  Actually this is
 not so difficult, but what I am getting at is: how are sophisticated
 conceptual interrelations integrated and resolved?
 Jim

Depends on the structure. We would want to build it such that this happens
at various levels, or at the various multidimensional densities. But at the
same time, complex state is preserved until proven benefits show themselves.

John







RE: [agi] Nao Nao

2010-08-11 Thread John G. Rose
I wasn't meaning to portray pessimism.

 

And that little sucker probably couldn't pick up a knife yet.

 

But there is a paradigm change happening here: we will have many networked
mechanical entities. This opens up a whole new world of security and privacy
issues...

 

John

 

From: David Jones [mailto:davidher...@gmail.com] 



Way too pessimistic in my opinion. 




