Re: Singularity Outcomes [WAS Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-28 Thread Bob Mottram
On 27/01/2008, Vladimir Nesov [EMAIL PROTECTED] wrote:
  Consider the following subset of possible requirements: the program is correct if and only if it halts.

 It's a perfectly valid requirement, and I can write all sorts of
 software that satisfies it. I can't take a piece of software that I
 didn't write and tell you if it satisfies it, but I can write a piece of
 software that satisfies it, that also does all sorts of useful stuff.


This would seem to imply that you've solved the halting problem.

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=90483744-a4b35c


Re: Singularity Outcomes [WAS Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-28 Thread Bob Mottram
On 28/01/2008, Ben Goertzel [EMAIL PROTECTED] wrote:
  When your computer can write and debug
  software faster and more accurately than you can, then you should worry.

 A tool that could generate computer code from formal specifications
 would be a wonderful thing, but not an autonomous intelligence.

 A program that creates its own questions based on its own goals, or
 creates its own program specifications based on its own goals, is
 a quite different thing from a tool.


Having written a lot of computer programs, as I expect many on this
list have, I suspect that fully automatic programming is going to
require the same kind of commonsense reasoning as humans have.  When
I'm writing a program I may draw upon diverse ideas derived from what
might be called common knowledge - something which computers
presently don't have.  The alternative is genetic programming, which is
more of a sampled search through the space of all programs, but I
rather doubt that this is what's going on in my mind for the most
part.

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=90487402-ec9313


[agi] Types of automatic programming? was Re: Singularity Outcomes

2008-01-28 Thread William Pearson
On 28/01/2008, Bob Mottram [EMAIL PROTECTED] wrote:
 On 28/01/2008, Ben Goertzel [EMAIL PROTECTED] wrote:
   When your computer can write and debug
   software faster and more accurately than you can, then you should worry.
 
  A tool that could generate computer code from formal specifications
  would be a wonderful thing, but not an autonomous intelligence.
 
  A program that creates its own questions based on its own goals, or
  creates its own program specifications based on its own goals, is
  a quite different thing from a tool.


 Having written a lot of computer programs, as I expect many on this
 list have, I suspect that fully automatic programming is going to
 require the same kind of commonsense reasoning as humans have.  When
 I'm writing a program I may draw upon diverse ideas derived from what
 might be called common knowledge - something which computers
 presently don't have.  The alternative is genetic programming, which is
 more of a sampled search through the space of all programs, but I
 rather doubt that this is what's going on in my mind for the most
 part.


What kind of processes would you expect to underlie the brain's ability
to reorganise itself during neural plasticity?

http://cogprints.org/2255/0/buss.htm

These are the sorts of changes that we would generally expect to need a
programmer to achieve in a computer system. Commonsense programming
seems to be far too high-level for this, so what sort would you expect
it to be?

 Will Pearson

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=90496970-15b353


Re: Singularity Outcomes [WAS Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-28 Thread Vladimir Nesov
On Jan 28, 2008 11:22 AM, Bob Mottram [EMAIL PROTECTED] wrote:
 On 27/01/2008, Vladimir Nesov [EMAIL PROTECTED] wrote:
   Consider the following subset of possible requirements: the program is correct if and only if it halts.
 
  It's a perfectly valid requirement, and I can write all sorts of
  software that satisfies it. I can't take a piece of software that I
  didn't write and tell you if it satisfies it, but I can write a piece of
  software that satisfies it, that also does all sorts of useful stuff.


 This would seem to imply that you've solved the halting problem.


No, it doesn't. The halting problem is only problematic when we are given an
arbitrary program from outside. On the other hand, there are very
powerful languages that are decidable and also do useful stuff. As one
trivial example, I can take even an arbitrary external program (say, a
Turing machine that I can't check in the general case), place it on a
dedicated tape in a UTM, and add a termination control, so that if it
doesn't halt within 10^6 steps, it will be terminated by the UTM that
runs it. The resulting machine will be able to do everything the original
machine could do in 10^6 steps, and will also be guaranteed to terminate.
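
Here is a minimal sketch of that wrapper in Java (the SteppableMachine
interface and all the names are made up for illustration, not from any
real library):

    // Run an arbitrary machine for at most MAX_STEPS steps, then cut it off.
    interface SteppableMachine {
        // Perform one step; return false once the machine has halted on its own.
        boolean step();
    }

    final class BoundedRunner {
        static final long MAX_STEPS = 1_000_000L;  // the 10^6 step budget

        // Always terminates: either the wrapped machine halts, or the budget runs out.
        static boolean run(SteppableMachine m) {
            for (long i = 0; i < MAX_STEPS; i++) {
                if (!m.step()) {
                    return true;   // halted within the budget
                }
            }
            return false;          // cut off by the wrapper
        }
    }

Whatever the wrapped machine does, run() provably returns after at most
10^6 iterations, which is exactly the guarantee the requirement asks for.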

You could check out, for example, this paper (a link from an LtU
discussion), which presents a rather powerful language for describing
terminating programs:
http://lambda-the-ultimate.org/node/2003

Also see http://en.wikipedia.org/wiki/Total_functional_programming

It's not very helpful in itself, but with a sufficiently powerful type
system it should also be possible to construct programs that have a
required computational complexity and other properties.

-- 
Vladimir Nesov [EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=90499378-2cd47f


Re: Singularity Outcomes [WAS Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-28 Thread Vladimir Nesov
On Jan 28, 2008 2:08 PM, Bob Mottram [EMAIL PROTECTED] wrote:
 On 28/01/2008, Vladimir Nesov [EMAIL PROTECTED] wrote:
  You can try checking out for example this paper (link from LtU
  discussion), which presents a rather powerful language for describing
  terminating programs:
  http://lambda-the-ultimate.org/node/2003
  Also see http://en.wikipedia.org/wiki/Total_functional_programming

 This seems to address the halting problem by ignoring it (the same
 approach researchers often take to difficult problems in computer
 vision).

Well, why is that a mark against these solutions? You don't really need to
write bad programs, so the problem of checking whether a program is bad is
moot if you have a method for writing programs that are guaranteed to be
good.

 For practical purposes timeouts or watchdogs are ok, but
 they're just engineering workarounds rather than solutions.  In
 practice biological intelligence also uses the same hacks, and I think
 Turing himself pointed this out.

A timeout is a trivial answer to a theoretical question, whereas type
systems allow writing normal code, without 'hacks', that also has these
properties. But it's not yet practically feasible to use them: you'll spend
too much time proving that the program is correct and too little time
actually writing it. Maybe in time the tools will catch up...

-- 
Vladimir Nesov [EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=90503888-2fa9e5


Re: Singularity Outcomes [WAS Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-28 Thread Bob Mottram
On 28/01/2008, Vladimir Nesov [EMAIL PROTECTED] wrote:
 I can take even an arbitrary external program (say, a
 Turing machine that I can't check in the general case), place it on a
 dedicated tape in a UTM, and add a termination control, so that if it
 doesn't halt within 10^6 steps, it will be terminated by the UTM that
 runs it.

Yes, you can just add a timeout.


 You can try checking out for example this paper (link from LtU
 discussion), which presents a rather powerful language for describing
 terminating programs:
 http://lambda-the-ultimate.org/node/2003
 Also see http://en.wikipedia.org/wiki/Total_functional_programming

This seems to address the halting problem by ignoring it (the same
approach researchers often take to difficult problems in computer
vision).  For practical purposes timeouts or watchdogs are ok, but
they're just engineering workarounds rather than solutions.  In
practice biological intelligence also uses the same hacks, and I think
Turing himself pointed this out.

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=90503210-4345b9


[agi] Real-time Java-based vision libraries?

2008-01-28 Thread Benjamin Johnston
 

Hi,

 

I’m prototyping some ideas concerning symbol grounding and cognition in the
context of AGI and would like to make use of vision. I’m looking for some
library or classes for Java that can do real time or “nearly real-time”
image/video processing.

 

I’m after either: fairly robust object segmentation; 2.5D or 3D
reconstruction; object recognition; or something like superquadric
reconstruction. Something that attempts to describe the physical structure
of the world seen through the camera.

 

I’m not actually after anything particularly fancy or general – I’m happy
even to craft the lighting and objects to suit the library – I’m more
interested in finding something that works reasonably well “out-of-the-box”
in some open ended domain so that I can conduct a few experiments. That is,
I’m after a library that does some kind of modest, but meaningful, image
processing.

 

I know this is possible – I see it done again and again at AI conferences;
but I can’t seem to be able to find any ready-to-use libraries for Java.
I’ve had a quick attempt at doing it myself, but used very naïve algorithms
and the result wasn’t very robust. I’d love to find something already
available before I dive deeper into the machine vision literature and
attempt to write my own.

 

I’m capturing video frames via the Java Media Framework, but could convert
the stream into images of any reasonable format.
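
For reference, the usual JMF frame-grab idiom looks roughly like the sketch
below (the "vfw://0" capture locator and the fixed sleep are illustrative
assumptions; adjust them for your platform and device):

    import java.awt.Image;
    import javax.media.Buffer;
    import javax.media.Manager;
    import javax.media.MediaLocator;
    import javax.media.Player;
    import javax.media.control.FrameGrabbingControl;
    import javax.media.format.VideoFormat;
    import javax.media.util.BufferToImage;

    // Sketch only: grab a single frame from a JMF capture source and convert it
    // to a java.awt.Image for further processing.
    public class FrameGrabSketch {
        public static void main(String[] args) throws Exception {
            // "vfw://0" is an assumed Windows capture locator; use your own device's locator.
            Player player = Manager.createRealizedPlayer(new MediaLocator("vfw://0"));
            player.start();
            Thread.sleep(2000);  // crude wait for frames to start flowing

            FrameGrabbingControl fgc = (FrameGrabbingControl)
                    player.getControl("javax.media.control.FrameGrabbingControl");
            Buffer frame = fgc.grabFrame();

            // Convert the raw JMF buffer into an Image for ordinary processing.
            BufferToImage converter = new BufferToImage((VideoFormat) frame.getFormat());
            Image img = converter.createImage(frame);
            System.out.println("Grabbed " + img.getWidth(null) + "x" + img.getHeight(null) + " frame");

            player.close();
        }
    }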

 

Any pointers would be very much appreciated.

 

Thank you,

 

-Benjamin Johnston

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=90512336-dca4e2

Re: [agi] Real-time Java-based vision libraries?

2008-01-28 Thread Bob Mottram
On 28/01/2008, Benjamin Johnston [EMAIL PROTECTED] wrote:
 I'm after either: fairly robust object segmentation; 2.5D or 3D
 reconstruction; object recognition; or something like superquadric
 reconstruction. Something that attempts to describe the physical structure
 of the world seen through the camera.


These are all non-trivial problems and I don't know of any libraries
(Java or otherwise) which, out of the box, perform 3D reconstruction
in real time from camera images.  However, this is a problem that I'm
currently working on a solution for (see
http://code.google.com/p/sentience/).



 I know this is possible – I see it done again and again at AI conferences;


Ah, well, appearances can be deceptive.  There are many papers in
computer vision in which you can see fancy 3D reconstructions produced
from camera images.  However, when you really get into the nitty
gritty of how these work you'll usually find that they were either
produced under highly contrived conditions or the result you can see
is not statistically representative (i.e. you might get a good result,
but only 20% of the time).

Programs such as the CMU photo pop-up and Photosynth appear impressive,
but in the CMU case the reconstruction quality is poor (good enough
for entertainment, but not much else), and in the Photosynth case they're
still trying to reduce the huge amount of number crunching needed to
produce the point cloud models (which can take hours or days with
current computers).

There is progress being made on 3D reconstruction using scanning laser
rangefinders.  This is the same kind of technology used in the DARPA
challenges, but it's not cheap and it's certainly not off the shelf
in software terms.

I think it will be possible to produce colour 3D models in real time
from camera images using reasonably low-cost, off-the-shelf technology
within about five years, but for the moment it remains a kind of
holy grail in computer vision.  Once this happens then many new
robotics applications will become possible.

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=90535901-fc9d99

Re: Singularity Outcomes [WAS Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-28 Thread Matt Mahoney

--- Vladimir Nesov [EMAIL PROTECTED] wrote:

 On Jan 28, 2008 4:53 AM, Matt Mahoney [EMAIL PROTECTED] wrote:
    Consider the following subset of possible requirements: the program is correct if and only if it halts.
   
  
   It's a perfectly valid requirement, and I can write all sorts of
   software that satisfies it. I can't take a piece of software that I
   didn't write and tell you if it satisfies it, but I can write a piece of
   software that satisfies it, that also does all sorts of useful stuff.
 
  That is not the hard problem.  Going from a formal specification (actually a
  program) to code is just a matter of compilation.  But verifying that the
  result is correct is undecidable.
 
 What do you mean by that? What does the word 'result' in your last sentence
 refer to? Do you mean the result of compilation? There are verified
 stacks, from the ground up. Given enough effort, it should be possible
 to be arbitrarily sure of their reliability.
 
 And anyway, what is undecidable here?

It is undecidable whether a program satisfies the requirements of a formal
specification, which is the same as saying that it is undecidable whether two
programs are equivalent.  The halting problem reduces to it.


  Maybe AGI will solve some of these problems that seem to be beyond the
  capabilities of humans.  But again it is a double-edged sword.  There is a
  disturbing trend in attacks.  Attackers used to be motivated by ego, so you
  had viruses that played jokes or wiped your files.  Now they are motivated by
  greed, so attacks remain hidden while stealing personal information and
  computing resources.  Acquiring resources is the fitness function for
  competing, recursively self-improving AGI, so it is sure to play a role.
 
 Now THAT you can't oppose: competition for resources by deception that
 relies on human gullibility. But it's a completely different problem;
 it's not about computer security at all. It's about human psychology,
 and one can't do anything about it, as long as people remain human. It
 probably can be sort of solved by placing generally intelligent
 'personal firewalls' on all the input that a human receives.

The problem is not human gullibility but human cognitive limits in dealing
with computer complexity.  Twenty years ago ID theft, phishing, botnets, and
spyware were barely a problem.  This problem will only get worse as software
gets more complex.  What you are suggesting is to abdicate responsibility to
the software, pitting ever smarter security against ever smarter intruders. 
This only guarantees that when your computer is hacked, you will never know. 
But I fear this result is inevitable.

Here is an example of cognitive load.  Firefox will pop up a warning if you
visit a known phishing site, but this doesn't work every time.  Firefox also
makes such sites easier to detect: when you hover the mouse over a link, it
shows the true URL, because by default it disables the Javascript that
attackers use to write a fake URL to the status bar (something which is
enabled by default in IE and can be enabled in Firefox).  This is not
foolproof against creative attacks such as registering www.paypaI.com (with
a capital I), attacking routers or DNS servers to redirect traffic to bogus
sites, sniffing traffic to legitimate sites, keyboard loggers capturing your
passwords, or taking advantage of users who use the same password on more
than one site to reduce their cognitive load (something you would never do,
right?).
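
As a toy illustration of the capital-I trick (illustrative names only, and
obviously not a real defence): the spoofed hostname fools the eye but not a
literal string comparison.

    import java.util.Arrays;
    import java.util.HashSet;
    import java.util.Set;

    // Toy demo: a literal comparison against a trusted list catches a
    // look-alike hostname that passes visual inspection.
    public class LookalikeDemo {
        private static final Set<String> TRUSTED =
                new HashSet<String>(Arrays.asList("www.paypal.com"));

        static boolean suspicious(String host) {
            return !TRUSTED.contains(host.toLowerCase());
        }

        public static void main(String[] args) {
            System.out.println(suspicious("www.paypal.com"));  // false
            System.out.println(suspicious("www.paypaI.com"));  // true: capital I lowercases to 'i', not 'l'
        }
    }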

I use Firefox because I think it is more secure than IE, even though there
seems to be a new attack discovered about once a week. 
http://www.mozilla.org/projects/security/known-vulnerabilities.html
Do you really expect users to keep up with this, plus all their other
software?  No.  You will rely on AGI to do it for you, and when it fails you
will never know.



-- Matt Mahoney, [EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=90580840-9cbff8


Re: [agi] This is not a good turn for the discussion [WAS Re: Singularity Outcomes ...]

2008-01-28 Thread Randall Randall


On Jan 28, 2008, at 12:03 PM, Richard Loosemore wrote:
Your comments below are unfounded, and all the worse for being so  
poisonously phrased.  If you read the conversation from the  
beginning you will discover why:  Matt initially suggested the idea  
that an AGI might be asked to develop a virus of maximum potential,  
for purposes of testing a security system, and that it might  
respond by inserting an entire AGI system into the virus, since  
this would give the virus its maximum potential.  The thrust of my  
reply was that this entire idea of Matt's made no sense, since the
AGI could not be a general intelligence if it could not see the  
full implications of the request.


Please feel free to accuse me of gross breaches of rhetorical  
etiquette, but if you do, please make sure first that I really have  
committed the crimes.  ;-)


I notice everyone else has (probably wisely) ignored
my response anyway.

I thought I'd done well at removing the most poisonously
phrased parts of my email before sending, but I agree I
should have waited a few hours and revisited it before
sending, even so.  In any case, changes in meaning due to
sloppy copying of others' arguments are just SOP for most
internet arguments these days.  :(

To bring this slightly back to AGI:

The thrust of my reply was that this entire idea of Matt's made no
sense, since the AGI could not be a general intelligence if it  
could not see the full implications of the request.


I'm sure you know that most humans fail to see the full
implications of *most* things.  Is it your opinion, then,
that a human is not a general intelligence?

--
Randall Randall [EMAIL PROTECTED]
If I can do it in Alabama, then I'm fairly certain you
 can get away with it anywhere. -- Dresden Codak



-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=90632569-c873ac


Re: Singularity Outcomes [WAS Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-28 Thread Matt Mahoney

--- Vladimir Nesov [EMAIL PROTECTED] wrote:

 On Jan 28, 2008 6:33 PM, Matt Mahoney [EMAIL PROTECTED] wrote:
 
  It is undecidable whether a program satisfies the requirements of a formal
  specification, which is the same as saying that it is undecidable whether two
  programs are equivalent.  The halting problem reduces to it.
 
 Yes it is, if it's an arbitrary program. But you can construct a
 program that doesn't have this problem and also prove that it doesn't.
 You can check whether a program satisfies a specification if it's written in a
 special way (for example, if it's annotated with types that guarantee the
 required conditions).

It is easy to construct programs that you can prove halt or don't halt.

There is no procedure to verify that a program is equivalent to a formal
specification (another program).  Suppose there was.  Then I can take any
program P and tell if it halts.  I construct a specification S from P by
replacing the halting states with states that transition to themselves in an
infinite loop.  I know that S does not halt.  I ask if S and P are equivalent.
 If they are, then P does not halt, otherwise it does.
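
Stated compactly (this is just a restatement of the argument above, in LaTeX
notation):

    \mathrm{HALT}(P) \iff \lnot\,\mathrm{EQ}(P,\, S_P),
    \quad \text{where } S_P \text{ is } P \text{ with every halting state replaced by a self-loop.}

So a decision procedure for EQ would give a decision procedure for HALT,
which does not exist.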

 If a computer cannot be hacked, it won't be.

If I turn off my computer, it can't be hacked.  Otherwise there is no
guarantee.  AGI is not a magic bullet.



-- Matt Mahoney, [EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=90619751-b7cda9


Re: Singularity Outcomes [WAS Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-28 Thread Kaj Sotala
On 1/24/08, Richard Loosemore [EMAIL PROTECTED] wrote:
 Theoretically yes, but behind my comment was a deeper analysis (which I
 have posted before, I think) according to which it will actually be very
 difficult for a negative-outcome singularity to occur.

 I was really trying to make the point that a statement like "The
 singularity WILL end the human race" is completely ridiculous.  There is
 no WILL about it.

Richard,

I'd be curious to hear your opinion of Omohundro's "The Basic AI
Drives" paper at
http://selfawaresystems.files.wordpress.com/2008/01/ai_drives_final.pdf
(apparently, a longer and more technical version of the same can be
found at 
http://selfawaresystems.files.wordpress.com/2008/01/nature_of_self_improving_ai.pdf
, but I haven't read it yet). I found the arguments made relatively
convincing, and to me, they implied that we do indeed have to be
/very/ careful not to build an AI which might end up destroying
humanity. (I'd thought that was the case before, but reading the paper
only reinforced my view...)




-- 
http://www.saunalahti.fi/~tspro1/ | http://xuenay.livejournal.com/

Organizations worth your time:
http://www.singinst.org/ | http://www.crnano.org/ | http://lifeboat.com/

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=90642622-a4687d


Re: Singularity Outcomes [WAS Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-28 Thread Vladimir Nesov
On Jan 28, 2008 7:41 PM, Matt Mahoney [EMAIL PROTECTED] wrote:

 It is easy to construct programs that you can prove halt or don't halt.

 There is no procedure to verify that a program is equivalent to a formal
 specification (another program).  Suppose there was.  Then I can take any
 program P and tell if it halts.  I construct a specification S from P by
 replacing the halting states with states that transition to themselves in an
 infinite loop.  I know that S does not halt.  I ask if S and P are equivalent.
  If they are, then P does not halt, otherwise it does.

Yes, that's what I've been saying all along.


  If a computer cannot be hacked, it won't be.

 If I turn off my computer, it can't be hacked.  Otherwise there is no
 guarantee.  AGI is not a magic bullet.

Exactly. That's why it can't hack provably correct programs. This race
isn't symmetric. Let's stop there (unless you have something new to
say); everything has been repeated at least three times.

-- 
Vladimir Nesov [EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=90631134-afef0e


[agi] This is not a good turn for the discussion [WAS Re: Singularity Outcomes ...]

2008-01-28 Thread Richard Loosemore


Randall,

Your comments below are unfounded, and all the worse for being so 
poisonously phrased.  If you read the conversation from the beginning 
you will discover why:  Matt initially suggested the idea that an AGI 
might be asked to develop a virus of maximum potential, for purposes of 
testing a security system, and that it might respond by inserting an 
entire AGI system into the virus, since this would give the virus its 
maximum potential.  The thrust of my reply was that this entire idea of 
Matt's made no sense, since the AGI could not be a general 
intelligence if it could not see the full implications of the request.


Please feel free to accuse me of gross breaches of rhetorical etiquette, 
but if you do, please make sure first that I really have committed the 
crimes.  ;-)




Richard Loosemore







Randall Randall wrote:


I pulled in some extra context from earlier messages to
illustrate an interesting event, here.

On Jan 27, 2008, at 12:24 PM, Richard Loosemore wrote:

--- Richard Loosemore [EMAIL PROTECTED] wrote:

Matt Mahoney wrote:

Suppose you
ask the AGI to examine some operating system or server software to 
look for
security flaws.  Is it supposed to guess whether you want to fix 
the flaws or

write a virus?


If it has a moral code (it does) then why on earth would it have to
guess whether you want it fix the flaws or fix the virus?


If I hired you as a security analyst to find flaws in a piece of 
software, and
I didn't tell you what I was going to do with the information, how 
would you

know?


This is so silly it is actually getting quite amusing... :-)

So, you are positing a situation in which I am an AGI, and you want to 
hire me as a security analyst, and you say to me:  Please build the 
most potent virus in the world (one with a complete AGI inside it), 
because I need it for security purposes, but I am not going to tell 
you what I will do with the thing you build.


And we are assuming that I am an AGI with at least two neurons to rub 
together?


How would I know what you were going to do with the information?

I would say Sorry, pal, but you must think I was born yesterday.  I 
am not building such a virus for you or anyone else, because the 
dangers of building it, even as a test, are so enormous that it would 
be ridiculous.  And even if I did think it was a valid request, I 
wouldn't do such a thing for *anyone* who said 'I cannot tell you what 
I will do with the thing that you build'!


In the context of the actual quotes, above, the following statement
is priceless.

It seems to me that you have completely lost track of the original 
issue in this conversation, so your other comments are meaningless 
with respect to that original context.


Let's look at this again:


--- Richard Loosemore [EMAIL PROTECTED] wrote:

Matt Mahoney wrote:

Suppose you
ask the AGI to examine some operating system or server software to 
look for
security flaws.  Is it supposed to guess whether you want to fix 
the flaws or

write a virus?


If it has a moral code (it does) then why on earth would it have to
guess whether you want it fix the flaws or fix the virus?


Notice that in Matt's "Is it supposed to guess whether you want to fix the
flaws or write a virus?" there's no suggestion that you're asking the AGI
to write a virus, only that you're asking it for security information.
Richard then quietly changes "to" to "it", thereby changing the meaning of
the sentence to the form he prefers to argue against (however ungrammatical),
and then he manages to finish up by accusing *Matt* of forgetting what Matt
originally said on the matter.

--
Randall Randall [EMAIL PROTECTED]
Someone needs to invent a Bayesball bat that exists solely for
 smacking people [...] upside the head. -- Psy-Kosh on reddit.com






-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=90627563-22941c


RE: [agi] Real-time Java-based vision libraries?

2008-01-28 Thread Benjamin Johnston


 Ah, well, appearances can be deceptive.  There are many papers
 in computer vision in which you can see fancy 3D reconstructions
 produced from camera images.  However, when you really get 
 into the nitty gritty of how these work you'll usually find that
 they were either produced under highly contrived conditions or 
 the result you can see is not statistically representative (i.e.
 you might get a good result, but only 20% of the time).

Thanks Bob, 

For my purposes, I'm actually fairly comfortable with highly contrived
conditions, results that are very approximate or just robust object
segmentation (rather than full/partial reconstruction). I share a lab with a
Robocup team, and was toying with the idea of trying to adapt their C++ code
and their highly contrived soccer field to my experiments. Unfortunately,
though, Robocup vision would fail if you were to throw a yellow ball onto
the field (instead of red): the systems aren't open-ended enough for my
liking.

 However, this is a problem that I'm currently working on a 
 solution for (see http://code.google.com/p/sentience/).

This looks very interesting. How long (and on what sort of machine) does it
take to process each stereo pair with your dense stereo correspondence
algorithm?

 These are all non-trivial problems and I don't know of any 
 libraries (java or otherwise) which out of the box perform
 3D reconstruction in real time from camera images.

Thanks - it isn't looking too promising. It seems like I'm going to have to
use bright lights, simple objects and then write some code of my own (or
look into interfacing directly with a C++ system).

-Ben


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=90806337-bf10cf


Re: [agi] Real-time Java-based vision libraries?

2008-01-28 Thread Bob Mottram
On 28/01/2008, Benjamin Johnston [EMAIL PROTECTED] wrote:
 This looks very interesting. How long (and on what sort of machine) does it
 take to process each stereo pair with your dense stereo correspondence
 algorithm?

The stereo correspondence takes about 50 ms on any reasonably modern PC
or laptop.  Most of the processing time is in fact occupied by the
SLAM algorithm, which produces the 3D grids.

You can find a description of the stereo algorithm and a link to the
code here: http://code.google.com/p/sentience/wiki/StereoCorrespondence

A possible alternative to stereo cameras which may become available
within the next couple of years is something like the Z-cam (
http://www.youtube.com/watch?v=QfVWObYo-Vc ).  It would be interesting
to see this combined with Andrew Davison's monoSLAM.

Anyway, by hook or by crook I think the kind of technology which
you're looking for will arrive within the next few years, although for
the present it remains just out of reach.  Perhaps you should head for
the nearest cryogenic chamber and re-emerge in five years' time when
the technology is ready for action.

- Bob

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=90851447-6c72fa


Re: Singularity Outcomes [WAS Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-28 Thread Vladimir Nesov
On Jan 28, 2008 6:33 PM, Matt Mahoney [EMAIL PROTECTED] wrote:

 It is undecidable whether a program satisfies the requirements of a formal
 specification, which is the same as saying that it is undecidable whether two
 programs are equivalent.  The halting problem reduces to it.

Yes it is, if it's an arbitrary program. But you can construct a
program that doesn't have this problem and also prove that it doesn't.
You can check whether a program satisfies a specification if it's written in a
special way (for example, if it's annotated with types that guarantee the
required conditions).


  Now THAT you can't oppose: competition for resources by deception that
  relies on human gullibility. But it's a completely different problem;
  it's not about computer security at all. It's about human psychology,
  and one can't do anything about it, as long as people remain human. It
  probably can be sort of solved by placing generally intelligent
  'personal firewalls' on all the input that a human receives.

 The problem is not human gullibility but human cognitive limits in dealing
 with computer complexity.

It's the same thing, but gullibility is part of it too, and it is a problem.


 Twenty years ago ID theft, phishing, botnets, and
 spyware were barely a problem.  This problem will only get worse as software
 gets more complex.  What you are suggesting is to abdicate responsibility to
 the software, pitting ever smarter security against ever smarter intruders.
 This only guarantees that when your computer is hacked, you will never know.
 But I fear this result is inevitable.

If a computer cannot be hacked, it won't be.

-- 
Vladimir Nesov [EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=90586814-8bc9a2


Re: Singularity Outcomes [WAS Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-28 Thread Matt Mahoney
--- Vladimir Nesov [EMAIL PROTECTED] wrote:
 Exactly. That's why it can't hack provably correct programs.

Which is useless because you can't write provably correct programs that aren't
extremely simple.  *All* nontrivial properties of programs are undecidable.
http://en.wikipedia.org/wiki/Rice%27s_theorem

And good luck translating human goals expressed in ambiguous and incomplete
natural language into provably correct formal specifications.

 This race isn't symmetric.

Yes it is, because every security tool can be used by both sides.  Here is one
more example: http://www.virustotal.com/
This would be handy if I wanted to write a virus and make sure it isn't
detected.


-- Matt Mahoney, [EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=90866991-a570cd


Re: Singularity Outcomes [WAS Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-28 Thread Lukasz Stafiniak
On Jan 29, 2008 12:35 AM, Matt Mahoney [EMAIL PROTECTED] wrote:
 --- Vladimir Nesov [EMAIL PROTECTED] wrote:
  Exactly. That's why it can't hack provably correct programs.

 Which is useless because you can't write provably correct programs that aren't
 extremely simple.  *All* nontrivial properties of programs are undecidable.
 http://en.wikipedia.org/wiki/Rice%27s_theorem

This is false. You can write nontrivial programs for which you can
prove nontrivial properties. Rice's theorem says that you cannot
decide nontrivial properties of arbitrary programs written in
Turing-complete languages, given unbounded resources and handed to you
by an adversary.
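
For reference, the formal statement (standard computability-theory notation,
with \varphi_e the partial function computed by program e):

    \text{For any class } \mathcal{C} \text{ of partial computable functions:}\quad
    \{\, e \mid \varphi_e \in \mathcal{C} \,\} \text{ is decidable} \iff
    \mathcal{C} = \varnothing \text{ or } \mathcal{C} \text{ contains every partial computable function.}

It quantifies over all programs e, so it says nothing against proving a
chosen property of one particular program constructed to have it.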

 And good luck translating human goals expressed in ambiguous and incomplete
 natural language into provably correct formal specifications.

This is true.

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=90871958-149830


[agi] Goal Driven Systems and AI Dangers [WAS Re: Singularity Outcomes...]

2008-01-28 Thread Richard Loosemore

Kaj Sotala wrote:

On 1/24/08, Richard Loosemore [EMAIL PROTECTED] wrote:

Theoretically yes, but behind my comment was a deeper analysis (which I
have posted before, I think) according to which it will actually be very
difficult for a negative-outcome singularity to occur.

I was really trying to make the point that a statement like "The
singularity WILL end the human race" is completely ridiculous.  There is
no WILL about it.


Richard,

I'd be curious to hear your opinion of Omohundro's "The Basic AI
Drives" paper at
http://selfawaresystems.files.wordpress.com/2008/01/ai_drives_final.pdf
(apparently, a longer and more technical version of the same can be
found at 
http://selfawaresystems.files.wordpress.com/2008/01/nature_of_self_improving_ai.pdf
, but I haven't read it yet). I found the arguments made relatively
convincing, and to me, they implied that we do indeed have to be
/very/ careful not to build an AI which might end up destroying
humanity. (I'd thought that was the case before, but reading the paper
only reinforced my view...)


Kaj,

I have only had time to look at it briefly this evening, but it looks 
like Omohundro is talking about Goal Stack systems.


I made a distinction, once before, between Standard-AI Goal Stack 
systems and another type that had a diffuse motivation system.


Summary of the difference:

1) I am not even convinced that an AI driven by a GS will ever actually 
become generally intelligent, because of the self-contradictions built 
into the idea of a goal stack.  I am fairly sure that whenever anyone 
tries to scale one of those things up to a real AGI (something that has 
never been done, not by a long way) the AGI will become so unstable that 
it will be an idiot.


2) A motivation-system AGI would have a completely different set of 
properties, and among those properties would be extreme stability.  It 
would be possible to ensure that the thing stayed locked on to a goal 
set that was human-empathic, and which would stay that way.


Omohundro's analysis is all predicated on the Goal Stack approach, so 
my response is that nothing he says has any relevance to the type of AGI 
that I talk about (which, as I say, is probably going to be the only 
type ever created).


I will try to go into this in more depth as soon as I get a chance.



Richard Loosemore

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=90892197-f7fae5