Re: [agi] US PATENT ISSUED for the TEN ETHICAL LAWS OF ROBOTICS

2008-07-21 Thread John LaMuth

Assuming I'm a troll is pretty harsh, isn't it?

I just wished to pitch my invention to SI in hopes of
aid in developing this, along the lines of "friendly" AI.
This innovation is the first affective-language analyzer
incorporating ethical/motivational terms, serving in the
role of an interactive computer interface. It enables a
computer to reason and speak in an ethical fashion,
serving in roles requiring sound human judgement, such as
public relations or security functions.
This innovation is formally based on a multi-level
hierarchy of the traditional groupings of virtues, values,
and ideals, collectively arranged as subsets within a
hierarchy of metaperspectives - as partially depicted below.

Glory--Prudence         Honor--Justice
Providence--Faith       Liberty--Hope
Grace--Beauty           Free-will--Truth
Tranquility--Ecstasy    Equality--Bliss

Dignity--Temperance     Integrity--Fortitude
Civility--Charity       Austerity--Decency
Magnanimity--Goodness   Equanimity--Wisdom
Love--Joy               Peace--Harmony

The systematic organization underlying this ethical
hierarchy allows for extreme efficiency in programming,
eliminating much of the associated redundancy, providing
a precise determination of motivational parameters at
issue during a given verbal interchange.
This AI platform is organized as a tandem-nested expert
system, composed of a primary affective-language analyzer
overseen by a master control-unit (that coordinates the
verbal interactions over real time). Through an elaborate
matching procedure, the precise motivational parameters
are accurately determined (defined as the passive-monitoring
mode). This basic determination, in turn, serves as the
basis for a response repertoire tailored to the computer
(the true AI simulation mode). This innovation is completely
novel in its ability to simulate emotionally charged language:
an achievement that has previously eluded AI researchers due
to the lack of an adequate model of motivation in general.
As such, it represents a **pure language simulation,** effectively
bypassing many of the limitations plaguing current robotic
research. Affiliated potential applications extend to the
roles of switchboard/receptionist and personal
assistant/companion (in a time-share mode).

Opinions ?

John L

http://www.ethicalvalues.com
http://www.ethicalvalues.info
http://www.emotionchip.net
http://www.global-solutions.org
http://www.world-peace.org
http://www.angelfire.com/rnb/fairhaven/schematics.html
http://www.angelfire.com/rnb/fairhaven/behaviorism.html
http://www.forebrain.org
http://www.charactervalues.com
http://www.charactervalues.org
http://www.charactervalues.net


- Original Message - 
From: "Brad Paulsen" <[EMAIL PROTECTED]>

To: 
Sent: Monday, July 21, 2008 12:35 PM
Subject: Re: [agi] US PATENT ISSUED for the TEN ETHICAL LAWS OF ROBOTICS



Matt,

Never underestimate the industriousness of a PATENT TROLL.  He's already 
been granted a new patent for the same concept, except (apparently; I 
haven't read the patent yet) this time for "an ethical chip."  Patent 
#7236963 (awarded in 2007) for the "emotion chip."  Don't worry, it's as 
indefensible as the first one.  Same random buzzword generator, different 
title.


The problem is giving one of these morons a technology patent is like 
giving an ADHD kid a loaded gun.  You know they're just looking to use it 
as blackmail for some quick royalty fees.  The posting here was, no doubt, 
for "intimidation purposes."  Of course, somebody ought to tell him the 
AGI crowd doesn't have much use for a solution to the "ethical" artificial 
intelligence problem (whatever the hell that is).  Indeed, even after he 
tells us what it is, it still doesn't make any sense.  And I quote from 
the (first) patent's Abstract "A new model of motivational behavior, 
described as a ten-level metaperspectival hierarchy of..."


Say what?  There is no such word as "metaperspectival."  Not in English, 
at least.  Yet, that's the word he uses to "define" his invention.  But, 
it gets better...


"...ethical terms, serves as the foundation for an ethical simulation of 
artificial intelligence."  Well, I'm glad he intends to conduct his 
simulation ethically.  I think what he really meant, however, was “a 
simulation of ethical artificial intelligence.”  He does get half a 
grammar point for using the correct article (“an”) before “ethical.”  You 
don't see that much these days. But, ah... we have another problem here. 
You see, artificial intelligence IS ALREADY a simulation.  In particular, 
it is a simulation of human intelligence. Hence the word "artificial."  At 
least, that's the idea.  Does he really mean his patent applies to a 
simulation of a simulation?  Given that most existing AI software is 
computationally intensive and gasping for breath most of the time, that's 
got to be one slow-ass AI invention!


Again, from the Abstract of the first patent...

"This AI system is organized as a tandem, neste

Re: [agi] Computing's coming Theory of Everything

2008-07-21 Thread Steve Richfield
Richard,

You are confusing what PCA is now with what it might become. I am more
interested in the dream than in the present reality. Detailed comments
follow...

On 7/21/08, Richard Loosemore <[EMAIL PROTECTED]> wrote:
>
> Steve Richfield wrote:
>
>>  Maybe not "complete" AGI, but a good chunk of one.
>>
>
> Mercy me!  It is not even a gleam in the eye of something that would be
> half adequate.


Who knows what the building blocks of the first successful AGI will be?
Remember that YOU are made of wet neurons, and who knows, maybe they work by
some as-yet-unidentified mathematics that will be uncovered in the quest
for a better PCA.

  Do you have any favorites?
>>
>
> No.  The ones I have seen are not worth a second look.


I had the same opinion.

 I have attached an earlier 2006 paper with *_pictures_* of the learned
>> transfer functions, which look a LOT like what is seen in a cat's and monkey's
>> visual processing.
>>
>
> ... which is so low-level that it counts as peripheral wiring.


Agreed, but there is little difference between GOOD compression and
understanding, so if these guys are truly able to (eventually) perform good
compression, then maybe we are on the way to understanding.

 Note that in the last section where they consider multi-layer applications,
>> that they apparently suggest using *_only one_* PCA layer!
>>
>
> Of course they do:  that is what all these magic bullet people say. They
> can't figure out how to do things in more than one layer, and they do not
> really understand that it is *necessary* to do things in more than one
> layer, so guess what? They suggest that we do not *need* more than one layer.
>
> Sigh.  Programmer Error.


I noted this comment because it didn't ring true for me either. However, my
take on this is that a real/future/good PCA will work for many layers, and
not just the first.

Note that the extensive training was LESS than what a baby sees during its
first hour in the real world.

>> To give you an idea of what I am looking for, does the algorithm go
>> beyond single-level encoding patterns?
>>
>>  Many of the articles, including the one above, make it clear that they
>> are up against a computing "brick wall". It seems that algorithmic honing is
>> necessary to prove whether the algorithms are any good. Hence, no one has
>> shown any practical application (yet), though they note that JPEG encoding
>> is a sort of grossly degenerative example of their approach.
>>  Of course, the present computational difficulties are NO indication that
>> this isn't the right and best way to go, though I agree that this is yet to
>> be proven.
>>
>
> Hmm... you did not really answer the question here.


Increasing bluntness: How are they supposed to test multiple-layer methods
when they have to run their computers for days just to test a single layer?
PCs just don't last that long, and Microsoft has provided no checkpoint
capability to support year-long executions.
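Checkpointing does not have to come from the OS, though; it can live in user code. Here is a minimal checkpoint/resume sketch in Python (file name and per-step work are hypothetical stand-ins), assuming the iteration state can be pickled:

```python
import os
import pickle

CKPT = "pca_state.pkl"  # hypothetical checkpoint file name

def run(total_iters=1000, save_every=100):
    # Resume from the last checkpoint if one exists.
    if os.path.exists(CKPT):
        with open(CKPT, "rb") as f:
            state = pickle.load(f)
    else:
        state = {"iter": 0, "accum": 0.0}

    while state["iter"] < total_iters:
        state["accum"] += state["iter"]  # stand-in for one training step
        state["iter"] += 1
        if state["iter"] % save_every == 0:
            # Write to a temp file first so a crash mid-write
            # cannot corrupt the previous checkpoint.
            with open(CKPT + ".tmp", "wb") as f:
                pickle.dump(state, f)
            os.replace(CKPT + ".tmp", CKPT)
    return state

state = run()
print(state["iter"])  # 1000
```

Kill the process at any point and rerunning picks up from the last saved multiple of `save_every`, so a year-long computation only ever loses the most recent slice of work.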

  Does your response indicate that you are willing to take a shot at
>> explaining some of the math murk in more recent articles? I could certainly
>> use any help that I can get. So far, it appears that a PCA and matrix
>> algebra glossary of terms and abbreviations would go a LONG way to
>> understanding these articles. I wonder if one already exists?
>>
>
> I'd like to help (and I could), but do you realise how pointless it is?


Not yet. I agree that it hasn't gone anywhere yet. Please make your case that
this will never go anywhere.

 All this brings up another question to consider: Suppose that a magical
>> processing method were discovered that did everything that AGIs needed, but
>> took WAY more computing power than is presently available. What would people
>> here do?
>> 1.  Go work on better hardware.
>> 2.  Work on faster/crummier approximations.
>> 3.  Ignore it completely and look for some other breakthrough.
>>
>
> Steve, you raise a deeply interesting question, at one level, because of
> the answer that it provokes:  if you did not have the computing power to
> prove that the "magical processing method" actually was capable of solving
> the problems of AGI, then you would not be in any position to *know* that it
> was capable of solving the problems of AGI.


This all depends on the underlying theoretical case. Early Game Theory
application was also limited by compute power, but holding the proof that
this was as good as could be done, they pushed for more compute power rather
than walking away and looking for some other approach. I remember when the
RAND Corp required 5 hours just to solve a 5X5 non-zero-sum game.

Your question answers itself, in other words.


Only in the absence of theoretical support/proof of optimality. PCA looked
like maybe such a proof might be in its future.

Steve Richfield


>> 
>> Steve Richfield wrote:
>>
>>Y'all,
>> I have long predicted a coming "Theory of Everything" (TOE) in
>>CS that would, among other things, 

Re: [agi] Computing's coming Theory of Everything

2008-07-21 Thread Steve Richfield
Derek,

On 7/21/08, Derek Zahn <[EMAIL PROTECTED]> wrote:
>
>
> > > I have attached an earlier 2006 paper with *_pictures_* of the learned
> > > transfer functions, which look a LOT like what is seen in a cat's and
> > > monkey's visual processing.
> >
> > ... which is so low-level that it counts as peripheral wiring.
>
> True.  Still, it is kind of cool stuff for folks interested in how neural
> systems might self-organize from sensory data.  The visual world has edges
> and borders at various scales and degrees of sharpness and it is interesting
> to see how that can be learned.  Unfortunately, although the linearity
> assumptions of PCA might just barely allow this sort of "proto-V1" as in the
> paper, it doesn't seem likely to extend further up in a feature abstraction
> hierarchy where more complex relationships would seem to require
> nonlinearities.
>

THIS is a big question. Remembering that absolutely ANY function can be
performed by passing the inputs through a suitable non-linearity, adding
them up, and running the results through another suitable non-linearity, it
isn't clear what the limitations of "linear" operations are, given suitable
"translation" of units or point-of-view. Certainly, all fuzzy logical
functions can be performed this way. I even presented a paper at the very
1st NN conference in San Diego, showing that one of the two inhibitory
synapses ever to be characterized was precisely what was needed to perform
an AND NOT to the logarithms of probabilities of assertions being true,
right down to the discontinuity at 1.
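For concreteness, here is one reading of that log-probability AND NOT (my reconstruction, not the cited paper's formulation): for independent assertions A and B, log P(A AND NOT B) = log P(A) + log(1 - P(B)), which diverges to negative infinity as P(B) approaches 1 -- the hard edge at 1:

```python
import math

def log_and_not(log_p_a: float, p_b: float) -> float:
    """log P(A AND NOT B) for independent A, B:
    log P(A) + log(1 - P(B)).
    Diverges to -infinity as P(B) -> 1 (one reading of the
    'discontinuity at 1' in the original remark)."""
    if p_b >= 1.0:
        return float("-inf")  # A AND NOT B is impossible when B is certain
    return log_p_a + math.log1p(-p_b)

# Example: P(A) = 0.9, P(B) = 0.5  ->  P(A AND NOT B) = 0.45
print(math.exp(log_and_not(math.log(0.9), 0.5)))  # ~0.45
```

Note the operation is a single addition in the log domain, which is the kind of thing a synapse summing log-coded inputs could plausibly compute.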

>
> Assuming the author's analysis is correct, the observation that the
> discovered eigenvectors form groups that can express rotations of edge (etc)
> filters at various frequencies is kind of nifty, even if it turns out not to
> be biologically plausible.
>

Did you see anything there that was not biologically plausible?

 I don't see any broad generality for AGI beyond very low-level sensory
> processing given the limits of PCA
>

Make that present-day PCA. Several people are working on its limitations,
and there seems to be some reason for hope of much better things to come.

 and the sheer volume of training data required to sort out the principal
> components of high-dimensional inputs.
>

Given crummy shitforbrains Hebbian neurons that aren't smart enough to
continuously normalize their synaptic weights, etc. This too needs MUCH more
work.

>
> For a much more detailed, capable, and perhaps more neurally plausible
> model of similar stuff, the work of Risto Miikkulainen's group is a lot of
> fun.
>

Do you have a hyperlink?

Thanks.

Steve Richfield



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=108809214-a0d121
Powered by Listbox: http://www.listbox.com


RE: [agi] Computing's coming Theory of Everything

2008-07-21 Thread Derek Zahn
> > I have attached an earlier 2006 paper with *_pictures_* of the learned
> > transfer functions, which look a LOT like what is seen in a cat's and
> > monkey's visual processing.
>
> ... which is so low-level that it counts as peripheral wiring.
True.  Still, it is kind of cool stuff for folks interested in how neural 
systems might self-organize from sensory data.  The visual world has edges and 
borders at various scales and degrees of sharpness and it is interesting to see 
how that can be learned.  Unfortunately, although the linearity assumptions of 
PCA might just barely allow this sort of "proto-V1" as in the paper, it doesn't 
seem likely to extend further up in a feature abstraction hierarchy where more 
complex relationships would seem to require nonlinearities.  
 
Assuming the author's analysis is correct, the observation that the discovered 
eigenvectors form groups that can express rotations of edge (etc) filters at 
various frequencies is kind of nifty, even if it turns out not to be 
biologically plausible.  I don't see any broad generality for AGI beyond very 
low-level sensory processing given the limits of PCA and the sheer volume of 
training data required to sort out the principal components of high-dimensional 
inputs.
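As a concrete baseline for the PCA under discussion, here is a minimal sketch (synthetic oriented-sinusoid data standing in for image patches, since the paper's dataset isn't reproduced here): PCA via eigendecomposition of the patch covariance, whose leading eigenvectors recover the oriented structure planted in the data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for natural-image patches: 8 oriented
# sinusoid "filters" mixed with noise (real experiments use
# whitened image data, not reproduced here).
n, d = 2000, 64                                # 2000 patches of 8x8 pixels
x, y = np.meshgrid(np.arange(8), np.arange(8))
angles = np.linspace(0, np.pi, 8, endpoint=False)
basis = np.stack([np.sin(0.5 * (x * np.cos(t) + y * np.sin(t))).ravel()
                  for t in angles])            # shape (8, 64)
patches = rng.normal(size=(n, 8)) @ basis + 0.1 * rng.normal(size=(n, d))

# PCA = eigendecomposition of the centered covariance matrix.
centered = patches - patches.mean(axis=0)
cov = centered.T @ centered / (n - 1)
evals, evecs = np.linalg.eigh(cov)             # ascending eigenvalues
order = np.argsort(evals)[::-1]
components = evecs[:, order].T                 # rows = principal components

# The 8 planted orientations dominate the spectrum.
top8_var = evals[order][:8].sum() / evals.sum()
print(components.shape)  # (64, 64)
```

Reshaping the leading rows of `components` back to 8x8 and plotting them is how the paper-style "pictures of learned transfer functions" are produced; here they come out as the oriented gratings that were planted.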
 
For a much more detailed, capable, and perhaps more neurally plausible model of 
similar stuff, the work of Risto Miikkulainen's group is a lot of fun.
 




Re: [agi] Computing's coming Theory of Everything

2008-07-21 Thread Richard Loosemore

Steve Richfield wrote:

Richard,

On 7/21/08, *Richard Loosemore* <[EMAIL PROTECTED] 
> wrote:


Principal component analysis is not new, it has a long history,

 
Yes, as I have just discovered. What I do NOT understand is why anyone 
bothers with clustering (except through ignorance - my own excuse), 
which seems on its face to be greatly inferior.


and so far it is a very long way from being the basis for a complete
AGI,

 
Maybe not "complete" AGI, but a good chunk of one.


Mercy me!  It is not even a gleam in the eye of something that would be 
half adequate.





let alone a theory of everything in computer science.

 
OK, so that may be a bit of an exaggeration, but nonetheless there looks 
like there is SOMETHING big out there that could potentially do the 
particular jobs that I have outlined.


Is there any concrete reason to believe that this particular PCA
paper is doing something that is some kind of quantum leap beyond
what can be found in the (several thousand?) other PCA papers that
have already been written?

 
Do you have any favorites?


No.  The ones I have seen are not worth a second look.


I have attached an earlier 2006 paper with *_pictures_* of the learned 
transfer functions, which look a LOT like what is seen in a cat's and 
monkey's visual processing.


... which is so low-level that it counts as peripheral wiring.


Note that in the last section where they consider multi-layer 
applications, that they apparently suggest using *_only one_* PCA layer!


Of course they do:  that is what all these magic bullet people say. 
They can't figure out how to do things in more than one layer, and they 
do not really understand that it is *necessary* to do things in more 
than one layer, so guess what? They suggest that we do not *need* more 
than one layer.


Sigh.  Programmer Error.




To give you an idea of what I am looking for, does the algorithm go
beyond single-level encoding patterns?

 
Many of the articles, including the one above, make it clear that they 
are up against a computing "brick wall". It seems that algorithmic 
honing is necessary to prove whether the algorithms are any good. Hence, 
no one has shown any practical application (yet), though they note that 
JPEG encoding is a sort of grossly degenerative example of their approach.
 
Of course, the present computational difficulties are NO indication that 
this isn't the right and best way to go, though I agree that this is yet 
to be proven.


Hmm... you did not really answer the question here.




Can it find patterns of patterns, up to arbitrary levels of depth?
 And is there empirical evidence that it really does find a set of
patterns comparable to those found by the human cognitive mechanism,
without missing any obvious cases?

 
Again, take a look at the pictures and provide your own opinion. It 
sounds like you are a LOT more familiar with this than I am.


Bloated claims for the effectiveness of some form of PCA turn up
frequently in cog sci, NN and AI.  It can look really impressive
until you realize how limited and non-extensible it is.

 
Curiously, there were NO such claims in any of these articles. Just lots 
of murky math. The attached article is the least opaque of the bunch. I 
was just pointing out that if this ever really DOES come together, then 
WOW. Further, disparate people seem to be coming up with different 
pieces of the puzzle.
 
Does your response indicate that you are willing to take a shot at 
explaining some of the math murk in more recent articles? I could 
certainly use any help that I can get. So far, it appears that a PCA and 
matrix algebra glossary of terms and abbreviations would go a LONG way 
to understanding these articles. I wonder if one already exists?


I'd like to help (and I could), but do you realise how pointless it is? 
 I have enough other things to do that I am not getting on with 
seriously important tasks, never mind explaining PCA minutiae.



All this brings up another question to consider: Suppose that a magical 
processing method were discovered that did everything that AGIs needed, 
but took WAY more computing power than is presently available. What 
would people here do?

1.  Go work on better hardware.
2.  Work on faster/crummier approximations.
3.  Ignore it completely and look for some other breakthrough.


Steve, you raise a deeply interesting question, at one level, because of 
the answer that it provokes:  if you did not have the computing power to 
prove that the "magical processing method" actually was capable of 
solving the problems of AGI, then you would not be in any position to 
*know* that it was capable of solving the problems of AGI.


Your question answers itself, in other words.




Richard Loosemore






There is a NN parallel in electric circuit simulation programs like 
SPICE. Here, the execution time goes up as the *~_square_* of the 
circuit comple

RE: [agi] Patterns and Automata

2008-07-21 Thread John G. Rose

Well I have lots and lots of related mathematics paper references covering
parts and pieces but nothing that shows how to build the full system. 

Here is a paper that talks a little about forest automata -
http://www.mimuw.edu.pl/~bojan/papers/forest.pdf

For morphisms - 
http://en.wikipedia.org/wiki/Morphism

So... nothing on the related cognition engineering, though expanding on
graph isomorphism detection theory leads to the beginnings of that.
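Since the thread leans on the Wikipedia definition, a semiautomaton can be pinned down in a few lines: a state set, an input alphabet, and a transition function, with no outputs and no accept states. Each input word just induces a transformation of the states. A toy sketch (state and symbol names are illustrative):

```python
# A semiautomaton is (states Q, alphabet S, transition t: Q x S -> Q):
# no outputs, no accept states -- each input word induces a
# transformation of the state set (generating its characteristic monoid).

transitions = {  # toy two-state example: tracks the parity of a's
    ("even", "a"): "odd",
    ("odd", "a"): "even",
    ("even", "b"): "even",
    ("odd", "b"): "odd",
}

def run(state: str, word: str) -> str:
    """Apply the transformation induced by `word` to `state`."""
    for sym in word:
        state = transitions[(state, sym)]
    return state

# "ab" and "ba" induce the same state transformation here.
print(run("even", "aab"))  # "even"
```

Recognizing such structures in data (rather than using them as recognizers, as discussed above) amounts to fitting a transition table like `transitions` to observed state sequences.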

John

> -Original Message-
> From: Abram Demski [mailto:[EMAIL PROTECTED]
> Sent: Sunday, July 20, 2008 8:46 AM
> To: agi@v2.listbox.com
> Subject: Re: [agi] Patterns and Automata
> 
> Can you cite any papers related to the approach you're attempting? I
> do not know anything about morphism detection, morphism forests, etc.
> 
> Thanks,
> Abram
> 
> On Sun, Jul 20, 2008 at 2:03 AM, John G. Rose <[EMAIL PROTECTED]>
> wrote:
> >> From: Abram Demski [mailto:[EMAIL PROTECTED]
> >> No, not especially familiar, but it sounds interesting. Personally I
> >> am interested in learning formal grammars to describe data, and there
> >> are well-established equivalences between grammars and automata, so
> >> the approaches are somewhat compatible. According to wikipedia,
> >> semiautomata have no output, so you cannot be using them as a
> >> generative model, but they also lack accept-states, so you can't be
> >> using them as recognition models, either. How are you using them?
> >>
> >
> > Hi Abram,
> >
> > More of recognizing them versus using them to recognize. Also, though,
> > they have potential as morphism detection catalysts.
> >
> > I haven't designed the formal languages; I guess that I'm still building
> > alphabets. An alphabet would consist of discrete knowledge structures. My
> > model is a morphism forest, and I will integrate automata networks within
> > this - but I still need to do language design. The languages will run
> > within the automata networks.
> >
> > Uhm I'm interested too in languages and protocol. Most modern internet
> > protocol is primitive. Any ideas on languages and internet protocol?
> > Sometimes I think that OSI layers need to be refined. Almost like there
> > needs to be another layer :) a.k.a. Layer 8.
> >
> > John
> >
> >
> >
> >
> 
> 





Re: [agi] US PATENT ISSUED for the TEN ETHICAL LAWS OF ROBOTICS

2008-07-21 Thread Brad Paulsen

Matt,

Never underestimate the industriousness of a PATENT TROLL.  He's already been 
granted a new patent for the same concept, except (apparently; I haven't read 
the patent yet) this time for "an ethical chip."  Patent #7236963 (awarded in 
2007) for the "emotion chip."  Don't worry, it's as indefensible as the first 
one.  Same random buzzword generator, different title.


The problem is giving one of these morons a technology patent is like giving an 
ADHD kid a loaded gun.  You know they're just looking to use it as blackmail for 
some quick royalty fees.  The posting here was, no doubt, for "intimidation 
purposes."  Of course, somebody ought to tell him the AGI crowd doesn't have 
much use for a solution to the "ethical" artificial intelligence problem 
(whatever the hell that is).  Indeed, even after he tells us what it is, it 
still doesn't make any sense.  And I quote from the (first) patent's Abstract "A 
new model of motivational behavior, described as a ten-level metaperspectival 
hierarchy of..."


Say what?  There is no such word as "metaperspectival."  Not in English, at 
least.  Yet, that's the word he uses to "define" his invention.  But, it gets 
better...


"...ethical terms, serves as the foundation for an ethical simulation of 
artificial intelligence."  Well, I'm glad he intends to conduct his simulation 
ethically.  I think what he really meant, however, was “a simulation of ethical 
artificial intelligence.”  He does get half a grammar point for using the 
correct article (“an”) before “ethical.”  You don't see that much these days. 
But, ah... we have another problem here.  You see, artificial intelligence IS 
ALREADY a simulation.  In particular, it is a simulation of human intelligence. 
 Hence the word "artificial."  At least, that's the idea.  Does he really mean 
his patent applies to a simulation of a simulation?  Given that most existing AI 
software is computationally intensive and gasping for breath most of the time, 
that's got to be one slow-ass AI invention!


Again, from the Abstract of the first patent...

"This AI system is organized as a tandem, nested...”  Sigh.  Where I come from 
(planet earth), tandem and nested are mutually exclusive modifiers.  It's either 
tandem (i.e., “alongside” or “behind each other”) or it's nested (i.e., 
“inside of”).  Can't be both at the same time.  Sorry.


Continuing, still in the Abstract...

“...overseen by a master control unit – expert system (coordinating the 
motivational interchanges over real time).”


OMG.  Let me see if I have this straight.  He has succeeded in patenting a 
simulation of a simulation with a “master control unit” that is, itself, another 
simulation.  The only thing that contraption will do in real time is sit there 
looking stupid.  That's presuming he could make it work which, as far as I can 
tell by scanning his patent, is right up there with the probability we'll solve 
the energy crisis and the greenhouse effect using cold fusion.


I have a good dozen of these "gems," most of them from the Abstract alone.  It 
gets REALLY weird when you read the patent description where he talks about how 
this invention solves the "affective language understanding" problem heretofore 
unsolved.  News Alert: the entire NLP "problem" has yet to be solved (after 50 
years of trying by some of the best minds in the world).


I have a PDF version of the newer patent (#7236963) which I will send (off-list) 
to anyone interested.  Be advised, it's 3MB+ in size.  Alternatively, you can 
read about it (see a picture of Mr. LaMuth, and download the PDF) at 
www.emotionchip.net.  I also have a PDF version of the other, earlier, patent he 
holds (#6587846) – the supposed “recently issued” patent (actually, granted in 
2003).  I will also send this off-list to anyone interested (it's only about 
1.3MB).  Frankly, the reason these PDFs are so large is that every page is a 
graphic image.  The documents contain no data stored as text (that I could 
find).  This is pretty typical with U.S. Patent Office documents.  Somebody 
there really likes (or liked) the TIFF image format.  Unfortunately, this makes 
the Search function in Acrobat (or FoxIt Reader) completely useless.


BTW, this guy apparently uses a dialup ISP.  Yeah.  State of the art.  Sheesh!

Cheers,

Brad



Matt Mahoney wrote:

This is a real patent, unfortunately...
http://patft.uspto.gov/netacgi/nph-Parser?Sect2=PTO1&Sect2=HITOFF&p=1&u=%2Fnetahtml%2FPTO%2Fsearch-bool.html&r=1&f=G&l=50&d=PALL&RefSrch=yes&Query=PN%2F6587846

But I think it will expire before anyone has the technology to implement it. :-)

-- Matt Mahoney, [EMAIL PROTECTED]








Re: [agi] Computing's coming Theory of Everything

2008-07-21 Thread Vladimir Nesov
On Mon, Jul 21, 2008 at 10:32 PM, Richard Loosemore <[EMAIL PROTECTED]> wrote:
>
> Steve,
>
> Principal component analysis is not new, it has a long history, and so far
> it is a very long way from being the basis for a complete AGI, let alone a
> theory of everything in computer science.
>
> Is there any concrete reason to believe that this particular PCA paper is
> doing something that is some kind of quantum leap beyond what can be found
> in the (several thousand?) other PCA papers that have already been written?
>
> To give you an idea of what I am looking for, does the algorithm go beyond
> single-level encoding patterns?  Can it find patterns of patterns, up to
> arbitrary levels of depth?  And is there empirical evidence that it really
> does find a set of patterns comparable to those found by the human cognitive
> mechanism, without missing any obvious cases?
>
> Bloated claims for the effectiveness of some form of PCA turn up frequently
> in cog sci, NN and AI.  It can look really impressive until you realize how
> limited and non-extensible it is.
>

Indeed, there are many techniques to perform "flat" clustering, and
some of them work really well. The trick is to use such techniques to
build up levels of representation, from "sensory" perception and up to
the higher concepts, with cross-checks everywhere and goal-directed
dynamics at the core.
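As a concrete instance of the "flat" clustering mentioned above, here is a bare-bones Lloyd's k-means in NumPy (deterministic initialization chosen for reproducibility; real uses would prefer k-means++):

```python
import numpy as np

def kmeans(points, k, iters=20):
    """Plain Lloyd's algorithm: a 'flat' (single-level) clustering."""
    # Deterministic init: k points spread across the dataset.
    centers = points[np.linspace(0, len(points) - 1, k).astype(int)].copy()
    for _ in range(iters):
        # Assign each point to its nearest center...
        dists = np.linalg.norm(points[:, None] - centers[None], axis=2)
        labels = dists.argmin(axis=1)
        # ...then move each center to the mean of its assigned points.
        for j in range(k):
            if (labels == j).any():
                centers[j] = points[labels == j].mean(axis=0)
    return labels, centers

# Two well-separated blobs; k-means recovers them as two flat clusters.
rng = np.random.default_rng(1)
pts = np.concatenate([rng.normal(-5, 0.5, (100, 2)),
                      rng.normal(5, 0.5, (100, 2))])
labels, centers = kmeans(pts, 2)
```

The "levels of representation" step would then treat the cluster labels (or centers) as the input features for the next round of clustering; nothing in this flat pass does that by itself.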

-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/




Re: [agi] Computing's coming Theory of Everything

2008-07-21 Thread Richard Loosemore


Steve,

Principal component analysis is not new, it has a long history, and so 
far it is a very long way from being the basis for a complete AGI, let 
alone a theory of everything in computer science.


Is there any concrete reason to believe that this particular PCA paper 
is doing something that is some kind of quantum leap beyond what can be 
found in the (several thousand?) other PCA papers that have already been 
written?


To give you an idea of what I am looking for, does the algorithm go 
beyond single-level encoding patterns?  Can it find patterns of 
patterns, up to arbitrary levels of depth?  And is there empirical 
evidence that it really does find a set of patterns comparable to those 
found by the human cognitive mechanism, without missing any obvious cases?


Bloated claims for the effectiveness of some form of PCA turn up 
frequently in cog sci, NN and AI.  It can look really impressive until 
you realize how limited and non-extensible it is.




Richard Loosemore



Steve Richfield wrote:

Y'all,
 
I have long predicted a coming "Theory of Everything" (TOE) in CS that 
would, among other things, be the "secret sauce" that AGI so desperately 
needs. This year at WORLDCOMP I saw two presentations that seem to be 
running in the right direction. An earlier IEEE article by one of the 
authors seems to be right on target. Here is my own take on this...
 
Form:  The TOE would provide a way for unsupervised learning to rapidly 
form productive NNs, would provide a subroutine that AGI programs could 
throw observations into so that SIGNIFICANT patterns would be identified, 
would be the key to excellent video compression, and, indirectly, would 
provide the "perfect" encryption that nearly perfect compression would 
provide.
 
Some video compression folks in Germany have come up with a variant of 
"Principal Component Analysis" that works a little like clustering, 
except that it also includes temporal considerations, so that things 
that come and go together are presumed to be related, thereby 
eliminating the "superstitious clustering" problem of static cluster 
analysis. There is just one "catch": it is buried in array transforms 
and compression jargon that baffles even me, a former in-house numerical 
analysis consultant to the physics and astronomy departments of a major 
university. Further, it is computationally intensive.
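For anyone who wants the baseline in concrete form, here is a minimal sketch of plain, batch PCA via SVD in Python. This is my own toy illustration - NOT the German group's compressive-sensing variant, and the data and dimensions are made up - just to show how two dimensions that "come and go together" end up loading on the same principal component:

```python
import numpy as np

# Toy data: 200 time steps of a 10-dimensional signal in which
# dimensions 0 and 1 share a hidden temporal cause (plus small noise),
# i.e. they "come and go together".
rng = np.random.default_rng(0)
t = rng.standard_normal(200)            # the shared hidden cause
X = rng.standard_normal((200, 10)) * 0.1
X[:, 0] += t
X[:, 1] += t

# Classical PCA: center each dimension, then take the SVD.
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)

# The rows of Vt are the principal components.  The first one should
# load heavily on dimensions 0 and 1, since they carry the shared
# variance; the other dimensions contribute only noise.
pc1 = Vt[0]
top_two = np.argsort(-np.abs(pc1))[:2]
print(sorted(top_two))  # dimensions 0 and 1
```

The temporal version the paper describes goes further than this batch picture, of course, but the "related things share a component" intuition is the same.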
 
Teaser: Their article is entitled "A new method for Principal Component 
Analysis of high-dimensional data using Compressive Sensing" and applies 
methods that *_benefit_* from having many dimensions, rather than being 
plagued by them (e.g. as in cluster analysis).
 
Enter a retired math professor who has come up with some clever 
"simplifications" (to the computer, but certainly not to me) to make 
these sorts of computations tractable for real-world use. It looks like 
this could be quickly put to use, if only someone could translate this 
stuff from linear algebra into English for us mere mortals. He also 
authored a textbook that Amazon provides peeks into, but beyond its 
3-digit price tag, it is also rather opaque.
 
It's been ~40 years since I last had my head in matrix transforms, so 
I have ordered up some books to hopefully help me through it. Is there 
someone here who is fresh in this area who would like to take a shot at 
"translating" some abstruse mathematical articles into English - or at 
least providing a few pages of prosaic footnotes to explain their 
terminology?
 
I will gladly forward the articles that seem to be relevant to anyone 
who wants to take a shot at this.
 
Any takers?
 
Steve Richfield
 







---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=108809214-a0d121
Powered by Listbox: http://www.listbox.com


Re: [agi] US PATENT ISSUED for the TEN ETHICAL LAWS OF ROBOTICS

2008-07-21 Thread Richard Loosemore


Seems like this is getting to be a regular event on the AGI or other 
singularity-related lists: about once a year or so some crackpot 
announces that they have filed a patent for a complete thinking machine, 
or a robot-ethics system, or some other garbage.


The other crackpot announcement we get on a kind of annual basis is a 
press conference to show the world the first complete, fully functional 
human level AGI system.


Haven't seen any of the latter recently, so we are probably due for one 
pretty soon now.




Richard Loosemore








John LaMuth wrote:
Announcing the recently issued U.S. patent concerning ethical artificial 
intelligence titled: Inductive Inference Affective Language Analyzer 
Simulating AI. This innovative patent (# 6,587,846) introduces the newly 
proposed concept of the Ten Ethical Laws of Robotics: a system that 
radically expands upon previous ethical-robotic systems. As implied in 
its title, this patent represents the first AI system incorporating 
ethical/motivational terms: enabling a computer to reason and speak 
ethically, serving in roles specifying sound human judgement. These Ten 
Ethical Laws directly expand upon Isaac Asimov’s Three Laws of Robotics, 
an earlier Science Fiction construct (from I, Robot) that aimed to rein 
in the potential conduct of the futuristic AI robot. Indeed, Asimov’s 
first two laws state that (1) a robot must not harm a human (or through 
inaction allow a human to come to harm), and (2) a robot must obey human 
orders (unless conflicting with rule #1). Although this cursory system 
of safeguards proves intriguing in a Sci-Fi sense, it nevertheless 
remains simplistic in its dictates, leaving open the specific details 
for implementing such a system. The newly patented Ten Ethical Laws 
fortunately remedy such a shortcoming, representing a general overview 
of the enduring conflict pitting virtue against vice: the virtues of 
which are partially listed below:
 
Glory/Prudence   Honor/Justice

Providence/Faith Liberty/Hope
Grace/Beauty  Free-will/Truth
Tranquility/Ecstasy  Equality/Bliss
 
Dignity/Temperance Integrity/Fortitude

Civility/Charity  Austerity/Decency
Magnanim./Goodness  Equanimity/Wisdom
Love/Joy   Peace/Harmony
 
The Ten Ethical Laws are written in a positive style of formal mandate, 
focusing on the virtues to the necessary exclusion of the corresponding 
vices, as formally listed at:
www.angelfire.com/rnb/fairhaven/ethical-laws.html 

 
The purely virtuous mode (by definition) is fully cognizant of the 
contrasting realm of the vices, without necessarily responding in kind. 
Furthermore, the corresponding hierarchy of the vices listed below 
contrasts point-for-point with the respective virtuous mode (the overall 
patent is actually composed of 320 individual terms).
 
Infamy/Insurgency   Dishonor/Vengeance

Prodigal/Betrayal Slavery/Despair
Wrath/UglinessTyranny/Hypocrisy
Anger/Abomination  Prejudice/Perdition
 
Foolishness/Gluttony   Caprice/Cowardice

Vulgarity/Avarice Cruelty/Antagonism
Oppression/Evil   Persecution/Cunning
Hatred/Iniquity Belligerence/Turpitude
 
With such ethical safeguards firmly in place, the AI computer is 
formally prohibited from expressing the corresponding vices, allowing 
for a truly flawless simulation of virtue. Indeed, these Ten Ethical 
Robotic Laws hold the potential for parallel applications to a human 
sphere of influence. Although only a cursory outline of applications is 
possible at this juncture, a more detailed treatment is posted at:
 
 www.ethicalvalues.com 
 
John E.  LaMuth  -  M. S.

fax: 586-314-5960
P.O. Box 105  Lucerne Valley, CA   92356
www.emotionchip.net 
http://www.ethicalvalues.com

The Ten Ethical Laws of Robotics
 
(A brief excerpt from the patent specification)
 
A further pressing issue necessarily remains; namely, in addition to the 
virtues and values, the vices are similarly represented in the matching 
procedure (for completeness' sake). These vices are appropriate in a 
diagnostic sense, but are maladaptive should they ever be acted upon. 
Response restrictions are necessarily incorporated into both the 
hardware and programming, along the lines of Isaac Asimov’s Laws of 
Robotics. Asimov’s first two laws state that (1) a robot must not harm a 
human (or through inaction allow a human to come to harm), and (2) a 
robot must obey human orders (unless they conflict with rule #1). 
Fortunately, through the aid of the power pyramid definitions, a more 
systematic set of ethical guidelines is constructed; as represented in the

Ten Ethical Laws of Robotics
 
( I ) As personal authority, I will express my individualism within the 
guidelines of the four basic ego states (guilt, worry, nostalgia, and 
desire) to the exclusi

Re: [agi] US PATENT ISSUED for the TEN ETHICAL LAWS OF ROBOTICS

2008-07-21 Thread Mike Tintner


BillK: I prefer Warren Ellis's angry, profane Three Laws of Robotics.

(linked from BoingBoing)





Actually, while I take Ellis' point as in

"1...what are you thinking? "Ooh, I must protect the bag of meat at all 
costs because I couldn't possibly plug in the charger all on my own."   Shut 
the  up...


the issue of how an agent, robotic or living, is to secure its energy 
supply is a huge, complicated, and primary one, both for an individual 
and for a society - and it does seem to be ignored in most theorising 
about AGIs and their implementations. Think of this little spot of 
bother called Iraq.







Re: [agi] US PATENT ISSUED for the TEN ETHICAL LAWS OF ROBOTICS

2008-07-21 Thread Bob Mottram
2008/7/21 Matt Mahoney <[EMAIL PROTECTED]>:
> This is a real patent, unfortunately...
> http://patft.uspto.gov/netacgi/nph-Parser?Sect2=PTO1&Sect2=HITOFF&p=1&u=%2Fnetahtml%2FPTO%2Fsearch-bool.html&r=1&f=G&l=50&d=PALL&RefSrch=yes&Query=PN%2F6587846
>
> But I think it will expire before anyone has the technology to implement it. 
> :-)



The idea that you can patent an invention which doesn't exist seems
like an abuse of the system to me, but the US patent office is well
known as perhaps the most permissive in the world.  I think the
attitude they take is that you can pretty much patent anything, and
then whether the patent stands up or not just depends upon subsequent
legal squabbling.

Certainly this patent contains very high-level, ill-defined concepts, 
long pondered by philosophers and espoused by poets.  What is grace, 
free will, or evil?  Intuitively, most people believe that they know 
what these concepts mean, but when you drill down it all begins to get 
far murkier.

Whether there will be robots furnished with such cognitive glitterati
within the next 30 years remains to be seen.




Re: [agi] US PATENT ISSUED for the TEN ETHICAL LAWS OF ROBOTICS

2008-07-21 Thread BillK
On Mon, Jul 21, 2008 at 12:59 PM, Matt Mahoney wrote:
> This is a real patent, unfortunately...
> http://patft.uspto.gov/netacgi/nph-Parser?Sect2=PTO1&Sect2=HITOFF&p=1&u=%2Fnetahtml%2FPTO%2Fsearch-bool.html&r=1&f=G&l=50&d=PALL&RefSrch=yes&Query=PN%2F6587846
>
> But I think it will expire before anyone has the technology to implement it. 
> :-)
>


I prefer Warren Ellis's angry, profane Three Laws of Robotics.
(linked from BoingBoing)



BillK




Re: [agi] US PATENT ISSUED for the TEN ETHICAL LAWS OF ROBOTICS

2008-07-21 Thread Matt Mahoney
This is a real patent, unfortunately...
http://patft.uspto.gov/netacgi/nph-Parser?Sect2=PTO1&Sect2=HITOFF&p=1&u=%2Fnetahtml%2FPTO%2Fsearch-bool.html&r=1&f=G&l=50&d=PALL&RefSrch=yes&Query=PN%2F6587846

But I think it will expire before anyone has the technology to implement it. :-)

-- Matt Mahoney, [EMAIL PROTECTED]





Re: [agi] US PATENT ISSUED for the TEN ETHICAL LAWS OF ROBOTICS

2008-07-21 Thread Mike Tintner
Perhaps like Bob, I'm not sure whether this isn't a leg-pull. But, to take it 
seriously, how do you propose to give your robot free will - especially 
considering that the vast majority of AI/AGI-ers & roboticists are still 
committed to an algorithmic paradigm which both excludes free will and denies 
its possibility?

Re: [agi] US PATENT ISSUED for the TEN ETHICAL LAWS OF ROBOTICS

2008-07-21 Thread Bob Mottram
2008/7/21 John LaMuth <[EMAIL PROTECTED]>:
> Announcing the recently issued U.S. patent concerning ethical artificial
> intelligence titled: Inductive Inference Affective Language Analyzer
> Simulating AI.



This just shows what a farce the US patent system has become.

