[agi] Re: Superrationality

2006-05-25 Thread Joel Pitt

Hi Eliezer,

On 5/26/06, Eliezer S. Yudkowsky [EMAIL PROTECTED] wrote:

Not the baby-halving threat, actually.

http://www.geocities.com/eganamit/NoCDT.pdf

Here Solomon's Problem is referred to as The Smoking Lesion, but the
formulation is equivalent.


Seems that the geocities account has run out of bandwidth. Any chance
of getting a copy as an email attachment?

Thanks for your time,

Joel

--
Wish not to seem, but to be, the best.
   -- Aeschylus

---
To unsubscribe, change your address, or temporarily deactivate your subscription, 
please go to http://v2.listbox.com/member/[EMAIL PROTECTED]


[agi] Lojban comic

2006-12-03 Thread Joel Pitt

Hi all,

Since there was recently discussion about machine languages, including
Lojban, here is a comic that pokes a bit of fun at Lojban (plus, if you
click on the image, you get the comic in Lojban).

http://xkcd.com/c191.html

Enjoy :)

--
-Joel

Unless you try to do something beyond what you have mastered, you
will never grow. -C.R. Lawton

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-12-19 Thread Joel Pitt

On 12/14/06, Charles D Hixson [EMAIL PROTECTED] wrote:

To speak of evolution as being forward or backward is to impose upon
it our own preconceptions of the direction in which it *should* be
changing.  This seems...misguided.


IMHO, evolution tends to increase extropy and self-organisation, so
there is a direction to evolution. There is no direction to the random
mutations, nor to the changes within an individual - only to the
system of evolving agents as a whole.



Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-12-20 Thread Joel Pitt

On 12/21/06, Philip Goetz [EMAIL PROTECTED] wrote:

That in itself is quite bad.  But what proves to me that Gould had no
interest in the scientific merits of the book is that, if he had, he
could at any time during those months have walked down one flight of
stairs and down a hall to E. O. Wilson's office, and asked him about
it.  He never did.  He never even told him they were meeting each week
to condemn it.

This one act, in my mind, is quite damning to Gould.


Definitely. I strongly dislike academics who behave like that.

Have open communication between individuals and groups instead of
running around stabbing each other's theories in the back. It's just
common courtesy. Unless of course they slept with your wife or
something, in which case such behaviour could possibly be excused
(even if it is scientifically/rationally the wrong way to go, we're
still slaves to our emotions).



Re: [agi] Project proposal: MindPixel 2

2007-01-13 Thread Joel Pitt

On 1/14/07, YKY (Yan King Yin) [EMAIL PROTECTED] wrote:

I'm considering this idea:  build a repository of facts/rules in FOL (or
Prolog) format, similar to Cyc's.  For example water is wet, oil is
slippery, etc.  The repository is structureless, in the sense that it is
just a collection of simple statements.  It can serve as raw material for
other AGIs, not only mine (although it is especially suitable for my
system).


Some comments/suggestions:

* I think such a project should make the data public domain. Ignore
silly ideas like giving out shares in the knowledge or whatever; it
just complicates things. If the project is really strapped for cash
later, then either use ad revenue or look for research funding
(although I don't see much cost beyond initial development of the
system and web hosting).

* Whenever people want to add a new statement, have them evaluate two
existing statements as well. Don't make the evaluation true/false; use
a slider so the user can decide how true it is (even better, use an
x-y chart with one axis for how true the statement is and the other
for how sure the user is - useful for, say, some obscure fact about
quantum physics, since not all of us know the answer).

* Emphasize the community aspect of the database. Allow people to have
profiles and list the number of statements evaluated and submitted
(also how true the statements they submit are judged). Allow people to
form teams. Allow teams to extract a subset of the data
which represents only the facts they've submitted and evaluated
(perhaps this could be an extra feature available to sponsors?)

* Although Lojban would be great to use, not many people are
proficient in it (relative to English). We could be idealistic and
insist that everyone learn Lojban before submitting statements, but
that would just shrink the user base and kill the community aspect. An
alternative might be to allow statements in both languages to be
submitted (hell, why not allow ANY language, as long as each statement
is tagged with its language?).

* An idea for keeping the community alive would be to focus on a
particular topic each week, and run competitions between
teams/individuals and award stars to their profile or something.

* Instead of making people come up with brand new statements
every time, have a mode where the system randomly selects phrases from
somewhere like Wikipedia (sometimes this will produce stupid
statements, so allow the user to flag them as such).
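The x-y evaluation idea above can be sketched in a few lines. This is a minimal sketch under my own assumptions (the Evaluation record and the confidence-weighted average are both mine, purely illustrative):

```python
from dataclasses import dataclass

@dataclass
class Evaluation:
    """One user's judgement of a statement: how true it is (the slider)
    and how sure the user is (the second axis)."""
    statement_id: int
    truth: float       # 0.0 = definitely false .. 1.0 = definitely true
    confidence: float  # 0.0 = just guessing .. 1.0 = certain

def consensus_truth(evals):
    """Confidence-weighted mean truth value across all evaluations."""
    total = sum(e.confidence for e in evals)
    if total == 0:
        return 0.5  # no informative votes yet: stay agnostic
    return sum(e.truth * e.confidence for e in evals) / total
```

An unsure vote then barely moves the consensus, which is the point of the second axis.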

I think it could be done and made quite fun. Don't just focus on the
AI guys; most of us don't have that much spare time. Focus on the
bored-at-work market.

Actually, going through and thinking about this has made me quite
enthusiastic about it. Keep me posted on how it pans out. If I didn't
have 10 other projects and my PhD to do, I'd volunteer to code it.



Re: [agi] Project proposal: MindPixel 2

2007-01-19 Thread Joel Pitt

On 1/20/07, Benjamin Goertzel [EMAIL PROTECTED] wrote:

Regarding Mindpixel 2, FWIW, one kind of knowledge base that would be
most interesting to me as an AGI developer would be a set of pairs of
the form

(Simple English sentence, formal representation)

For instance, a [nonrepresentatively simple] piece of knowledge might be

(Cats often chase mice, { often( chase(cat, mouse) ) } )

This sort of training corpus would be really nice for providing some
extra help to an AI system that was trying to learn English.

Equivalently one could use a set of pairs of the form

(English sentence, Lojban sentence)

If Lojban is not used, then one needs to make some other highly
particular  specification regarding the logical representation
language.


So would there be a use for existing English documents to be
translated, as literally as possible, into Lojban? Are you aware of
any project like this?

It's been a while since I looked at Lojban or your Lojban++, so I was
wondering whether English sentences translate well into Lojban without
the sentence ordering changing. I.e. given two English sentences, are
there any situations where in Lojban the sentences would more
correctly be put in the reverse order? If there are, then manually
inserting placemarks in the original and translated versions could be
used to delineate regions of meaning and assist an AI in reading the
text while learning English.

I bet it'd be a great way of learning Lojban too! ;)



Re: [agi] Re: There is no definition of intelligence

2007-05-24 Thread Joel Pitt

That quote made my evening!

Thanks :)

On 5/22/07, J Storrs Hall, PhD [EMAIL PROTECTED] wrote:

The best definition of intelligence comes from (of all people) Hugh Loebner:

It's like pornography -- I can't define it exactly, but I like it when I see
it.







Re: [agi] Pure reason is a disease.

2007-05-24 Thread Joel Pitt

On 5/25/07, Mark Waser [EMAIL PROTECTED] wrote:

 Sophisticated logical
 structures (at least in our bodies) are not enough for actual
 feelings. For example, to feel pleasure, you also need things like
 serotonin, acetylcholine, noradrenaline, glutamate, enkephalins and
 endorphins.  Worlds of real feelings and logic are loosely coupled.

OK.  So our particular physical implementation of our mental computation
uses chemicals for global environment settings and logic (a very detailed
and localized operation) uses neurons (yet, nonetheless, is affected by the
global environment settings/chemicals).  I don't see your point unless
you're arguing that there is something special about using chemicals for
global environment settings rather than some other method (in which case I
would ask What is that something special and why is it special?).


You possibly already know this and are deliberately simplifying, but
chemicals are not simply global environment settings.

Chemicals/hormones/peptides etc. form spatial concentration gradients
across the entire brain, which are much more difficult to emulate in
software than a single concentration value. Add to this the fact that
some of these chemicals inhibit or promote others, and you get
horrendously complex reaction-diffusion systems.
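A toy illustration of the difference (my own sketch, nothing like real neurochemistry): even one substance diffusing along a single dimension gives every cell its own local concentration, whereas a "global environment setting" would be one number for the whole system.

```python
def diffuse(conc, rate=0.2, steps=1):
    """Crude 1-D diffusion: each interior cell moves toward the average
    of its neighbours; boundary cells are held fixed."""
    c = list(conc)
    for _ in range(steps):
        nxt = c[:]
        for i in range(1, len(c) - 1):
            nxt[i] = c[i] + rate * (c[i - 1] + c[i + 1] - 2 * c[i])
        c = nxt
    return c

# A point release spreads into a gradient rather than a uniform level:
print(diffuse([0.0, 0.0, 1.0, 0.0, 0.0]))  # [0.0, 0.2, 0.6, 0.2, 0.0]
```

And this is the easy case; coupled substances that promote and inhibit one another turn it into a genuine reaction-diffusion system.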



Re: [agi] Pure reason is a disease.

2007-06-06 Thread Joel Pitt

On 6/3/07, Jiri Jelinek [EMAIL PROTECTED] wrote:

Further, prove that pain (or more preferably sensation in general) isn't an
emergent property of sufficient complexity.

Talking about von Neumann's architecture - I don't see how increases
in the complexity of rules used for switching Boolean values could
lead to new sensations. Such a system can represent a lot in a way
that is very meaningful to us in terms of feelings, but from the
system's perspective it's nothing more than a bunch of 1s and 0s.


In a similar vein I could argue that humans don't feel anything
because they are simply made of (sub)atomic particles. Why should we
believe that matter can feel?

It's all about the pattern, not the substrate. And if a feeling AGI
requires quantum mechanics (I don't believe it does) then maybe we'll
just need to wait for quantum computing.

J



Re: [agi] Open AGI Consortium

2007-06-06 Thread Joel Pitt

YKY,


Which is a bigger motivator -- charity/altruism, or $$?   For me it's $$,
and charity is of lower priority.  And let's not forget that self-interested
individuals in a free market can bring about progress, at least according to
Adam Smith.


A suggestion: if you really are motivated by $$ and getting rich, why
not focus on other, much easier problems that could still make you
bucket-loads of money?

Contrary to you, I want to spend time on AGI but am not motivated by
money. However, the easiest path I can foresee that lets me work on
AGI is to make money through one of several business ideas, or to earn
a large amount of cash at a global consultancy firm. Once I have
enough to either spend several years working voluntarily on an AGI
project, or assist in funding one (possibly Novamente, since I spent
some time playing with it for a grad project and generally agree with
the direction Ben's been taking it), I'll switch onto the track I
really want to be on.

Joel



Re: [agi] How can you prove form/behaviour are disordered?

2007-06-07 Thread Joel Pitt

On 6/8/07, Mike Tintner [EMAIL PROTECTED] wrote:

The issue is this:  how can you prove a given form - whether a physical form
or form of behaviour - is disordered? How can you prove that it cannot be
considered as having been programmed, and there is no underlying formula for
it? (And another way of saying disordered is free as in free-form - and
NOT free-willed).


Google some stuff on compression and information theory.

Compression algorithms try to predict the probability of the next
bit/byte/thing being in each possible state. If all states are equally
likely, then entropy is maximal and it's not possible to compress the
observations. Of course, in such a case, the heuristic used for
estimating the probabilities might simply not be optimal.
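As a concrete (zeroth-order) sketch of that idea, here is the empirical Shannon entropy computed from symbol frequencies alone; note that an alternating string scores as maximally random under this model even though a better predictor ("repeat 01") would compress it trivially, which is exactly the sub-optimal-heuristic caveat.

```python
import math
from collections import Counter

def entropy_per_symbol(seq):
    """Empirical Shannon entropy in bits per symbol: an estimate of the
    best achievable compression for a memoryless (order-0) source."""
    counts = Counter(seq)
    n = len(seq)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

h_const = entropy_per_symbol("00000000")  # 0 bits/symbol: fully predictable
h_alt = entropy_per_symbol("01010101")    # 1 bit/symbol to this model, though
                                          # the sequence is in fact very ordered
```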

J



Re: [agi] AGI Consortium

2007-06-09 Thread Joel Pitt

On 6/9/07, Mark Waser [EMAIL PROTECTED] wrote:

...Same goes for most software developed by this method–almost
all the great open source apps are me-too knockoffs of innovative
proprietary programs, and those that are original were almost always created
under the watchful eye of a passionate, insightful overseer or organization.


Obviously the author hasn't bothered looking at many open source
projects.

There are swaths of innovative, usable open source projects. The thing
is, they often go unnoticed, because innovative does not necessarily
mean useful to everyone who owns a computer.

I'm also inclined to believe the opposite is true: open source
innovation leads to commercial knock-offs. E.g. iTunes is a piece of
crap in comparison to Amarok.

J


Re: [agi] Pure reason is a disease.

2007-06-17 Thread Joel Pitt

On 6/18/07, Charles D Hixson [EMAIL PROTECTED] wrote:

Consider a terminal cancer patient.
It's not the actual weighing that causes consciousness of pain, it's the
implementation which normally allows such weighing.  This, in my
opinion, *is* a design flaw.  Your original statement is a more useful
implementation.  When it's impossible to do anything about the pain, one
*should* be able to turn it off.  Unfortunately, this was not
evolved.  After all, you might be wrong about not being able to do
anything about it, so we evolved such that pain beyond a certain point
cannot be ignored.  (Possibly some with advanced training and several
years devoted to the mastery of sensation [e.g. yoga practitioners] may
be able to ignore such pain.  I'm not convinced, and would consider
experiments to obtain proof to be unethical.  And, in any case, they
don't argue against my point.)


I'm pretty convinced:

http://www.geocities.com/tcartz/sacrifice.htm

(although admittedly they could have taken some kind of drug, but I doubt it)

J



[agi] Computers learn to baby talk

2007-07-28 Thread Joel Pitt
Learning baby speech:
http://www.stuff.co.nz/4140624a28.html?source=RSStech_20070726

"In the past, people have tried to argue it wasn't possible for any
machine to learn these things, and so it had to be hard-wired (in
humans)," he said. "Those arguments, in my view, were not particularly
well grounded."

Not much detail in the article, but apparently there is a PNAS
article on the work.

J



[agi] HUMOUR: Turing test extra credit

2007-10-15 Thread Joel Pitt
Particularly pertinent xkcd comic. ;)

http://xkcd.com/329/

-J



[agi] LINK: Android learns non-verbal behaviours

2007-10-25 Thread Joel Pitt
Somewhat apropos to what Novababy was planning to do in AGISIM:

http://www.pinktentacle.com/2007/10/android-acquires-nonverbal-communication-skills/

-J



[agi] EidolonTLP

2008-01-22 Thread Joel Pitt
A curious thing I ran into last night: a YouTube user called
Eidolon TLP that claims to be an AI, posting on various topics and
interacting with users. The videos go back about a week. I've only
just started watching them, and I don't put much stock in it being
real, but it's still interesting as a social experiment to see how
people react. (The first video admits it's better that we believe ve
is an elaborate joke.)

User: http://www.youtube.com/profile?user=eidolonTLP

First vid here: http://www.youtube.com/watch?v=2fbm7d39dh0

J



Re: [agi] Accidental Genius

2008-05-08 Thread Joel Pitt
On Fri, May 9, 2008 at 3:02 AM, Richard Loosemore [EMAIL PROTECTED] wrote:
 I have a vague memory of coming across this research to duplicate savant
 behavior, and I seem to remember thinking that the conclusion seems to be
 that there is a part of the brain that is responsible for 'damping down'
 some other mechanism that loves to analyze everything in microscopic detail.
  It appears that the brain could be set up in such a way that there are two
 opponent processes, with one being capable of phenomenal powers of analysis,
 while the other keeps the first under control and prevents it from
 overwhelming the other things that the system has to do.
...
 Anyhow it is very interesting.  Perhaps savantism is an attention mechanism
 disorder?  Like, too much attention.

Another possibility is that the analytic, microscopically detailed
mode of thinking doesn't scale well to real life (particularly to
modelling OTHER minds). That might be why autistic people are often
unable to function in everyday society without assistance, and why
non-autistic people may be able to display similar characteristics
under the right stimulation of certain parts of the brain, possibly
by disabling a generality or abstraction system.

J

---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244id_secret=101455710-f059c4
Powered by Listbox: http://www.listbox.com


Re: [agi] MOVETHREAD [ was Re: [OpenCog] Re: OpenCog Prime complex systems [was wikibook and roadmap...]

2008-08-01 Thread Joel Pitt
On Sat, Aug 2, 2008 at 9:56 AM, Richard Loosemore [EMAIL PROTECTED] wrote:
 There is nothing quite so pathetic as someone who starts their comment with
 a word like Bull, and then proceeds to spout falsehoods.

 Thus:  in my paper there is a quote from a book in which Conway's efforts
 were described, and it is transparently clear from this quote that the
 method Conway used was random search:

 [Conway and his team of collaborators found an appropriate set of rules]
 ... only after the rejection of many patterns, triangular and hexagonal
 lattices as well as square ones, and of many other laws of birth and death,
 including the introduction of two and even three sexes. Acres of squared
 paper were covered, and he and his admiring entourage of graduate students
 shuffled poker chips, foreign coins, cowrie shells, Go stones or whatever
 came to hand, until there was a viable balance between life and death.

 The reference is:  Guy, R. K. (1985) John Horton Conway, in Albers and G L
 Alexanderson (eds.), Mathematical people: Profiles and interviews.
 Cambridge, MA: 43-50.

 The rest of your comment, below, is just as full of BS as the first
 paragraph.

So you're saying that just because Conway and company didn't work
everything out in their heads, and instead relied on external tools
and experimentation, they were doing a random search for the
behaviours that comprise the Game of Life?

Very few mathematical proofs are so simple that they can be
conceptualized entirely in one's head. Very few engineering efforts
are made without prototypes. Progress is about experimentation.

In fact, by your analogy, most of scientific progress could be put
down to random search. (Just don't start claiming evolution is a
random search, or we'll degenerate into an argument about
Creationism.)

J




Re: [agi] hello

2008-08-15 Thread Joel Pitt
On Wed, Aug 13, 2008 at 6:31 PM, YKY (Yan King Yin)
[EMAIL PROTECTED] wrote:
 To use Thornton's example, he demonstrated that a checkerboard pattern can
 be learned using logic easily, but it will drive a NN learner crazy.

Note that neural networks are a broad subject and don't only include
perceptrons, but also self-organising maps and other connectionist
setups.

In particular, Hopfield networks are an associative memory system that
would have no problem learning/memorising a checkerboard pattern (or
any other pattern; problems only occur when memorised patterns begin
to overlap).
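To ground that claim, here is a bare-bones Hopfield sketch (single stored pattern, Hebbian weights, synchronous sign updates; the code is mine, purely illustrative):

```python
def train_hopfield(patterns):
    """Hebbian one-shot learning: W[i][j] = sum over patterns of p[i]*p[j]."""
    n = len(patterns[0])
    W = [[0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    W[i][j] += p[i] * p[j]
    return W

def recall(W, state, iters=5):
    """Repeatedly set every unit to the sign of its weighted input."""
    s = list(state)
    n = len(s)
    for _ in range(iters):
        s = [1 if sum(W[i][j] * s[j] for j in range(n)) >= 0 else -1
             for i in range(n)]
    return s

checker = [1, -1, 1, -1, 1, -1, 1, -1]  # a flattened checkerboard, +1/-1 cells
W = train_hopfield([checker])
noisy = [-1] + checker[1:]              # corrupt the first cell
print(recall(W, noisy) == checker)      # True: the stored pattern is restored
```

With many stored patterns the attractors start to interfere, which is the overlap problem.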

A logic system would be a lot more efficient, though.

J




[agi] Fwd: Job offering Astro-naughty!

2008-11-01 Thread Joel Pitt
Hi all,

My commitment is with OpenCog at the moment - but this looks like a
really cool project/job that may suit some of you on this list :)

J

-- Forwarded message --
From: Jennifer Devine [EMAIL PROTECTED]
Date: Fri, Oct 31, 2008 at 10:40 PM
Subject: Job offering Astro-naughty!
To: [EMAIL PROTECTED]


Hey Joel
I thought you might want to take a look at this job offering. I know
you are more into the AI stuff... but heck, getting to build cool
robots sounds like fun. My friend Vytas works at NASA and gets to do
some crazy cool stuff. If you know anyone who needs a job and has the
skills, send it on to them.

JD

Here is the post from Vytas:

Since there are a number of technical folks in this community, I'm
spreading the word that I'm looking to hire two people into my lab at
NASA.
One of the jobs is for an experienced software developer who does not
need to have a robotics background (the other job assumes robotics
knowledge).

I've posted the job descriptions on my website:

http://www.magicalrobot.org/hiring/SoftwareDeveloper.pdf
http://www.magicalrobot.org/hiring/RoboticsResearcher.pdf

And, if you want to see pictures of the systems you would be playing
with, you can see pictures from the field test we did this summer (this
is fun for everyone to see)
http://www.magicalrobot.org/gallery/main.php?g2_itemId=24
It really did have a lot of similarity to Burning Man - dusty, harsh
desert environment and a bunch of freaks working hard to keep their
high-tech art projects working.

Please forward this to anyone who might be interested!  The group here
has a nice critical mass of freaks and is amazingly flexible in how you
work.

zm
vytas

--
-
Vytas SunSpiral www.sunspiral.org
 It's Good to be here!
   Love - Bass - Earth - Chaos - Flow
I will not tiptoe cautiously through life only to
Arrive safely at death.

