I think the key fact is that most of these projects are currently
relatively inactive --- plenty of passion out there, just not a
lot of resources.
The last I heard, both the HAL project and the CAM-Brain project
were pretty much at a standstill due to lack of funding?
Perhaps a good piece
maitri wrote:
The second guy was from either England or the states, not sure. He was
working out of his garage with his wife. He was trying to develop robot
AI including vision, speech, hearing and movement.
This one's a bit more difficult, Steve Grand perhaps?
Gary Miller wrote:
On Dec. 9 Kevin said:
It seems to me that building a strictly black box AGI that only uses
text or graphical input\output can have tremendous implications for our
society, even without arms and eyes and ears, etc. Almost anything can
be designed or contemplated within a
I think my position is similar to Ben's; it's not really what you
ground things in, but rather that you don't expose your limited
little computer brain to an environment that is too complex --
at least not to start with. Language, even reasonably simple
context free languages, could well be too
I don't think this is all that crazy an idea. A reasonable
number of people think that intelligence is essentially about
game playing in some sense; I happen to be one of them.
I actually used to play The Legend of Zelda many years back.
Not a bad game from what I remember. However I'm not convinced
One addition/correction:
Shane Legg wrote:
An AGI wouldn't have this and so playing the game would be a
lot harder.
Of course an AGI *could* have this... but you need to build a
big knowledge base into your system and that's a big big job...
or custom build a knowledge base
Alan Grimes wrote:
According to my rule of thumb,
If it has a natural language database it is wrong,
I more or less agree...
Currently I'm trying to learn Italian before I leave
New Zealand to start my PhD. After a few months working
through books on Italian grammar and trying to learn lots
I suspect that Esperanto will not be much easier to tackle
than any current existing language, or at best a *tiny* bit easier.
The greatest difficulty of language is not grammar, spelling,
punctuation, etc. To get an AGI to the point of using _any_ language
naturally on the level
Pei Wang wrote:
In my opinion, one of the most common mistakes made by people is to think AI
in terms of computability and computational complexity, using concepts like
Turing machine, algorithm, and so on. For a long argument, see
http://www.cis.temple.edu/~pwang/551-PT/Lecture/Computation.pdf.
Hi,
This isn't something that I really know much about, but I'll
put my understanding of the issue down in the hope that if
I'm missing something then somebody will point it out and
I'll learn something :)
The literal Church-Turing thesis states that all formal models
of what constitutes a well
Daniel,
An ARFF file is just a collection of n-tuple data items where each tuple
dimension has defined type information. It also has a dimension that
is marked as being the class of the data item. So because it's
basically just a big table of data you could in theory put any kind of
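Since the post is cut off in the archive, here is a minimal sketch of what such a file looks like and how it might be read. The relation and attribute names are made up for illustration, and this tiny hand-rolled reader ignores real ARFF features like quoting, sparse data and date types:

```python
# A minimal ARFF file: typed @attribute declarations followed by a
# plain table of rows, with one attribute serving as the class.
arff = """\
@relation toy
@attribute height numeric
@attribute colour {red,green,blue}
@attribute class {yes,no}
@data
1.8,red,yes
1.2,blue,no
"""

def parse_arff(text):
    """Very small ARFF reader: returns (attribute names, data rows)."""
    attrs, rows, in_data = [], [], False
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith('%'):   # skip blanks and comments
            continue
        if line.lower().startswith('@attribute'):
            attrs.append(line.split()[1])
        elif line.lower() == '@data':
            in_data = True
        elif in_data:
            rows.append(line.split(','))
    return attrs, rows

attrs, rows = parse_arff(arff)
print(attrs)    # ['height', 'colour', 'class']
print(rows[0])  # ['1.8', 'red', 'yes']
```

The point of the format is just what the post says: because it is basically a typed table, almost any kind of tuple data fits into it.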
Which is more or less why I figured you weren't going to do
a Penrose on us, as you would then face the usual reply...
Which begs the million dollar question:
Just what is this cunning problem that you have in mind?
:)
Shane
Eliezer S. Yudkowsky wrote:
Shane Legg wrote:
Eliezer S
Hi Cliff,
I'm not good at math -- I can't follow the AIXI materials and I don't
know what Solomonoff induction is. So it's unclear to me how a
certain goal is mathematically defined in this uncertain, fuzzy
universe.
In AIXI you don't really define a goal as such. Rather you have
an agent
Hi Cliff,
So Solomonoff induction, whatever that precisely is, depends on a
somehow compressible universe. Do the AIXI theorems *prove* something
along those lines about our universe,
AIXI and related work does not prove that our universe is compressible.
Nor do they need to. The sun seems
Eliezer S. Yudkowsky wrote:
Has the problem been thought up just in the sense of What happens when
two AIXIs meet? or in the formalizable sense of Here's a computational
challenge C on which a tl-bounded human upload outperforms AIXI-tl?
I don't know of anybody else considering human upload
Cliff Stabbert wrote:
[On a side note, I'm curious whether, and if so how, lossy compression
might relate. It would seem that in a number of cases a simpler
algorithm than one that exactly expresses the behaviour could be valuable in
that it expresses 95% of the behaviour of the environment being
The other text book that I know is by Cristian S. Calude, the Prof. of
complexity theory that I studied under here in New Zealand. A new
version of this book just recently came out. Going by the last version,
the book will be somewhat more terse than the Li and Vitanyi book and
thus more
Hi Cliff,
Sorry about the delay... I've been out sailing watching the America's
Cup racing --- just a pity my team keeps losing to the damn Swiss! :(
Anyway:
Cliff Stabbert wrote:
SL This seems to be problematic to me. For example, a random string
SL generated by coin flips is not
Hi Cliff and others,
As I came up with this kind of a test perhaps I should
say a few things about its motivation...
The problem was that the Webmind system had a number of
proposed reasoning systems and it wasn't clear which was
the best. Essentially the reasoning systems took as input
a
Kevin wrote:
Kevin's random babbling follows:
Is there a working definition of what complexity exactly is? It seems to
be quite subjective to me. But setting that aside for the moment...
I think the situation is similar to that with the concept of
intelligence in the sense that it means
Semiborg?
:)
Shane
Ben Goertzel wrote:
Hi,
For a speculative futuristic article I'm writing (for a journal issue edited
by Francis Heylighen), I need a new word: a word to denote a mind that is
halfway between an individual mind and a society of minds.
Not a hive-mind, but rather a community
A while back Rob Sperry posted a link to a video
of a presentation by Robert Hecht-Nielsen.
( http://inc2.ucsd.edu/inc_videos/ )
In it he claims to have worked out how the brain
thinks :) I didn't look at it at the time as it's
150MB+ and I only had a dial-up account, but checked
it out the other
Brad Wyble wrote:
Well the short gist of this guy's spiel is that Lenat is on
the right track.
My understanding was that he argues that Lenat is on the wrong
track! Lenat is trying to accumulate a large body of relatively
high level logical rules about the world. This is very hard to
do and
The total number of particles in the whole universe is usually
estimated to be around 10^80. These guys claim that the storage
of the brain is 10^8432 bits. That means that my brain has around
10^8352 bits of storage for every particle in the whole universe.
I thought I was feeling smarter
is approximately 10^8432.
The model is obviously an oversimplification, and the number is way too big.
Pei
- Original Message -
From: shane legg [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Tuesday, September 16, 2003 6:24 AM
Subject: RE: [agi] Discovering the Capacity of Human
Yeah, it's a bit of a worry.
By the way, if anybody is trying to look it up, I spelt the guy's
name wrong; it's actually Stirling's equation. You can find
it in an online book here:
http://www.inference.phy.cam.ac.uk/mackay/itprnn/book.html
It's a great book, about 640 pages long. The result
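For anyone looking it up, Stirling's approximation for ln(n!) is easy to check numerically; `math.lgamma(n + 1)` gives the exact ln(n!) to compare against. This snippet is only an illustration of the approximation itself, not the truncated result the post goes on to mention:

```python
import math

def stirling_ln_factorial(n):
    """Stirling's approximation: ln(n!) ~ n ln n - n + 0.5 ln(2 pi n)."""
    return n * math.log(n) - n + 0.5 * math.log(2 * math.pi * n)

n = 100
exact = math.lgamma(n + 1)         # ln(100!) via the log-gamma function
approx = stirling_ln_factorial(n)
print(exact - approx)              # error is roughly 1/(12n), i.e. tiny
```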
arnoud wrote:
How large can those constants be? How complex can the environment
maximally be for an ideal, but still realistic, AGI agent (thus not a
Solomonoff or AIXI agent) to still be successful? Does somebody know
how to calculate (and formalise) this?
Arnoud,
I'm not sure if this makes much sense. An ideal agent is not going
to be a realistic agent. The bigger your computer and the better
your software, the more complexity your agent will be able to deal with.
By an ideal realistic agent I meant the best software we can make on the
best
Ciao Arnoud,
Perhaps my pattern wasn't clear enough
1
2
3
4
.
.
.
00099
00100
00101
.
.
.
0
1
.
.
.
8
9
then repeat from the start again. However each character is
part of the sequence. So the agent sees 10002300...
So the whole pattern in some sense is
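The archive garbles the listing, so the exact padding scheme in the original is unclear; under the simplest reading (the counting numbers emitted one character at a time, so the agent never sees number boundaries), a generator for such a stream might look like this:

```python
from itertools import count, islice

def counting_chars():
    """Yield the decimal counting sequence one character at a time:
    '1','2',...,'9','1','0','1','1',... (a Champernowne-style stream)."""
    for n in count(1):
        yield from str(n)   # each character is part of the sequence

stream = ''.join(islice(counting_chars(), 15))
print(stream)  # 123456789101112
```

The zero-padded variant in the post (00099, 00100, ...) would just swap `str(n)` for something like `f"{n:05d}"`.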
Agi types might like these two articles,
http://www.theregister.co.uk/content/4/33463.html
http://www.theregister.co.uk/content/4/33486.html
Shane
Thanks Pei.
Following the links to the people who are running this I found a
whole bunch of academic AI people who are interested in and working
on general intelligence. Their approach seems to be very much
based around the idea that powerful systems for vision, sound, speech,
motor skills and
Also I think this is pretty cool in case you miss it:
http://www.ai.mit.edu/projects/genesis/movies.html
---
To unsubscribe, change your address, or temporarily deactivate your subscription,
please go to http://v2.listbox.com/member/[EMAIL PROTECTED]
Hi Ben,
Thanks for the comments.
I understand your perspective and I think it's a reasonable one.
Well your thinking has surely left its mark on my views ;-)
I think that what you'll get from this approach, if you're lucky, is a kind
of primitive brain, suitable to control something with
Hi all,
I'm curious about the general sentiments that people have
about the appropriate level of openness for an AGI project.
My mind certainly isn't made up on the issue and I can see
reasons for going either way. If a single individual or
small group of people made a sudden break through in
Hi Pei,
As usual, I disagree! I think you are making a straw man argument.
The problem is that what you describe as neural networks is just a certain
limited class of neural networks. That class has certain limitations, which
you point out. However you can't then extend those conclusions to
Ben,
My suspicion is that in the brain knowledge is often stored on two levels:
* specific neuronal groups correlated with specific information
In terms of the activation of specific neurons indicating high level concepts,
I think there is good evidence of this now. See for example the work of
Hi Pei,
Most of our disagreement seems to be about definitions and choices
of words, rather than facts.
(1) My memo is not intended to cover every system labeled as neural network
--- that is why I use a whole section to define what I mean by NN
model discussed in the paper. I'm fully aware of the fact
Pei,
To my mind the key thing with neural networks is that they
are based on large numbers of relatively simple units that
interact in a local way by sending fairly simple messages.
Of course that's still very broad. A CA could be considered
a neural network according to this description, and indeed to
Jiri,
I would have assumed that to be the case, like what Ben said.
I guess they have just decided that my research is sufficiently
interesting to keep up to date on. Though getting hits from these
people on a daily basis seems a bit over the top. I only publish
something once every few months
Daniel,
It seems to be a combination of things. For example, my most recent
hits from military related computers came from an air force base just
a few hours ago:
px20o.wpafb.af.mil - - [19/Dec/2005:12:07:41 +] GET /documents/42.pdf HTTP/1.1 200 50543 - Mozilla/4.0 (compatible; MSIE 6.0;
After a few hours digging around on the internet, what I found was that
a number of popular blogs get hits from military DNSs. The most likely
reason seems to be that some people in the military who have office jobs
spend a lot of time surfing the net. When they find something cool they
tell all
For a universal test of AI, I would of course suggest universal intelligence
as defined in this report:
http://www.idsia.ch/idsiareport/IDSIA-10-06.pdf
Shane
On Fri, 02 Jun 2006 09:15:26 -0500, [EMAIL PROTECTED] [EMAIL PROTECTED]
wrote:
What is the universal test for the ability of any given AI SYSTEM
James,
Currently I'm writing a much longer paper (about 40 pages) on intelligence
measurement. A draft version of this will be ready in about a month which
I hope to circulate around a bit for comments and criticism. There is also
another guy who has recently come to my attention who is doing
On 7/13/06, Pei Wang [EMAIL PROTECTED] wrote:
Shane,
Do you mean Warren Smith?
Yes.
Shane
On 7/25/06, Ben Goertzel [EMAIL PROTECTED] wrote:
Hmmm... About the measurement of general intelligence in AGI's ...
I would tend to advocate a vectorial intelligence approach
I'm not against a vector approach. Naturally every intelligent
system will have domains in which it is stronger than
Basically, as you can all probably see, Davy has written a chat bot type
of program. If you email him he'll send you a copy --- he says it's a bit
over 1.5 MB and runs on XP. It's a bit hard to understand how it works,
partly because (by his own confession) he doesn't know much about AI and so
Ben,
So you think that: Powerful AGI == good Hutter test result.
But you have a problem with the reverse implication:
good Hutter test result =/= Powerful AGI.
Is this correct?
Shane
That seems clear.
Human-level AGI =/= Good Hutter test result
just as
Human =/= Good Hutter test result
My suggestion then is to very slightly modify the test as follows:
instead of just getting the raw characters, what you get is the
sequence of characters and the probability distribution over the
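A rough sketch of the idea of scoring characters against a supplied probability distribution: the predictor here is just a toy adaptive frequency model standing in for whatever the AGI would provide, and the "compressed size" is the ideal code length of -log2 p bits per character:

```python
import math
from collections import Counter

def code_length_bits(text):
    """Ideal code length of `text`, in bits, when each character is coded
    with -log2 p(c) under an adaptive (Laplace-smoothed) frequency model."""
    alphabet = set(text)
    counts = Counter()
    bits = 0.0
    for c in text:
        # Smoothed probability of c *before* it is observed.
        p = (counts[c] + 1) / (sum(counts.values()) + len(alphabet))
        bits += -math.log2(p)
        counts[c] += 1
    return bits

# A predictable string costs far fewer bits than a mixed one.
print(code_length_bits('a' * 100))   # 0.0 (single-symbol alphabet, p is always 1)
print(code_length_bits('ab' * 50))
```

A better predictor means smaller -log2 p values and thus a smaller compressed size, which is exactly what the modified test would reward.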
Yes, I think a hybridized AGI and compression algorithm could do
better than either one on its own. However, this might result in
an incredibly slow compression process, depending on how fast the AGI
thinks. (It would take ME a long time to carry out this process over
the whole Hutter
But Shane, your 19 year old self had a much larger and more diverse
volume of data to go on than just the text or speech that you
ingested...
I would claim that a blind and deaf person at 19 could pass a
Turing test if they had been exposed to enough information over
the years. Especially if they had
definitions listed in AIMA (http://aima.cs.berkeley.edu/), page 2.
Pei
On 9/1/06, Shane Legg [EMAIL PROTECTED] wrote:
As part of some research I've been doing with Prof. Hutter on AIXI
and formal definitions of machine intelligence, I've been
collecting definitions of intelligence that have been
This is a question that I've thought about from time to time. The conclusion
I've come to is that there isn't really one or two reasons, there are many.
Surprisingly, most people in academic AI aren't really all that into AI.
It's a job. It's more interesting than doing database programming in a
Eliezer,
Shane, what would you do if you had your way? Say, you won the
lottery tomorrow (ignoring the fact that no rational person would buy a
ticket). Not just AGI - what specifically would you sit down and do
all day?
I've got a list of things I'd like to be working on. For example, I'd like to
I think On Intelligence is a good book. It made an impact on
me when I first read it, and it led to me reading a lot more neuro
science since then. Indeed in hindsight it seems strange to me
that I was so interested in AGI and yet I hadn't seriously studied
what is known about how the brain
Sorry, the new version of the book I mentioned (I read the old one) is
called Principles of Neural Science.
With regards to computer power, I think it is very important. The average
person doing research in AI (i.e. a PhD grad student) doesn't have access
to much more than a PC or perhaps a
It might however be worth thinking about the licence:
Confidentiality. 1. Protection of Confidential Information. You agree that
all code, inventions, algorithms, business concepts, workflow, ideas, and
all other business, technical and financial information, including but not
limited to the
The second scary bit, which I didn't mention above, is made clear in the
blog post from the company CEO, Donna Dubinsky:
Why do we offer you a license without deployment rights? Well, although we
are very excited about the ultimate applications of this technology, we feel
it is too early to
On 3/9/07, J. Storrs Hall, PhD. [EMAIL PROTECTED] wrote:
Perhaps the ultimate Turing Test would be to make the system itself act as
the interviewer for a Turing Test of another system.
It's called an inverted Turing test. See:
Watt, S. (1996) Naive-Psychology and the Inverted Turing Test.
Ben, I didn't know you were a Ruby fan...
After working in C# with Peter I'd say that's a pretty good choice.
Sort of like Java but you can get closer to the metal where needed
quite easily.
For my project we are using Ruby and C. Almost all the code can
be in high level Ruby which is very
On 3/21/07, Chuck Esterbrook [EMAIL PROTECTED] wrote:
Sometimes the slowness of a program is not contained in a small
portion of a program.
Sure. For us however this isn't the case.
Cobra looks nice, very clean to read, even more so than Python.
However the fact that it's in beta and .NET
On 3/23/07, David Clark [EMAIL PROTECTED] wrote:
I have a Math minor from University but in 32 years of computer work, I
haven't used more than grade 12 Math in any computer project yet.
...
I created a bond comparison program for a major wealth investment firm
that used a pretty fancy
On 3/23/07, David Clark [EMAIL PROTECTED] wrote:
Both the code and the algorithm must be good for any computer system to work
and neither is easy. The bond formula was published for many years but this
particular company certainly didn't have a copy of it inside a program they
could use. The
On 4/4/07, Eugen Leitl [EMAIL PROTECTED] wrote:
how do you reconcile the fact that babies are very stupid compared to
adults? Babies have no less genetic hardware than adults but the
difference
The wiring is not determined by the genome, it's only a facility envelope.
Some wiring is
On 4/5/07, Eugen Leitl [EMAIL PROTECTED] wrote:
I forget the exact number, but I think something like 20% of the human
genome describes the brain. If somebody is interested in building a
No, it codes for the brain tissue. That's something very different from
describing the brain. See
Kaj,
(Disclaimer: I do not claim to know the sort of maths that Ben and
Hutter and others have used in defining intelligence. I'm fully aware
that I'm dabbling in areas that I have little education in, and might
be making a complete fool of myself. Nonetheless...)
I'm currently writing my
Mike,
1) It seems to assume that intelligence is based on a rational,
deterministic program - is that right? Adaptive intelligence, I would argue,
definitely isn't. There isn't a rational, right way to approach the problems
adaptive intelligence has to deal with.
I'm not sure what you mean
Numbers for humans vary rather a lot. Some types of cells have up to
200,000 connections (Purkinje neurons) while others have very few.
Thus talking about the number of synapses per neuron doesn't make
much sense. It all depends on which type of neuron etc. you mean.
Anyway, when talking about
Mike,
But interestingly while you deny that the given conception of intelligence
is rational and deterministic.. you then proceed to argue rationally and
deterministically.
Universal intelligence is not based on a definition of what rationality is.
It is based
on the idea of achievement. I
Ben,
Are you claiming that the choice of compiler constant is not pragmatically
significant in the definition of the Solomonoff-Levin universal prior, and
in Kolmogorov complexity? For finite binary sequences...
I really don't see this, so it would be great if you could elaborate.
In some
On 5/2/07, Mark Waser [EMAIL PROTECTED] wrote:
One of the things that I think is *absolutely wrong* about Legg's
paper is that he only uses more history as an example of generalization. I
think that predictive power is a test for intelligence (just as he states) but
that it *must* include
Josh,
Interesting work, and I like the nature of your approach.
We have essentially a kind of a pin ball machine at IDSIA
and some of the guys were going to work on watching this
and trying to learn simple concepts from the observations.
I don't work on it so I'm not sure what the current state
On 5/14/07, David Clark [EMAIL PROTECTED] wrote:
Even though I have a Math minor from University, I have used next to no
Mathematics in my 30 year programming/design career.
Yes, but what do you program?
I've been programming for 24 years and I use math all the time.
Recently I've been
Pei,
necessary to spend some time on this issue, since the definition of
intelligence one accepts directly determines one's research goal and
criteria in evaluating other people's work. Nobody can do or even talk
about AI or AGI without an idea about what it means.
This is exactly why I am
Mark,
Gödel's theorem does not say that something is not true, but rather that
it cannot be proven to be true even though it is true.
Thus I think that the analogue of Gödel's theorem here would be something
more like: For any formal definition of intelligence there will exist a
form of
Pei,
Fully agree. The situation in mainstream AI is even worse on this
topic, compared to the new AGI community. Will you write something for
AGI-08 on this?
Marcus suggested that I submit something to AGI-08. However I'm not
sure what I could submit at the moment. I'll have a think about
Pei,
However, in general I do think that, other things being equal, the
system that uses less resources is more intelligent.
Would the following be possible with your notion of intelligence:
There is a computer system that does a reasonable job of solving
some optimization problem. We go
Pei,
No. To me that is not intelligence, though it works even better.
This seems to me to be very divergent from the usual meaning
of the word intelligence. It opens up the possibility that a super
computer that is able to win a Nobel prize by running a somewhat
efficient AI algorithm could
Eliezer,
As the system is now solving the optimization problem in a much
simpler way (brute force search), according to your perspective it
has actually become less intelligent?
It has become more powerful and less intelligent, in the same way that
natural selection is very powerful and
Pei,
This just shows the complexity of the usual meaning of the word
intelligence --- many people do associate it with the ability to solve
hard problems, but at the same time, many people (often the same
people!) don't think a brute-force solution shows any intelligence.
I think this comes
Ben,
According to this distinction, AIXI and evolution have high intelligence
but low efficient intelligence.
Yes, and in the case of AIXI it is presumably zero given that the resource
consumption is infinite. Evolution on the other hand is just efficient
enough that when implemented on a
On 5/17/07, Pei Wang [EMAIL PROTECTED] wrote:
Sorry, it should be: I assume you are not arguing that evolution is
the only way to produce intelligence
Definitely not. Though the results in my elegant sequence prediction
paper show that at some point math is of no further use due to
Matt,
Shane Legg's definition of universal intelligence requires (I believe)
complexity but not adaptability.
In a universal intelligence test the agent never knows what the environment
it is facing is. It can only try to learn from experience and adapt in
order to
perform well. This means
Pei,
Yes, the book is the best source for most of the topics. Sorry for the
absurd price, which I have no way to influence.
It's $190. Somebody is making a lot of money on each copy and
I'm sure it's not you. To get a 400 page hard cover published at
lulu.com is more like $25.
Shane
On 6/8/07, Matt Mahoney [EMAIL PROTECTED] wrote:
The author has received reliable information, from a Source who wishes to
remain anonymous, that the decimal expansion of Omega begins
Omega = 0.998020554253273471801908...
For which choice of universal Turing machine?
It's actually
Hello Edward,
I'm glad you found some of the writing and links interesting. Let me try to
answer some of your questions.
I understand the basic idea that if you are seeking a prior likelihood for
the occurrence of an event and you have no data about the frequency of its
occurrence -- absent
Hi Ed,
So is the real significance of the universal prior not its probability
value in a given probability space (which seems relatively unimportant,
provided it is not one or close to zero), but rather the fact that
it can model almost any kind of probability space?
It just takes a