/compression/rationale.html
-- Matt Mahoney, [EMAIL PROTECTED]
- Original Message
From: Ben Goertzel [EMAIL PROTECTED]
To: agi@v2.listbox.com
Cc: Bruce J. Klein [EMAIL PROTECTED]
Sent: Saturday, August 12, 2006 12:28:30 PM
Subject: [agi] Marcus Hutter's lossless compression of human knowledge
as unrelated to AGI. How do you test if a machine with only text I/O knows that roses are red? Suppose it sees "red roses", then later "roses are" and predicts "red". An LSA or distant-bigram model will do this.
-- Matt Mahoney, [EMAIL PROTECTED]
- Original Message
From: Rus
formats like JPEG and MP3 exploit
this by discarding what cannot be seen or heard. However, text doesn't work
this way. How much can you discard from a text file before it differs
noticeably?
-- Matt Mahoney, [EMAIL PROTECTED]
- Original Message
From: Pei Wang [EMAIL PROTECTED
00 are "red"? Fourth, a program that downloads the Wikipedia benchmark violates the rules of the prize. The decompressor must run on a computer without a network connection. Rules are here: http://cs.fit.edu/~mmahoney/compression/textrules.html
-- Matt Mahoney, [EMAIL PROTECTED]
translation. It will lead to better spam detection.
It will automate a lot of work now done by people on phones. Language modeling
is short of AGI but I think it is an important goal.
-- Matt Mahoney, [EMAIL PROTECTED]
- Original Message
From: Ben Goertzel [EMAIL PROTECTED]
To: agi@v2
. In Proceedings Nineteenth International Joint Conference on
Artificial Intelligence (IJCAI-05), 1136-1141, Edinburgh, Scotland, 2005.
-- Matt Mahoney, [EMAIL PROTECTED]
- Original Message
From: Mark Waser [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Sunday, August 13, 2006 5:25:19 PM
Subject
model plus one
other technique on cleaned up text. Nobody has put all this stuff together.
As a result, the best compressors still use byte-level n-gram statistics and at
most some crude lexical parsing. I think we can do better.
-- Matt Mahoney, [EMAIL PROTECTED]
- Original Message
From
sion ratio is ideal.
-- Matt Mahoney, [EMAIL PROTECTED]
- Original Message
From: Mark Waser [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Tuesday, August 15, 2006 9:28:26 AM
Subject: Re: Mahoney/Sampo: [agi] Marcus Hutter's lossless compression of human knowledge prize
I
don't see
if x and y do not share any information. Then, CDM("it is hot", "it is very warm") < CDM("it is hot", "it is cold"), assuming your compressor uses a good language model. Now if only we had some test to tell which compressors have the best language mo
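The CDM comparison above is easy to compute with an off-the-shelf compressor standing in for the language model. This is a minimal sketch, not anyone's actual system: zlib's model is far too weak to see semantic similarity in ten-byte strings, but the measure itself, CDM(x, y) = C(xy) / (C(x) + C(y)), is a one-liner:

```python
import zlib

def C(s: str) -> int:
    # Compressed size in bytes; zlib stands in for a compressor
    # with a real language model.
    return len(zlib.compress(s.encode("utf-8"), 9))

def cdm(x: str, y: str) -> float:
    # Compression Dissimilarity Measure: near 0.5 when x and y carry
    # the same information, near 1.0 when they share none.
    return C(x + y) / (C(x) + C(y))

print(cdm("it is hot", "it is very warm"))
print(cdm("it is hot", "it is cold"))
```

On inputs this short, zlib mostly measures literal string overlap; ranking the semantically closer pair lower is exactly the part that needs a good language model.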
people do not believe that text compression is related to AI (even though speech recognition researchers have been evaluating models by perplexity since the early 1990s).
-- Matt Mahoney, [EMAIL PROTECTED]
- Original Message
From: Mark Waser [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Tuesday, Augu
l a word. Who is smart and who is dumb?
-- Matt Mahoney, [EMAIL PROTECTED]
- Original Message
From: Mark Waser [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Wednesday, August 16, 2006 9:17:52 AM
Subject: Re: Mahoney/Sampo: [agi] Marcus Hutter's lossless compression of human knowledge priz
and what to discard. But the Hutter prize is to motivate better language models, not vision or hearing or robotics. For that task, I think lossless text compression is the right approach.
-- Matt Mahoney, [EMAIL PROTECTED]
- Original Message
From: boris [EMAIL PROTECTED]
To: agi@v2.listbox.com
that
we want to emulate in AI. A machine can make a model precise at no extra cost,
enabling us to use text compression to objectively measure these qualities.
Researchers in speech recognition have been using this approach for the last 15
years.
-- Matt Mahoney, [EMAIL PROTECTED
to disagree.
-- Matt Mahoney, [EMAIL PROTECTED]
- Original Message
From: Philip Goetz [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Friday, August 25, 2006 12:31:06 PM
Subject: Re: [agi] Lossy vs. lossless compressio
On 8/20/06, Matt Mahoney [EMAIL PROTECTED] wrote:
The argument
. Look at benchmarks for video or audio
codecs. Which sounds better, AAC or Ogg?
-- Matt Mahoney, [EMAIL PROTECTED]
---
To unsubscribe, change your address, or temporarily deactivate your
subscription,
please go to http://v2.listbox.com/member/[EMAIL PROTECTED]
read the result?
3. Assuming we overcome this obstacle, it may be that the program will say how
many fingers, but in that case the program also completely determines my
behavior and might not allow me to answer.
-- Matt Mahoney, [EMAIL PROTECTED]
- Original Message
From: Eliezer S
I think that putting Wikipedia in canonical form and recognizing that it is in canonical form are two equally difficult problems. So the problem does not go away easily.
-- Matt Mahoney, [EMAIL PROTECTED]
- Original Message
From: Mark Waser [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent
that this is not in canonical form, then prove it. Specify a criterion for canonical form, a pass/fail test. I want an algorithm or a program, no hand waving or generalities. Input an arbitrary string, output yes or no. Do you see my point now?
-- Matt Mahoney, [EMAIL PROTECTED]
- Original Message
From
guess then all you have to do is store the canonical form and compare the input with it. After you solve this simple, easy problem and send me the program, I will solve the much harder problem of converting Wikipedia to canonical form.
-- Matt Mahoney, [EMAIL PROTECTED]
- Original Message
From
reasonably derive this information
by observing that p(x, x') is approximately equal to p(x) or p(x'). In other
words, knowing both x and x' does not tell you any more than x or x' alone, or
CDM(x, x') ~ 0.5. I think this is a reasonable way to model lossy behavior in
humans.
-- Matt Mahoney
, IEEE Intl. Conf. on Acoustics, Speech, and Signal
Processing, 717-720, 1999.
[3] Ido Dagan, Lillian Lee, Fernando C. N. Pereira, Similarity-Based Models of
Word Cooccurrence Probabilities, Machine Learning, 1999.
http://citeseer.ist.psu.edu/dagan99similaritybased.html
-- Matt Mahoney
. This greatly reduces the storage requirement (i.e. a simpler
model). Furthermore, the SVD is equivalent to a 3 layer linear neural network
with the layers representing words, an abstract semantic space, and documents.
Not that SVD is fast...
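The equivalence claimed above can be sketched on a toy word-document count matrix (the counts below are made up for illustration). Truncating the SVD to rank k gives the two weight matrices of a 3-layer linear network whose hidden layer is the k-dimensional semantic space:

```python
import numpy as np

# Hypothetical word-document count matrix: rows = words, cols = documents.
words = ["rose", "red", "petal", "engine", "fuel"]
A = np.array([[2, 1, 0],
              [1, 1, 0],
              [1, 0, 0],
              [0, 0, 2],
              [0, 1, 1]], dtype=float)

# A ~ U_k S_k V_k^T. (U_k S_k) maps words into the semantic layer;
# V_k^T maps the semantic layer to documents.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2
A_k = (U[:, :k] * s[:k]) @ Vt[:k, :]   # rank-k reconstruction: the simpler model

# Word vectors in the semantic space; cosine similarity between them.
W = U[:, :k] * s[:k]
def sim(w1: str, w2: str) -> float:
    a, b = W[words.index(w1)], W[words.index(w2)]
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(sim("rose", "red"), sim("rose", "engine"))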
-- Matt Mahoney, [EMAIL PROTECTED]
---
To unsubscribe
though you know it is really deterministic. If you didn't model the program this way, you wouldn't need to check function arguments or throw exceptions. So you are really supporting my argument that you cannot predict (and therefore cannot control) an AGI.
-- Matt Mahoney, [EMAIL PROTECTED]
must build a system with enough hardware to simulate it properly.
-- Matt Mahoney, [EMAIL PROTECTED]
- Original Message
From: YKY (Yan King Yin) [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Monday, October 9, 2006 2:23:59 PM
Subject: Re: [agi] G0 theory completed
Matt:
(Sorry about the delay...
?, Technical Report
IDSIA-12-06, IDSIA / USI-SUPSI, Dalle Molle
Institute for Artificial Intelligence, Galleria 2, 6928 Manno, Switzerland.
http://www.vetta.org/documents/IDSIA-12-06-1.pdf
-- Matt Mahoney, [EMAIL PROTECTED]
- Original Message
From: David Clark [EMAIL PROTECTED]
To: agi@v2
YKY, it looks like you removed the G0 page. Is this proprietary now too? http://www.geocities.com/genericai/
-- Matt Mahoney, [EMAIL PROTECTED]
- Original Message
From: YKY (Yan King Yin) [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Monday, October 16, 2006 9:37:23 PM
Subject: Re: A Mind
is still faster than a
microphone.
- Interactive learning systems
- Integrated intelligent systems
Lots of theoretical results, but no real applications.
-- Matt Mahoney, [EMAIL PROTECTED]
-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your
- Original Message
From: BillK [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Thursday, October 19, 2006 11:43:46 AM
Subject: Re: [agi] SOTA
On 10/19/06, Matt Mahoney wrote:
- NLP components such as parsers, translators, grammar-checkers
Parsing is unsolved. Translators like
know it is probably
between 10^12 to 10^15 and we aren't even sure of that. So when AI is solved,
it will probably be a surprise.
-- Matt Mahoney, [EMAIL PROTECTED]
- Original Message
From: Pei Wang [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Friday, October 20, 2006 3:35:57 PM
Subject: Re: [agi] SOTA
On 10/20/06, Matt Mahoney [EMAIL PROTECTED] wrote:
It is not that we can't come up with the right algorithms. It's that we
don't have
(as in Turing's 1950 example).
-- Matt Mahoney, [EMAIL PROTECTED]
is much more complex than that. But I
think a neural architecture or a hybrid system that includes neural networks of
some type is the right direction. For example, Novamente (if I understand
correctly, a weighted hypergraph) has some resemblance to a neural network
-- Matt Mahoney, [EMAIL
.
3, 1561-1564
[2] The Piraha challenge: an Amazonian tribe takes grammar to a strange place,
Science News, Dec. 10, 2005,
http://www.findarticles.com/p/articles/mi_m1200/is_24_168/ai_n16029317/pg_1
-- Matt Mahoney, [EMAIL PROTECTED]
of
compound sentences? More training data? Different training data? A new
theory of language acquisition? More hardware? How much?
-- Matt Mahoney, [EMAIL PROTECTED]
- Original Message
From: Richard Loosemore [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Tuesday, October 24, 2006 12:37:16 PM
Subject: Re: [agi] Language modeling
Matt Mahoney wrote:
Converting natural language to a formal representation requires language
modeling at the highest
a long time, and even then don't always work in the face of technology or a rapidly changing environment.
-- Matt Mahoney, [EMAIL PROTECTED]
, it is likely to be extremely complex. Whatever it is, it has to be correct. To answer your other question, I am working on natural language processing, although my approach is somewhat unusual. http://cs.fit.edu/~mmahoney/compression/text.html
-- Matt Mahoney, [EMAIL PROTECTED]
, then what is your definition of intelligence?
-- Matt Mahoney, [EMAIL PROTECTED]
- Original Message
From: John Scanlon [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Tuesday, October 31, 2006 8:48:43 AM
Subject: [agi] Natural versus formal AI interface languages
One of the major obstacles
learn Lojban, just like they can
learn CycL or LISP. Let's not repeat these mistakes. This is not training, it
is programming a knowledge base. This is narrow AI.
-- Matt Mahoney, [EMAIL PROTECTED]
^9 bits. How much information does it take to list all
the irregularities in English like swim-swam, mouse-mice, etc?
-- Matt Mahoney, [EMAIL PROTECTED]
be built? What would be its architecture? What
learning algorithm? What training data? What computational cost?
-- Matt Mahoney, [EMAIL PROTECTED]
- Original Message
From: Ben Goertzel [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Thursday, November 2, 2006 3:45:42 PM
Subject: Re: Re
a good goal if it means
deliberately degrading performance in order to appear human. So I am looking
for better tests. I don't believe the approach of "let's just build it and
see what it does" is going to produce anything useful.
-- Matt Mahoney, [EMAIL PROTECTED]
of the tests.
-- Matt Mahoney, [EMAIL PROTECTED]
- Original Message
From: Ben Goertzel [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Friday, November 3, 2006 10:51:16 PM
Subject: Re: Re: Re: Re: [agi] Natural versus formal AI interface languages
I am happy enough with the long-term goal
Another important lesson from SHRDLU, aside from discovering that the approach
of hand coding knowledge doesn't work, was how long it took to discover this.
It was not at all obvious from the initial success. Cycorp still hasn't
figured it out after over 20 years.
-- Matt Mahoney, [EMAIL
day for 2 years.
-- Matt Mahoney, [EMAIL PROTECTED]
er to in "it is raining"? Is the following sentence correct: "The cat caught a moose"? What is the structured representation of "What?"
-- Matt Mahoney, [EMAIL PROTECTED]
it too. We need to think about opaque representations, systems we can train and test without looking inside, systems that work but we don't know how. This will be hard, but we have already tried the easy ways.
-- Matt Mahoney, [EMAIL PROTECTED]
- Original Message
From: James Ratcliff [EMAIL
history of small (i.e. narrow AI) projects that appear superficially to be
meaningful steps toward AGI. Sometimes it is decades before we discover that
they don't scale.
-- Matt Mahoney, [EMAIL PROTECTED]
because it is at the extreme chaotic end of the spectrum.
Changing one bit of the key or plaintext affects every bit of the ciphertext.
The difference is that it is easier (faster and more ethical) to experiment
with language models than the human genome.
-- Matt Mahoney, [EMAIL PROTECTED
. There is no good theory to explain why it works. It just does.
-- Matt Mahoney, [EMAIL PROTECTED]
- Original Message
From: James Ratcliff [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Wednesday, November 8, 2006 10:14:43 AM
Subject: Re: [agi] The crux of the problem
Matt: To parse English you have
o simplify and understand, we are trying to compress the language model to an impossibly small size, always misled down a dead-end path by our initial successes with low-complexity toy systems.
-- Matt Mahoney, [EMAIL PROTECTED]
- Original Message
From: James Ratcliff [EMAIL PROTECTED]
To: agi@v2.li
with n = 10^9 is much faster than brute force
cryptanalysis in O(2^n) time with n = 128.
-- Matt Mahoney, [EMAIL PROTECTED]
- Original Message
From: Eric Baum [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Thursday, November 9, 2006 12:18:34 PM
Subject: Re: [agi] Natural versus formal AI
http://josie.stanford.edu:8080/parser/
Fails the Turing test :-) "I ate pizza with {pepperoni|George|chopsticks}" all have the same parse.
-- Matt Mahoney, [EMAIL PROTECTED]
- Original Message
From: James Ratcliff [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Sunday, November 12, 20
machine, so it
has no special ability to solve NP-hard problems. The fact that humans can
learn natural language is proof enough that it can be done.
-- Matt Mahoney, [EMAIL PROTECTED]
- Original Message
From: Eric Baum [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Sunday, November 12
at the learning algorithm. It turns out that there is an efficient neural model for SVD. http://gen.gorrellville.com/gorrell06.pdf
It should not take decades to develop a knowledge base like Cyc. Statistical approaches can do this in a matter of minutes or hours.
-- Matt Mahoney, [EMAIL PROTECTED]
it the way we understand Google. We know
how a search engine works. We will understand how learning works. But we will
not be able to predict or control what we build, even if we poke inside.
-- Matt Mahoney, [EMAIL PROTECTED]
wrong answers? I can do that.
3. If translating natural language to a structured representation is not hard,
then do it. People have been working on this for 50 years without success.
Doing logical inference is the easy part.
-- Matt Mahoney, [EMAIL PROTECTED]
- Original Message
From
), “Prediction and
Entropy of Printed English”, Bell Sys. Tech. J. (30), pp. 50-64.
Standing, L. (1973), “Learning 10,000 Pictures”,
Quarterly Journal of Experimental Psychology (25) pp. 207-222.
-- Matt Mahoney, [EMAIL PROTECTED]
- Original Message
From: Richard Loosemore [EMAIL PROTECTED
be like trying to understand why a driver made a left turn by examining the
neural firing patterns in the driver's brain.
-- Matt Mahoney, [EMAIL PROTECTED]
- Original Message
From: Mark Waser [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Wednesday, November 15, 2006 9:39:14 AM
Subject: Re
and compression is not obvious. I have
summarized the arguments here.
http://cs.fit.edu/~mmahoney/compression/rationale.html
-- Matt Mahoney, [EMAIL PROTECTED]
- Original Message
From: Richard Loosemore [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Wednesday, November 15, 2006 2:38:49 PM
Subject
has not worked, and what you
think can be done about it?
And Google DOES keep the searchable part of the Internet in memory
http://blog.topix.net/archives/11.html
because they have enough hardware to do it.
http://en.wikipedia.org/wiki/Supercomputer#Quasi-supercomputing
-- Matt Mahoney
.
-- Matt Mahoney, [EMAIL PROTECTED]
- Original Message
From: Mark Waser [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Wednesday, November 15, 2006 3:48:37 PM
Subject: Re: [agi] A question on the symbol-system hypothesis
The connection between intelligence and compression is not obvious
Richard Loosemore [EMAIL PROTECTED] wrote:
5) I have looked at your paper and my feelings are exactly the same as
Mark's: theorems developed on erroneous assumptions are worthless.
Which assumptions are erroneous?
-- Matt Mahoney, [EMAIL PROTECTED]
- Original Message
From
.
http://www.vetta.org/documents/IDSIA-12-06-1.pdf
-- Matt Mahoney, [EMAIL PROTECTED]
- Original Message
From: Mark Waser [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Thursday, November 16, 2006 9:57:40 AM
Subject: Re: [agi] A question on the symbol-system hypothesis
finish.
-- Matt Mahoney, [EMAIL PROTECTED]
- Original Message
From: Mark Waser [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Thursday, November 16, 2006 3:16:54 PM
Subject: Re: [agi] A question on the symbol-system hypothesis
I consider the last question in each of your examples
.
-- Matt Mahoney, [EMAIL PROTECTED]
- Original Message
From: James Ratcliff [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Thursday, November 16, 2006 1:41:41 PM
Subject: Re: [agi] A question on the symbol-system hypothesis
The main first subtitle:
Compression is Equivalent to General
(computer_worm)
An AGI of this type would be far more dangerous because it could analyze code,
discover large numbers of vulnerabilities and exploit them all at once. As the
Internet gets bigger, faster, and more complex, the risk increases.
-- Matt Mahoney, [EMAIL PROTECTED]
- Original Message
training text is not interactive, and I would need about 1 GB.
Maybe you have some ideas?
-- Matt Mahoney, [EMAIL PROTECTED]
- Original Message
From: YKY (Yan King Yin) [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Thursday, November 16, 2006 7:17:55 PM
Subject: Re: [agi] One grammar
a language model on a computer.
-- Matt Mahoney, [EMAIL PROTECTED]
- Original Message
From: James Ratcliff [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Friday, November 17, 2006 9:40:41 AM
Subject: Re: [agi] One grammar parser URL
Not quite gonna work that way unfortunately. (I think)
The 10^9
implementation like GIMPS or SETI would not have enough
interconnection speed to support a language model. I think you need about a
1Gb/s connection with low latency to distribute it over a few hundred PCs.
4. Execute access is one buffer overflow away.
-- Matt Mahoney, [EMAIL PROTECTED
distribution of all environments).
-- Matt Mahoney, [EMAIL PROTECTED]
- Original Message
From: James Ratcliff [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Saturday, November 18, 2006 7:42:19 AM
Subject: Re: [agi] A question on the symbol-system hypothesis
Have to amend that to acts
Pei, you classified NARS as a principle-based AI. Are there any others in
that category? What about Novamente?
-- Matt Mahoney, [EMAIL PROTECTED]
- Original Message
From: Pei Wang [EMAIL PROTECTED]
To: agi@v2.listbox.com agi@v2.listbox.com
Sent: Friday, November 17, 2006 11:51:58 AM
.
-- Matt Mahoney, [EMAIL PROTECTED]
- Original Message
From: Andrii (lOkadin) Zvorygin [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Sunday, November 26, 2006 4:37:02 PM
Subject: Re: Re: [agi] Understanding Natural Language
On 11/25/06, Matt Mahoney [EMAIL PROTECTED] wrote:
Andrii
with pepperoni.
I ate pizza with a fork.
Using my definition of understanding, you have to recognize that "ate with a
fork" and "pizza with pepperoni" rank higher than "ate with pepperoni" and
"pizza with a fork". A parser needs to know millions of rules like this.
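One such rule can be learned rather than hand-coded. The sketch below uses made-up counts (illustrative only, not from a real corpus): pointwise mutual information between a head word and a "with"-phrase noun decides the attachment.

```python
import math

# Hypothetical counts of (head, "with", noun) triples -- illustrative only.
pair_counts = {("ate", "fork"): 50, ("ate", "pepperoni"): 2,
               ("pizza", "pepperoni"): 80, ("pizza", "fork"): 1}
head_counts = {"ate": 1000, "pizza": 500}
noun_counts = {"fork": 300, "pepperoni": 120}
N = 100_000  # total triples in the hypothetical corpus

def pmi(head: str, noun: str) -> float:
    # Pointwise mutual information, with a small floor for unseen pairs.
    p_xy = pair_counts.get((head, noun), 0.1) / N
    return math.log2(p_xy / ((head_counts[head] / N) * (noun_counts[noun] / N)))

def attach(noun: str) -> str:
    # Attach the "with <noun>" phrase to whichever head associates more strongly.
    return max(("ate", "pizza"), key=lambda h: pmi(h, noun))

print(attach("fork"))       # -> "ate": the verb takes the instrument
print(attach("pepperoni"))  # -> "pizza": the noun takes the topping
```

A real parser would need such statistics for millions of (head, noun) pairs, which is the point: the rules are too numerous to write by hand.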
-- Matt Mahoney, [EMAIL PROTECTED
list several definitions
that depend on context. Also, words gradually change their meaning over time.
I think FOL represents complex ideas poorly. Try translating what you just
wrote into FOL and you will see what I mean.
-- Matt Mahoney, [EMAIL PROTECTED]
- Original Message
From
So what is your definition of understanding?
-- Matt Mahoney, [EMAIL PROTECTED]
- Original Message
From: Philip Goetz [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Wednesday, November 29, 2006 5:36:39 PM
Subject: Re: [agi] A question on the symbol-system hypothesis
On 11/19/06, Matt
.
I think if you insist on an operational definition of consciousness you will
be confronted with a disturbing lack of evidence that it even exists.
-- Matt Mahoney, [EMAIL PROTECTED]
--- Hank Conn [EMAIL PROTECTED] wrote:
On 12/1/06, Matt Mahoney [EMAIL PROTECTED] wrote:
The goals of humanity, like those of all other species, were determined by
evolution.
They are to propagate the species.
That's not the goal of humanity. That's the goal of the evolution of
humanity, which
because babies that liked to listen to their mother's heartbeat had a survival
advantage.
-- Matt Mahoney, [EMAIL PROTECTED]
--- Hank Conn [EMAIL PROTECTED] wrote:
On 12/1/06, Matt Mahoney [EMAIL PROTECTED] wrote:
--- Hank Conn [EMAIL PROTECTED] wrote:
On 12/1/06, Matt Mahoney [EMAIL PROTECTED] wrote:
I suppose the alternative is to not scan brains, but then you still
have
death, disease
-- Matt Mahoney
--- Richard Loosemore [EMAIL PROTECTED] wrote:
Matt Mahoney wrote:
--- Richard Loosemore [EMAIL PROTECTED] wrote:
I am disputing the very idea that monkeys (or rats or pigeons or humans)
have a part of the brain which generates the reward/punishment signal
for operant conditioning
of neurons.
But not by training. You don't decide to be hungry or not, because animals
that could do so were removed from the gene pool.
Is this not a sensible way to program the top level goals for an AGI?
-- Matt Mahoney, [EMAIL PROTECTED]
explanation of how it works.
Thus, you have successfully proved that you are an explaining intelligence
and it is not.
If anything, you've further proved my point that an AGI is going to have
to be able to explain/be explained.
- Original Message -
From: Matt Mahoney [EMAIL
--- Eric Baum [EMAIL PROTECTED] wrote:
Matt --- Hank Conn [EMAIL PROTECTED] wrote:
On 12/1/06, Matt Mahoney [EMAIL PROTECTED] wrote: The goals
of humanity, like those of all other species, were determined by
evolution. They are to propagate the species.
That's not the goal of humanity
of the humans who built it. This means
sufficient skills to do research, and to write programs from ambiguous natural
language specifications and have enough world knowledge to figure out what
the customer really wanted.
-- Matt Mahoney, [EMAIL PROTECTED]
but it could also be a huge matrix with
billions of elements. But it will require a different approach to build, not
so much engineering, but more of an experimental science, where you test
different learning algorithms at the inputs and outputs only.
-- Matt Mahoney, [EMAIL PROTECTED]
non REM sleep. Perhaps this is part of a feedback loop
to erase memories from the hippocampus after they have been copied.
-- Matt Mahoney, [EMAIL PROTECTED]
- Original Message
From: Bob Mottram [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Tuesday, December 19, 2006 8:45:34 AM
-- Matt Mahoney, [EMAIL PROTECTED]
as if that is what it wants?
-- Matt Mahoney, [EMAIL PROTECTED]
to examine and update the knowledge manually.
We should know by now that there is just too much data to do this.
-- Matt Mahoney, [EMAIL PROTECTED]
.
Lenat briefly mentions the goal of Sergey Brin (one of Google's founders) to solve AI
by 2020. I think if Google and Cyc work together on this, they will succeed.
- Original Message
From: Matt Mahoney [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Sunday, January 14, 2007 3:14:07 PM
.
-- Matt Mahoney, [EMAIL PROTECTED]
is
deterministic.
I think Einstein's view of quantum mechanics (God does not play dice) makes
more sense when viewed in this light.
-- Matt Mahoney, [EMAIL PROTECTED]
--- Eugen Leitl [EMAIL PROTECTED] wrote:
On Mon, Jan 22, 2007 at 05:26:43PM -0800, Matt Mahoney wrote:
The issues of consciousness have been discussed on the singularity list.
These are hard questions.
I'm not sure questions about anything as ill-defined as consciousness
, usable AGI sooner.
How much knowledge you need depends on what problem you are trying to solve.
Building an AGI to run a corporation is not the same as building a better spam
detector.
-- Matt Mahoney, [EMAIL PROTECTED]
. But the rest of the brain has a complex structure that is poorly
understood.
AGI might still be harder than we think. It has happened before.
-- Matt Mahoney, [EMAIL PROTECTED]
- Original Message
From: Ben Goertzel [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Tuesday, February 13, 2007
, repeat. Your code has to be both
optimized and structured so that it can be easily changed in ways you can't
predict. This is hard, but unfortunately we do not know yet what will work.
-- Matt Mahoney, [EMAIL PROTECTED]
of activities do they perform during sleep?
Or feel free to chime in with thoughts on AGI and sleep even if you
haven't begun building yet...
-Chuck
-- Matt Mahoney, [EMAIL PROTECTED]
the problem is a false path. If people actually used Lojban then
it would be used in ways not intended by the developer and it would develop
all the warts of real languages. The real problem is to understand how humans
learn language.
-- Matt Mahoney, [EMAIL PROTECTED]