Re: [agi] Marcus Hutter's lossless compression of human knowledge prize

2006-08-12 Thread Matt Mahoney
/compression/rationale.html -- Matt Mahoney, [EMAIL PROTECTED] - Original Message From: Ben Goertzel [EMAIL PROTECTED] To: agi@v2.listbox.com Cc: Bruce J. Klein [EMAIL PROTECTED] Sent: Saturday, August 12, 2006 12:28:30 PM Subject: [agi] Marcus Hutter's lossless compression of human knowledge

Re: [agi] Marcus Hutter's lossless compression of human knowledge prize

2006-08-12 Thread Matt Mahoney
as unrelated to AGI. How do you test if a machine with only text I/O knows that roses are red? Suppose it sees "red roses", then later "roses are" and predicts "red". An LSA or distant-bigram model will do this. -- Matt Mahoney, [EMAIL PROTECTED] - Original Message From: Rus
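A minimal sketch of the distant-bigram idea above (corpus, window size, and scoring invented for illustration): count word co-occurrences within a small window, then predict the word after "roses are" from what co-occurred with "roses" earlier in the stream.

    from collections import Counter, defaultdict

    # Toy training stream in which "red" and "roses" co-occur.
    train = "he bought red roses . she loves red roses .".split()
    WINDOW = 4  # arbitrary window size for this sketch

    # Symmetric co-occurrence counts within the window.
    assoc = defaultdict(Counter)
    for i, w1 in enumerate(train):
        for w2 in train[i + 1 : i + 1 + WINDOW]:
            assoc[w1][w2] += 1
            assoc[w2][w1] += 1

    # Later the model sees "roses are" and must predict the next word:
    # score candidates by association with the context word "roses".
    candidates = {w: n for w, n in assoc["roses"].items() if w.isalpha()}
    print(max(candidates, key=candidates.get))  # -> "red"

An LSA model gets the same effect by placing "red" and "roses" near each other in a reduced semantic space.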

Re: [agi] Marcus Hutter's lossless compression of human knowledge prize

2006-08-12 Thread Matt Mahoney
formats like JPEG and MP3 exploit this by discarding what cannot be seen or heard. However, text doesn't work this way. How much can you discard from a text file before it differs noticeably? -- Matt Mahoney, [EMAIL PROTECTED] - Original Message From: Pei Wang [EMAIL PROTECTED

Re: [agi] Marcus Hutter's lossless compression of human knowledge prize

2006-08-13 Thread Matt Mahoney
00 are "red"?Fourth, a program that downloads the Wikipedia benchmark violates the rules of the prize. The decompressor must run on a computer without a network connection. Rules are here:http://cs.fit.edu/~mmahoney/compression/textrules.html-- Matt Mahoney, [EMAIL PROTECTED] To unsubscr

Re: [agi] Marcus Hutter's lossless compression of human knowledge prize

2006-08-13 Thread Matt Mahoney
translation. It will lead to better spam detection. It will automate a lot of work now done by people on phones. Language modeling is short of AGI but I think it is an important goal. -- Matt Mahoney, [EMAIL PROTECTED] - Original Message From: Ben Goertzel [EMAIL PROTECTED] To: agi@v2

Re: [agi] Marcus Hutter's lossless compression of human knowledge prize

2006-08-13 Thread Matt Mahoney
. In Proceedings Nineteenth International Joint Conference on Artificial Intelligence (IJCAI-05), 1136-1141, Edinburgh, Scotland, 2005. -- Matt Mahoney, [EMAIL PROTECTED] - Original Message From: Mark Waser [EMAIL PROTECTED] To: agi@v2.listbox.com Sent: Sunday, August 13, 2006 5:25:19 PM Subject

Re: Sampo: [agi] Marcus Hutter's lossless compression of human knowledge prize

2006-08-15 Thread Matt Mahoney
model plus one other technique on cleaned up text. Nobody has put all this stuff together. As a result, the best compressors still use byte-level n-gram statistics and at most some crude lexical parsing. I think we can do better. -- Matt Mahoney, [EMAIL PROTECTED] - Original Message From

Re: Mahoney/Sampo: [agi] Marcus Hutter's lossless compression of human knowledge prize

2006-08-15 Thread Matt Mahoney
compression ratio is ideal. -- Matt Mahoney, [EMAIL PROTECTED] - Original Message From: Mark Waser [EMAIL PROTECTED] To: agi@v2.listbox.com Sent: Tuesday, August 15, 2006 9:28:26 AM Subject: Re: Mahoney/Sampo: [agi] Marcus Hutter's lossless compression of human knowledge prize I don't see

Re: Mahoney/Sampo: [agi] Marcus Hutter's lossless compression of human knowledge prize

2006-08-15 Thread Matt Mahoney
if x and y do not share any information. Then, CDM("it is hot", "it is very warm") < CDM("it is hot", "it is cold"), assuming your compressor uses a good language model. Now if only we had some test to tell which compressors have the best language models...
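A small sketch of the CDM referenced here, with zlib standing in for a compressor with a real language model: CDM(x, y) = C(xy) / (C(x) + C(y)), near 0.5 when y adds nothing to x and near 1.0 when they are unrelated. On strings this short the numbers are only illustrative.

    import zlib

    def C(s: str) -> int:
        # Compressed size in bytes; zlib stands in for a stronger compressor.
        return len(zlib.compress(s.encode(), 9))

    def CDM(x: str, y: str) -> float:
        # ~0.5 if x and y carry the same information, ~1.0 if they share none.
        return C(x + y) / (C(x) + C(y))

    # With a good language model the first pair scores lower; zlib only
    # sees byte overlap, so the gap here comes from the shared "it is ".
    print(CDM("it is hot", "it is very warm"))
    print(CDM("it is hot", "it is cold"))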

Re: Mahoney/Sampo: [agi] Marcus Hutter's lossless compression of human knowledge prize

2006-08-15 Thread Matt Mahoney
people do not believe that text compression is related to AI (even though speech recognition researchers have been evaluating models by perplexity since the early 1990's). -- Matt Mahoney, [EMAIL PROTECTED] - Original Message From: Mark Waser [EMAIL PROTECTED] To: agi@v2.listbox.com Sent: Tuesday, Augu
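For readers unfamiliar with the perplexity evaluation mentioned here, a minimal sketch (probabilities invented): perplexity is 2 raised to the average number of bits needed to code each word, so a better predictor means lower perplexity, which is the same quantity text compression measures.

    import math

    def perplexity(probs):
        # probs: the model's probability for each successive word.
        bits = -sum(math.log2(p) for p in probs)  # total code length in bits
        return 2 ** (bits / len(probs))

    # Hypothetical per-word probabilities from two models on the same text;
    # the model that predicts more confidently gets the lower perplexity.
    model_a = [0.20, 0.10, 0.50, 0.25]
    model_b = [0.05, 0.02, 0.10, 0.05]
    print(perplexity(model_a))  # ~4.5
    print(perplexity(model_b))  # ~21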

Re: Mahoney/Sampo: [agi] Marcus Hutter's lossless compression of human knowledge prize

2006-08-16 Thread Matt Mahoney
l a word. Who is smart and who is dumb? -- Matt Mahoney, [EMAIL PROTECTED] - Original Message From: Mark Waser [EMAIL PROTECTED] To: agi@v2.listbox.com Sent: Wednesday, August 16, 2006 9:17:52 AM Subject: Re: Mahoney/Sampo: [agi] Marcus Hutter's lossless compression of human knowledge priz

Re: [agi] Lossy ** lossless compression

2006-08-20 Thread Matt Mahoney
and what to discard. But the Hutter prize is to motivate better language models, not vision or hearing or robotics. For that task, I think lossless text compression is the right approach. -- Matt Mahoney, [EMAIL PROTECTED] - Original Message From: boris [EMAIL PROTECTED] To: agi@v2.listbox.com

Re: [agi] Lossy ** lossless compression

2006-08-22 Thread Matt Mahoney
that we want to emulate in AI. A machine can make a model precise at no extra cost, enabling us to use text compression to objectively measure these qualities. Researchers in speech recognition have been using this approach for the last 15 years. -- Matt Mahoney, [EMAIL PROTECTED

Re: [agi] Lossy ** lossless compressio

2006-08-25 Thread Matt Mahoney
to disagree. -- Matt Mahoney, [EMAIL PROTECTED] - Original Message From: Philip Goetz [EMAIL PROTECTED] To: agi@v2.listbox.com Sent: Friday, August 25, 2006 12:31:06 PM Subject: Re: [agi] Lossy ** lossless compressio On 8/20/06, Matt Mahoney [EMAIL PROTECTED] wrote: The argument

Re: [agi] Lossy ** lossless compression

2006-08-25 Thread Matt Mahoney
. Look at benchmarks for video or audio codecs. Which sounds better, AAC or Ogg? -- Matt Mahoney, [EMAIL PROTECTED]

Re: [agi] Lossy ** lossless compression

2006-08-26 Thread Matt Mahoney
read the result? 3. Assuming we overcome this obstacle, it may be that the program will say how many fingers, but in that case the program also completely determines my behavior and might not allow me to answer. -- Matt Mahoney, [EMAIL PROTECTED] - Original Message From: Eliezer S

Re: [agi] Lossy ** lossless compression

2006-08-26 Thread Matt Mahoney
I think that putting Wikipedia in canonical form and recognizing that it is in canonical form are two equally difficult problems. So the problem does not go away easily. -- Matt Mahoney, [EMAIL PROTECTED] - Original Message From: Mark Waser [EMAIL PROTECTED] To: agi@v2.listbox.com Sent

Re: [agi] Lossy ** lossless compression

2006-08-26 Thread Matt Mahoney
that this is not in canonical form, then prove it. Specify a criterion for canonical form, a pass/fail test. I want an algorithm or a program, no hand waving or generalities. Input an arbitrary string, output yes or no. Do you see my point now? -- Matt Mahoney, [EMAIL PROTECTED] - Original Message From

Re: [agi] Lossy ** lossless compression

2006-08-27 Thread Matt Mahoney
guess then all you have to do is store the canonical form and compare the input with it. After you solve this simple, easy problem and send me the program, I will solve the much harder problem of converting Wikipedia to canonical form. -- Matt Mahoney, [EMAIL PROTECTED] - Original Message From

Re: [agi] Lossy ** lossless compressi

2006-08-27 Thread Matt Mahoney
reasonably derive this information by observing that p(x, x') is approximately equal to p(x) or p(x'). In other words, knowing both x and x' does not tell you any more than x or x' alone, or CDM(x, x') ~ 0.5. I think this is a reasonable way to model lossy behavior in humans. -- Matt Mahoney
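Continuing the CDM sketch from the compression-prize thread above (zlib again standing in for a real compressor): when x' carries no information beyond x, C(xx') is close to C(x), so the measure approaches the 0.5 floor described here. Longer strings keep zlib's header overhead from dominating.

    import zlib

    def C(s: str) -> int:
        return len(zlib.compress(s.encode(), 9))

    def CDM(x: str, y: str) -> float:
        return C(x + y) / (C(x) + C(y))

    x = "the quick brown fox jumps over the lazy dog . " * 20
    y = x                                                  # adds nothing new
    z = "colorless green ideas sleep furiously tonight . " * 20
    print(CDM(x, y))   # near 0.5: knowing y adds nothing to x
    print(CDM(x, z))   # nearer 1.0: x and z share little information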

Re: [agi] Lossy ** lossless compressi

2006-08-28 Thread Matt Mahoney
, IEEE Intl. Conf. on Acoustics, Speech, and Signal Processing, 717-720, 1999. [3] Ido Dagan, Lillian Lee, Fernando C. N. Pereira, Similarity-Based Models of Word Cooccurrence Probabilities, Machine Learning, 1999. http://citeseer.ist.psu.edu/dagan99similaritybased.html -- Matt Mahoney

Re: [agi] Vision

2006-09-05 Thread Matt Mahoney
. This greatly reduces the storage requirement (i.e. a simpler model). Furthermore, the SVD is equivalent to a 3 layer linear neural network with the layers representing words, an abstract semantic space, and documents. Not that SVD is fast... -- Matt Mahoney, [EMAIL PROTECTED]
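A minimal latent semantic analysis sketch of the word-document SVD described here (the count matrix is invented): keeping the top k singular vectors yields the compact abstract semantic space that the middle layer of the equivalent 3-layer linear network represents.

    import numpy as np

    # Rows = words, columns = documents; counts invented for illustration.
    # Docs 1-2 are about flowers, doc 3 is about engines.
    X = np.array([
        [3, 2, 0],   # rose
        [2, 3, 0],   # red
        [1, 2, 0],   # petal
        [0, 0, 4],   # engine
        [0, 0, 3],   # fuel
    ], dtype=float)

    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    k = 2                    # keep only the top-k semantic dimensions
    W = U[:, :k] * s[:k]     # word vectors in the reduced semantic space

    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

    print(cos(W[0], W[1]))   # rose vs red: high (shared documents)
    print(cos(W[0], W[3]))   # rose vs engine: ~0 (no shared documents)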

Re: [agi] G0 theory completed

2006-10-06 Thread Matt Mahoney
though you know it is really deterministic. If you didn't model the program this way, you wouldn't need to check function arguments or throw exceptions. So you are really supporting my argument that you cannot predict (and therefore cannot control) an AGI. -- Matt Mahoney, [EMAIL PROTECTED

Re: [agi] G0 theory completed

2006-10-09 Thread Matt Mahoney
must build a system with enough hardware to simulate it properly. -- Matt Mahoney, [EMAIL PROTECTED] - Original Message From: YKY (Yan King Yin) [EMAIL PROTECTED] To: agi@v2.listbox.com Sent: Monday, October 9, 2006 2:23:59 PM Subject: Re: [agi] G0 theory completed Matt: (Sorry about the delay...

Re: [agi] G0 theory completed

2006-10-10 Thread Matt Mahoney
?, Technical Report IDSIA-12-06, IDSIA / USI-SUPSI, Dalle Molle Institute for Artificial Intelligence, Galleria 2, 6928 Manno, Switzerland. http://www.vetta.org/documents/IDSIA-12-06-1.pdf -- Matt Mahoney, [EMAIL PROTECTED] - Original Message From: David Clark [EMAIL PROTECTED] To: agi@v2

Re: A Mind Ontology Project? [Re: [agi] method for joining efforts]

2006-10-17 Thread Matt Mahoney
YKY, it looks like you removed the G0 page. Is this proprietary now too? http://www.geocities.com/genericai/ -- Matt Mahoney, [EMAIL PROTECTED] - Original Message From: YKY (Yan King Yin) [EMAIL PROTECTED] To: agi@v2.listbox.com Sent: Monday, October 16, 2006 9:37:23 PM Subject: Re: A Mind

Re: [agi] SOTA

2006-10-19 Thread Matt Mahoney
is still faster than a microphone. - Interactive learning systems - Integrated intelligent systems Lots of theoretical results, but no real applications. -- Matt Mahoney, [EMAIL PROTECTED]

Re: [agi] SOTA

2006-10-19 Thread Matt Mahoney
- Original Message From: BillK [EMAIL PROTECTED] To: agi@v2.listbox.com Sent: Thursday, October 19, 2006 11:43:46 AM Subject: Re: [agi] SOTA On 10/19/06, Matt Mahoney wrote: - NLP components such as parsers, translators, grammar-checkers Parsing is unsolved. Translators like

Re: [agi] SOTA

2006-10-20 Thread Matt Mahoney
know it is probably between 10^12 and 10^15 and we aren't even sure of that. So when AI is solved, it will probably be a surprise. -- Matt Mahoney, [EMAIL PROTECTED]

Re: [agi] SOTA

2006-10-20 Thread Matt Mahoney
- Original Message From: Pei Wang [EMAIL PROTECTED] To: agi@v2.listbox.com Sent: Friday, October 20, 2006 3:35:57 PM Subject: Re: [agi] SOTA On 10/20/06, Matt Mahoney [EMAIL PROTECTED] wrote: It is not that we can't come up with the right algorithms. It's that we don't have

Re: [agi] SOTA

2006-10-21 Thread Matt Mahoney
(as in Turing's 1950 example). -- Matt Mahoney, [EMAIL PROTECTED]

Re: [agi] SOTA

2006-10-21 Thread Matt Mahoney
is much more complex than that. But I think a neural architecture or a hybrid system that includes neural networks of some type is the right direction. For example, Novamente (if I understand correctly, a weighted hypergraph) has some resemblance to a neural network. -- Matt Mahoney, [EMAIL

[agi] Language modeling

2006-10-22 Thread Matt Mahoney
. 3, 1561-1564 [2] The Piraha challenge: an Amazonian tribe takes grammar to a strange place, Science News, Dec. 10, 2005, http://www.findarticles.com/p/articles/mi_m1200/is_24_168/ai_n16029317/pg_1 -- Matt Mahoney, [EMAIL PROTECTED]

Re: [agi] Language modeling

2006-10-23 Thread Matt Mahoney
of compound sentences? More training data? Different training data? A new theory of language acquisition? More hardware? How much? -- Matt Mahoney, [EMAIL PROTECTED]

Re: [agi] Language modeling

2006-10-25 Thread Matt Mahoney
- Original Message From: Richard Loosemore [EMAIL PROTECTED] To: agi@v2.listbox.com Sent: Tuesday, October 24, 2006 12:37:16 PM Subject: Re: [agi] Language modeling Matt Mahoney wrote: Converting natural language to a formal representation requires language modeling at the highest

Re: [agi] Motivational Systems that are stable

2006-10-27 Thread Matt Mahoney
a long time, and even then don't always work in the face of technology or a rapidly changing environment. -- Matt Mahoney, [EMAIL PROTECTED]

Re: [agi] Motivational Systems that are stable

2006-10-28 Thread Matt Mahoney
, it is likely to be extremely complex. Whatever it is, it has to be correct. To answer your other question, I am working on natural language processing, although my approach is somewhat unusual. http://cs.fit.edu/~mmahoney/compression/text.html -- Matt Mahoney, [EMAIL PROTECTED]

Re: [agi] Natural versus formal AI interface languages

2006-10-31 Thread Matt Mahoney
, then what is your definition of intelligence? -- Matt Mahoney, [EMAIL PROTECTED] - Original Message From: John Scanlon [EMAIL PROTECTED] To: agi@v2.listbox.com Sent: Tuesday, October 31, 2006 8:48:43 AM Subject: [agi] Natural versus formal AI interface languages One of the major obstacles

Re: [agi] Natural versus formal AI interface languages

2006-10-31 Thread Matt Mahoney
learn Lojban, just like they can learn CycL or LISP. Let's not repeat these mistakes. This is not training, it is programming a knowledge base. This is narrow AI. -- Matt Mahoney, [EMAIL PROTECTED]

Re: [agi] Natural versus formal AI interface languages

2006-11-02 Thread Matt Mahoney
^9 bits. How much information does it take to list all the irregularities in English like swim-swam, mouse-mice, etc? -- Matt Mahoney, [EMAIL PROTECTED]
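A back-of-envelope check of that question (my numbers, not from the post): even a generous inventory of English irregular forms costs on the order of 10^5 bits, a rounding error next to the 10^9 bits cited for a full language model.

    # Invented but generous counts of English irregulars.
    n_irregular = 1000        # swim-swam, mouse-mice, go-went, ...
    bits_per_entry = 80       # ~two short words at ~40 bits each

    total_bits = n_irregular * bits_per_entry
    print(total_bits)         # 80,000 bits
    print(total_bits / 1e9)   # ~0.008% of a 10^9-bit model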

Re: [agi] Natural versus formal AI interface languages

2006-11-02 Thread Matt Mahoney
be built? What would be its architecture? What learning algorithm? What training data? What computational cost? -- Matt Mahoney, [EMAIL PROTECTED] - Original Message From: Ben Goertzel [EMAIL PROTECTED] To: agi@v2.listbox.com Sent: Thursday, November 2, 2006 3:45:42 PM Subject: Re: Re

Re: Re: [agi] Natural versus formal AI interface languages

2006-11-03 Thread Matt Mahoney
a good goal if it means deliberately degrading performance in order to appear human. So I am looking for better tests. I don't believe the approach of "let's just build it and see what it does" is going to produce anything useful. -- Matt Mahoney, [EMAIL PROTECTED]

Re: [agi] Natural versus formal AI interface languages

2006-11-04 Thread Matt Mahoney
of the tests. -- Matt Mahoney, [EMAIL PROTECTED] - Original Message From: Ben Goertzel [EMAIL PROTECTED] To: agi@v2.listbox.com Sent: Friday, November 3, 2006 10:51:16 PM Subject: Re: Re: Re: Re: [agi] Natural versus formal AI interface languages I am happy enough with the long-term goal

Re: [agi] Natural versus formal AI interface languages

2006-11-05 Thread Matt Mahoney
Another important lesson from SHRDLU, aside from discovering that the approach of hand coding knowledge doesn't work, was how long it took to discover this. It was not at all obvious from the initial success. Cycorp still hasn't figured it out after over 20 years. -- Matt Mahoney, [EMAIL

Re: [agi] Natural versus formal AI interface languages

2006-11-06 Thread Matt Mahoney
day for 2 years. -- Matt Mahoney, [EMAIL PROTECTED]

Re: [agi] The concept of a KBMS

2006-11-06 Thread Matt Mahoney
What does "it" refer to in "it is raining"? Is the following sentence correct: "The cat caught a moose"? What is the structured representation of "What?" -- Matt Mahoney, [EMAIL PROTECTED]

Re: [agi] The crux of the problem

2006-11-07 Thread Matt Mahoney
it too. We need to think about opaque representations, systems we can train and test without looking inside, systems that work but we don't know how. This will be hard, but we have already tried the easy ways. -- Matt Mahoney, [EMAIL PROTECTED] - Original Message From: James Ratcliff [EMAIL

Re: Re: RE: [agi] Natural versus formal AI interface languages

2006-11-08 Thread Matt Mahoney
history of small (i.e. narrow AI) projects that appear superficially to be meaningful steps toward AGI. Sometimes it is decades before we discover that they don't scale. -- Matt Mahoney, [EMAIL PROTECTED]

Re: [agi] Natural versus formal AI interface languages

2006-11-08 Thread Matt Mahoney
because it is at the extreme chaotic end of the spectrum. Changing one bit of the key or plaintext affects every bit of the ciphertext. The difference is that it is easier (faster and more ethical) to experiment with language models than the human genome. -- Matt Mahoney, [EMAIL PROTECTED

Re: [agi] The crux of the problem

2006-11-08 Thread Matt Mahoney
. There is no good theory to explain why it works. It just does. -- Matt Mahoney, [EMAIL PROTECTED] - Original Message From: James Ratcliff [EMAIL PROTECTED] To: agi@v2.listbox.com Sent: Wednesday, November 8, 2006 10:14:43 AM Subject: Re: [agi] The crux of the problem Matt: To parse English you have

Re: [agi] The crux of the problem

2006-11-10 Thread Matt Mahoney
To simplify and understand, we are trying to compress the language model to an impossibly small size, always misled down a dead end path by our initial successes with low complexity toy systems. -- Matt Mahoney, [EMAIL PROTECTED] - Original Message From: James Ratcliff [EMAIL PROTECTED] To: agi@v2.li

Re: [agi] Natural versus formal AI interface languages

2006-11-10 Thread Matt Mahoney
with n = 10^9 is much faster than brute force cryptanalysis in O(2^n) time with n = 128. -- Matt Mahoney, [EMAIL PROTECTED] - Original Message From: Eric Baum [EMAIL PROTECTED] To: agi@v2.listbox.com Sent: Thursday, November 9, 2006 12:18:34 PM Subject: Re: [agi] Natural versus formal AI

Re: [agi] One grammar parser URL

2006-11-12 Thread Matt Mahoney
http://josie.stanford.edu:8080/parser/ Fails the Turing test :-) "I ate pizza with {pepperoni|George|chopsticks}" all have the same parse. -- Matt Mahoney, [EMAIL PROTECTED] - Original Message From: James Ratcliff [EMAIL PROTECTED] To: agi@v2.listbox.com Sent: Sunday, November 12, 20
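A toy reconstruction of why all three sentences parse identically (not the Stanford parser; requires nltk): a grammar with no lexical semantics licenses both PP attachments, verb (instrument) and noun (topping), for every object, so nothing distinguishes pepperoni from chopsticks.

    import nltk

    # Toy grammar: the PP can attach to the verb or to the noun;
    # nothing here knows which noun makes sense in which role.
    grammar = nltk.CFG.fromstring("""
    S  -> NP VP
    VP -> V NP | V NP PP
    NP -> 'I' | N | N PP
    PP -> P NP
    V  -> 'ate'
    N  -> 'pizza' | 'pepperoni' | 'George' | 'chopsticks'
    P  -> 'with'
    """)
    parser = nltk.ChartParser(grammar)

    for obj in ["pepperoni", "George", "chopsticks"]:
        trees = list(parser.parse(["I", "ate", "pizza", "with", obj]))
        print(obj, len(trees))  # the same two parses for every object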

Re: [agi] Natural versus formal AI interface languages

2006-11-12 Thread Matt Mahoney
machine, so it has no special ability to solve NP-hard problems. The fact that humans can learn natural language is proof enough that it can be done. -- Matt Mahoney, [EMAIL PROTECTED] - Original Message From: Eric Baum [EMAIL PROTECTED] To: agi@v2.listbox.com Sent: Sunday, November 12

Re: Re: [agi] A question on the symbol-system hypothesis

2006-11-13 Thread Matt Mahoney
at the learning algorithm. It turns out that there is an efficient neural model for SVD. http://gen.gorrellville.com/gorrell06.pdf It should not take decades to develop a knowledge base like Cyc. Statistical approaches can do this in a matter of minutes or hours. -- Matt Mahoney, [EMAIL PROTECTED
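A sketch of the Hebbian route to SVD that the linked Gorrell paper generalizes (data and learning rate invented): Oja's rule trains a single linear neuron whose weight vector converges to the first left singular vector of the data matrix.

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.poisson(1.0, size=(50, 30)).astype(float)  # invented word-doc counts

    # Oja's rule: Hebbian update with built-in normalization.
    w = rng.normal(size=X.shape[0])
    w /= np.linalg.norm(w)
    eta = 0.005
    for _ in range(200):                 # epochs over the documents
        for x in X.T:                    # one document (column) at a time
            y = w @ x
            w += eta * y * (x - y * w)
            w /= np.linalg.norm(w)       # keep the weights on the unit sphere

    u1 = np.linalg.svd(X)[0][:, 0]       # first left singular vector
    print(abs(w @ u1))                   # ~1.0: the neuron found it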

Re: [agi] A question on the symbol-system hypothesis

2006-11-14 Thread Matt Mahoney
it the way we understand Google. We know how a search engine works. We will understand how learning works. But we will not be able to predict or control what we build, even if we poke inside. -- Matt Mahoney, [EMAIL PROTECTED]

Re: [agi] One grammar parser URL

2006-11-15 Thread Matt Mahoney
wrong answers? I can do that. 3. If translating natural language to a structured representation is not hard, then do it. People have been working on this for 50 years without success. Doing logical inference is the easy part. -- Matt Mahoney, [EMAIL PROTECTED] - Original Message From

Re: [agi] A question on the symbol-system hypothesis

2006-11-15 Thread Matt Mahoney
), “Prediction and Entropy of Printed English”, Bell Sys. Tech. J (3) p. 50-64. Standing, L. (1973), “Learning 10,000 Pictures”, Quarterly Journal of Experimental Psychology (25) pp. 207-222. -- Matt Mahoney, [EMAIL PROTECTED] - Original Message From: Richard Loosemore [EMAIL PROTECTED

Re: [agi] A question on the symbol-system hypothesis

2006-11-15 Thread Matt Mahoney
be like trying to understand why a driver made a left turn by examining the neural firing patterns in the driver's brain. -- Matt Mahoney, [EMAIL PROTECTED] - Original Message From: Mark Waser [EMAIL PROTECTED] To: agi@v2.listbox.com Sent: Wednesday, November 15, 2006 9:39:14 AM Subject: Re

Re: [agi] A question on the symbol-system hypothesis

2006-11-15 Thread Matt Mahoney
and compression is not obvious. I have summarized the arguments here. http://cs.fit.edu/~mmahoney/compression/rationale.html -- Matt Mahoney, [EMAIL PROTECTED] - Original Message From: Richard Loosemore [EMAIL PROTECTED] To: agi@v2.listbox.com Sent: Wednesday, November 15, 2006 2:38:49 PM Subject

Re: [agi] A question on the symbol-system hypothesis

2006-11-15 Thread Matt Mahoney
has not worked, and what you think can be done about it? And Google DOES keep the searchable part of the Internet in memory (http://blog.topix.net/archives/11.html) because they have enough hardware to do it. http://en.wikipedia.org/wiki/Supercomputer#Quasi-supercomputing -- Matt Mahoney

Re: [agi] A question on the symbol-system hypothesis

2006-11-15 Thread Matt Mahoney
. -- Matt Mahoney, [EMAIL PROTECTED] - Original Message From: Mark Waser [EMAIL PROTECTED] To: agi@v2.listbox.com Sent: Wednesday, November 15, 2006 3:48:37 PM Subject: Re: [agi] A question on the symbol-system hypothesis The connection between intelligence and compression is not obvious

Re: [agi] A question on the symbol-system hypothesis

2006-11-15 Thread Matt Mahoney
Richard Loosemore [EMAIL PROTECTED] wrote: 5) I have looked at your paper and my feelings are exactly the same as Mark's: theorems developed on erroneous assumptions are worthless. Which assumptions are erroneous? -- Matt Mahoney, [EMAIL PROTECTED] - Original Message From

Re: [agi] A question on the symbol-system hypothesis

2006-11-16 Thread Matt Mahoney
. http://www.vetta.org/documents/IDSIA-12-06-1.pdf -- Matt Mahoney, [EMAIL PROTECTED] - Original Message From: Mark Waser [EMAIL PROTECTED] To: agi@v2.listbox.com Sent: Thursday, November 16, 2006 9:57:40 AM Subject: Re: [agi] A question on the symbol-system hypothesis

Re: [agi] A question on the symbol-system hypothesis

2006-11-16 Thread Matt Mahoney
finish. -- Matt Mahoney, [EMAIL PROTECTED] - Original Message From: Mark Waser [EMAIL PROTECTED] To: agi@v2.listbox.com Sent: Thursday, November 16, 2006 3:16:54 PM Subject: Re: [agi] A question on the symbol-system hypothesis I consider the last question in each of your examples

Re: [agi] A question on the symbol-system hypothesis

2006-11-16 Thread Matt Mahoney
. -- Matt Mahoney, [EMAIL PROTECTED] - Original Message From: James Ratcliff [EMAIL PROTECTED] To: agi@v2.listbox.com Sent: Thursday, November 16, 2006 1:41:41 PM Subject: Re: [agi] A question on the symbol-system hypothesis The main first subtitle: Compression is Equivalent to General

Re: [agi] RSI - What is it and how fast?

2006-11-16 Thread Matt Mahoney
(computer_worm) An AGI of this type would be far more dangerous because it could analyze code, discover large numbers of vulnerabilities and exploit them all at once. As the Internet gets bigger, faster, and more complex, the risk increases. -- Matt Mahoney, [EMAIL PROTECTED] - Original Message

Re: [agi] One grammar parser URL

2006-11-16 Thread Matt Mahoney
training text is not interactive, and I would need about 1 GB. Maybe you have some ideas? -- Matt Mahoney, [EMAIL PROTECTED] - Original Message From: YKY (Yan King Yin) [EMAIL PROTECTED] To: agi@v2.listbox.com Sent: Thursday, November 16, 2006 7:17:55 PM Subject: Re: [agi] One grammar

Re: [agi] One grammar parser URL

2006-11-17 Thread Matt Mahoney
a language model on a computer. -- Matt Mahoney, [EMAIL PROTECTED] - Original Message From: James Ratcliff [EMAIL PROTECTED] To: agi@v2.listbox.com Sent: Friday, November 17, 2006 9:40:41 AM Subject: Re: [agi] One grammar parser URL Not quite gonna work that way unfortunately. (I think) The 10^9

Re: [agi] A question on the symbol-system hypothesis

2006-11-18 Thread Matt Mahoney
implementation like GIMPS or SETI would not have enough interconnection speed to support a language model. I think you need about a 1Gb/s connection with low latency to distribute it over a few hundred PCs. 4. Execute access is one buffer overflow away. -- Matt Mahoney, [EMAIL PROTECTED

Re: [agi] A question on the symbol-system hypothesis

2006-11-18 Thread Matt Mahoney
distribution of all environments). -- Matt Mahoney, [EMAIL PROTECTED] - Original Message From: James Ratcliff [EMAIL PROTECTED] To: agi@v2.listbox.com Sent: Saturday, November 18, 2006 7:42:19 AM Subject: Re: [agi] A question on the symbol-system hypothesis Have to amend that to acts

Re: [agi] new paper: What Do You Mean by AI?

2006-11-18 Thread Matt Mahoney
Pei, you classified NARS as a principle-based AI. Are there any others in that category? What about Novamente? -- Matt Mahoney, [EMAIL PROTECTED] - Original Message From: Pei Wang [EMAIL PROTECTED] To: agi@v2.listbox.com agi@v2.listbox.com Sent: Friday, November 17, 2006 11:51:58 AM

Re: Re: [agi] Understanding Natural Language

2006-11-26 Thread Matt Mahoney
. -- Matt Mahoney, [EMAIL PROTECTED] - Original Message From: Andrii (lOkadin) Zvorygin [EMAIL PROTECTED] To: agi@v2.listbox.com Sent: Sunday, November 26, 2006 4:37:02 PM Subject: Re: Re: [agi] Understanding Natural Language On 11/25/06, Matt Mahoney [EMAIL PROTECTED] wrote: Andrii

Re: [agi] Understanding Natural Language

2006-11-28 Thread Matt Mahoney
with pepperoni. I ate pizza with a fork. Using my definition of understanding, you have to recognize that "ate with a fork" and "pizza with pepperoni" rank higher than "ate with pepperoni" and "pizza with a fork". A parser needs to know millions of rules like this. -- Matt Mahoney, [EMAIL PROTECTED
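One way to get such rules from data instead of writing them by hand, sketched here with an invented mini-corpus: rank competing attachments by how often the word pairs co-occur.

    from collections import Counter
    from itertools import combinations

    # Invented mini-corpus standing in for the text a real model would need.
    corpus = [
        "I ate pizza with a fork",
        "he ate soup with a spoon",
        "she ordered pizza with pepperoni",
        "pizza with pepperoni and mushrooms",
    ]

    # Count how often each word pair appears in the same sentence.
    pair_count = Counter()
    for sent in corpus:
        pair_count.update(combinations(sorted(set(sent.split())), 2))

    def assoc(a, b):
        return pair_count[tuple(sorted((a, b)))]

    # The sensible pairs outrank the crossed ones.
    print(assoc("ate", "fork"), assoc("pizza", "pepperoni"))   # 1 2
    print(assoc("ate", "pepperoni"), assoc("pizza", "fork"))   # 0 1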

Re: [agi] Understanding Natural Language

2006-11-28 Thread Matt Mahoney
list several definitions that depend on context. Also, words gradually change their meaning over time. I think FOL represents complex ideas poorly. Try translating what you just wrote into FOL and you will see what I mean. -- Matt Mahoney, [EMAIL PROTECTED] - Original Message From

Re: [agi] A question on the symbol-system hypothesis

2006-11-29 Thread Matt Mahoney
So what is your definition of understanding? -- Matt Mahoney, [EMAIL PROTECTED] - Original Message From: Philip Goetz [EMAIL PROTECTED] To: agi@v2.listbox.com Sent: Wednesday, November 29, 2006 5:36:39 PM Subject: Re: [agi] A question on the symbol-system hypothesis On 11/19/06, Matt

Re: [agi] A question on the symbol-system hypothesis

2006-12-01 Thread Matt Mahoney
. I think if you insist on an operational definition of consciousness you will be confronted with a disturbing lack of evidence that it even exists. -- Matt Mahoney, [EMAIL PROTECTED]

Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-12-01 Thread Matt Mahoney
--- Hank Conn [EMAIL PROTECTED] wrote: On 12/1/06, Matt Mahoney [EMAIL PROTECTED] wrote: The goal of humanity, like that of all other species, was determined by evolution. It is to propagate the species. That's not the goal of humanity. That's the goal of the evolution of humanity, which

Re: Re: [agi] Language acquisition in humans: How bound up is it with tonal pattern recognition...?

2006-12-02 Thread Matt Mahoney
because babies that liked to listen to their mother's heartbeat had a survival advantage. -- Matt Mahoney, [EMAIL PROTECTED]

Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-12-02 Thread Matt Mahoney
--- Hank Conn [EMAIL PROTECTED] wrote: On 12/1/06, Matt Mahoney [EMAIL PROTECTED] wrote: --- Hank Conn [EMAIL PROTECTED] wrote: On 12/1/06, Matt Mahoney [EMAIL PROTECTED] wrote: I suppose the alternative is to not scan brains, but then you still have death, disease

Re: [agi] A question on the symbol-system hypothesis

2006-12-02 Thread Matt Mahoney
-- Matt Mahoney

Re: [agi] Re: Motivational Systems of an AI

2006-12-03 Thread Matt Mahoney
--- Richard Loosemore [EMAIL PROTECTED] wrote: Matt Mahoney wrote: --- Richard Loosemore [EMAIL PROTECTED] wrote: I am disputing the very idea that monkeys (or rats or pigeons or humans) have a part of the brain which generates the reward/punishment signal for operant conditioning

Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-12-03 Thread Matt Mahoney
of neurons. But not by training. You don't decide to be hungry or not, because animals that could do so were removed from the gene pool. Is this not a sensible way to program the top level goals for an AGI? -- Matt Mahoney, [EMAIL PROTECTED]

Re: [agi] A question on the symbol-system hypothesis

2006-12-03 Thread Matt Mahoney
explanation of how it works. Thus, you have successfully proved that you are an explaining intelligence and it is not. If anything, you've further proved my point that an AGI is going to have to be able to explain/be explained. - Original Message - From: Matt Mahoney [EMAIL

Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-12-05 Thread Matt Mahoney
--- Eric Baum [EMAIL PROTECTED] wrote: Matt --- Hank Conn [EMAIL PROTECTED] wrote: On 12/1/06, Matt Mahoney [EMAIL PROTECTED] wrote: The goal of humanity, like that of all other species, was determined by evolution. It is to propagate the species. That's not the goal of humanity

Re: [agi] The Singularity

2006-12-05 Thread Matt Mahoney
of the humans who built it. This means sufficient skills to do research, and to write programs from ambiguous natural language specifications and have enough world knowledge to figure out what the customer really wanted. -- Matt Mahoney, [EMAIL PROTECTED]

Re: Re: [agi] A question on the symbol-system hypothesis

2006-12-05 Thread Matt Mahoney
but it could also be a huge matrix with billions of elements. But it will require a different approach to build, not so much engineering, but more of an experimental science, where you test different learning algorithms at the inputs and outputs only. -- Matt Mahoney, [EMAIL PROTECTED]

Re: [agi] Brain memory Map Article -

2006-12-20 Thread Matt Mahoney
non-REM sleep. Perhaps this is part of a feedback loop to erase memories from the hippocampus after they have been copied. -- Matt Mahoney, [EMAIL PROTECTED] - Original Message From: Bob Mottram [EMAIL PROTECTED] To: agi@v2.listbox.com Sent: Tuesday, December 19, 2006 8:45:34 AM

Re: [agi] teleoperated robots

2007-01-07 Thread Matt Mahoney
-- Matt Mahoney, [EMAIL PROTECTED

Re: [agi] SOTA

2007-01-12 Thread Matt Mahoney
as if that is what it wants? -- Matt Mahoney, [EMAIL PROTECTED]

Re: [agi] Project proposal: MindPixel 2

2007-01-14 Thread Matt Mahoney
to examine and update the knowledge manually. We should know by now that there is just too much data to do this. -- Matt Mahoney, [EMAIL PROTECTED]

Re: [agi] Project proposal: MindPixel 2

2007-01-15 Thread Matt Mahoney
. Lenat briefly mentions the goal of Sergey Brin (one of Google's founders) of solving AI by 2020. I think if Google and Cyc work together on this, they will succeed. - Original Message From: Matt Mahoney [EMAIL PROTECTED] To: agi@v2.listbox.com Sent: Sunday, January 14, 2007 3:14:07 PM

Re: [agi] Project proposal: MindPixel 2

2007-01-18 Thread Matt Mahoney
. -- Matt Mahoney, [EMAIL PROTECTED]

Re: [agi] Chaitin randomness

2007-01-20 Thread Matt Mahoney
is deterministic. I think Einstein's view of quantum mechanics (God does not play dice) makes more sense when viewed in this light. -- Matt Mahoney, [EMAIL PROTECTED]

Re: [agi] (video)The Future of Cognitive Computing

2007-01-23 Thread Matt Mahoney
--- Eugen Leitl [EMAIL PROTECTED] wrote: On Mon, Jan 22, 2007 at 05:26:43PM -0800, Matt Mahoney wrote: The issues of consciousness have been discussed on the singularity list. These are hard questions. I'm not sure questions about anything as ill-defined as consciousness

Re: [agi] Project proposal: MindPixel 2

2007-01-26 Thread Matt Mahoney
, usable AGI sooner. How much knowledge you need depends on what problem you are trying to solve. Building an AGI to run a corporation is not the same as building a better spam detector. -- Matt Mahoney, [EMAIL PROTECTED]

Re: [agi] Enumeration of useful genetic biases for AGI

2007-02-13 Thread Matt Mahoney
. But the rest of the brain has a complex structure that is poorly understood. AGI might still be harder than we think. It has happened before. -- Matt Mahoney, [EMAIL PROTECTED] - Original Message From: Ben Goertzel [EMAIL PROTECTED] To: agi@v2.listbox.com Sent: Tuesday, February 13, 2007

[agi] Re: Languages for AGI

2007-02-20 Thread Matt Mahoney
, repeat. Your code has to be both optimized and structured so that it can be easily changed in ways you can't predict. This is hard, but unfortunately we do not know yet what will work. -- Matt Mahoney, [EMAIL PROTECTED]

Re: [agi] Do AGIs dream of electric sheep?

2007-02-25 Thread Matt Mahoney
of activities do they perform during sleep? Or feel free to chime in with thoughts on AGI and sleep even if you haven't begun building yet... -Chuck -- Matt Mahoney, [EMAIL PROTECTED]

Re: [agi] The Missing Piece

2007-03-01 Thread Matt Mahoney
the problem is a false path. If people actually used Lojban then it would be used in ways not intended by the developer and it would develop all the warts of real languages. The real problem is to understand how humans learn language. -- Matt Mahoney, [EMAIL PROTECTED]
