[agi] Context free text analysis is not a proper method of natural language understanding

2007-10-03 Thread m1n1mal
Relating to the idea that text compression (as demonstrated by general compression algorithms) is a measure of intelligence, the post claims: (1) To understand natural language requires knowledge (CONTEXT) of the social world(s) it refers to. (2) Communication includes (at most) a shadow of the context
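The compression-as-intelligence idea under dispute here can be made concrete with a crude proxy. This is a minimal sketch (mine, not from the thread), using Python's zlib; it captures only surface statistics, not the social context the post argues is missing:

```python
import zlib

def compression_ratio(text: str) -> float:
    """Compressed size over original size; lower means more structure was found."""
    data = text.encode("utf-8")
    return len(zlib.compress(data, 9)) / len(data)

# Repetitive text exposes structure a general-purpose compressor can exploit;
# a short random-looking string does not (the zlib header alone adds overhead).
repetitive = "the cat sat on the mat " * 50
random_ish = "q8Zx1vR7pL0mN3kW9jT5yH2bD6fG4sC7"
print(compression_ratio(repetitive))   # well below 1.0
print(compression_ratio(random_ish))   # near or above 1.0
```

Whether such a ratio tracks "understanding" at all is exactly what this thread disputes.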

Re: [agi] Context free text analysis is not a proper method of natural language understanding

2007-10-03 Thread Bob Mottram
On 03/10/2007, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote: Given (1), no context-free analysis can understand natural language. Given (2), no adaptive agent can learn (proper) understanding of natural language given only texts. For human-like understanding, an AGI would need to participate in

Re: [agi] Context free text analysis is not a proper method of natural language understanding

2007-10-03 Thread Vladimir Nesov
... or maybe they can be inferred from texts alone. It all depends on the learning ability of a particular design, and we as yet have none. Cart before the horse. On 10/3/07, Bob Mottram [EMAIL PROTECTED] wrote: On 03/10/2007, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote: Given (1), no context-free

Re: [agi] Religion-free technical content

2007-10-03 Thread J Storrs Hall, PhD
On Tuesday 02 October 2007 08:46:43 pm, Richard Loosemore wrote: J Storrs Hall, PhD wrote: I find your argument quotidian and lacking in depth. ... What you said above was pure, unalloyed bullshit: an exquisite cocktail of complete technical ignorance, patronizing insults and breathtaking

Re: [agi] Religion-free technical content

2007-10-03 Thread Mark Waser
So do you claim that there are universal moral truths that can be applied unambiguously in every situation? What a stupid question. *Anything* can be ambiguous if you're clueless. The moral truth of "Thou shalt not destroy the universe" is universal. The ability to interpret it and apply it

Re: [agi] Religion-free technical content

2007-10-03 Thread J Storrs Hall, PhD
On Tuesday 02 October 2007 05:50:57 pm, Edward W. Porter wrote: The below is a good post: Thank you! I have one major question for Josh. You said: “Present-day techniques can do most of the things that an AI needs to do, with the exception of coming up with new representations and

Re: [agi] Religion-free technical content

2007-10-03 Thread Linas Vepstas
On Mon, Oct 01, 2007 at 10:40:53AM -0400, Edward W. Porter wrote: [...] RSI (Recursive Self Improvement) [...] I didn't know exactly what the term covers. So could you, or someone, please define exactly what its meaning is? Is it any system capable of learning how to improve its current

Re: [agi] Religion-free technical content

2007-10-03 Thread Richard Loosemore
I criticised your original remarks because they demonstrated a complete lack of understanding of what complex systems actually are. You said things about complex systems that were, quite frankly, ridiculous: Turing-machine equivalence, for example, has nothing to do with this. In your more

RE: [agi] Religion-free technical content

2007-10-03 Thread Edward W. Porter
Josh, Thank you for your reply, copied below. It was – as have been many of your posts – thoughtful and helpful. I did have a question about the following section: “The learning process must not only improve the world model and whatnot, but must improve (= modify) *itself*. Kind of the way

RE: [agi] Religion-free technical content

2007-10-03 Thread Edward W. Porter
From what you say below it would appear human-level AGI would not require recursive self improvement, because as you appear to define it humans don't either (i.e., we currently don't artificially substantially expand the size of our brain). I wonder what percent of the AGI community would accept

[agi] RSI

2007-10-03 Thread Richard Loosemore
Edward W. Porter wrote: From what you say below it would appear human-level AGI would not require recursive self improvement, because as you appear to define it humans don't either (i.e., we currently don't artificially substantially expand the size of our brain). I wonder what percent of the

Re: [agi] intelligent compression

2007-10-03 Thread Matt Mahoney
--- Mike Dougherty [EMAIL PROTECTED] wrote: On 10/2/07, Matt Mahoney [EMAIL PROTECTED] wrote: It says a lot about the human visual perception system. This is an extremely lossy function. Video contains only a few bits per second of useful information. The demo is able to remove a

RE: [agi] RSI

2007-10-03 Thread Derek Zahn
Edward W. Porter writes: As I say, what is, and is not, RSI would appear to be a matter of definition. But so far the several people who have gotten back to me, including yourself, seem to take the position that that is not the type of recursive self improvement they consider to be RSI. Some

Re: [agi] Religion-free technical content

2007-10-03 Thread J Storrs Hall, PhD
Thanks! It's worthwhile being specific about levels of interpretation in the discussion of self-modification. I can write self-modifying assembly code that yet does not change the physical processor, or even its microcode if it's one of those old architectures. I can write a self-modifying
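The point about levels of interpretation can be illustrated with a toy sketch (mine, not from the thread): a Python function that rewrites its own binding on first call modifies itself at the program level, while the interpreter beneath it stays entirely fixed:

```python
def expensive_setup():
    """On the first call, do the slow work, then replace this function
    with a cheap one: self-modification at the program level, with no
    change to the interpreter underneath."""
    result = sum(range(1_000_000))  # stand-in for slow initialisation
    def fast_version():
        return result
    globals()["expensive_setup"] = fast_version  # rewrite our own binding
    return result

first = expensive_setup()   # slow path runs; the function then self-replaces
second = expensive_setup()  # the fast replacement runs from now on
assert first == second == 499999500000
```

The same layering argument repeats upward: a program can modify its own source without touching the language runtime, just as the runtime can change without touching the hardware.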

RE: [agi] RSI

2007-10-03 Thread Derek Zahn
I wrote: If we do not give arbitrary access to the mind model itself or its implementation, it seems safer than if we do -- this limits the extent that RSI is possible: the efficiency of the model implementation and the capabilities of the model do not change. An obvious objection to this

Re: [agi] RSI

2007-10-03 Thread Bob Mottram
On 03/10/2007, Richard Loosemore [EMAIL PROTECTED] wrote: RSI is not necessary for human-level AGI. I think it's too early to be able to make a categorical statement of this kind. Does not a newborn baby recursively improve its thought processes until it reaches human level? - This list

RE: [agi] RSI

2007-10-03 Thread Edward W. Porter
Good distinction! Edward W. Porter -Original Message- From: Derek Zahn [mailto:[EMAIL PROTECTED] Sent: Wednesday, October 03, 2007 3:22 PM To: agi@v2.listbox.com Subject: RE: [agi] RSI Edward W. Porter writes: As I say, what is, and is not, RSI would appear to be a matter of

Re: [agi] Context free text analysis is not a proper method of natural language understanding

2007-10-03 Thread Matt Mahoney
--- [EMAIL PROTECTED] wrote: Relating to the idea that text compression (as demonstrated by general compression algorithms) is a measure of intelligence, Claims: (1) To understand natural language requires knowledge (CONTEXT) of the social world(s) it refers to. (2) Communication includes

RE: [agi] Religion-free technical content

2007-10-03 Thread Edward W. Porter
Again a well-reasoned response. With regard to the limitations of AM, I think if the young Doug Lenat and those of his generation had had 32K-processor Blue Gene/Ls, with 4 TBytes of RAM, to play with, they would have soon started coming up with things way, way beyond AM. In fact, if the average

Re: [agi] RSI

2007-10-03 Thread Linas Vepstas
On Wed, Oct 03, 2007 at 02:09:05PM -0400, Richard Loosemore wrote: RSI is only what happens after you get an AGI up to the human level: it could then be used [sic] to build a more intelligent version of itself, and so on up to some unknown plateau. That plateau is often referred to as

Re: [agi] RSI

2007-10-03 Thread J Storrs Hall, PhD
On Wednesday 03 October 2007 03:47:31 pm, Bob Mottram wrote: On 03/10/2007, Richard Loosemore [EMAIL PROTECTED] wrote: RSI is not necessary for human-level AGI. I think it's too early to be able to make a categorical statement of this kind. Does not a newborn baby recursively improve its

Re: [agi] intelligent compression

2007-10-03 Thread Mike Dougherty
On 10/3/07, Matt Mahoney [EMAIL PROTECTED] wrote: The higher levels detect complex objects like airplanes or printed words or faces. We could (lossily) compress images much smaller if we knew how to recognize these features. The idea would be to compress a movie to a written script, then

Re: [agi] Religion-free technical content

2007-10-03 Thread Mike Dougherty
On 10/3/07, Edward W. Porter [EMAIL PROTECTED] wrote: In fact, if the average AI post-grad of today had such hardware to play with, things would really start jumping. Within ten years the equivalents of such machines could easily be sold for somewhere between $10k and $100k, and lots of

Re: [agi] RSI

2007-10-03 Thread Matt Mahoney
On 03/10/2007, Richard Loosemore [EMAIL PROTECTED] wrote: RSI is not necessary for human-level AGI. How about: RSI will not be possible until human-level AGI. Specifically, the AGI will need the same skills as its builders with regard to language understanding, system engineering, and software

Re: [agi] Religion-free technical content

2007-10-03 Thread Linas Vepstas
On Wed, Oct 03, 2007 at 02:00:03PM -0400, Edward W. Porter wrote: From what you say below it would appear human-level AGI would not require recursive self improvement, [...] A lot of people on this list seem to hang a lot on RSI, as they use it, implying it is necessary for human-level AGI.

Re: [agi] intelligent compression

2007-10-03 Thread Matt Mahoney
--- Mike Dougherty [EMAIL PROTECTED] wrote: On 10/3/07, Matt Mahoney [EMAIL PROTECTED] wrote: The higher levels detect complex objects like airplanes or printed words or faces. We could (lossily) compress images much smaller if we knew how to recognize these features. The idea would be

RE: [agi] Religion-free technical content

2007-10-03 Thread Edward W. Porter
To Mike Dougherty regarding the below comment to my prior post: I think your notion that post-grads with powerful machines would only operate in the space of ideas that don’t work is unfair. A lot of post-grads may be drones, but some of them are cranking out some really good stuff. The article,

Re: [agi] intelligent compression

2007-10-03 Thread Russell Wallace
On 10/3/07, Matt Mahoney [EMAIL PROTECTED] wrote: [snipped parts of post agreed with] I think with a better understanding of this algorithm, that a visual perception knowledge base can be trained in a hierarchical manner, building from simple visual patterns to more abstract concepts. Just

Re: [agi] Religion-free technical content

2007-10-03 Thread Mike Tintner
Edward Porter: I don't know about you, but I think there are actually a lot of very bright people in the interrelated fields of AGI, AI, Cognitive Science, and Brain Science. There are also a lot of very good ideas floating around. Yes, there are bright

Re: [agi] Language and compression

2007-10-03 Thread Russell Wallace
On 9/23/07, Matt Mahoney [EMAIL PROTECTED] wrote: I realize that a language model must encode both the meaning of a text string and its representation. This makes lossless compression an inappropriate test for evaluating models of visual or auditory perception. The tiny amount of relevant

Re: [agi] Religion-free technical content

2007-10-03 Thread Matt Mahoney
--- Mark Waser [EMAIL PROTECTED] wrote: So do you claim that there are universal moral truths that can be applied unambiguously in every situation? What a stupid question. *Anything* can be ambiguous if you're clueless. The moral truth of "Thou shalt not destroy the universe" is

RE: [agi] Religion-free technical content

2007-10-03 Thread Edward W. Porter
Re: The following statement in Linas Vepstas’s 10/3/2007 5:51 PM post: “P.S. The Indian mathematician Ramanujan seems to have managed to train a set of neurons in his head to be a very fast symbolic multiplier/divider. With this, he was able to see vast amounts (six volumes worth before dying at

Re: [agi] Religion-free technical content

2007-10-03 Thread Linas Vepstas
On Wed, Oct 03, 2007 at 06:31:35PM -0400, Edward W. Porter wrote: One of them once told me that in Japan it was common for high school boys who were interested in math, science, or business to go to abacus classes after school or on weekends. He said once they fully mastered using physical

Re: [agi] Language and compression

2007-10-03 Thread Vladimir Nesov
Lossless compression can be far from what intelligence does, because the structure of categorization that intelligence performs on the world probably doesn't correspond to its probabilistic structure. As I see it, an intelligent system can't infer many universal laws that will hold in the distant future

Re: [agi] Religion-free technical content

2007-10-03 Thread Linas Vepstas
On Tue, Oct 02, 2007 at 01:20:54PM -0400, Richard Loosemore wrote: When the first AGI is built, its first actions will be to make sure that nobody is trying to build a dangerous, unfriendly AGI. Yes, OK, granted, self-preservation is a reasonable character trait. After that point, the

Re: [agi] Language and compression

2007-10-03 Thread Matt Mahoney
--- Russell Wallace [EMAIL PROTECTED] wrote: On 9/23/07, Matt Mahoney [EMAIL PROTECTED] wrote: I realize that a language model must encode both the meaning of a text string and its representation. This makes lossless compression an inappropriate test for evaluating models of visual or

Re: [agi] Religion-free technical content

2007-10-03 Thread J Storrs Hall, PhD
On Wednesday 03 October 2007 06:21:46 pm, Mike Tintner wrote: Yes there are bright people in AGI. But there's no one remotely close to the level, say, of von Neumann or Turing, right? And do you really think a revolution such as AGI is going to come about without that kind of revolutionary,

Re: [agi] Religion-free technical content

2007-10-03 Thread Linas Vepstas
On Wed, Oct 03, 2007 at 12:20:10PM -0400, Richard Loosemore wrote: Second, You mention the 3-body problem in Newtonian mechanics. Although I did not use it as such in the paper, this is my poster child of a partial complex system. I often cite the case of planetary system dynamics as an

Re: [agi] Language and compression

2007-10-03 Thread Russell Wallace
On 10/4/07, Matt Mahoney [EMAIL PROTECTED] wrote: Lossless video compression would not get far. The brightness of a pixel depends on the number of photons striking the corresponding CCD sensor. The randomness due to quantum mechanics is absolutely incompressible and makes up a significant
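The claim that sensor noise is incompressible is easy to check empirically. A small sketch (assuming zlib as a stand-in general-purpose compressor): random bytes stay essentially the same size, while perfectly regular data collapses:

```python
import os
import zlib

noise = os.urandom(100_000)        # stand-in for quantum/sensor noise
regular = bytes(range(256)) * 400  # 102,400 bytes of perfectly periodic data

noisy_size = len(zlib.compress(noise, 9))      # about the original size, or slightly larger
regular_size = len(zlib.compress(regular, 9))  # collapses to a tiny fraction
print(noisy_size, regular_size)
```

This is why lossy codecs discard the noise floor rather than trying to encode it: the incompressible component swamps the few bits of structure worth keeping.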

Re: [agi] Language and compression

2007-10-03 Thread Matt Mahoney
--- Russell Wallace [EMAIL PROTECTED] wrote: On 10/4/07, Matt Mahoney [EMAIL PROTECTED] wrote: Lossless video compression would not get far. The brightness of a pixel depends on the number of photons striking the corresponding CCD sensor. The randomness due to quantum mechanics is

RE: [agi] Religion-free technical content breaking the small hardware mindset

2007-10-03 Thread Edward W. Porter
Mike Tintner wrote in his Wed 10/3/2007 6:22 PM post: “But there's no one remotely close to the level, say, of von Neumann or Turing, right? And do you really think a revolution such as AGI is going to come about without that kind of revolutionary, creative thinker? Just by tweaking existing

Re: [agi] Language and compression

2007-10-03 Thread Russell Wallace
On 10/4/07, Matt Mahoney [EMAIL PROTECTED] wrote: Yes, but it has nothing to do with AI. You are modeling physics, a much harder problem. Well, I think compression in general doesn't have much to do with AI, like I said before :) But I'm surprised you call physics modeling a harder problem,

Re: [agi] Language and compression

2007-10-03 Thread Russell Wallace
On 10/4/07, Vladimir Nesov [EMAIL PROTECTED] wrote: On 10/4/07, Russell Wallace [EMAIL PROTECTED] wrote: Suppose 50% is the absolute max you can get - that's still worth having, in cases where you don't want to throw away data. But why is it going to correlate with intelligence? It's not.

Re: [agi] Language and compression

2007-10-03 Thread Vladimir Nesov
On 10/4/07, Russell Wallace [EMAIL PROTECTED] wrote: On 10/4/07, Matt Mahoney [EMAIL PROTECTED] wrote: Lossless video compression would not get far. The brightness of a pixel depends on the number of photons striking the corresponding CCD sensor. The randomness due to quantum mechanics is

Re: [agi] Religion-free technical content breaking the small hardware mindset

2007-10-03 Thread Russell Wallace
On 10/4/07, Edward W. Porter [EMAIL PROTECTED] wrote: The biggest brick wall is the small-hardware mindset that has been absolutely necessary for decades to get anything actually accomplished on the hardware of the day. But it has caused people to close their minds to the vast power of brain

Re: [agi] Language and compression

2007-10-03 Thread Matt Mahoney
--- Vladimir Nesov [EMAIL PROTECTED] wrote: The same probably goes for text compression: clever (but not intelligent) statistics-gathering algorithm on texts can probably do a much better job for compressing than human-like intelligence which just chunks this information according to its

Re: [agi] Language and compression

2007-10-03 Thread Matt Mahoney
--- Russell Wallace [EMAIL PROTECTED] wrote: On 10/4/07, Matt Mahoney [EMAIL PROTECTED] wrote: Yes, but it has nothing to do with AI. You are modeling physics, a much harder problem. Well, I think compression in general doesn't have much to do with AI, like I said before :) But I'm

Re: [agi] breaking the small hardware mindset

2007-10-03 Thread Mike Tintner
Edward: The biggest brick wall is the small-hardware mindset that has been absolutely necessary for decades to get anything actually accomplished on the hardware of the day. Completely disagree. It's that purely numerical mindset about small/big hardware that I see as so widespread and

Re: [agi] Language and compression

2007-10-03 Thread Russell Wallace
On 10/4/07, Matt Mahoney [EMAIL PROTECTED] wrote: And text is the only data type with this property. Images, audio, executable code, and seismic data can all be compressed with very little memory. How sure are we of that? Of course all those things _can_ be compressed with very little memory -

Re: [agi] Language and compression

2007-10-03 Thread Matt Mahoney
--- Russell Wallace [EMAIL PROTECTED] wrote: On 10/4/07, Matt Mahoney [EMAIL PROTECTED] wrote: And text is the only data type with this property. Images, audio, executable code, and seismic data can all be compressed with very little memory. How sure are we of that? Of course all those

RE: [agi] breaking the small hardware mindset

2007-10-03 Thread Edward W. Porter
Mike Tintner said in his 10/3/2007 9:38 PM post: “I don't think AGI - correct me - has solved a single creative problem - e.g. creativity - unprogrammed adaptivity - drawing analogies - visual object recognition - NLP - concepts - creating an emotional system - general learning - embodied/

Re: [agi] Religion-free technical content

2007-10-03 Thread Mike Dougherty
On 10/3/07, Edward W. Porter [EMAIL PROTECTED] wrote: I think your notion that post-grads with powerful machines would only operate in the space of ideas that don't work is unfair. Yeah, I can agree - it was harsh. My real intention was to suggest that NOT having a bigger computer is not