Re: [agi] The Smushaby of Flatway.

2009-01-08 Thread Matt Mahoney
--- On Wed, 1/7/09, Ben Goertzel b...@goertzel.org wrote: if proving Fermat's Last Theorem were just a matter of doing math, it would have been done 150 years ago ;-p obviously, all hard problems that can be solved have already been solved... ??? In theory, FLT could be solved by brute force
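The brute-force route Matt alludes to is the standard one: the theorems of a formal system are recursively enumerable, so a statement that has a proof will eventually turn up under exhaustive search. A minimal Python sketch, assuming a hypothetical machine-checkable verifier is_valid_proof (nothing below comes from the thread itself):

    from itertools import count, product

    SYMBOLS = "()=+^0123456789abcxyz"  # assumed symbol set for the formal system

    def is_valid_proof(candidate, statement):
        # Hypothetical checker: returns True iff `candidate` is a valid
        # proof of `statement` in the chosen axiom system.
        raise NotImplementedError

    def brute_force_prove(statement):
        # Enumerate every finite symbol string in length order. If the
        # statement has a proof at all, this loop finds it eventually;
        # the catch is that "eventually" may dwarf the age of the universe.
        for n in count(1):
            for chars in product(SYMBOLS, repeat=n):
                candidate = "".join(chars)
                if is_valid_proof(candidate, statement):
                    return candidate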

Re: [agi] The Smushaby of Flatway.

2009-01-08 Thread Mike Tintner
Matt: Logic has not solved AGI because logic is a poor model of the way people think. Neural networks have not solved AGI because you would need about 10^15 bits of memory and 10^16 OPS to simulate a human-brain-sized network. Genetic algorithms have not solved AGI because the
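Matt's figures match the usual back-of-envelope brain estimate. A sketch of the arithmetic, where the neuron count, synapse count, and firing rate are common textbook assumptions rather than anything taken from the post:

    neurons = 1e11                # assumed neurons in a human brain
    synapses_per_neuron = 1e4     # assumed average synapses per neuron
    mean_firing_rate_hz = 10      # assumed average firing rate

    synapses = neurons * synapses_per_neuron         # ~1e15 synapses
    memory_bits = synapses                           # ~1 bit per synapse: ~10^15 bits
    ops_per_second = synapses * mean_firing_rate_hz  # ~10^16 synaptic events/s

    print("memory ~ %.0e bits, throughput ~ %.0e OPS" % (memory_bits, ops_per_second))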

Re: [agi] The Smushaby of Flatway...PS

2009-01-08 Thread Mike Tintner
PS I should have said the fundamental deficiencies of the PURELY logicomathematical form of thinking. It's not deficient in itself - only if you think, like so many AGIers, that it's the only form of thinking, or able to accommodate the entirety of human thinking.

Re: [agi] The Smushaby of Flatway.

2009-01-08 Thread Matt Mahoney
--- On Thu, 1/8/09, Mike Tintner tint...@blueyonder.co.uk wrote: What then do you see as the way people *do* think? You surprise me, Matt, because both the details of your answer here and your thinking generally strike me as *very* logicomathematical - with lots of emphasis on numbers and

[agi] The Self

2009-01-08 Thread Mike Tintner
[I second this recommendation elsewhere from Colin Hales - IMO, although it may not at first appear obvious, the scientific study of the self, as of mirror neurons, will have a profound effect on conceptions of AGI (and why these two things are essential for intelligence) - and this is not

Re: [agi] The Smushaby of Flatway.

2009-01-08 Thread Mike Tintner
Matt, Thanks. But how do you see these: Pattern recognition in parallel, and hierarchical learning of increasingly complex patterns by classical conditioning (association), clustering in context space (feature creation), and reinforcement learning to meet evolved goals. as fundamentally

RE: [agi] The Smushaby of Flatway.

2009-01-08 Thread Ed Porter
In response to Jim Bromer's post of Wed 1/7/2009 8:24 PM == Jim Bromer == All of the major AI paradigms, including those that are capable of learning, are flat according to my definition. What makes them flat is that the method of decision making is minimally-structured and they

Re: [agi] The Smushaby of Flatway.

2009-01-08 Thread Ronald C. Blue
From: Jim Bromer [mailto:jimbro...@gmail.com] Sent: Wednesday, January 07, 2009 8:24 PM All of the major AI paradigms, including those that are capable of learning, are flat according to my definition. What makes them flat is that the method of decision making is minimally-structured

Re: [agi] The Smushaby of Flatway.

2009-01-08 Thread J. Andrew Rogers
On Jan 8, 2009, at 10:29 AM, Ronald C. Blue wrote: ...Noise is not noise... Speaking of noise, was that ghastly HTML formatting really necessary? It made the email nearly unreadable. J. Andrew Rogers

Re: [agi] The Smushaby of Flatway.

2009-01-08 Thread Matt Mahoney
--- On Thu, 1/8/09, Mike Tintner tint...@blueyonder.co.uk wrote: Matt, Thanks. But how do you see these: Pattern recognition in parallel, and hierarchical learning of increasingly complex patterns by classical conditioning (association), clustering in context space (feature creation),

Re: [agi] The Smushaby of Flatway.

2009-01-08 Thread Mike Tintner
Matt: Free association is the basic way of recalling memories. If you experience A followed by B, then the next time you experience A you will think of (or predict) B. Pavlov demonstrated this type of learning in animals in 1927. Matt, You're not thinking your argument through. Look carefully
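The A-then-B mechanism Matt describes is small enough to show directly. A toy sketch of pairwise association, offered only as an illustration of the idea, not as anything proposed in the thread:

    from collections import defaultdict, Counter

    class Associator:
        # Seeing A followed by B strengthens the link A -> B;
        # recall returns the strongest recorded successor of a cue.
        def __init__(self):
            self.links = defaultdict(Counter)
            self.prev = None

        def experience(self, event):
            if self.prev is not None:
                self.links[self.prev][event] += 1
            self.prev = event

        def recall(self, cue):
            successors = self.links.get(cue)
            if not successors:
                return None
            return successors.most_common(1)[0][0]

    m = Associator()
    for e in ["bell", "food", "bell", "food"]:
        m.experience(e)
    print(m.recall("bell"))  # -> food, as in Pavlovian conditioning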

Re: [agi] The Smushaby of Flatway.

2009-01-08 Thread Eric Burton
That email had really nice images, but I don't know why Gmail viewed them automatically! On 1/8/09, Mike Tintner tint...@blueyonder.co.uk wrote: Matt: Free association is the basic way of recalling memories. If you experience A followed by B, then the next time you experience A you will think

Re: [agi] The Smushaby of Flatway.

2009-01-08 Thread Matt Mahoney
Mike, Your own thought processes only seem mysterious because you can't predict what you will think without actually thinking it. It's not just a property of the human brain, but of all Turing machines. No program can non-trivially model itself. (By model, I mean that P models Q if for any
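Matt's definition of "model" is cut off above, so the following is only the textbook diagonalization that usually backs claims of this shape: any total predictor of program behavior can be defeated by a program built to contradict it.

    def diagonal(predict):
        # Given any claimed total predictor of program outputs (0 or 1),
        # build a program that returns the opposite of its own prediction.
        def contrary():
            return 1 - predict(contrary)
        return contrary

    # For any total predictor p, contrary = diagonal(p) refutes it:
    # if p(contrary) == 0 then contrary() == 1, and vice versa. A program
    # that fully modeled itself would face the same contradiction.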

RE: [agi] The Smushaby of Flatway.

2009-01-08 Thread Ronald C Blue
A picture is like an instant 1000 words, and you will remember a picture for almost 70 years but not 1000 words. -Original Message- From: J. Andrew Rogers and...@ceruleansystems.com To: agi@v2.listbox.com Sent: 1/8/09 1:59 PM Subject: Re: [agi] The Smushaby of Flatway. On Jan 8, 2009, at

Re: [agi] The Smushaby of Flatway.

2009-01-08 Thread Vladimir Nesov
On Fri, Jan 9, 2009 at 12:19 AM, Matt Mahoney matmaho...@yahoo.com wrote: Mike, Your own thought processes only seem mysterious because you can't predict what you will think without actually thinking it. It's not just a property of the human brain, but of all Turing machines. No program can

Re: [agi] The Smushaby of Flatway.

2009-01-08 Thread Richard Loosemore
Ronald C. Blue wrote: [snip] [snip] ... chaos stimulation because ... correlational wavelet opponent processing machine ... globally entangled ... Paul rf trap ... parallel modulating string pulses ... a relative zero energy value or opponent process ... phase locked ... parallel

Re: [agi] Epineuronal programming

2009-01-08 Thread Steve Richfield
Abram, On 1/7/09, Abram Demski abramdem...@gmail.com wrote: Steve, Dp/dt methods do not fundamentally change the space of possible models (if your initial mathematical claim of equivalence is true). The claim is that a given neuron performs the same transformation, whether on object

Re: [agi] The Smushaby of Flatway.

2009-01-08 Thread Matt Mahoney
--- On Thu, 1/8/09, Vladimir Nesov robot...@gmail.com wrote: On Fri, Jan 9, 2009 at 12:19 AM, Matt Mahoney matmaho...@yahoo.com wrote: Mike, Your own thought processes only seem mysterious because you can't predict what you will think without actually thinking it. It's not just a

Re: [agi] The Smushaby of Flatway.

2009-01-08 Thread Vladimir Nesov
On Fri, Jan 9, 2009 at 6:04 AM, Matt Mahoney matmaho...@yahoo.com wrote: Your earlier counterexample was a trivial simulation. It simulated itself but did nothing else. If P did something that Q didn't, then Q would not be simulating P. My counterexample also bragged, outside the input