[agi] Remembering Caught in the Act

2008-09-05 Thread Brad Paulsen
http://www.nytimes.com/2008/09/05/science/05brain.html?_r=3&partner=rssnyt&emc=rss&oref=slogin or, indirectly, http://science.slashdot.org/article.pl?sid=08/09/05/0138237&from=rss

Re: [agi] Remembering Caught in the Act

2008-09-05 Thread Bob Mottram
As the article says, this has long been suspected but until now hadn't been demonstrated. Edelman was describing the same phenomenon as the "remembered present" well over a decade ago, and his idea seems to have been loosely inspired by ideas from Freud and James. Remembering seems to be an act of

Re: [agi] Remembering Caught in the Act

2008-09-05 Thread Kaj Sotala
On Fri, Sep 5, 2008 at 11:21 AM, Brad Paulsen [EMAIL PROTECTED] wrote: http://www.nytimes.com/2008/09/05/science/05brain.html?_r=3&partner=rssnyt&emc=rss&oref=slogin See http://www.sciencemag.org/cgi/content/short/1164685 for the original study.

Re: [agi] Remembering Caught in the Act

2008-09-05 Thread Mike Tintner
Er sorry - my question is answered in the interesting Slashdot thread (thanks again): Past studies have shown how many neurons are involved in a single, simple memory. Researchers might be able to isolate a few single neurons in the process of summoning a memory, but that is like saying that

Re: [agi] A NewMetaphor for Intelligence - the Computer/Organiser

2008-09-05 Thread Mike Tintner
OK, I'll bite: what's nondeterministic programming if not a contradiction? Again, very briefly: nondeterministic programming is a reality, so there's no material, mechanistic, software problem in getting a machine to decide either way. The only problem is a logical one of
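
[Editorial note: a minimal Python sketch, not code from the thread, of nondeterministic programming in the narrow sense Mike invokes: the same program, on the same input, may legitimately take either branch. random.choice stands in for whatever entropy source one prefers.]

    import random

    def decide(options):
        # The outcome is not fixed by program text plus input:
        # two runs on identical input may return different answers.
        return random.choice(options)

    if __name__ == "__main__":
        for _ in range(3):
            print(decide(["go left", "go right"]))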

Re: [agi] open models, closed models, priors

2008-09-05 Thread Pei Wang
On Thu, Sep 4, 2008 at 11:17 PM, Abram Demski [EMAIL PROTECTED] wrote: Pei, I sympathize with your care in wording, because I'm very aware of the strange meaning that the word model takes on in formal accounts of semantics. While a cognitive scientist might talk about a person's model of the

Re: [agi] Remembering Caught in the Act

2008-09-05 Thread Bob Mottram
2008/9/5 Mike Tintner [EMAIL PROTECTED]: Past studies have shown how many neurons are involved in a single, simple memory. Researchers might be able to isolate a few single neurons in the process of summoning a memory, but that is like saying that they have isolated a few water molecules in

Re: [agi] A NewMetaphor for Intelligence - the Computer/Organiser

2008-09-05 Thread William Pearson
2008/9/5 Mike Tintner [EMAIL PROTECTED]: By contrast, all deterministic/programmed machines and computers are guaranteed to complete any task they begin. If only that could be guaranteed! We would never have system hangs or deadlocks. Even if it could be made so, computer systems would not
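
[Editorial note: Will's point made concrete in a short Python sketch, not from the thread. A fully deterministic program can begin a task and never complete it; each thread below takes one lock and then waits forever for the other's.]

    import threading
    import time

    lock_a, lock_b = threading.Lock(), threading.Lock()

    def worker_1():
        with lock_a:
            time.sleep(0.1)       # give worker_2 time to grab lock_b
            with lock_b:          # blocks forever: worker_2 holds lock_b
                pass

    def worker_2():
        with lock_b:
            time.sleep(0.1)       # give worker_1 time to grab lock_a
            with lock_a:          # blocks forever: worker_1 holds lock_a
                pass

    # Uncomment to watch a deterministic system hang instead of completing:
    # threading.Thread(target=worker_1).start()
    # threading.Thread(target=worker_2).start()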

Re: [agi] A NewMetaphor for Intelligence - the Computer/Organiser

2008-09-05 Thread Mike Tintner
MT: By contrast, all deterministic/programmed machines and computers are guaranteed to complete any task they begin. Will: If only that could be guaranteed! We would never have system hangs or deadlocks. Even if it could be made so, computer systems would not always want to do so. Will, That's

Language modeling (was Re: [agi] draft for comment)

2008-09-05 Thread Matt Mahoney
--- On Thu, 9/4/08, Pei Wang [EMAIL PROTECTED] wrote: I guess you still see NARS as using model-theoretic semantics, so you call it symbolic and contrast it with systems with sensors. This is not correct --- see http://nars.wang.googlepages.com/wang.semantics.pdf and

Re: Language modeling (was Re: [agi] draft for comment)

2008-09-05 Thread Pei Wang
On Fri, Sep 5, 2008 at 11:15 AM, Matt Mahoney [EMAIL PROTECTED] wrote: --- On Thu, 9/4/08, Pei Wang [EMAIL PROTECTED] wrote: I guess you still see NARS as using model-theoretic semantics, so you call it symbolic and contrast it with systems with sensors. This is not correct --- see

Re: Real vs. simulated environments (was Re: [agi] draft for comment.. P.S.)

2008-09-05 Thread Steve Richfield
Matt, FINALLY, someone here is saying some of the same things that I have been saying. In general agreement with your posting, I will make some comments... On 9/4/08, Matt Mahoney [EMAIL PROTECTED] wrote: --- On Thu, 9/4/08, Valentina Poletti [EMAIL PROTECTED] wrote: Ppl like Ben argue that

Re: [agi] A NewMetaphor for Intelligence - the Computer/Organiser

2008-09-05 Thread Abram Demski
Mike, Will's objection is not quite so easily dismissed. You need to argue that there is an alternative, not just that Will's is more of the same. --Abram On Fri, Sep 5, 2008 at 9:34 AM, Mike Tintner [EMAIL PROTECTED] wrote: MT: By contrast, all deterministic/programmed machines and computers

Re: [agi] A NewMetaphor for Intelligence - the Computer/Organiser

2008-09-05 Thread Abram Demski
Mike, The philosophical paradigm I'm assuming is that the only two alternatives are deterministic and random. Either the next state is completely determined by the last, or it is only probabilistically determined. Deterministic does not mean computable, since physical processes can be totally
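
[Editorial note: Abram's dichotomy put in a few lines of Python, an illustration rather than his code. A transition rule either maps the current state to exactly one next state, or only to a distribution over next states.]

    import random

    def deterministic_step(state):
        return state + 1                      # next state fully fixed by the last

    def stochastic_step(state):
        return state + random.choice([0, 1])  # only a distribution is fixed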

Re: [agi] What is Friendly AI?

2008-09-05 Thread Steve Richfield
Vladimir, On 9/4/08, Vladimir Nesov [EMAIL PROTECTED] wrote: On Thu, Sep 4, 2008 at 12:02 PM, Valentina Poletti [EMAIL PROTECTED] wrote: Also, Steve made another good point here: loads of people at any moment do whatever they can to block the advancement and progress of human beings as

Re: [agi] A NewMetaphor for Intelligence - the Computer/Organiser

2008-09-05 Thread Mike Tintner
Abram, I don't understand why/how I need to argue an alternative - please explain. If it helps, a deterministic, programmed machine can, at any given point, only follow one route through a given territory or problem space or maze - even if surprisingly *appearing* to halt/deviate from the
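
[Editorial note: Mike's one-route claim restated as a runnable sketch, with a hypothetical toy maze. With no randomness anywhere, the same solver traces the identical route on every run.]

    def dfs_path(maze, start, goal):
        """Deterministic depth-first search; neighbor order is fixed."""
        stack, seen = [(start, [start])], {start}
        while stack:
            node, path = stack.pop()
            if node == goal:
                return path
            for nxt in maze.get(node, []):   # fixed iteration order
                if nxt not in seen:
                    seen.add(nxt)
                    stack.append((nxt, path + [nxt]))
        return None

    maze = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
    assert dfs_path(maze, "A", "D") == dfs_path(maze, "A", "D")  # same route, every run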

Re: [agi] A NewMetaphor for Intelligence - the Computer/Organiser

2008-09-05 Thread Terren Suydam
Hi Mike, comments below... --- On Fri, 9/5/08, Mike Tintner [EMAIL PROTECTED] wrote: Again, very briefly: nondeterministic programming is a reality, so there's no material, mechanistic, software problem in getting a machine to decide either way. This is inherently

Re: [agi] A NewMetaphor for Intelligence - the Computer/Organiser

2008-09-05 Thread Abram Demski
Mike, On Fri, Sep 5, 2008 at 1:15 PM, Mike Tintner [EMAIL PROTECTED] wrote: Abram, I don't understand why/how I need to argue an alternative - please explain. I am not sure what to say, but here is my view of the situation. You are claiming that there is a broad range of things that

Re: Language modeling (was Re: [agi] draft for comment)

2008-09-05 Thread Matt Mahoney
--- On Fri, 9/5/08, Pei Wang [EMAIL PROTECTED] wrote: NARS indeed can learn semantics before syntax --- see http://nars.wang.googlepages.com/wang.roadmap.pdf Yes, I see this corrects many of the problems with Cyc and with traditional language models. I didn't see a description of a mechanism
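
[Editorial note: for readers following the language-modeling subthread, a toy Python sketch, not Matt's or Pei's actual mechanism, of the purely statistical approach under discussion: co-occurrence counts learned from raw text, with no hand-coded syntax.]

    from collections import Counter, defaultdict

    def train_bigrams(text):
        """Count, for each character, which character tends to follow it."""
        model = defaultdict(Counter)
        for cur, nxt in zip(text, text[1:]):
            model[cur][nxt] += 1
        return model

    def predict(model, ch):
        """Most likely next character, or None if ch was never seen."""
        return model[ch].most_common(1)[0][0] if model[ch] else None

    model = train_bigrams("the theory then thereby")
    print(predict(model, "t"))   # 'h' dominates after 't' in this sample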

Re: Language modeling (was Re: [agi] draft for comment)

2008-09-05 Thread Pei Wang
On Fri, Sep 5, 2008 at 6:15 PM, Matt Mahoney [EMAIL PROTECTED] wrote: --- On Fri, 9/5/08, Pei Wang [EMAIL PROTECTED] wrote: NARS indeed can learn semantics before syntax --- see http://nars.wang.googlepages.com/wang.roadmap.pdf Yes, I see this corrects many of the problems with Cyc and with

Re: Language modeling (was Re: [agi] draft for comment)

2008-09-05 Thread Matt Mahoney
--- On Fri, 9/5/08, Pei Wang [EMAIL PROTECTED] wrote: As with many existing AI works, my disagreement with you is not so much on the solution you proposed (I can see the value), but on the problem you specified as the goal of AI. For example, I have no doubt about the theoretical and

AI isn't cheap (was Re: Real vs. simulated environments (was Re: [agi] draft for comment.. P.S.))

2008-09-05 Thread Matt Mahoney
--- On Fri, 9/5/08, Steve Richfield [EMAIL PROTECTED] wrote: I think that a billion or so, divided up into small pieces to fund EVERY disparate approach to see where the low-hanging fruit is, would go a LONG way in guiding subsequent billions. I doubt that it would take a trillion to succeed.

Re: Language modeling (was Re: [agi] draft for comment)

2008-09-05 Thread Pei Wang
Matt, Thanks for taking the time to explain your ideas in detail. As I said, our different opinions on how to do AI come from our very different understandings of intelligence. I don't take passing the Turing Test as my research goal (as explained in