Re: [agi] AGI bottlenecks

2006-06-01 Thread Richard Loosemore
my deux centimes' worth. On a more positive note, I do think it is possible for AGI researchers to work together within a common formalism. My presentation at the AGIRI workshop was about that, and when I get the paper version of the talk finalized I will post it somewhere. Richard Loosemore

Re: [agi] AGI bottlenecks

2006-06-02 Thread Richard Loosemore
substituted for those components, making them less than obvious. Exactly the same critique bears on anyone who suggests that Reinforcement Learning could be the basis for an AGI. I do not believe there has yet been any reply to that critique. Richard Loosemore William Pearson wrote: On 01/06

Re: [agi] AGI bottlenecks

2006-06-09 Thread Richard Loosemore
is not a set of environment states S, a set of actions A, and a set of scalar rewards in the Reals.) Watching history repeat itself is pretty damned annoying. Richard Loosemore James Ratcliff wrote: Richard, Can you explain differently, in other words the second part of this post. I am very
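For readers missing the context: the formalism being dismissed here is the standard reinforcement-learning tuple of states, actions, and scalar rewards. A minimal sketch in Python (the toy environment and all names are illustrative only, nothing proposed in the thread):

    # Textbook RL setup: a set of states S, a set of actions A, and
    # scalar rewards in the reals. The critique above is that
    # intelligence is not reducible to this tuple.
    import random

    S = ["s0", "s1"]          # environment states
    A = ["left", "right"]     # actions

    def reward(s: str, a: str) -> float:
        # scalar reward in the reals
        return 1.0 if (s, a) == ("s1", "right") else 0.0

    def step(s: str, a: str) -> str:
        # toy stochastic transition function
        return random.choice(S)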

Re: [agi] information in the brain?

2006-06-09 Thread Richard Loosemore
of the visual cortex flow was going frontward? In other words, the frontal cortex is doing a lot more than just handling information from the environment, so I am not sure your original question can be easily answered. Richard Loosemore Philip Goetz wrote: On 6/9/06, Eugen Leitl [EMAIL

Re: [agi] Mentifex AI Breakthrough on Wed.7.JUN.2006

2006-06-11 Thread Richard Loosemore
computer in 1982. Richard Loosemore A. T. Murray wrote: In Vernor Vinge's classic paper on Technological Singularity: And what of the arrival of the Singularity itself? What can be said of its actual appearance? Since it involves an intellectual runaway, it will probably occur faster

Re: [agi] How the Brain Represents Abstract Knowledge

2006-06-15 Thread Richard Loosemore
Hope this clarifies it a little. Richard Loosemore

[agi] Not having trouble with parameters! WAS [Re: How the Brain Represents Abstract Knowledge]

2006-06-15 Thread Richard Loosemore
to test the system as a whole, and get hammered in the meantime for not actually doing anything that counts. Very tricky. Richard Loosemore.

Re: [agi] Processing speed for core intelligence in human brain

2006-07-13 Thread Richard Loosemore
You Got, It's The Way That You Do It. Richard Loosemore.

Re: [agi] NP-Hard is not an applicable concept

2006-07-16 Thread Richard Loosemore
undefined and yet at the same time subject to a proof of how computationally difficult it is. I'm not sure why you would think it an unhelpful argument. Isn't it a clear case of semantic incoherence? Richard Loosemore
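For reference, the definition the incoherence argument turns on: NP-hardness is only defined relative to a precisely specified decision problem, i.e. a language $L \subseteq \{0,1\}^*$, via polynomial-time reductions:

\[ L \text{ is NP-hard} \iff \forall L' \in \mathrm{NP} : L' \le_p L \]

If the problem has no formal specification, there is no $L$ to reduce to, so a hardness proof cannot even be stated; that is the semantic incoherence being pointed at.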

[agi] [META] Is there anything we can do to keep junk out of the AGI Forum?

2006-07-26 Thread Richard Loosemore
I am beginning to wonder if this forum would be better off with a restricted membership policy. Richard Loosemore Davy Bartoloni - Minware S.r.l. wrote: What do we want from an AI? Do we TRULY want something? The doubt arises in me that no one will ever rely on the words

Re: [agi] fuzzy logic necessary?

2006-08-03 Thread Richard Loosemore
to know when anyone sat down and figured out that it could not be valid. Richard Loosemore

Re: [agi] fuzzy logic necessary?

2006-08-04 Thread Richard Loosemore
that there are so many people out there who cannot even understand that last point, let alone debate it. Richard Loosemore Pei Wang wrote: Richard, Thanks for taking the time to explain your position. I actually agree with most of what you wrote, though I don't think it is inconsistent with my

Re: [agi] fuzzy logic necessary?

2006-08-05 Thread Richard Loosemore
that Yan produced, but it is not literally a production rule. Writing it in rule form like that is just a summary of a constraint structure that, when triggered, engages in the active process of trying to fit itself to the rest of the situation model. Richard Loosemore

Re: [agi] fuzzy logic necessary?

2006-08-06 Thread Richard Loosemore
at closely will not be PL at all. I'm working on it. (As hard as I can, though not by any means full time, alas). Richard Loosemore

Re: [agi] Why so few AGI projects?

2006-09-13 Thread Richard Loosemore
this, and they will start to find promotions slipping, or they'll just be dumped. Short term results pressure in other words. Richard Loosemore.

Re: [agi] Failure scenarios

2006-09-25 Thread Richard Loosemore
Ben Goertzel wrote: Hi, The real grounding problem is the awkward and annoying fact that if you presume a KR format, you can't reverse engineer a learning mechanism that reliably fills that KR with knowledge. Sure... To go back to the source, in

Re: [agi] Computer monitoring and control API

2006-09-29 Thread Richard Loosemore
of several interpretations of what you say, but am not sure which you mean. Richard Loosemore

Re: [agi] Fwd: Articles in this week's Science

2006-10-12 Thread Richard Loosemore
brains are far from optimal as intelligences... -- Ben G On 10/11/06, Richard Loosemore [EMAIL PROTECTED] wrote: Sergio, Your words sound nice in theory, but that is not the way it is happening on the ground. What I tried to say was that neuroscience folks are far too quick to deploy words like

Re: [agi] SOTA

2006-10-19 Thread Richard Loosemore
since 20 years ago. Having a clue about just what a complex thing intelligence is, has everything to do with it. Richard Loosemore

Re: [agi] SOTA

2006-10-20 Thread Richard Loosemore
BillK wrote: On 10/19/06, Richard Loosemore [EMAIL PROTECTED] wrote: Sorry, but IMO large databases, fast hardware, and cheap memory ain't got nothing to do with it. Anyone who doubts this get a copy of Pim Levelt's Speaking, read and digest the whole thing, and then meditate on the fact

Re: [agi] SOTA

2006-10-20 Thread Richard Loosemore
that the general course of its behavior is as reliable as the behavior of an Ideal Gas: can't predict the position and momentum of all its particles, but you sure can predict such overall characteristics as temperature, pressure and volume. Richard Loosemore
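A toy numerical illustration of the Ideal Gas point (mine, not from the thread): each particle's velocity is an unpredictable random draw, yet the aggregate statistic is stable on every run.

    import random

    N = 100_000
    # Individual velocities: unpredictable draws from a Gaussian.
    velocities = [random.gauss(0.0, 1.0) for _ in range(N)]
    # Mean squared velocity (a temperature analogue) comes out
    # reliably close to 1.0 run after run.
    print(sum(v * v for v in velocities) / N)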

Re: [agi] SOTA

2006-10-20 Thread Richard Loosemore
BillK wrote: On 10/20/06, Richard Loosemore [EMAIL PROTECTED] wrote: For you to blithely say Most normal speaking requires relatively little 'intelligence' is just mind-boggling. I am not trying to say that language skills don't require a human level of intelligence. That's obvious

Re: [agi] SOTA

2006-10-20 Thread Richard Loosemore
it. There is just no point. What you said above is just flat-out wrong from beginning to end. I have done research in that field, and taught postgraduate courses in it, and what you are saying is completely divorced from reality. Richard Loosemore

Re: [agi] SOTA

2006-10-20 Thread Richard Loosemore
be disastrous. I realise that I have been tempted to explain an idea in partial, cryptic terms (laying myself open to requests for more detail, or scorn), so apologies if the above seems opaque. More when I get the time. Richard Loosemore. It may be that the goals of and motivations from

Re: [agi] Language modeling

2006-10-23 Thread Richard Loosemore
-undergraduate level of comprehension. Richard Loosemore.

[agi] Motivational Systems that are stable

2006-10-25 Thread Richard Loosemore
and, as ever, I will do my best to respond to anyone who has thoughtful questions. Richard Loosemore.

[agi] Re: [singularity] Motivational Systems that are stable

2006-10-27 Thread Richard Loosemore
this is a milestone of mutual accord in a hitherto divided community. Progress! Richard Loosemore.

Re: [agi] Motivational Systems that are stable

2006-10-28 Thread Richard Loosemore
This is why I finished my essay with a request for comments based on an understanding of what I wrote. This is not a comment on my proposal, only a series of unsupported assertions that don't seem to hang together into any kind of argument. Richard Loosemore. Matt Mahoney wrote: My

Re: [agi] Motivational Systems that are stable

2006-10-28 Thread Richard Loosemore
it will work. Hope that helps. Richard Loosemore

[agi] Re: Motivational Systems that are stable

2006-10-30 Thread Richard Loosemore
, or if I had started a successful lemonade-stand business, it would of course only take ten minutes to convince an investor, given the way investors operate, but, hey ho: ten years it is. :-( Enough for now. Richard Loosemore.

Re: [agi] Natural versus formal AI interface languages

2006-10-31 Thread Richard Loosemore
to implement in an AI system. Such a language would also be a member of the class of fifth-generation computer languages. Not true. If it is too dumb to acquire a natural language then it is too dumb, period. Richard Loosemore.

Re: [agi] Natural versus formal AI interface languages

2006-11-03 Thread Richard Loosemore
if experiences are the same. Your conclusions therefore do not follow. Richard Loosemore

Re: [agi] Natural versus formal AI interface languages

2006-11-03 Thread Richard Loosemore
something working, and then go from there. This rationale is the very same rationale that drove researchers into Blocks World programs. Winograd and SHRDLU, etc. It was a mistake then: it is surely just as much of a mistake now. Richard Loosemore.

Re: [agi] The concept of a KBMS

2006-11-06 Thread Richard Loosemore
these interfaces would help ... but it would be overstating the case to say that this includes all AI designs. Just wanted to make that disclaimer, that's all. Richard Loosemore.

Re: [agi] The concept of a KBMS

2006-11-07 Thread Richard Loosemore
question. Richard Loosemore. - Original Message - From: John Scanlon [EMAIL PROTECTED] To: agi@v2.listbox.com Sent: Monday, November 06, 2006 5:04 PM Subject: Re: [agi] The concept of a KBMS Richard Loosemore wrote: When you say that it provides ... a general AI shell, within

Re: [agi] The crux of the problem

2006-11-08 Thread Richard Loosemore
and got over it). Richard Loosemore.

Re: [agi] The crux of the problem

2006-11-08 Thread Richard Loosemore
of all those begged questions. Richard Loosemore. Ben Goertzel wrote: About http://www.physorg.com/news82190531.html Rabinovich and his colleague at the Institute for Nonlinear Science at the University of California, San Diego, Ramon Huerta, along with Valentin Afraimovich

Re: [agi] On What Is Thought

2006-11-09 Thread Richard Loosemore
not speak to what they might do in the future. I cannot see how anyone could come to a strong conclusion about the uselessness of deploying that internal knowledge. Richard Loosemore *** Introspection, after all, is what all AI researchers use as the original source of their algorithms

Re: [agi] On What Is Thought

2006-11-10 Thread Richard Loosemore
it is with redefinitions of the term understanding to be synonymous with a variety of compression. This is an egregious distortion of the real meaning of the term, and *everything* that follows from that distortion is just nonsense. Richard Loosemore.

Re: [agi] On What Is Thought

2006-11-10 Thread Richard Loosemore
say, I will try to see if your book contains material which evades this trap; my understanding of your paper made me suspect not, but I will suspend judgment. Richard Loosemore.

Re: [agi] On What Is Thought

2006-11-10 Thread Richard Loosemore
in Bristol. occam is a beautiful language in some ways, diabolically infuriating in others. Richard Loosemore

Re: [agi] Natural versus formal AI interface languages

2006-11-12 Thread Richard Loosemore
(to coin a phrase) debunked every which way from Sunday. ;-) Richard Loosemore

Re: [agi] A question on the symbol-system hypothesis

2006-11-12 Thread Richard Loosemore
objection is not so much that it is nakedly wrong, as that it is diabolically inconsistent with a lot of stuff, and untested). From what you write, I think it was the latter issue that you were referring to. Richard Loosemore. John Scanlon wrote: I get the impression that a lot of people

Re: [agi] Natural versus formal AI interface languages

2006-11-12 Thread Richard Loosemore
in the development of real world knowledge) are posited to play a significant role in the learning of grammar in humans. As such, these proofs say nothing whatsoever about the learning of NL grammars. I agree they do have other limitations, of the sort you suggest below. Richard Loosemore. Rather

Re: [agi] A question on the symbol-system hypothesis

2006-11-13 Thread Richard Loosemore
symbol grounding, perhaps other issues. I think all of us have moved on from most of the simplistic GOFAI ideas. Richard Loosemore John Scanlon wrote: I was referring to the kind of symbol-system hypothesis that Searle's Chinese room and Hubert Dreyfus's writings attack, and wondering

Re: [agi] A question on the symbol-system hypothesis

2006-11-13 Thread Richard Loosemore
Pei Wang wrote: On 11/13/06, Richard Loosemore [EMAIL PROTECTED] wrote: But Now you have me really confused, because Searle's attack would have targeted your approach, my approach and Ben's approach equally: none of us have moved on from the position he was attacking! The situation

Re: [agi] A question on the symbol-system hypothesis

2006-11-15 Thread Richard Loosemore
degree of comprehension by quoting numbers of bits. Richard Loosemore

Re: [agi] A question on the symbol-system hypothesis

2006-11-15 Thread Richard Loosemore
Matt Mahoney wrote: Richard Loosemore [EMAIL PROTECTED] wrote: Understanding 10^9 bits of information is not the same as storing 10^9 bits of information. That is true. Understanding n bits is the same as compressing some larger training set that has an algorithmic complexity of n bits
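Mahoney's claim can be stated in algorithmic-information terms (this restatement is mine; it is exactly the identification Loosemore disputes): for a universal machine $U$, the algorithmic complexity of a string $x$ is

\[ K(x) = \min \{ |p| : U(p) = x \}, \]

and "understanding $n$ bits" is then equated with finding a program of length near $K(D) = n$ bits that reproduces a larger training set $D$. The dispute in the thread is whether that equation captures the ordinary meaning of understanding at all.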

Re: [agi] A question on the symbol-system hypothesis

2006-11-15 Thread Richard Loosemore
of what a hurricane is. 5) I have looked at your paper and my feelings are exactly the same as Mark's: theorems developed on erroneous assumptions are worthless. Richard Loosemore

Re: [agi] Natural versus formal AI interface languages

2006-11-16 Thread Richard Loosemore
depend on any special assumptions about the nature of learning. Richard Loosemore wrote: I beg to differ. IIRC the sense of learning they require is induction over example sentences. They exclude the use of real world knowledge, in spite of the fact that such knowledge (or at least primitives
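The learnability proofs at issue are presumably Gold-style results (an assumption on my part; the thread does not name them): in Gold's identification-in-the-limit framework the learner sees only a stream of example sentences $s_1, s_2, \ldots$ from the target language and must eventually settle on a correct grammar,

\[ \exists n_0 \; \forall n \ge n_0 : M(s_1, \ldots, s_n) = G_L, \]

and the classic negative result is that no superfinite class of languages is identifiable from positive examples alone. Loosemore's objection is that this setup excludes exactly the real-world knowledge a human learner brings to bear, so the negative results do not transfer to natural-language learning.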

Re: [agi] A question on the symbol-system hypothesis

2006-11-16 Thread Richard Loosemore
Matt Mahoney wrote: Richard Loosemore [EMAIL PROTECTED] wrote: 5) I have looked at your paper and my feelings are exactly the same as Mark's: theorems developed on erroneous assumptions are worthless. Which assumptions are erroneous? Marcus Hutter's work is about abstract idealizations

Re: [agi] META: Politeness

2006-11-17 Thread Richard Loosemore
Ben Goertzel wrote: Rings and Models are appropriated terms, but the mathematicians involved would never be so stupid as to confuse them with the real things. Marcus Hutter and yourself are doing precisely that. I rest my case. Richard Loosemore Please, let us avoid explicitly insulting one

Re: [agi] RSI - What is it and how fast?

2006-11-17 Thread Richard Loosemore
to infinity... a spurious argument, of course, because they can go in any direction. Richard Loosemore

Re: [agi] A question on the symbol-system hypothesis

2006-11-17 Thread Richard Loosemore
Ben Goertzel wrote: Rings and Models are appropriated terms, but the mathematicians involved would never be so stupid as to confuse them with the real things. Marcus Hutter and yourself are doing precisely that. I rest my case. Richard Loosemore IMO these analogies are not fair

Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-11-19 Thread Richard Loosemore
, said Marvin and trudged away. ** Richard Loosemore

Re: [agi] Understanding Natural Language

2006-11-23 Thread Richard Loosemore
here before (Levelt's Speaking) in which the author takes apart a single conversational exchange consisting of a couple of short sentences. Richard Loosemore J. Storrs Hall, PhD. wrote: It was a true solar-plexus blow, and completely knocked out, Perkins staggered back against the instrument

Re: [agi] Natural versus formal AI interface languages

2006-11-24 Thread Richard Loosemore
something that was already stretched. But maybe that was not what you meant. I stand ready to be corrected, if it turns out I have goofed. Richard Loosemore.

Re: [agi] Natural versus formal AI interface languages

2006-11-25 Thread Richard Loosemore
to human language really was? It sounds like Immerman is putting the significance of complexity classes on firmer ground, but not changing the nature of what they are saying. Richard Loosemore -- Ben On 11/24/06, Richard Loosemore [EMAIL PROTECTED] wrote: Ben Goertzel wrote

Re: [agi] Natural versus formal AI interface languages

2006-11-25 Thread Richard Loosemore
are making with respect to the computational complexity of processes like grammar induction and the evolutionary construction of learning systems. We are coming from similar points of view, but reaching diametrically opposed conclusions. Richard Loosemore.

Re: [agi] Understanding Natural Language

2006-11-26 Thread Richard Loosemore
no reason to suppose that such a framework heads in the direction of a system that is intelligent. You could build an entire system using the framework, and then do some experiments, and then I'd be convinced. But short of that I don't see any reason to be optimistic. Richard Loosemore

Re: [agi] A question on the symbol-system hypothesis

2006-11-29 Thread Richard Loosemore
Philip Goetz wrote: On 11/17/06, Richard Loosemore [EMAIL PROTECTED] wrote: I was saying that *because* (for independent reasons) these people's usage of terms like intelligence is so disconnected from commonsense usage (they idealize so extremely that the sense of the word no longer bears

Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-11-30 Thread Richard Loosemore
, at some point in the future. Richard Loosemore wrote: The point I am heading towards, in all of this, is that we need to unpack some of these ideas in great detail in order to come to sensible conclusions. I think the best way would be in a full length paper, although I did

Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-12-01 Thread Richard Loosemore
arguments. Does that make sense? Richard Loosemore

Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-12-01 Thread Richard Loosemore
, in other words, is in the details. Richard Loosemore. Philip Goetz [EMAIL PROTECTED] wrote: On 11/19/06, Richard Loosemore wrote: The goal-stack AI might very well turn out simply not to be a workable design at all! I really do mean that: it won't become

Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-12-01 Thread Richard Loosemore
Samantha Atkins wrote: On Nov 30, 2006, at 12:21 PM, Richard Loosemore wrote: Recursive Self Improvement? The answer is yes, but with some qualifications. In general RSI would be useful to the system IF it were done in such a way as to preserve its existing motivational priorities

Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-12-01 Thread Richard Loosemore
at least thirty years ago (with the exception of a few diehards in North Wales and Cambridge). Richard Loosemore [With apologies to Fergus, Nick and Ian, who may someday come across this message and start flaming me].

Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-12-01 Thread Richard Loosemore
on a goal stack approach. You are repeating the same mistakes that I already dealt with. Richard Loosemore

Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-12-02 Thread Richard Loosemore
Philip Goetz wrote: On 12/1/06, Richard Loosemore [EMAIL PROTECTED] wrote: The questions you asked above are predicated on a goal stack approach. You are repeating the same mistakes that I already dealt with. Some people would call it repeating the same mistakes I already dealt with. Some

[agi] Re: Motivational Systems of an AI

2006-12-03 Thread Richard Loosemore
Matt Mahoney wrote: --- Richard Loosemore [EMAIL PROTECTED] wrote: I am disputing the very idea that monkeys (or rats or pigeons or humans) have a part of the brain which generates the reward/punishment signal for operant conditioning. This is behaviorism. I find myself completely

[agi] Re: Motivational Systems of an AI

2006-12-03 Thread Richard Loosemore
J. Storrs Hall, PhD. wrote: On Friday 01 December 2006 23:42, Richard Loosemore wrote: It's a lot easier than you suppose. The system would be built in two parts: the motivational system, which would not change substantially during RSI, and the thinking part (for want of a better term

Re: [agi] The Singularity

2006-12-05 Thread Richard Loosemore
is the present approach to AI then I tend to agree with you, John: ludicrous. Richard Loosemore

Re: [agi] Goals and subgoals

2006-12-07 Thread Richard Loosemore
just stated). Richard Loosemore. SUBGOAL PROMOTION AND ALIENATION One very common phenomenon is when a supergoal is erased, but one of its subgoals is promoted to the level of supergoal. For instance, originally one may become
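A minimal Python sketch of the goal-stack idiom and the subgoal-promotion phenomenon described above (all names invented for illustration; this is not anyone's actual design):

    class GoalSystem:
        # Hypothetical goal hierarchy: each supergoal owns a list of subgoals.
        def __init__(self):
            self.supergoals = []   # top-level goals
            self.subgoals = {}     # supergoal -> its subgoals

        def add(self, supergoal, subs):
            self.supergoals.append(supergoal)
            self.subgoals[supergoal] = list(subs)

        def erase(self, supergoal):
            # Subgoal promotion: when a supergoal is erased, its subgoals
            # can survive as supergoals in their own right.
            self.supergoals.remove(supergoal)
            self.supergoals.extend(self.subgoals.pop(supergoal, []))

    # Example: "get rich" is erased, but "start a company" lives on.
    g = GoalSystem()
    g.add("get rich", ["start a company"])
    g.erase("get rich")
    print(g.supergoals)   # ['start a company']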

Re: [agi] RE: [extropy-chat] Criticizing One's Own Goals---Rational?

2006-12-07 Thread Richard Loosemore
of repetitions of the same ideological statement). Richard Loosemore.

Re: [agi] Goals and subgoals

2006-12-07 Thread Richard Loosemore
Richard Loosemore. By discussing goals, I was not trying to imply that all aspects of a mind (or even most) need to, or should, operate according to an explicit goal hierarchy. I believe that the human mind incorporates **both** a set of goal stacks (mainly useful in deliberative thought

Re: [agi] Geoffrey Hinton's ANNs

2006-12-12 Thread Richard Loosemore
Richard Loosemore

Re: [agi] Geoffrey Hinton's ANNs

2006-12-12 Thread Richard Loosemore
Richard Loosemore

Re: [agi] The Edge: The Neurology of Self-Awareness by VS Ramachandran

2007-01-14 Thread Richard Loosemore
the time. Hope that helps. Richard Loosemore

[agi] About the brain-emulation route to AGI

2007-01-22 Thread Richard Loosemore
Generation Project and (Naive) Neural Networks. Richard Loosemore.

[agi] Relevance of Probability

2007-02-04 Thread Richard Loosemore
unreasonable position, that's all ;-). Richard Loosemore.

Re: [agi] Relevance of Probability

2007-02-04 Thread Richard Loosemore
!) cognitive system is a direct rejection of the idea that I was asking you to consider as a hypothesis. I *know* you don't believe it to be true! ;-) What I was trying to do was to ask on what grounds you reject it. Richard Loosemore.

Re: [agi] Relevance of Probability

2007-02-04 Thread Richard Loosemore
the type of my question is). Richard Loosemore. Pei Wang wrote: Richard, The assumption is that the underlying dynamics of things at the concept level (or logical term level, if concept is not to your liking) can be meaningfully described by things that look something like probabilities. I

Re: [agi] Relevance of Probability

2007-02-04 Thread Richard Loosemore
Pei Wang wrote: On 2/4/07, Richard Loosemore [EMAIL PROTECTED] wrote: I fully accept that you don't care if the human mind does it that way, because you want NARS to do it differently. My question was at a higher level. If we knew for sure that the human mind was using something like

Re: [agi] Relevance of Probability

2007-02-04 Thread Richard Loosemore
interpretation of Oaksford and Chater is that it is actually caused by too much of it. Richard Loosemore.

Re: [agi] Relevance of Probability

2007-02-05 Thread Richard Loosemore
, is that the possibility I raised is still completely open. Richard Loosemore.

Gamblers Probability Judgements [WAS Re: [agi] Betting and multiple-component truth values]

2007-02-08 Thread Richard Loosemore
goal. Just a thought. Richard Loosemore. Charles D Hixson wrote: That's not what I meant. I don't think that people really operate on the basis of probabilistic calculations, but rather on short-range attractors. What I see them being motivated by is the dream of riches, which feels closer

Re: [agi] conjunction fallacy

2007-02-10 Thread Richard Loosemore
is the best, of the two suggested above. Hint: don't go for the dumb one, because it is not really smart enough to be an Artificial GENERAL Intelligence. Regards Richard Loosemore.

Re: [agi] conjunction fallacy

2007-02-11 Thread Richard Loosemore
gts wrote: On Sat, 10 Feb 2007 13:41:33 -0500, Richard Loosemore [EMAIL PROTECTED] wrote: The meat of this argument is all in what exact type of AGI you claim is the best, of the two suggested above. The best AGI in this context would be one capable of avoiding the conjunction fallacy
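For context, the fallacy in question: for any events $A$ and $B$,

\[ P(A \wedge B) = P(A)\,P(B \mid A) \le P(A), \]

so judging a conjunction (in the famous Linda problem, "Linda is a bank teller and is active in the feminist movement") to be more probable than one of its conjuncts ("Linda is a bank teller") violates the probability axioms. The disagreement in the thread is over which of the two proposed AGI designs should count as the smarter one, given that humans routinely make this error.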

Re: [agi] conjunction fallacy

2007-02-12 Thread Richard Loosemore
gts wrote: On Sun, 11 Feb 2007 11:41:31 -0500, Richard Loosemore [EMAIL PROTECTED] wrote: P.S. This isn't the first time this topic has come up. For a now famous example, see my essay at http://sl4.org/archive/0605/14748.html and the follow-up at http://sl4.org/archive/0605/14773.html

Re: [agi] Enumeration of useful genetic biases for AGI

2007-02-14 Thread Richard Loosemore
of that machinery. And what is the boundary between an ontological bias and a lesser tendency to learn a certain kind of thing, which can nevertheless be overridden through experience? Richard Loosemore. Ben Goertzel wrote: Hi, In a recent offlist email dialogue with an AI researcher, he made

Languages for AGI [WAS Re: [agi] Priors and indefinite probabilities]

2007-02-17 Thread Richard Loosemore
, it was different. Lisp and Prolog, for example, represented particular ways of thinking about the task of building an AI. The framework for those paradigms was strongly represented by the language itself. Richard Loosemore.

[agi] Development Environments for AI (a few non-religious comments!)

2007-02-19 Thread Richard Loosemore
the general problem. Again, apologies for coyness: possible patent pending and all that. Richard Loosemore.

Re: [agi] The Missing Piece

2007-02-19 Thread Richard Loosemore
an alternative approach. Richard Loosemore.

Re: Mystical Emergence/Complexity [WAS Re: [agi] The Missing Piece]

2007-02-19 Thread Richard Loosemore
Bo Morgan wrote: On Mon, 19 Feb 2007, Richard Loosemore wrote: ) Bo Morgan wrote: ) ) On Mon, 19 Feb 2007, John Scanlon wrote: ) ) ) Is there anyone out there who has a sense that most of the work being ) ) done in AI is still following the same track that has failed for ) ) fifty years

Re: [agi] The Missing Piece

2007-02-19 Thread Richard Loosemore
of the symbols being encoded at that hardware-dependent level. I haven't seen any neuroscientists who talk that way show any indication that they have a clue that there are even problems with it, let alone that they have good answers to those problems. In other words, I don't think I buy it. Richard

Re: [agi] Development Environments for AI (a few non-religious comments!)

2007-02-20 Thread Richard Loosemore
Chuck Esterbrook wrote: On 2/19/07, Richard Loosemore [EMAIL PROTECTED] wrote: Wow, I leave off email for two days and a 55-message Religious War breaks out! ;-) I promise this is nothing to do with languages I do or do not like (i.e. it is non-religious...). As many people pointed out

Re: [agi] Has anyone read On Intelligence

2007-02-21 Thread Richard Loosemore
banging the rocks together. Having said that, there is an element of truth in what Hawkins says. My personal opinion is that he has only a fragment of the truth, however, and is mistaking it for the whole deal. Richard Loosemore.

Re: [agi] Has anyone read On Intelligence

2007-02-22 Thread Richard Loosemore
construction of AI systems. Richard Loosemore Eric Baum wrote: Josh: The other idea in OI worth noting is Mountcastle's Principle, that all of the cortex seems to be doing the same thing. Hawkins gets credit for pointing it out, but of course it was a published observation

Re: [agi] Sussman robust systems paper

2007-02-27 Thread Richard Loosemore
new theme that I missed? Richard Loosemore. Mark Waser wrote: I think that it's also very important/interesting to note that his subject headings exactly specify the development environment that Richard Loosemore and others are pushing for (i.e. An Infrastructure to Support

Re: [agi] Marvin Minsky's 2001 AI Talk on podcast

2007-03-05 Thread Richard Loosemore
-systems/complexity approach, Ben has his eclectic approach, Pei has his NARS approach and Peter Voss has something else again (does it make sense to call it a neural-gas approach, Peter?). Richard Loosemore.

Re: [agi] Marvin Minsky's 2001 AI Talk on podcast

2007-03-05 Thread Richard Loosemore
Bo Morgan wrote: On Mon, 5 Mar 2007, Richard Loosemore wrote: ) Rowan Cox wrote: ) Hey all, ) ) Just thought I'd briefly delurk to post a link (or three,..). I ) believe this is a talk from 2001, so everyone else has probably heard ) it already ;) ) ) Part 1: ) http
