Re: [agi] Circular definitions of intelligence

2007-04-26 Thread Richard Loosemore
. Consider that, folks, to be a challenge: to those who think there is such a definition, I await your reply. Richard Loosemore

Re: [agi] Re: Why do you think your AGI design will work?

2007-04-26 Thread Richard Loosemore
Russell Wallace wrote: On 4/26/07, *Richard Loosemore* [EMAIL PROTECTED] wrote: Russell Wallace wrote: I disagree. The human cognitive system is very closely tied to the hardware it runs on. Understanding it in anywhere near the level of detail

Re: [agi] Circular definitions of intelligence

2007-04-26 Thread Richard Loosemore
positives)? If a proposed definition fails that test, it goes squarely in my (b) category above. And if all the proposed definitions go in either (a) or (b) or (c), then they are all, as I said before, pointless. Richard Loosemore.

Re: [agi] Circular definitions of intelligence

2007-04-26 Thread Richard Loosemore
that no formal definition is possible, and that the best we can do is produce informal definitions, that's GREAT. I think that informal lists of characteristics can sometimes be a help. But there is a big difference between that and claiming formal definitions. Richard Loosemore

Re: [agi] Circular definitions of intelligence

2007-04-26 Thread Richard Loosemore
might not be disagreeing. Richard Loosemore Pragmatically, we can posit a particular pattern-recognition system S, and talk about all the patterns in the graph of f that S can recognize. If we let S = a typical human, then I suggest that the above definition of intelligence captures a lot

Re: [agi] Circular definitions of intelligence

2007-04-26 Thread Richard Loosemore
it was brainless of me to continue ;-) Richard Loosemore.

Re: [agi] Circular definitions of intelligence

2007-04-26 Thread Richard Loosemore
% of the AI research person-hours might be getting poured down the toilet because some people are claiming scientific validity for what they do, when in fact they are just pissing into the wind. This is no joke. Richard Loosemore. My definition covers your point: If I drop this pencil, does

[agi] Circular definitions of intelligence

2007-04-25 Thread Richard Loosemore
in terms of agents, goals etc. But when you dissect the meanings of terms like agents and goals you find the same surreptitious dependence on subjective terms. Pure nonsense. Sham science. Richard Loosemore.

Re: [agi] Re: Why do you think your AGI design will work?

2007-04-25 Thread Richard Loosemore
DEREK ZAHN wrote: Richard Loosemore writes: The best we can do is to use the human design as a close inspiration -- we do not have to make an exact copy, we just need to get close enough to build something in the same family of systems, that's all -- and set up progress criteria based on how

Re: [agi] Re: Why do you think your AGI design will work?

2007-04-25 Thread Richard Loosemore
the only response I have gotten from anyone is the same 'don't really think it is a problem'. Richard Loosemore

Re: [agi] Re: Why do you think your AGI design will work?

2007-04-25 Thread Richard Loosemore
DEREK ZAHN wrote: Richard Loosemore writes: I am talking about distilling the essential facts uncovered by cognitive science into a unified formalism. Just imagine all of your favorite models and theories in Cog Sci, integrated in such a way that they become an actual system

Re: [agi] Re: Why do you think your AGI design will work?

2007-04-25 Thread Richard Loosemore
never use these to build a system. Certainly this is not the kind of stuff I was talking about. Richard Loosemore

Re: [agi] Circular definitions of intelligence

2007-04-25 Thread Richard Loosemore
intelligences. The trick is to find a non-circular definition that leaves out the thermostats, cuddly toys, Conway's Game of Life and the little program I once wrote in Fortran that printed out HAPPY BIRTHDAY. Very old argument. Richard Loosemore.

Re: [agi] Re: Why do you think your AGI design will work?

2007-04-25 Thread Richard Loosemore
that the argument is wrong, that you cannot divulge? Richard Loosemore

Re: [agi] Re: Why do you think your AGI design will work?

2007-04-25 Thread Richard Loosemore
is not on trying (hard) to actually do the integration. Mine is. Richard Loosemore. On 4/25/07, *Richard Loosemore* [EMAIL PROTECTED] wrote: Benjamin Goertzel wrote: Derek -- As examples I'd vote for -- Bernard Baars' global workspace

Re: [agi] Re: Why do you think your AGI design will work?

2007-04-25 Thread Richard Loosemore
close to the kind of approach that I am talking about anyway, so your conclusions about your own effort may not carry over. Richard Loosemore.

Re: [agi] Re: Why do you think your AGI design will work?

2007-04-25 Thread Richard Loosemore
Eugen Leitl wrote: On Wed, Apr 25, 2007 at 02:02:44PM -0400, Richard Loosemore wrote: I am the one who is actually getting on with the job and doing it, and I say that not only is it doable, but as far as I can see it is converging on an extremely powerful, consistent and usable model

Re: [agi] Circular definitions of intelligence

2007-04-25 Thread Richard Loosemore
on the claim that I actually made. Richard Loosemore.

Re: Goals of AGI (was Re: [agi] AGI interests)

2007-04-21 Thread Richard Loosemore
Matt Mahoney wrote: --- Richard Loosemore [EMAIL PROTECTED] wrote: H... I think my point may have gotten lost in the confusion here. What I was trying to say was *suppose* I produced an AGI design that used pretty much the same principles as those that operate in the human cognitive

Re: Goals of AGI (was Re: [agi] AGI interests)

2007-04-20 Thread Richard Loosemore
= capability to compress text and video better than any technology that now exists. The existence of such compression, proved by a simple test, would prove the existence of AGI. Would an AGI with exactly my (human) intelligence be able to pass your compression test? Richard Loosemore

Re: Goals of AGI (was Re: [agi] AGI interests)

2007-04-20 Thread Richard Loosemore
Matt Mahoney wrote: --- Richard Loosemore [EMAIL PROTECTED] wrote: Would an AGI with exactly my (human) intelligence be able to pass your compression test? Only if your intelligence was uploaded to a deterministic machine. The human brain is not deterministic. Then your test is surely

Re: Goals of AGI (was Re: [agi] AGI interests)

2007-04-20 Thread Richard Loosemore
Matt Mahoney wrote: --- Richard Loosemore [EMAIL PROTECTED] wrote: Matt Mahoney wrote: --- Richard Loosemore [EMAIL PROTECTED] wrote: Would an AGI with exactly my (human) intelligence be able to pass your compression test? Only if your intelligence was uploaded to a deterministic machine

Re: [tt] [agi] Definition of 'Singularity' and 'Mind'

2007-04-18 Thread Richard Loosemore
that I would not want that. Richard Loosemore Eric B. Ramsay wrote: Actually Richard, these are the things you imagine you would like to do given your current level of intelligence. I suspect very much that the moment you went super intelligent there would be a paradigm change in what you

Re: [agi] dopamine and reward prediction error

2007-04-17 Thread Richard Loosemore
William Pearson wrote: On 13/04/07, Richard Loosemore [EMAIL PROTECTED] wrote: To convey this subtlety as simply as I can, I would suggest that you ask yourself how much intelligence is being assumed in the preprocessing system that does the work of (a) picking out patterns to be considered

[agi] Definition of 'Singularity' and 'Mind'

2007-04-17 Thread Richard Loosemore
them. Or, to be precise, it is not at all obvious that such a situation will ever exist. Richard Loosemore.

Designing something smarter than yourself [WAS Re: [agi] Low I.Q. AGI]

2007-04-16 Thread Richard Loosemore
Eric Baum wrote, quoting Richard: efforts (some people seem to think that there is something inherently impossible about a human being able to design something smarter than itself, but that idea is really just science-fiction hearsay, not grounded in any real limitations).

Re: [agi] dopamine and reward prediction error

2007-04-13 Thread Richard Loosemore
, and who keep repeating mistakes and running around in circles, we are going to get absolutely nowhere. That makes me think of something else I meant to say, but I will put that in a separate message. Richard Loosemore Eugen Leitl wrote: http://scienceblogs.com

[agi] A Course on Foundations of Theoretical Psychology...

2007-04-13 Thread Richard Loosemore
. What if I organised a summer school to do such a thing? This is just my spontaneous first thought about the idea. If there was enough initial interest, I would be happy to scope it out more thoroughly. Richard Loosemore

Re: [agi] A Course on Foundations of Theoretical Psychology...

2007-04-13 Thread Richard Loosemore
was valuable/not valuable in each one. Still, something should be squeezable into a short course. I'm going to check out some local possible venues (e.g. by the edge of a lake in Upstate NY in the summer time). Will get back soon. Richard Loosemore.

Re: [agi] My proposal for an AGI agenda

2007-04-09 Thread Richard Loosemore
only 90 days of use. Alas, I fear that any attempt to build a horse (sorry, an AI) by making a committee of all the existing AI techniques is going to produce something that smells exactly as sweet as a camel. Richard Loosemore

Re: [agi] Growing a Brain in Switzerland

2007-04-05 Thread Richard Loosemore
who thinks that that last bit is also encoded in the human genome has got a heck of a lot of work to do ... and only 14,000 genes to write it down on. Richard Loosemore Eugen Leitl wrote: On Thu, Apr 05, 2007 at 02:03:32PM +0200, Shane Legg wrote: I didn't mean to imply that all

Re: [agi] AGI interests

2007-03-26 Thread Richard Loosemore
our minds to it. I am very impatient to get things done, and this sometimes comes out in a brusque tone that nobody should take seriously. ;-) Richard Loosemore.

Re: Environments and Languages for AGI [WAS Re: [agi] My proposal for an AGI agenda]

2007-03-25 Thread Richard Loosemore
David Clark wrote: - Original Message - From: Richard Loosemore [EMAIL PROTECTED] To: agi@v2.listbox.com Sent: Saturday, March 24, 2007 1:46 PM Subject: Environments and Languages for AGI [WAS Re: [agi] My proposal for an AGI agenda] As for all the other talk on this list, recently

Environments and Languages for AGI [WAS Re: [agi] My proposal for an AGI agenda]

2007-03-24 Thread Richard Loosemore
a clue about what they are trying to build, and why, the question of what language (or environment) they need to use will answer itself. Richard Loosemore.

Re: Environments and Languages for AGI [WAS Re: [agi] My proposal for an AGI agenda]

2007-03-24 Thread Richard Loosemore
Ben Goertzel wrote: Richard Loosemore wrote: As for all the other talk on this list, recently, about programming languages and the need for math, etc., I find myself amused by the irrelevance of most of it: when someone gets a clue about what they are trying to build, and why, the question

Re: [agi] Why evolution? Why not neuroscience?

2007-03-23 Thread Richard Loosemore
the things that work? Even if it did not use DNA-like strings of symbols? If they were doing this, I'd have to pay more attention. I don't think this is happening, but I can't be sure. If anyone has any more perspective on this, I'd be interested. Richard Loosemore

Re: [agi] Emergence

2007-03-20 Thread Richard Loosemore
against 'emergence' and apply them in the context of this one example, a lot of them, I believe, will start to look rather silly. Richard Loosemore. Ben Goertzel wrote: Like so many other terms relevant to AGI, emergence has a lot of different meanings. Some have used a very strong

Re: [agi] Emergence

2007-03-20 Thread Richard Loosemore
;-) ) it is the fact that you try to paint some people as extremists when in fact they are just the same as you. Richard Loosemore. P.S. About Daniel Amit: I haven't read the book, but are you saying he demonstrates coherent, *meaningful* symbol processing as the transition of the dynamics

Re: [agi] Logical representation

2007-03-13 Thread Richard Loosemore
Russell Wallace wrote: On 3/12/07, *Richard Loosemore* [EMAIL PROTECTED] wrote: I'm still not quite sure if what I said came across clearly, because some of what you just said is so far away from what I intended that I have to make some kind of response

Re: [agi] Logical representation

2007-03-13 Thread Richard Loosemore
Russell Wallace wrote: On 3/13/07, *Richard Loosemore* [EMAIL PROTECTED] wrote: Good god no. It *is* the program. It is the architecture of an AI. So it is part of the AI then, like I said. Regarding the use of readable names. The atomic units

Re: [agi] Logical representation

2007-03-13 Thread Richard Loosemore
to resist that temptation. The more concrete stuff, when it arrives, is going to take some of this stuff as a course prerequisite. Richard Loosemore.

Re: [agi] Logical representation

2007-03-13 Thread Richard Loosemore
how the motivational system is affecting the behavior. For that, we need automatic triggers that attach to the things we consider bad. Richard Loosemore.

Re: [agi] Logical representation

2007-03-12 Thread Richard Loosemore
too close to the material to be able to see where it is confusing, but I also do not want to release the full text until it is in proper shape. Richard Loosemore.

Re: [agi] My proposal for an AGI agenda

2007-03-12 Thread Richard Loosemore
am not sure what is left that really deserves to be called module any more. My feeling is that there is a continuum, rather than a module versus non-module way of looking at things. Richard Loosemore

Re: [agi] Modules; was My proposal for an AGI agenda

2007-03-12 Thread Richard Loosemore
it exactly fits with the way I see the processes of conceptual combination happening, anyway. Richard Loosemore. P.S. Off topic: Does anyone know of a really reliable brand of salad spinner? Hate the damn things: they keep breaking on me. J. Storrs Hall, PhD. wrote: On Monday 12 March

Re: [agi] Logical representation

2007-03-12 Thread Richard Loosemore
Russell Wallace wrote: On 3/12/07, *Richard Loosemore* [EMAIL PROTECTED] wrote: I'm not sure if you're just summarizing what someone would mean if they were talking about 'logical representation,' or advocating it. I'm saying there are 5 different things

Re: [agi] Logical representation

2007-03-12 Thread Richard Loosemore
Russell Wallace wrote: On 3/12/07, *Richard Loosemore* [EMAIL PROTECTED] wrote: This is puzzling, in a way, because this is my ammunition that you are using here! That is exactly what I am trying to do: invent an AIXML. I am a little baffled because you

[agi] Salad spinners

2007-03-12 Thread Richard Loosemore
J. Storrs Hall, PhD. wrote: On Monday 12 March 2007 10:42, Richard Loosemore wrote: P.S. Off topic: Does anyone know of a really reliable brand of salad spinner? Hate the damn things: they keep breaking on me. http://www.gmi-inc.com/Products/beckman%20tl100.htm Josh Aw, heck, I

Re: [agi] Modules; was My proposal for an AGI agenda

2007-03-12 Thread Richard Loosemore
J. Storrs Hall, PhD. wrote: On Monday 12 March 2007 10:42, Richard Loosemore wrote: ... Overlooking the practical deficiencies of actual Lego as a material for dealing with food, one could imagine a kind of neoLego that really was adequate for making all the tools in my kitchen. Grant me

Re: [agi] Logical representation

2007-03-12 Thread Richard Loosemore
names? Huh? That sounds like, after all, I communicated nothing whatsoever. I don't know if that is supposed to be a serious point or not. I will assume not. Richard Loosemore. Russell Wallace wrote: Ah! That makes your position much clearer, thanks. To paraphrase to make sure I understand

Re: [agi] Marvin Minsky's 2001 AI Talk on podcast

2007-03-05 Thread Richard Loosemore
-systems/complexity approach, Ben has his eclectic approach, Pei has his NARS approach and Peter Voss has something else again (does it make sense to call it a neural-gas approach, Peter?). Richard Loosemore.

Re: [agi] Marvin Minsky's 2001 AI Talk on podcast

2007-03-05 Thread Richard Loosemore
Bo Morgan wrote: On Mon, 5 Mar 2007, Richard Loosemore wrote: Rowan Cox wrote: Hey all, Just thought I'd briefly delurk to post a link (or three...). I believe this is a talk from 2001, so everyone else has probably heard it already ;) Part 1: http

Re: [agi] Sussman robust systems paper

2007-02-27 Thread Richard Loosemore
new theme that I missed? Richard Loosemore. Mark Waser wrote: I think that it's also very important/interesting to note that his subject headings exactly specify the development environment that Richard Loosemore and others are pushing for (i.e. An Infrastructure to Support

Re: [agi] Has anyone read On Intelligence

2007-02-22 Thread Richard Loosemore
construction of AI systems. Richard Loosemore Eric Baum wrote, quoting Josh: The other idea in OI worth noting is Mountcastle's Principle, that all of the cortex seems to be doing the same thing. Hawkins gets credit for pointing it out, but of course it was a published observation

Re: [agi] Has anyone read On Intelligence

2007-02-21 Thread Richard Loosemore
banging the rocks together. Having said that, there is an element of truth in what Hawkins says. My personal opinion is that he has only a fragment of the truth, however, and is mistaking it for the whole deal. Richard Loosemore.

Re: [agi] Development Environments for AI (a few non-religious comments!)

2007-02-20 Thread Richard Loosemore
Chuck Esterbrook wrote: On 2/19/07, Richard Loosemore [EMAIL PROTECTED] wrote: Wow, I leave off email for two days and a 55-message Religious War breaks out! ;-) I promise this is nothing to do with languages I do or do not like (i.e. it is non-religious...). As many people pointed out

[agi] Development Environments for AI (a few non-religious comments!)

2007-02-19 Thread Richard Loosemore
the general problem. Again, apologies for coyness: possible patent pending and all that. Richard Loosemore.

Re: [agi] The Missing Piece

2007-02-19 Thread Richard Loosemore
an alternative approach. Richard Loosemore.

Re: Mystical Emergence/Complexity [WAS Re: [agi] The Missing Piece]

2007-02-19 Thread Richard Loosemore
Bo Morgan wrote: On Mon, 19 Feb 2007, Richard Loosemore wrote: Bo Morgan wrote: On Mon, 19 Feb 2007, John Scanlon wrote: Is there anyone out there who has a sense that most of the work being done in AI is still following the same track that has failed for fifty years

Re: [agi] The Missing Piece

2007-02-19 Thread Richard Loosemore
of the symbols being encoded at that hardware-dependent level. I haven't seen any neuroscientists who talk that way show any indication that they have a clue that there are even problems with it, let alone that they have good answers to those problems. In other words, I don't think I buy it. Richard

Languages for AGI [WAS Re: [agi] Priors and indefinite probabilities]

2007-02-17 Thread Richard Loosemore
, it was different. Lisp and Prolog, for example, represented particular ways of thinking about the task of building an AI. The framework for those paradigms was strongly represented by the language itself. Richard Loosemore.

Re: [agi] Enumeration of useful genetic biases for AGI

2007-02-14 Thread Richard Loosemore
of that machinery. And what is the boundary between an ontological bias and a lesser tendency to learn a certain kind of thing, which can nevertheless be overridden through experience? Richard Loosemore. Ben Goertzel wrote: Hi, In a recent offlist email dialogue with an AI researcher, he made

Re: [agi] conjunction fallacy

2007-02-12 Thread Richard Loosemore
gts wrote: On Sun, 11 Feb 2007 11:41:31 -0500, Richard Loosemore [EMAIL PROTECTED] wrote: P.S. This isn't the first time this topic has come up. For a now famous example, see my essay at http://sl4.org/archive/0605/14748.html and the follow-up at http://sl4.org/archive/0605/14773.html

Re: [agi] conjunction fallacy

2007-02-11 Thread Richard Loosemore
gts wrote: On Sat, 10 Feb 2007 13:41:33 -0500, Richard Loosemore [EMAIL PROTECTED] wrote: The meat of this argument is all in what exact type of AGI you claim is the best, of the two suggested above. The best AGI in this context would be one capable of avoiding the conjunction fallacy

Re: [agi] conjunction fallacy

2007-02-10 Thread Richard Loosemore
is the best, of the two suggested above. Hint: don't go for the dumb one, because it is not really smart enough to be an Artificial GENERAL Intelligence. Regards Richard Loosemore.

Gamblers Probability Judgements [WAS Re: [agi] Betting and multiple-component truth values]

2007-02-08 Thread Richard Loosemore
goal. Just a thought. Richard Loosemore. Charles D Hixson wrote: That's not what I meant. I don't think that people really operate on the basis of probabilistic calculations, but rather on short-range attractors. What I see them being motivated by is the dream of riches, which feels closer

Re: [agi] Relevance of Probability

2007-02-05 Thread Richard Loosemore
, is that the possibility I raised is still completely open. Richard Loosemore.

[agi] Relevance of Probability

2007-02-04 Thread Richard Loosemore
unreasonable position, that's all ;-). Richard Loosemore.

Re: [agi] Relevance of Probability

2007-02-04 Thread Richard Loosemore
!) cognitive system is a direct rejection of the idea that I was asking you to consider as a hypothesis. I *know* you don't believe it to be true! ;-) What I was trying to do was to ask on what grounds you reject it. Richard Loosemore.

Re: [agi] Relevance of Probability

2007-02-04 Thread Richard Loosemore
the type of my question is). Richard Loosemore. Pei Wang wrote: Richard, The assumption is that the underlying dynamics of things at the concept level (or logical term level, if concept is not to your liking) can be meaningfully described by things that look something like probabilities. I

Re: [agi] Relevance of Probability

2007-02-04 Thread Richard Loosemore
Pei Wang wrote: On 2/4/07, Richard Loosemore [EMAIL PROTECTED] wrote: I fully accept that you don't care if the human mind does it that way, because you want NARS to do it differently. My question was at a higher level. If we knew for sure that the human mind was using something like

Re: [agi] Relevance of Probability

2007-02-04 Thread Richard Loosemore
interpretation of Oaksford and Chater is that it is actually caused by too much of it. Richard Loosemore.

[agi] About the brain-emulation route to AGI

2007-01-22 Thread Richard Loosemore
Generation Project and (Naive) Neural Networks. Richard Loosemore.

Re: [agi] The Edge: The Neurology of Self-Awareness by VS Ramachandran

2007-01-14 Thread Richard Loosemore
the time. Hope that helps. Richard Loosemore

Re: [agi] Geoffrey Hinton's ANNs

2006-12-12 Thread Richard Loosemore
. Richard Loosemore

Re: [agi] Geoffrey Hinton's ANNs

2006-12-12 Thread Richard Loosemore
. Richard Loosemore

Re: [agi] Goals and subgoals

2006-12-07 Thread Richard Loosemore
just stated). Richard Loosemore. SUBGOAL PROMOTION AND ALIENATION One very common phenomenon is when a supergoal is erased, but one of its subgoals is promoted to the level of supergoal. For instance, originally one may become

Re: [agi] RE: [extropy-chat] Criticizing One's Own Goals---Rational?

2006-12-07 Thread Richard Loosemore
of repetitions of the same ideological statement). Richard Loosemore.

Re: [agi] Goals and subgoals

2006-12-07 Thread Richard Loosemore
. Richard Loosemore. By discussing goals, I was not trying to imply that all aspects of a mind (or even most) need to, or should, operate according to an explicit goal hierarchy. I believe that the human mind incorporates **both** a set of goal stacks (mainly useful in deliberative thought

Re: [agi] The Singularity

2006-12-05 Thread Richard Loosemore
is the present approach to AI then I tend to agree with you John: ludicrous. Richard Loosemore

[agi] Re: Motivational Systems of an AI

2006-12-03 Thread Richard Loosemore
Matt Mahoney wrote: --- Richard Loosemore [EMAIL PROTECTED] wrote: I am disputing the very idea that monkeys (or rats or pigeons or humans) have a part of the brain which generates the reward/punishment signal for operant conditioning. This is behaviorism. I find myself completely

[agi] Re: Motivational Systems of an AI

2006-12-03 Thread Richard Loosemore
J. Storrs Hall, PhD. wrote: On Friday 01 December 2006 23:42, Richard Loosemore wrote: It's a lot easier than you suppose. The system would be built in two parts: the motivational system, which would not change substantially during RSI, and the thinking part (for want of a better term

Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-12-02 Thread Richard Loosemore
Philip Goetz wrote: On 12/1/06, Richard Loosemore [EMAIL PROTECTED] wrote: The questions you asked above are predicated on a goal stack approach. You are repeating the same mistakes that I already dealt with. Some people would call it repeating the same mistakes I already dealt with. Some

Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-12-01 Thread Richard Loosemore
arguments. Does that make sense? Richard Loosemore

Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-12-01 Thread Richard Loosemore
, in other words, is in the details. Richard Loosemore. Philip Goetz [EMAIL PROTECTED] wrote: On 11/19/06, Richard Loosemore wrote: The goal-stack AI might very well turn out simply not to be a workable design at all! I really do mean that: it won't become

Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-12-01 Thread Richard Loosemore
Samantha Atkins wrote: On Nov 30, 2006, at 12:21 PM, Richard Loosemore wrote: Recursive Self Improvement? The answer is yes, but with some qualifications. In general RSI would be useful to the system IF it were done in such a way as to preserve its existing motivational priorities

Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-12-01 Thread Richard Loosemore
, at least thirty years ago (with the exception of a few diehards in North Wales and Cambridge). Richard Loosemore [With apologies to Fergus, Nick and Ian, who may someday come across this message and start flaming me].

Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-12-01 Thread Richard Loosemore
on a goal stack approach. You are repeating the same mistakes that I already dealt with. Richard Loosemore

Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-11-30 Thread Richard Loosemore
, at some point in the future. Richard Loosemore wrote: The point I am heading towards, in all of this, is that we need to unpack some of these ideas in great detail in order to come to sensible conclusions. I think the best way would be in a full length paper, although I did

Re: [agi] A question on the symbol-system hypothesis

2006-11-29 Thread Richard Loosemore
Philip Goetz wrote: On 11/17/06, Richard Loosemore [EMAIL PROTECTED] wrote: I was saying that *because* (for independent reasons) these people's usage of terms like intelligence is so disconnected from commonsense usage (they idealize so extremely that the sense of the word no longer bears

Re: [agi] Understanding Natural Language

2006-11-26 Thread Richard Loosemore
no reason to suppose that such a framework heads in the direction of a system that is intelligent. You could build an entire system using the framework, and then do some experiments, and then I'd be convinced. But short of that I don't see any reason to be optimistic. Richard Loosemore

Re: [agi] Natural versus formal AI interface languages

2006-11-25 Thread Richard Loosemore
to human language really was? It sounds like Immerman is putting the significance of complexity classes on firmer ground, but not changing the nature of what they are saying. Richard Loosemore -- Ben On 11/24/06, Richard Loosemore [EMAIL PROTECTED] wrote: Ben Goertzel wrote

Re: [agi] Natural versus formal AI interface languages

2006-11-25 Thread Richard Loosemore
are making with respect to the computational complexity of processes like grammar induction and the evolutionary construction of learning systems. We are coming from similar points of view, but reaching diametrically opposed conclusions. Richard Loosemore.

Re: [agi] Natural versus formal AI interface languages

2006-11-24 Thread Richard Loosemore
something that was already stretched. But maybe that was not what you meant. I stand ready to be corrected, if it turns out I have goofed. Richard Loosemore.

Re: [agi] Understanding Natural Language

2006-11-23 Thread Richard Loosemore
here before (Levelt's Speaking) in which the author takes apart a single conversational exchange consisting of a couple of short sentences. Richard Loosemore J. Storrs Hall, PhD. wrote: It was a true solar-plexus blow, and completely knocked out, Perkins staggered back against the instrument

Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-11-19 Thread Richard Loosemore
, said Marvin and trudged away. ** Richard Loosemore

Re: [agi] META: Politeness

2006-11-17 Thread Richard Loosemore
Ben Goertzel wrote: Rings and Models are appropriated terms, but the mathematicians involved would never be so stupid as to confuse them with the real things. Marcus Hutter and yourself are doing precisely that. I rest my case. Richard Loosemore Please, let us avoid explicitly insulting one

Re: [agi] RSI - What is it and how fast?

2006-11-17 Thread Richard Loosemore
to infinity... a spurious argument, of course, because they can go in any direction. Richard Loosemore

Re: [agi] A question on the symbol-system hypothesis

2006-11-17 Thread Richard Loosemore
Ben Goertzel wrote: Rings and Models are appropriated terms, but the mathematicians involved would never be so stupid as to confuse them with the real things. Marcus Hutter and yourself are doing precisely that. I rest my case. Richard Loosemore IMO these analogies are not fair

Re: [agi] Natural versus formal AI interface languages

2006-11-16 Thread Richard Loosemore
depend on any special assumptions about the nature of learning. Richard Loosemore wrote: I beg to differ. IIRC the sense of learning they require is induction over example sentences. They exclude the use of real world knowledge, in spite of the fact that such knowledge (or at least primitives
