[agi] Paper: Voodoo Correlations in Social Neuroscience

2009-01-15 Thread Mark Waser
http://machineslikeus.com/news/paper-voodoo-correlations-social-neuroscience http://www.pashler.com/Articles/Vul_etal_2008inpress.pdf

Re: [agi] The Smushaby of Flatway.

2009-01-09 Thread Mark Waser
But how can it dequark the tachyon antimatter containment field? Richard, You missed Mike Tintner's explanation . . . . You're not thinking your argument through. Look carefully at my spontaneous COW - DOG - TAIL - CURRENT CRISIS - LOCAL VS GLOBAL THINKING - WHAT A NICE DAY - MUST GET

Re: [agi] Religious attitudes to NBIC technologies

2008-12-09 Thread Mark Waser
The problem here is that WE don't have anything to point to as OUR religion Why not go with Unitarian Universalism? It's non-creedal (i.e. you don't have to believe in God -or- you can believe in any God minus any of the anti-other-religion stuff) and has a long history and already established

Re: [agi] The Future of AGI

2008-11-26 Thread Mark Waser
- Original Message - From: Mike Tintner [EMAIL PROTECTED] I should explain rationality No Mike, you *really* shouldn't. Repurposing words like you do merely leads to confusion not clarity . . . . Actual general intelligence in humans and animals is indisputably continuously

RE: [agi] Professor Asim Roy Finally Publishes Controversial Brain Theory

2008-11-20 Thread Mark Waser
Yeah. Great headline -- Man beats dead horse beyond death! I'm sure that there will be more details at 11. Though I am curious . . . . BillK, why did you think that this was worth posting? - Original Message - From: Derek Zahn To: agi@v2.listbox.com Sent: Thursday, November

Re: [agi] Professor Asim Roy Finally Publishes Controversial Brain Theory

2008-11-20 Thread Mark Waser
BICA community for sure . . . . - Original Message - From: BillK [EMAIL PROTECTED] To: agi@v2.listbox.com Sent: Thursday, November 20, 2008 10:37 AM Subject: **SPAM** Re: [agi] Professor Asim Roy Finally Publishes Controversial Brain Theory On Thu, Nov 20, 2008 at 3:06 PM, Mark

Re: FW: [agi] A paper that actually does solve the problem of consciousness--correction

2008-11-18 Thread Mark Waser
I mean that people are free to decide if others feel pain. Wow! You are one sick puppy, dude. Personally, you have just hit my "Do not bother debating with" list. You can decide anything you like -- but that doesn't make it true. - Original Message - From: Matt Mahoney [EMAIL

Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-18 Thread Mark Waser
My problem is if qualia are atomic, with no differentiable details, why do some feel different than others -- shouldn't they all be separate but equal? Red is relatively neutral, while searing hot is not. Part of that is certainly lower brain function, below the level of consciousness, but that

Re: Definition of pain (was Re: FW: [agi] A paper that actually does solve the problem of consciousness--correction)

2008-11-18 Thread Mark Waser
: Tuesday, November 18, 2008 6:26 PM Subject: Definition of pain (was Re: FW: [agi] A paper that actually does solve the problem of consciousness--correction) --- On Tue, 11/18/08, Mark Waser [EMAIL PROTECTED] wrote: Autobliss has no grounding, no internal feedback, and no volition. By what

Re: Definition of pain (was Re: FW: [agi] A paper that actually does solve the problem of consciousness--correction)

2008-11-18 Thread Mark Waser
at 6:26 PM, Matt Mahoney [EMAIL PROTECTED] wrote: --- On Tue, 11/18/08, Mark Waser [EMAIL PROTECTED] wrote: Autobliss has no grounding, no internal feedback, and no volition. By what definitions does it feel pain? Now you are making up new rules

Re: **SPAM** Re: [agi] My prospective plan to neutralize AGI and other dangerous technologies...

2008-11-18 Thread Mark Waser
Seed AI is a myth. Ah. Now I get it. You are on this list solely to try to slow down progress as much as possible . . . . (sorry that I've been so slow to realize this) add-rule kill-file Matt Mahoney - Original Message - From: Matt Mahoney To: agi@v2.listbox.com Sent:

Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-17 Thread Mark Waser
follow them? - Original Message - From: Matt Mahoney [EMAIL PROTECTED] To: agi@v2.listbox.com Sent: Monday, November 17, 2008 9:35 AM Subject: Re: [agi] A paper that actually does solve the problem of consciousness --- On Sun, 11/16/08, Mark Waser [EMAIL PROTECTED] wrote: I wrote

Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-17 Thread Mark Waser
except ivory tower pontification. - Original Message - From: Matt Mahoney [EMAIL PROTECTED] To: agi@v2.listbox.com Sent: Monday, November 17, 2008 12:20 PM Subject: Re: [agi] A paper that actually does solve the problem of consciousness --- On Mon, 11/17/08, Mark Waser [EMAIL

Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-17 Thread Mark Waser
I have no doubt that if you did the experiments you describe, that the brains would be rearranged consistently with your predictions. But what does that say about consciousness? What are you asking about consciousness? - Original Message - From: Matt Mahoney [EMAIL PROTECTED] To:

Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-17 Thread Mark Waser
An excellent question from Harry . . . . So when I don't remember anything about those towns, from a few minutes ago on my road trip, is it because (a) the attentional mechanism did not bother to lay down any episodic memory traces, so I cannot bring back the memories and analyze them, or (b)

Re: FW: [agi] A paper that actually does solve the problem of consciousness--correction

2008-11-17 Thread Mark Waser
: Matt Mahoney [EMAIL PROTECTED] To: agi@v2.listbox.com Sent: Monday, November 17, 2008 2:17 PM Subject: Re: FW: [agi] A paper that actually does solve the problem of consciousness--correction --- On Mon, 11/17/08, Mark Waser [EMAIL PROTECTED] wrote: No it won't, because people are free

Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-16 Thread Mark Waser
I think the reason that the hard question is interesting at all is that it would presumably be OK to torture a zombie because it doesn't actually experience pain, even though it would react exactly like a human being tortured. That's an ethical question. Ethics is a belief system that exists

Re: [agi] Ethics of computer-based cognitive experimentation

2008-11-11 Thread Mark Waser
An understanding of what consciousness actually is, for starters. It is a belief. No it is not. And that statement (It is a belief) is a cop-out theory. An understanding of what consciousness is requires a consensus definition of what it is. For most people, it seems to be an

Re: [agi] Ethics of computer-based cognitive experimentation

2008-11-11 Thread Mark Waser
This does not mean that certain practices are good or bad. If there was such a thing, then there would be no debate about war, abortion, euthanasia, capital punishment, or animal rights, because these questions could be answered experimentally. Given a goal and a context, there is absolutely

Re: [agi] An AI with, quote, 'evil' cognition.

2008-11-02 Thread Mark Waser
I've noticed lately that the paranoid fear of computers becoming intelligent and taking over the world has almost entirely disappeared from the common culture. Is this sarcasm, irony, or are you that unaware of current popular culture (i.e. Terminator Chronicles on TV, a new Terminator movie

Re: [agi] Occam's Razor and its abuse

2008-10-31 Thread Mark Waser
Where do you believe that he proves Occam's razor? - Original Message - From: Matt Mahoney [EMAIL PROTECTED] To: agi@v2.listbox.com Sent: Wednesday, October 29, 2008 10:46 PM Subject: Re: [agi] Occam's Razor and its abuse --- On Wed, 10/29/08, Mark Waser [EMAIL

Re: [agi] Occam's Razor and its abuse

2008-10-31 Thread Mark Waser
, October 31, 2008 5:54 PM Subject: Re: [agi] Occam's Razor and its abuse I think Hutter is being modest. -- Matt Mahoney, [EMAIL PROTECTED] --- On Fri, 10/31/08, Mark Waser [EMAIL PROTECTED] wrote: From: Mark Waser [EMAIL PROTECTED] Subject: Re: [agi] Occam's Razor and its abuse To: agi@v2

Re: [agi] constructivist issues

2008-10-29 Thread Mark Waser
Godel originally showed... On Tue, Oct 28, 2008 at 2:50 PM, Mark Waser [EMAIL PROTECTED] wrote: That is thanks to Godel's incompleteness theorem. Any formal system that describes numbers is doomed to be incomplete Yes, any formal system is doomed

Re: [agi] Occam's Razor and its abuse

2008-10-29 Thread Mark Waser
(1) Simplicity (in conclusions, hypotheses, theories, etc.) is preferred. (2) The preference to simplicity does not need a reason or justification. (3) Simplicity is preferred because it is correlated with correctness. I agree with (1), but not (2) and (3). I concur but would add that (4)
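The simplicity preference debated in this thread is routinely operationalized in machine learning as a description-length tie-breaker among hypotheses that fit the data equally well. A minimal sketch (the candidate hypotheses and the crude length-based complexity measure are illustrative assumptions, not anything from the thread):

```python
# MDL-flavored illustration of principle (1): among hypotheses that
# explain every observation, prefer the one with the shortest description.
data = [(0, 0), (1, 2), (2, 4), (3, 6)]  # observations (x, y)

# Candidate hypotheses: (name, predictor, complexity score).
# Complexity is crudely measured by the length of the formula text.
hypotheses = [
    ("y = 2*x",          lambda x: 2 * x,            len("2*x")),
    ("y = x**2 - x + 2", lambda x: x**2 - x + 2,     len("x**2-x+2")),
    ("y = 2*x + 0*x**3", lambda x: 2 * x + 0 * x**3, len("2*x+0*x**3")),
]

def consistent(h):
    _, predict, _ = h
    return all(predict(x) == y for x, y in data)

# Keep only hypotheses that explain every observation, then pick the
# shortest description -- Occam's razor as a tie-breaker among fits.
best = min((h for h in hypotheses if consistent(h)), key=lambda h: h[2])
print(best[0])  # -> y = 2*x
```

Note the second hypothesis is eliminated by the data itself (it predicts 2 at x=0), so simplicity only arbitrates between the two survivors that fit equally well.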

Re: [agi] constructivist issues

2008-10-29 Thread Mark Waser
particular case, we only need integers going up to the size of the universe ;-) On Wed, Oct 29, 2008 at 7:24 AM, Mark Waser [EMAIL PROTECTED] wrote: However, it does seem clear that the integers (for instance) is not an entity with *scientific* meaning, if you accept my formalization

Re: [agi] constructivist issues

2008-10-29 Thread Mark Waser
(a specific type of constructivist) would be a better term for the view I'm referring to. --Abram Demski On Tue, Oct 28, 2008 at 4:13 PM, Mark Waser [EMAIL PROTECTED] wrote: Numbers can be fully defined in the classical sense, but not in the constructivist sense. So, when you say fully defined question

Re: [agi] constructivist issues

2008-10-29 Thread Mark Waser
. There is some K so that we never need integers with algorithmic information exceeding K. On Wed, Oct 29, 2008 at 10:32 AM, Mark Waser [EMAIL PROTECTED] wrote: but we never need arbitrarily large integers in any particular case, we only need integers going up to the size of the universe

Re: [agi] Occam's Razor and its abuse

2008-10-29 Thread Mark Waser
.listbox.com Sent: Wednesday, October 29, 2008 11:11 AM Subject: Re: [agi] Occam's Razor and its abuse --- On Wed, 10/29/08, Mark Waser [EMAIL PROTECTED] wrote: (1) Simplicity (in conclusions, hypotheses, theories, etc.) is preferred. (2) The preference to simplicity does not need a reason

Re: [agi] constructivist issues

2008-10-28 Thread Mark Waser
. *That* is what I was asking about when I asked which side you fell on. Do you think such extensions are arbitrary, or do you think there is a fact of the matter? --Abram On Mon, Oct 27, 2008 at 3:33 PM, Mark Waser [EMAIL PROTECTED] wrote: The number of possible descriptions is countable I disagree

Re: [agi] constructivist issues

2008-10-28 Thread Mark Waser
Abram, I could agree with the statement that there are uncountably many *potential* numbers but I'm going to argue that any number that actually exists is eminently describable. Take the set of all numbers that are defined far enough after the decimal point that they never accurately describe
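The describability claim in this post rests on a standard counting argument; a sketch in conventional notation (standard mathematics, not text from the email itself):

```latex
% Descriptions are finite strings over a finite alphabet \Sigma, so
% the set of all descriptions is a countable union of finite sets:
\Bigl|\textstyle\bigcup_{n\ge 1}\Sigma^{n}\Bigr| = \aleph_{0},
\qquad\text{since } |\Sigma^{n}| = |\Sigma|^{n} < \infty .
% Hence at most countably many reals are describable, while \mathbb{R}
% is uncountable by Cantor's diagonal argument -- the sense in which
% uncountably many *potential* numbers exist even though any number
% actually exhibited has a finite description.
```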

Re: [agi] constructivist issues

2008-10-28 Thread Mark Waser
Godelian statements, we're just unable to deduce that truth from the axioms. --Abram On Tue, Oct 28, 2008 at 7:21 AM, Mark Waser [EMAIL PROTECTED] wrote: *That* is what I was asking about when I asked which side you fell on. Do you think such extensions are arbitrary, or do you think there is a fact

Re: [agi] constructivist issues

2008-10-28 Thread Mark Waser
write down sometime soon when I get the time. My position is fairly close to yours but I think that with these sorts of issues, the devil is in the details. ben On Tue, Oct 28, 2008 at 6:53 AM, Mark Waser [EMAIL PROTECTED] wrote: Abram, I could agree with the statement

Re: [agi] constructivist issues

2008-10-28 Thread Mark Waser
mathematical systems? --Abram On Tue, Oct 28, 2008 at 10:20 AM, Mark Waser [EMAIL PROTECTED] wrote: Hi, We keep going around and around because you keep dropping my distinction between two different cases . . . . The statement that The cat is red is undecidable by arithmetic because it can't

Re: [agi] constructivist issues

2008-10-28 Thread Mark Waser
by logical necessity (classical), or logical deduction (constructivist)? --Abram Demski On Tue, Oct 28, 2008 at 3:28 PM, Mark Waser [EMAIL PROTECTED] wrote: In that case, shouldn't you agree with the classical perspective on Godelian incompleteness, since Godel's incompleteness theorem is about

Re: [agi] constructivist issues

2008-10-28 Thread Mark Waser
to. --Abram Demski On Tue, Oct 28, 2008 at 4:13 PM, Mark Waser [EMAIL PROTECTED] wrote: Numbers can be fully defined in the classical sense, but not in the constructivist sense. So, when you say fully defined question, do you mean a question for which all answers are stipulated by logical

Re: [agi] constructivist issues

2008-10-28 Thread Mark Waser
formal system that contains some basic arithmetic apparatus equivalent to http://en.wikipedia.org/wiki/Peano_axioms is doomed to be incomplete with respect to statements about numbers... that is what Godel originally showed... On Tue, Oct 28, 2008 at 2:50 PM, Mark Waser [EMAIL PROTECTED] wrote

Re: [agi] If your AGI can't learn to play chess it is no AGI

2008-10-27 Thread Mark Waser
of science in The Hidden Pattern and online... ben g On Sun, Oct 26, 2008 at 9:51 AM, Mark Waser [EMAIL PROTECTED] wrote: These equations seem silly to me ... obviously science is much more than that, as Mark should know as he has studied philosophy of science extensively Mark

Re: [agi] constructivist issues

2008-10-27 Thread Mark Waser
is fully justifiable in some intellectual circles! Just don't do it when non-constructivists are around :). --Abram On Sat, Oct 25, 2008 at 6:18 PM, Mark Waser [EMAIL PROTECTED] wrote: OK. A good explanation and I stand corrected and more educated. Thank you. - Original Message - From

Re: [agi] If your AGI can't learn to play chess it is no AGI

2008-10-27 Thread Mark Waser
in learning about some aspect of the empirical world (as understood by this group) and formalizing their conclusions and methods I wouldn't complain as much... ben On Mon, Oct 27, 2008 at 3:13 AM, Mark Waser [EMAIL PROTECTED] wrote: Or, in other words, you can't even start to draw a clear

Re: [agi] constructivist issues

2008-10-27 Thread Mark Waser
is constructed, yet truth is absolute. Could you clarify? --Abram On Mon, Oct 27, 2008 at 10:27 AM, Mark Waser [EMAIL PROTECTED] wrote: Hmmm. I think that some of our miscommunication might have been due to the fact that you seem to be talking about two things while I think that I'm talking about third

Re: [agi] If your AGI can't learn to play chess it is no AGI

2008-10-27 Thread Mark Waser
to formalize the difference in general, in a way that encompasses all the cases of science and is descriptive rather than normative ... but I haven't thought about it much and have other stuff to do... ben On Mon, Oct 27, 2008 at 8:40 AM, Mark Waser [EMAIL PROTECTED] wrote: You've now

Re: [agi] constructivist issues

2008-10-27 Thread Mark Waser
of the classical persuasion, believe that arithmetic is either consistent or inconsistent. You, to the extent that you are a constructivist, should say that the matter is undecidable and therefore undefined. --Abram On Mon, Oct 27, 2008 at 12:04 PM, Mark Waser [EMAIL PROTECTED] wrote: Hi, It's interesting

Re: [agi] constructivist issues

2008-10-27 Thread Mark Waser
, Oct 27, 2008 at 1:03 PM, Mark Waser [EMAIL PROTECTED] wrote: I, being of the classical persuasion, believe that arithmetic is either consistent or inconsistent. You, to the extent that you are a constructivist, should say that the matter is undecidable and therefore undefined. I believe

Re: [agi] If your AGI can't learn to play chess it is no AGI

2008-10-26 Thread Mark Waser
patterns corresponding to the practice of science. -- Ben On Sun, Oct 26, 2008 at 8:08 AM, Matt Mahoney [EMAIL PROTECTED] wrote: --- On Sat, 10/25/08, Mark Waser [EMAIL PROTECTED] wrote: Would it then be accurate to say SCIENCE = LEARNING + TRANSMISSION

Re: [agi] If your AGI can't learn to play chess it is no AGI

2008-10-25 Thread Mark Waser
] If your AGI can't learn to play chess it is no AGI --- On Fri, 10/24/08, Mark Waser [EMAIL PROTECTED] wrote: Cool. And you're saying that intelligence is not computable. So why else are we constantly invoking AIXI? Does it tell us anything else about general intelligence? AIXI says

Re: [agi] On programming languages

2008-10-25 Thread Mark Waser
http://texai.org/blog http://texai.org 3008 Oak Crest Ave. Austin, Texas, USA 78704 512.791.7860 - Original Message From: Mark Waser [EMAIL PROTECTED] To: agi@v2.listbox.com Sent: Friday, October 24, 2008 12:28:36 PM Subject: Re

Re: [agi] If your AGI can't learn to play chess it is no AGI

2008-10-25 Thread Mark Waser
, 2008 1:41 PM Subject: Re: [agi] If your AGI can't learn to play chess it is no AGI --- On Sat, 10/25/08, Mark Waser [EMAIL PROTECTED] wrote: AIXI says that a perfect solution is not computable. However, a very general principle of both scientific research and machine learning is to favor

Re: [agi] If your AGI can't learn to play chess it is no AGI

2008-10-25 Thread Mark Waser
/25/08, Mark Waser [EMAIL PROTECTED] wrote: AIXI says that a perfect solution is not computable. However, a very general principle of both scientific research and machine learning is to favor simple hypotheses over complex ones. AIXI justifies these practices in a formal way. It also says

Re: [agi] If your AGI can't learn to play chess it is no AGI

2008-10-25 Thread Mark Waser
Vladimir said I pointed out only that it doesn't follow from AIXI that ad-hoc is justified. Matt used a chain of logic that went as follows: AIXI says that a perfect solution is not computable. However, a very general principle of both scientific research and machine learning is to favor

Re: [agi] If your AGI can't learn to play chess it is no AGI

2008-10-25 Thread Mark Waser
: **SPAM** Re: [agi] If your AGI can't learn to play chess it is no AGI On Sun, Oct 26, 2008 at 1:19 AM, Mark Waser [EMAIL PROTECTED] wrote: You are now apparently declining to provide an algorithmic solution without arguing that not doing so is a disproof of your statement. Or, in other

Re: [agi] On programming languages

2008-10-25 Thread Mark Waser
Surely a coherent reply to this assertion would involve the phrases superstitious, ignorant and FUD So why don't you try to generate one to prove your guess? Are you claiming that I'm superstitious and ignorant? That I'm fearful and uncertain or trying to generate fearfulness and

Re: [agi] If your AGI can't learn to play chess it is no AGI

2008-10-25 Thread Mark Waser
, October 25, 2008 5:51 PM Subject: Re: [agi] If your AGI can't learn to play chess it is no AGI --- On Sat, 10/25/08, Mark Waser [EMAIL PROTECTED] wrote: The fact that Occam's Razor works in the real world suggests that the physics of the universe is computable. Otherwise AIXI would not apply

Re: [agi] If your AGI can't learn to play chess it is no AGI

2008-10-25 Thread Mark Waser
, Mark Waser [EMAIL PROTECTED] wrote: Scientists choose experiments to maximize information gain. There is no reason that machine learning algorithms couldn't do this, but often they don't. Heh. I would say that scientists attempt to do this and machine learning algorithms should do it. So

Re: [agi] constructivist issues

2008-10-25 Thread Mark Waser
can't be equivalent! So, since Godel's theorem follows so closely from Tarski's (even though Tarski's came later), it is better to invoke Tarski's by default if you aren't sure which one applies. --Abram On Sat, Oct 25, 2008 at 4:22 PM, Mark Waser [EMAIL PROTECTED] wrote: So you're saying

Re: [agi] If your AGI can't learn to play chess it is no AGI

2008-10-25 Thread Mark Waser
: Saturday, October 25, 2008 6:27 PM Subject: **SPAM** Re: [agi] If your AGI can't learn to play chess it is no AGI On Sat, Oct 25, 2008 at 11:14 PM, Mark Waser [EMAIL PROTECTED] wrote: Anyone else want to take up the issue of whether there is a distinction between competent scientific research

Re: [agi] On programming languages

2008-10-25 Thread Mark Waser
People seem to debate programming languages and OS's endlessly, and this list is no exception. Yes. And like all other debates there are good points and bad points.:-) To make progress on AGI, you just gotta make *some* reasonable choice and start building Strongly agree. Otherwise

Re: AIXI (was Re: [agi] If your AGI can't learn to play chess it is no AGI)

2008-10-25 Thread Mark Waser
: Saturday, October 25, 2008 7:21 PM Subject: AIXI (was Re: [agi] If your AGI can't learn to play chess it is no AGI) --- On Sat, 10/25/08, Mark Waser [EMAIL PROTECTED] wrote: Ummm. It seems like you were/are saying then that because AIXI makes an assumption limiting its own applicability/proof

Re: AIXI (was Re: [agi] If your AGI can't learn to play chess it is no AGI)

2008-10-25 Thread Mark Waser
of finite-precision data is fundamentally limited in what it can tell us about the universe ... which would really suck... -- Ben G -- Ben G On Sat, Oct 25, 2008 at 7:21 PM, Matt Mahoney [EMAIL PROTECTED] wrote: --- On Sat, 10/25/08, Mark Waser [EMAIL PROTECTED] wrote: Ummm

AW: [agi] If your AGI can't learn to play chess it is no AGI

2008-10-24 Thread Mark Waser
No Mike. AGI must be able to discover regularities of all kind in all domains. Must it be able to *discover* regularities or must it be able to be taught and subsequently effectively use regularities? I would argue the latter. (Can we get a show of hands of those who believe the former? I

Re: [agi] constructivist issues

2008-10-24 Thread Mark Waser
The limitations of Godelian completeness/incompleteness are a subset of the much stronger limitations of finite automata. Can we get a listing of what you believe these limitations are and whether or not you believe that they apply to humans? I believe that humans are constrained by *all*

AW: [agi] If your AGI can't learn to play chess it is no AGI

2008-10-24 Thread Mark Waser
This does not imply that people usually do not use visual patterns to solve chess. It only implies that visual patterns are not necessary. So . . . wouldn't dolphins and bats use sonar patterns to play chess? So . . . is it *vision* or is it the most developed (for the individual), highest

Re: [agi] If your AGI can't learn to play chess it is no AGI

2008-10-24 Thread Mark Waser
E.g. according to this, AIXI (with infinite computational power) but not AIXItl would have general intelligence, because the latter can only find regularities expressible using programs of length bounded by l and runtime bounded by t rant I hate AIXI because not only does it have

Re: [agi] On programming languages

2008-10-24 Thread Mark Waser
Abram, Would you agree that this thread is analogous to our debate? - Original Message - From: Vladimir Nesov [EMAIL PROTECTED] To: agi@v2.listbox.com Sent: Friday, October 24, 2008 6:49 AM Subject: **SPAM** Re: [agi] On programming languages On Fri, Oct 24, 2008 at 2:16 PM,

Re: [agi] constructivist issues

2008-10-24 Thread Mark Waser
Heger [EMAIL PROTECTED] To: agi@v2.listbox.com Sent: Friday, October 24, 2008 10:27 AM Subject: AW: [agi] constructivist issues Mark Waser wrote: Can we get a listing of what you believe these limitations are and whether or not you believe that they apply to humans? I believe that humans

Re: [agi] On programming languages

2008-10-24 Thread Mark Waser
Instead of arguing language, why don't you argue platform? Name a language and there's probably a .Net version. They are all interoperable so you can use whatever is most appropriate. Personally, the fact that you can now even easily embed functional language statements in procedural

Re: **SPAM** Re: [agi] If your AGI can't learn to play chess it is no AGI

2008-10-24 Thread Mark Waser
Mahoney, [EMAIL PROTECTED] --- On Fri, 10/24/08, Mark Waser [EMAIL PROTECTED] wrote: From: Mark Waser [EMAIL PROTECTED] Subject: Re: [agi] If your AGI can't learn to play chess it is no AGI To: agi@v2.listbox.com Date: Friday, October 24, 2008, 9:51 AM

Re: [agi] constructivist issues

2008-10-24 Thread Mark Waser
continuous differentiable function (which gets back to the whole discussion with Ben). Have you heard of Tarski's undefinability theorem? It is relevant to this discussion. http://en.wikipedia.org/wiki/Indefinability_theory_of_truth --Abram On Fri, Oct 24, 2008 at 9:19 AM, Mark Waser [EMAIL PROTECTED

Re: [agi] On programming languages

2008-10-24 Thread Mark Waser
But I thought I'd mention that for OpenCog we are planning on a cross-language approach. The core system is C++, for scalability and efficiency reasons, but the MindAgent objects that do the actual AI algorithms should be creatable in various languages, including Scheme or LISP. *nods* As you

Re: [agi] On programming languages

2008-10-24 Thread Mark Waser
AGI *really* needs an environment that comes with reflection and metadata support (including persistence, accessibility, etc.) baked right in. http://msdn.microsoft.com/en-us/magazine/cc301780.aspx (And note that the referenced article is six years old and several major releases back) This
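The "reflection and metadata support" this post asks for can be sketched in Python, as a stand-in for the .NET reflection API the linked article covers (the `Agent` class and its methods are hypothetical examples, not anything from OpenCog or .NET):

```python
import inspect

# A toy agent class: a reflective runtime could discover its methods
# and their signatures at run time instead of hard-coding them.
class Agent:
    def perceive(self, stimulus: str) -> str:
        """Record a stimulus."""
        return f"saw {stimulus}"

    def act(self) -> str:
        """Pick an action."""
        return "wait"

agent = Agent()

# Reflection: enumerate public methods with their parameter metadata.
for name, method in inspect.getmembers(agent, inspect.ismethod):
    print(name, inspect.signature(method), "-", inspect.getdoc(method))

# Late-bound invocation by name, as a reflective dispatcher might do:
result = getattr(agent, "perceive")("red ball")
print(result)  # -> saw red ball
```

The point of the sketch is that the metadata (names, signatures, docstrings) is queryable and invocable at run time, which is the kind of baked-in self-description the post argues an AGI environment needs.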

Re: [agi] If your AGI can't learn to play chess it is no AGI

2008-10-24 Thread Mark Waser
PM Subject: Re: [agi] If your AGI can't learn to play chess it is no AGI --- On Fri, 10/24/08, Mark Waser [EMAIL PROTECTED] wrote: The value of AIXI is not that it tells us how to solve AGI. The value is that it tells us intelligence is not computable Define not computable Too many people

Re: [agi] On programming languages

2008-10-24 Thread Mark Waser
] On programming languages On Fri, Oct 24, 2008 at 5:37 PM, Mark Waser [EMAIL PROTECTED] wrote: Instead of arguing language, why don't you argue platform? Platform is certainly an interesting question. I take the view that Common Lisp has the advantage of allowing me to defer the choice

Re: [agi] On programming languages

2008-10-24 Thread Mark Waser
78704 512.791.7860 - Original Message From: Mark Waser [EMAIL PROTECTED] To: agi@v2.listbox.com Sent: Friday, October 24, 2008 12:28:36 PM Subject: Re: [agi] On programming languages AGI *really* needs an environment that comes with reflection and metadata support

Re: AW: AW: [agi] Language learning (was Re: Defining AGI)

2008-10-23 Thread Mark Waser
I have already proved something stronger What would you consider your best reference/paper outlining your arguments? Thanks in advance. - Original Message - From: Matt Mahoney [EMAIL PROTECTED] To: agi@v2.listbox.com Sent: Wednesday, October 22, 2008 8:55 PM Subject: Re: AW: AW:

Re: [agi] constructivist issues

2008-10-23 Thread Mark Waser
which we could measure the provably-true (whatever THAT would mean). So, Godel's theorem is way overkill here in my opinion. --Abram On Wed, Oct 22, 2008 at 7:48 PM, Mark Waser [EMAIL PROTECTED] wrote: Most of what I was thinking of and referring to is in Chapter 10. Gödel's Quintessential

Re: Lojban (was Re: [agi] constructivist issues)

2008-10-23 Thread Mark Waser
the common sense to carry out disambiguation and reference resolution reliably. Also, the log of communication would provide a nice training DB for it to use in studying disambiguation. -- Ben G On Wed, Oct 22, 2008 at 12:00 PM, Mark Waser [EMAIL PROTECTED] wrote

Re: [agi] constructivist issues

2008-10-22 Thread Mark Waser
You may not like Therefore, we cannot understand the math needed to define our own intelligence., but I'm rather convinced that it's correct. Do you mean to say that there are parts that we can't understand or that the totality is too large to fit and that it can't be cleanly and completely

Re: AW: AW: [agi] Re: Defining AGI

2008-10-22 Thread Mark Waser
However, the point I took issue with was your claim that a stupid person could be taught to effectively do science ... or (your later modification) evaluation of scientific results. At the time I originally took exception to your claim, I had not read the earlier portion of the thread, and

Re: [agi] constructivist issues

2008-10-22 Thread Mark Waser
It doesn't, because **I see no evidence that humans can understand the semantics of formal system in X in any sense that a digital computer program cannot** I just argued that humans can't understand the totality of any formal system X due to Godel's Incompleteness Theorem but the rest of

Re: [agi] constructivist issues

2008-10-22 Thread Mark Waser
I don't want to diss the personal value of logically inconsistent thoughts. But I doubt their scientific and engineering value. It doesn't seem to make sense that something would have personal value and then not have scientific or engineering value. I can sort of understand science if you're

Re: AW: AW: [agi] Re: Defining AGI

2008-10-22 Thread Mark Waser
I'm also confused. This has been a strange thread. People of average and around-average intelligence are trained as lab technicians or database architects every day. Many of them are doing real science. Perhaps a person with Down syndrome would do poorly in one of these largely practical

Re: [agi] constructivist issues

2008-10-22 Thread Mark Waser
(1) We humans understand the semantics of formal system X. No. This is the root of your problem. For example, replace formal system X with XML. Saying that We humans understand the semantics of XML certainly doesn't work, which is why I would argue that natural language understanding is

Re: [agi] constructivist issues

2008-10-22 Thread Mark Waser
Well, if you are a computable system, and if by think you mean represent accurately and internally then you can only think that odd thought via being logically inconsistent... ;-) True -- but why are we assuming *internally*? Drop that assumption as Charles clearly did and there is no

Re: [agi] constructivist issues

2008-10-22 Thread Mark Waser
I disagree, and believe that I can think X: This is a thought (T) that is way too complex for me to ever have. Obviously, I can't think T and then think X, but I might represent T as a combination of myself plus a notebook or some other external media. Even if I only observe part of T at

Re: [agi] constructivist issues

2008-10-22 Thread Mark Waser
You have not convinced me that you can do anything a computer can't do. And, using language or math, you never will -- because any finite set of symbols you can utter, could also be uttered by some computational system. -- Ben G Can we pin this somewhere? (Maybe on Penrose? ;-)

Re: [agi] constructivist issues

2008-10-22 Thread Mark Waser
IMHO that is an almost hopeless approach, ambiguity is too integral to English or any natural language ... e.g preposition ambiguity Actually, I've been making pretty good progress. You just always use big words and never use small words and/or you use a specific phrase as a word.

Re: [agi] constructivist issues

2008-10-22 Thread Mark Waser
PROTECTED] wrote: On Wed, Oct 22, 2008 at 10:51 AM, Mark Waser [EMAIL PROTECTED] wrote: I don't want to diss the personal value of logically inconsistent thoughts. But I doubt their scientific and engineering value. It doesn't seem to make sense that something would have personal

Re: [agi] constructivist issues

2008-10-22 Thread Mark Waser
at 10:51 AM, Mark Waser [EMAIL PROTECTED] wrote: I don't want to diss the personal value of logically inconsistent thoughts. But I doubt their scientific and engineering value. It doesn't seem to make sense that something would have personal value and then not have scientific or engineering

Re: [agi] constructivist issues

2008-10-22 Thread Mark Waser
What I meant was, it seems like humans are logically complete in some sense. In practice we are greatly limited by memory and processing speed and so on; but I *don't* think we're limited by lacking some important logical construct. It would be like us discovering some alien species whose

Re: [OpenCog] Re: [agi] constructivist issues

2008-10-22 Thread Mark Waser
for it to use in studying disambiguation. -- Ben G On Wed, Oct 22, 2008 at 12:00 PM, Mark Waser [EMAIL PROTECTED] wrote: IMHO that is an almost hopeless approach, ambiguity is too integral to English or any natural language ... e.g. preposition ambiguity Actually, I've been making

Re: [agi] constructivist issues

2008-10-22 Thread Mark Waser
. --Abram On Wed, Oct 22, 2008 at 12:20 PM, Mark Waser [EMAIL PROTECTED] wrote: It looks like all this disambiguation by moving to a more formal language is about sweeping the problem under the rug, removing the need for uncertain reasoning from surface levels of syntax and semantics, to remember

Re: [OpenCog] Re: [agi] constructivist issues

2008-10-22 Thread Mark Waser
with the existing link-parser/RelEx framework... If anyone wants to implement it, it seems like just some hacking with the open-source Java RelEx code... ben g On Wed, Oct 22, 2008 at 12:59 PM, Mark Waser [EMAIL PROTECTED] wrote: I think this would be a relatively pain-free way

Re: AW: [agi] If your AGI can't learn to play chess it is no AGI

2008-10-22 Thread Mark Waser
A couple of distinctions that I think would be really helpful for this discussion . . . . There is a profound difference between learning to play chess legally and learning to play chess well. There is an equally profound difference between discovering how to play chess well and being taught
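The legally/well distinction above can be made concrete with a sketch. Nim stands in for chess here purely for brevity (chess's rule set would swamp the example); all function names are illustrative, not from the thread. A "legal" player knows only the rules and picks any legal move, while a "well"-playing player also carries a winning strategy (the classic nim-sum rule):

```python
import random

def legal_moves(heaps):
    """A legal Nim move removes 1..n objects from one heap."""
    return [(i, k) for i, n in enumerate(heaps) for k in range(1, n + 1)]

_rng = random.Random(0)

def play_legally(heaps):
    """Knows the rules, nothing more: any legal move at random."""
    return _rng.choice(legal_moves(heaps))

def play_well(heaps):
    """Knows the rules AND a strategy: move to zero nim-sum when possible."""
    x = 0
    for n in heaps:
        x ^= n
    if x:  # winning position: some heap can be reduced to restore nim-sum 0
        for i, n in enumerate(heaps):
            if n ^ x < n:
                return (i, n - (n ^ x))
    return play_legally(heaps)  # losing position: strategy offers no help

def apply_move(heaps, move):
    i, k = move
    heaps = list(heaps)
    heaps[i] -= k
    return heaps

def winner(heaps):
    """Normal play: whoever takes the last object wins. Player 0 plays well."""
    players = [play_well, play_legally]
    turn = 0
    while True:
        heaps = apply_move(heaps, players[turn](heaps))
        if not any(heaps):
            return turn
        turn ^= 1

print(winner([3, 4, 5]))  # from a winning position, the strategic player wins
```

Both players are fully "legal", yet only one plays "well" — and notice that neither distinction says anything about how the strategy was *discovered* versus *taught*, which is the second gap the post points at.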

Re: AW: AW: [agi] Re: Defining AGI

2008-10-21 Thread Mark Waser
If MW were scientific, then he would not have asked Ben to prove that MW's hypothesis is wrong. Science is done by comparing hypotheses to data. Frequently, the fastest way to handle a hypothesis is to find a counter-example so that it can be discarded (or extended appropriately to handle

Re: AW: AW: [agi] Re: Defining AGI

2008-10-21 Thread Mark Waser
Oh, and I *have* to laugh . . . . Hence the wiki entry on scientific method: Scientific method is not a recipe: it requires intelligence, imagination, and creativity http://en.wikipedia.org/wiki/Scientific_method This is basic stuff. In the cited wikipedia entry, the phrase Scientific

Re: AW: AW: [agi] Re: Defining AGI

2008-10-21 Thread Mark Waser
, October 21, 2008 10:41 AM Subject: Re: AW: AW: [agi] Re: Defining AGI On Tue, Oct 21, 2008 at 10:38 AM, Mark Waser [EMAIL PROTECTED] wrote: Oh, and I *have* to laugh . . . . Hence the wiki entry on scientific method: Scientific method is not a recipe: it requires

Re: AW: AW: [agi] Re: Defining AGI

2008-10-21 Thread Mark Waser
. This is the first time you speak about pre-requisites. Direct quote cut and paste from *my* e-mail . . . . . - Original Message - From: Mark Waser [EMAIL PROTECTED] To: agi@v2.listbox.com Sent: Sunday, October 19, 2008 4:01 PM Subject

Re: AW: AW: [agi] Re: Defining AGI

2008-10-21 Thread Mark Waser
PM, Mark Waser [EMAIL PROTECTED] wrote: Yes, but each of those steps is very vague, and cannot be boiled down to a series of precise instructions sufficient for a stupid person to consistently carry them out effectively... So -- are those stupid people still general intelligences

Re: AW: AW: [agi] Re: Defining AGI

2008-10-21 Thread Mark Waser
Wow! Way too much good stuff to respond to in one e-mail. I'll try to respond to more in a later e-mail but . . . . (and I also want to get your reaction to a few things first :-) However, I still don't think that a below-average-IQ human can pragmatically (i.e., within the scope of the

Re: AW: AW: [agi] Re: Defining AGI

2008-10-21 Thread Mark Waser
AI! :-) This is what I was trying to avoid. :-) My objection starts with How is a Bayes net going to do feature extraction? A Bayes net may be part of a final solution but as you even indicate, it's only going to be part . . . . - Original Message - From: Eric Burton
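The objection in this post — a Bayes net reasons over features but does not produce them — can be illustrated with a minimal naive Bayes classifier (a sketch, not anything proposed in the thread; the spam/ham example and all names are invented). Note that the word tokens arrive pre-extracted; the model only counts and multiplies them:

```python
import math
from collections import defaultdict

def train(examples):
    """examples: list of (feature_list, label). Returns counts for inference."""
    label_counts = defaultdict(int)
    feat_counts = defaultdict(lambda: defaultdict(int))
    vocab = set()
    for feats, label in examples:
        label_counts[label] += 1
        for f in feats:
            feat_counts[label][f] += 1
            vocab.add(f)
    return label_counts, feat_counts, vocab

def classify(feats, label_counts, feat_counts, vocab):
    """Pick the label maximizing log P(label) + sum log P(feat|label)."""
    total = sum(label_counts.values())
    best, best_lp = None, -math.inf
    for label, n in label_counts.items():
        lp = math.log(n / total)
        denom = sum(feat_counts[label].values()) + len(vocab)
        for f in feats:
            # Laplace smoothing so unseen features don't zero the product
            lp += math.log((feat_counts[label][f] + 1) / denom)
        if lp > best_lp:
            best, best_lp = label, lp
    return best

examples = [
    (["cheap", "pills"], "spam"),
    (["cheap", "meds"], "spam"),
    (["meeting", "agenda"], "ham"),
    (["project", "agenda"], "ham"),
]
model = train(examples)
print(classify(["cheap", "agenda", "pills"], *model))  # prints: spam
```

The feature lists here are hand-supplied; deciding that "cheap" is a feature at all happened outside the model — which is exactly the gap the post says a Bayes net alone cannot fill.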

Re: AW: AW: [agi] Re: Defining AGI

2008-10-21 Thread Mark Waser
Incorrect things are wrapped up with correct things in people's minds Mark seems to be thinking of something like the checklist that the ISP technician walks through when you call with a problem. Um. No. I'm thinking that in order to integrate a new idea into your world model, you first
