Re: [agi] How do we know we don't know?

2008-07-28 Thread Eric Burton
I think I decided pretty quickly that I don't know any words starting with foml. I don't know if this is a clue On 7/28/08, Abram Demski [EMAIL PROTECTED] wrote: It seems like you have some valid points, but I cannot help but point out a problem with your question. It seems like any system for

Re: [agi] How do we know we don't know?

2008-07-28 Thread Eric Burton
kinds of not knowing. :| On 7/28/08, Eric Burton [EMAIL PROTECTED] wrote: I think I decided pretty quickly that I don't know any words starting with foml. I don't know if this is a clue On 7/28/08, Abram Demski [EMAIL PROTECTED] wrote: It seems like you have some valid points, but I cannot

Re: [agi] How do we know we don't know?

2008-07-28 Thread Eric Burton
You will probably never encounter the fomlepung question again, so the fact that you don't know what it means will become less and less important and eventually it will drop off the end of the list. Does it email you when this occurs? xD On 7/28/08, Eric Burton [EMAIL PROTECTED] wrote

Re: Re : [agi] Re: OpenCog Prime wikibook and roadmap posted (moderately detailed design for an OpenCog-based thinking machine)

2008-08-01 Thread Eric Burton
I don't feel like getting into an argument about whether my ideas are nutty or not. My comment was probably not well thought-out. This Google Alert I've got for 'artificial intelligence' returns all kinds of stuff, enough to make me cynical. It seems to me that if you've written any code at

Re: Some statistics on Loosemore's rudeness [WAS Re: [agi] EVIDENCE RICHARD ...]

2008-08-03 Thread Eric Burton
I apologize: 1/16. Which, to be fair, is half as many, and somewhat diminishes the point I was trying to make. ,_, On 8/3/08, Eric Burton [EMAIL PROTECTED] wrote: David, in the spirit of scientific objectivity, I just did a search for the word stupid in all of the 811 messages that I have ever

Re: [agi] Groundless reasoning -- Chinese Room

2008-08-04 Thread Eric Burton
The Chinese Room concept became more palatable to me when I started putting the emphasis on nese and not on room. /Chinese/ Room, not Chinese /Room/. I don't know why this is. I think it changes the implied meaning from a room where Chinese happens to be spoken, to a room for the

Re: [agi] Groundless reasoning -- Chinese Room

2008-08-05 Thread Eric Burton
Honestly I never liked the Chinese Room either. I was reading this sci-fi novella online one time about Europa I think, or it could have been a Vinge novel, anyway some Chinese Rooms showed up in that and I got mad as hell. On 8/5/08, Terren Suydam [EMAIL PROTECTED] wrote: Neither of those

Re: [agi] Groundless reasoning -- Chinese Room

2008-08-07 Thread Eric Burton
Seriously, I'm only venturing a personal opinion but I've never even especially cared for Chinese Rooms. On 8/7/08, Valentina Poletti [EMAIL PROTECTED] wrote: yep.. isn't it amazing how long a thread is becoming based on an experiment that has no significance? On 8/6/08, Steve Richfield

Re: [agi] The Necessity of Embodiment

2008-08-09 Thread Eric Burton
But, I suggest, if you examine them, these are all actually humanoid - clear adaptations of human intelligence. Nothing wrong with that. It's just that AGI-ers often *talk* as if they are developing, or could develop, a truly non-human intelligence - a brain that could think in *fundamentally*

Re: [agi] The Necessity of Embodiment

2008-08-09 Thread Eric Burton
Yes. An electronic mind need never forget important facts. It'd enjoy instant recall and on-demand instantaneous binary-precision arithmetic and all the other upshots of the substrate. On the other hand it couldn't take, say, morphine!

Re: [agi] The Necessity of Embodiment

2008-08-09 Thread Eric Burton
It's me again: electronic minds of any architecture would also have superior extensibility and open-endedness compared to biological ones. The behaviours embarked on by such a mind could be incomprehensible to the humans its mind was modelled on. I'm sure I'm right about this. On 8/9/08, Eric

Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment)

2008-08-23 Thread Eric Burton
These have profound impacts on AGI design. First, AIXI is (provably) not computable, which means there is no easy shortcut to AGI. Second, universal intelligence is not computable because it requires testing in an infinite number of environments. Since there is no other well accepted test of

Re: [agi] I Made a Mistake

2008-08-23 Thread Eric Burton
Stupid fundamentalist troll garbage On 8/22/08, Jim Bromer [EMAIL PROTECTED] wrote: I just discovered that I made a very obvious blunder on my theory about Logical Satisfiability last November. It was a, what was I thinking, kind of error. No sooner did I discover this error a couple of

Re: [agi] I Made a Mistake

2008-08-23 Thread Eric Burton
of thing that got people wondering a month ago whether moderation is necessary on this list. If we're all adults, moderation shouldn't be necessary. Jim, do us all a favor and don't respond to that, as tempting as it may be. Terren --- On Sat, 8/23/08, Eric Burton [EMAIL PROTECTED] wrote: Stupid

Re: [agi] The Necessity of Embodiment

2008-08-23 Thread Eric Burton
I kind of feel this way too. It should be easy to get neural nets embedded in VR to achieve the intelligence of say magpies, or finches. But the same approaches you might use, top-down ones, may not scale to human level. Given a 100x increase in workstation capacity I don't see why we can't start

Re: [agi] The Necessity of Embodiment

2008-08-25 Thread Eric Burton
Is friendliness really so context-dependent? Do you have to be human to act friendly, to the exclusion of acting busy, greedy, angry, etc.? I think friendliness is a trait we project onto things pretty readily, implying it's wired at some fundamental level. It comes from the social circuits, it's

Re: Goedel machines (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment))

2008-08-27 Thread Eric Burton
: [agi] The Necessity of Embodiment) Matt, What is your opinion on Goedel machines? http://www.idsia.ch/~juergen/goedelmachine.html --Abram On Sun, Aug 24, 2008 at 5:46 PM, Matt Mahoney [EMAIL PROTECTED] wrote: Eric Burton [EMAIL PROTECTED] wrote: These have profound impacts on AGI design

Re: Goedel machines (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment))

2008-08-27 Thread Eric Burton
physics so that we can use all their technology immediately. This kind of short-circuits the grounding problem with for instance automated research and is I think a really compelling vision On 8/27/08, Eric Burton [EMAIL PROTECTED] wrote: I think if an artificial intelligence of length n was able

Re: [agi] How Would You Design a Play Machine?

2008-08-27 Thread Eric Burton
Hi, Err ... I don't have to mention that I didn't stay dead, do I? Good. Was this the archetypal death/rebirth experience found in, for instance, tryptamine ecstasy, or a real-life near-death experience? Eric B

Re: [agi] How Would You Design a Play Machine?

2008-08-28 Thread Eric Burton
Brad, scary stuff. Dissociatives/NMDA inhibitors were secret option number three! ;D On 8/29/08, Jiri Jelinek [EMAIL PROTECTED] wrote: Terren, I don't think any kind of algorithmic approach, which is to say, un-embodied, will ever result in conscious intelligence. But an embodied agent that

Re: RSI (was Re: Goedel machines (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment)))

2008-08-29 Thread Eric Burton
A successful AGI should have n methods of data-mining its experience for knowledge, I think. If it should have n ways of generating those methods or n sets of ways to generate ways of generating those methods etc I don't know. On 8/28/08, j.k. [EMAIL PROTECTED] wrote: On 08/28/2008 04:47 PM, Matt

Re: AGI goals (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment))

2008-08-29 Thread Eric Burton
I remember Richard Dawkins saying that group selection is a lie. Maybe we should look past it now? It seems like a problem. On 8/29/08, Mark Waser [EMAIL PROTECTED] wrote: OK. How about this . . . . Ethics is that behavior that, when shown by you, makes me believe that I should facilitate your

Re: [agi] Preservation of goals in strongly self-modifying AI systems

2008-08-30 Thread Eric Burton
This is a good paper. Would read it again On 8/30/08, Ben Goertzel [EMAIL PROTECTED] wrote: Hi All who interested in such topics and are willing to endure some raw speculative trains of thought, may be interested in an essay I recently posted on goal-preservation in strongly self-modifying

Re: [agi] What is Friendly AI?

2008-08-31 Thread Eric Burton
I totally agree with this guy. I don't want to be accused of going too far myself but I think he's being too conservative. On 8/31/08, Steve Richfield [EMAIL PROTECTED] wrote: Vladimir, At great risk of stepping in where angels fear to tread... This is an IMPORTANT discussion which several

[agi] Rensselaer Shows Up In Google Alert

2008-09-01 Thread Eric Burton
Hey look at this. I was asking about Rensselaer's work just the other day, I think. Second Life has a special guest: artificial intelligence: USATODAY By Boone USA Today is running an article about an experiment being run on the Second Life online community by researchers at Rensselaer

[agi] Re: Rensselaer Shows Up In Google Alert

2008-09-01 Thread Eric Burton
(Sorry for the quoted text)

[agi] This guy has valid concerns

2008-09-02 Thread Eric Burton
I don't know who this is, but he has some good thought experiments concerning the impact of AI for mobs on online games. http://wowriot.gameriot.com/blogs/Thinking-out-loud/Artificial-intelligence-in-PvE/

Re: [agi] This guy has valid concerns

2008-09-02 Thread Eric Burton
of smart NPC's will require a lot of adaptation of other aspects of game design as well... ben g On Tue, Sep 2, 2008 at 5:29 AM, Eric Burton [EMAIL PROTECTED] wrote: I don't know who this is, but he has some good thought experiments concerning the impact of AI for mobs on online games

Re: [agi] Recursive self-change: some definitions

2008-09-02 Thread Eric Burton
I don't understand how mimicry in specific occurs without some kind of turing-complete GA spawning a huge number of possible paths. I'm thinking of humanoid robots mapping the movements of a human trainer onto their motor cortex. I've certainly heard somewhere that this is one way to do it and I

Re: [agi] Recursive self-change: some definitions

2008-09-02 Thread Eric Burton
I really see a number of algorithmic breakthroughs as necessary for the development of strong general AI but it seems like an imminent event to me regardless. Nonetheless much of what we learn about the brain in the meantime may be nonsense until we fundamentally grok the mind.

[agi] Bootris

2008-09-07 Thread Eric Burton
--- snip --- [1220390007] receive [EMAIL PROTECTED] bootris, invoke mathematica [1220390013] told #love cool hand luke is like a comic heroic jesus [1220390034] receive [EMAIL PROTECTED] bootris, solve russell's paradox [1220390035] told #love invoke mathematica [1220390066] receive

[agi] Re: Bootris

2008-09-07 Thread Eric Burton
and watching the moods of people around it to assess its success and modify its behaviour, it ought to be able to pass as human without having most of the internal processes that characterize one... I don't know if there's a lesson here. Eric B On 9/7/08, Eric Burton [EMAIL PROTECTED] wrote: --- snip

[agi] Re: Bootris

2008-09-07 Thread Eric Burton
, but it really was not finished. Ok, that's all. On 9/7/08, Eric Burton [EMAIL PROTECTED] wrote: One thing I think is kind of notable is that the bot puts everything it says, including phrases that are invented or mutated, into a personality database or list of possible favourite phrases, then takes six

[agi] Re: Bootris

2008-09-07 Thread Eric Burton
(see: irc.racrew.us) On 9/7/08, Eric Burton [EMAIL PROTECTED] wrote: Oh, thanks for helping me get this off my chest, everyone. If I ever finish the thing I'm definitely going to freshmeat it. I think this kind of bot, which is really quite trainable, and creative to boot -- it falls back

Re: [agi] Does prior knowledge/learning cause GAs to converge too fast on sub-optimal solutions?

2008-09-07 Thread Eric Burton
I'd just keep a long list of high scorers for regression and occasionally reset the high score to zero. You can add random specimens to the population as well... On 9/7/08, Benjamin Johnston [EMAIL PROTECTED] wrote: Hi, I have a general question for those (such as Novamente) working on AGI
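
A loose Python reconstruction of the two tricks described above: keep a long archive of high scorers for regression checks, occasionally prune it, and inject random specimens ("immigrants") to keep the population diverse. The genome format, rates, and fitness interface are placeholder assumptions, not anything from the thread.

    import random

    POP_SIZE, GENOME_LEN = 100, 32

    def random_genome():
        return [random.uniform(-1, 1) for _ in range(GENOME_LEN)]

    def mutate(g, rate=0.05):
        return [x + random.gauss(0, 0.1) if random.random() < rate else x for x in g]

    def evolve(fitness, generations=500):
        population = [random_genome() for _ in range(POP_SIZE)]
        high_scorers = []                          # long list kept for regression
        for gen in range(generations):
            ranked = sorted(population, key=fitness, reverse=True)
            high_scorers.append((fitness(ranked[0]), ranked[0]))
            if gen % 100 == 99:                    # occasionally reset the scoreboard
                high_scorers = high_scorers[-10:]
            parents = ranked[:POP_SIZE // 2]
            children = [mutate(random.choice(parents))
                        for _ in range(POP_SIZE - len(parents) - 5)]
            immigrants = [random_genome() for _ in range(5)]   # random specimens
            population = parents + children + immigrants
        return high_scorers

Calling evolve(lambda g: -sum(x * x for x in g)) drives genomes toward the zero vector, which is enough to sanity-check the loop.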

Re: [agi] Will AGI Be Stillborn?

2008-09-08 Thread Eric Burton
I've reflected that superintelligence could emerge through genetic or pharmaceutical options before cybernetic ones, maybe by necessity. I am really rooting for cybernetic enlightenment to guide our use of the other two, though. On 9/8/08, Brad Paulsen [EMAIL PROTECTED] wrote: From the

Re: [agi] Does prior knowledge/learning cause GAs to converge too fast on sub-optimal solutions?

2008-09-08 Thread Eric Burton
You can implement a new workaround to bootstrap your organisms past each local maximum, like catalyzing the transition from water to land over and over. I find this leads to cheats that narrow the search in unpredictable ways, though. This problem comes up again and again. Maybe some kind of

Re: [agi] Artificial humor

2008-09-10 Thread Eric Burton
Couldn't one use fine-grained collision detection in something like OpenSim to feed tactile information into a neural net via a simulated nervous system? The extent to which a simulated organism 'actually feels' is certainly a point on a scale or a spectrum, just as it would appear to be with
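
A sketch of the idea with an invented event format, since OpenSim's actual collision callbacks may look nothing like this: fold each frame's collision events into a fixed-length vector that a neural net can take as tactile input.

    import math

    N_REGIONS = 64   # hypothetical number of skin regions on the simulated body

    def tactile_vector(collisions):
        # `collisions` is assumed to be (region_index, force) pairs for one frame;
        # a placeholder format, not OpenSim's real collision API
        v = [0.0] * N_REGIONS
        for region, force in collisions:
            v[region] += force              # accumulate pressure per region
        return [math.tanh(x) for x in v]    # squash into a net-friendly range

    print(tactile_vector([(3, 0.7), (3, 0.4), (40, 2.0)])[3])   # ~0.80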

Re: [agi] Artificial humor

2008-09-10 Thread Eric Burton
I've seen humour modelled as a form of mental dissonance, when an expectation is defied, especially a grave one. It may arise, then, as a higher-order recognition of bizarreness in the overall state of the mind at that point. Humour seems to me to be somehow fundamental to intelligence, rather

Re: [agi] Artificial humor

2008-09-10 Thread Eric Burton
for biochemical states revealed intellectually as inappropriate? A deep subject! On 9/10/08, Eric Burton [EMAIL PROTECTED] wrote: I've seen humour modelled as a form of mental dissonance, when an expectation is defied, especially a grave one. It may arise, then, as a higher-order recognition

Re: The brain does not implement formal logic (was Re: [agi] Where the Future of AGI Lies)

2008-09-19 Thread Eric Burton
I think the whole idea of a semantic layer is to provide the kind of mechanism for abstract reasoning that evolution seems to have built into the human brain. You could argue that those faculties are acquired during one's life, using only a weighted neural net (brain), but it seems reasonable to

Re: The brain does not implement formal logic (was Re: [agi] Where the Future of AGI Lies)

2008-09-21 Thread Eric Burton
Hmm. My bot mostly repeats what it hears. bot Monie: haha. r u a bot ? bot cyberbrain: not to mention that in a theory complex enough with a large enough number of parameters. one can interpret anything. even things that are completely physically inconsistent with each other. i suggest actually

Re: The brain does not implement formal logic (was Re: [agi] Where the Future of AGI Lies)

2008-09-21 Thread Eric Burton
Ok, most of its replies here seem to be based on the first word of what it's replying to. But it's really capable of more lateral connections. wijnand yeah i use it to add shortcuts for some menu functions i use a lot bot wijnand: TOMACCO!!! On 9/21/08, Eric Burton [EMAIL PROTECTED] wrote: Hmm
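
The first-word strategy is simple enough to sketch. This toy version (a reconstruction for illustration, not the bot's actual code) indexes learned replies by the first word of the utterance they answered and falls back to repetition:

    import random
    from collections import defaultdict

    class FirstWordBot:
        def __init__(self):
            self.replies = defaultdict(list)

        @staticmethod
        def key(text):
            words = text.split()
            return words[0].lower().strip(",.!?") if words else ""

        def learn(self, heard, reply):
            self.replies[self.key(heard)].append(reply)

        def respond(self, heard):
            options = self.replies[self.key(heard)]
            return random.choice(options) if options else heard   # else repeat

    bot = FirstWordBot()
    bot.learn("yeah i use it to add shortcuts", "TOMACCO!!!")
    print(bot.respond("yeah, that works"))   # -> TOMACCO!!!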

Re: [agi] Re: AGI for a quadrillion

2008-09-22 Thread Eric Burton
Note that $1 quadrillion is only a few orders of magnitude more expensive than the war in Iraq. It could well be less than the price of bringing democracy to every podunk and backwater on the globe! On 9/21/08, Matt Mahoney [EMAIL PROTECTED] wrote: -- Matt Mahoney, [EMAIL PROTECTED] --- On

[agi] Perceptrons Movie

2008-09-22 Thread Eric Burton
CREATIVITY MACHINE http://video.google.ca/videoplay?docid=4007105149032380914&ei=PvTXSJONKI_8-gHFhOi-Ag&q=artificial+life&vt=lf

Re: [agi] Perceptrons Movie

2008-09-22 Thread Eric Burton
Creativity machine: http://www.imagination-engines.com/cm.htm Six layers, though? Perhaps the result is magic! On 9/22/08, Vladimir Nesov [EMAIL PROTECTED] wrote: On Mon, Sep 22, 2008 at 11:40 PM, Eric Burton [EMAIL PROTECTED] wrote: CREATIVITY MACHINE http://video.google.ca/videoplay?docid

Re: [agi] Perceptrons Movie

2008-09-22 Thread Eric Burton
Results such as these are exactly why the Creativity Machine has been heralded by top NASA officials as AI's best bet and the primary tool for building the AI predicted by Kurzweil and others in 30 years, now! :O On 9/22/08, Eric Burton [EMAIL PROTECTED] wrote: Creativity machine: http

Re: [agi] Perceptrons Movie

2008-09-22 Thread Eric Burton
-Original Message- From: Eric Burton [mailto:[EMAIL PROTECTED] Sent: Monday, September 22, 2008 3:40 PM To: agi@v2.listbox.com Subject: [agi] Perceptrons Movie CREATIVITY MACHINE http://video.google.ca/videoplay?docid=4007105149032380914&ei=PvTXSJONKI_8-gHFhOi-Ag&q=artificial+life&vt=lf

Re: [agi] Perceptrons Movie

2008-09-22 Thread Eric Burton
Supreme On 9/22/08, Trent Waddington [EMAIL PROTECTED] wrote: On Tue, Sep 23, 2008 at 7:57 AM, Eric Burton [EMAIL PROTECTED] wrote: Are Geoffrey Hinton's neural nets available as a library somewhere? I'd like to try them myself if possible. What I'm doing now closely approximates character

Re: [agi] Cost estimation methods, was AGI for a quadrillion

2008-09-23 Thread Eric Burton
I am trying to recover an old GMail account right now, and I agree. On 9/23/08, Matt Mahoney [EMAIL PROTECTED] wrote: --- On Mon, 9/22/08, Steve Richfield [EMAIL PROTECTED] wrote: My proposal: Much like the College of Science at nearly all universities was subsequently chopped up into

Re: [agi] NARS vs. PLN [Was: NARS probability]

2008-09-24 Thread Eric Burton
OK, we're done with AGI, time to move on to discussion of psychic powers 8-D Lacking as they do a pineal gland, how could AI have such an ability? Perhaps an uplink to HAARP, or some kind of a magnetron? On 9/24/08, Ben Goertzel [EMAIL PROTECTED] wrote: If we have Ben == AGI-author s1 Dude

Re: [agi] NARS vs. PLN [Was: NARS probability]

2008-09-24 Thread Eric Burton
An REG might work as a psychic receiver, closing the loop. But do we want it to have that power? On 9/24/08, Abram Demski [EMAIL PROTECTED] wrote: Use a REG unit: http://noosphere.princeton.edu/reg.html :) On Wed, Sep 24, 2008 at 2:33 PM, Eric Burton [EMAIL PROTECTED] wrote: OK, we're

Re: [agi] universal logical form for natural language

2008-09-27 Thread Eric Burton
The purpose of YKY's invocation of Helen Keller is interestingly at odds with the usage that appears in the Jargon File. Helen Keller mode /n./ 1. State of a hardware or software system that is deaf, dumb, and blind, i.e., accepting no input and generating no output, usually due to an infinite

Re: [agi] universal logical form for natural language

2008-09-28 Thread Eric Burton
Having a vision-assisted training process would be extremely compelling. Then the user can provide information relevant to comprehending a scene as well as adding word/object associations. Robust sight and sound processing are still kind of a frontier for software, I think. A little good work in

Re: [agi] universal logical form for natural language

2008-09-29 Thread Eric Burton
Thanks! Fascinating On 9/29/08, Lukasz Stafiniak [EMAIL PROTECTED] wrote: On Mon, Sep 29, 2008 at 11:33 PM, Eric Burton [EMAIL PROTECTED] wrote: It uses something called MontyLingua. Does anyone know anything about this? There's a site at http://web.media.mit.edu/~hugo/montylingua

Re: [agi] universal logical form for natural language

2008-09-29 Thread Eric Burton
http://video.google.ca/videoplay?docid=-7933698775159827395&ei=Z1rhSJz7CIvw-QHQyNkC&q=nltk&vt=lf NLTK video ;O On 9/29/08, Mike Tintner [EMAIL PROTECTED] wrote: David, Thanks for reply. Like so many other things, though, working out how we understand texts is central to understanding GI - and

Re: [agi] universal logical form for natural language

2008-09-29 Thread Eric Burton
Extracting meaning from text requires context-sensitivity to do correctly. Natural language parsers necessarily don't reason about things. An AGI whose natural-language interface was abstracted via some good parser could make suppositions about the constructs it returned by interpreting them
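
The division of labour is easy to see with NLTK (the toolkit in the video linked above): the parser hands back structure, and anything that counts as a supposition about that structure has to live in the layer above it. A minimal example, assuming the standard NLTK models have been downloaded:

    import nltk
    # one-time setup: nltk.download('punkt'); nltk.download('averaged_perceptron_tagger')

    sentence = "The robot moved the red block onto the table"
    parsed = nltk.pos_tag(nltk.word_tokenize(sentence))
    print(parsed)
    # [('The', 'DT'), ('robot', 'NN'), ('moved', 'VBD'), ('the', 'DT'),
    #  ('red', 'JJ'), ('block', 'NN'), ...] -- tags, but no reasoning; deciding
    # what 'moved' implies about the block is the AGI layer's job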

Re: [agi] universal logical form for natural language

2008-09-29 Thread Eric Burton
*in an ,_, On 9/29/08, Eric Burton [EMAIL PROTECTED] wrote: Extracting meaning from text requires context-sensitivity to do correctly. Natural language parsers necessarily don't reason about things. An AGI whose natural-language interface was abstracted via some good parser could make

Re: [agi] I Can't Be In Two Places At Once.

2008-10-05 Thread Eric Burton
Well, for the purpose of creating the first human-level AGI, it seems important **to** wire in humanlike bias about space and time ... this will greatly ease the task of teaching the system to use our language and communicate with us effectively... The same thing occurred to me while browsing

Re: [agi] Super-Human friendly AGI

2008-10-05 Thread Eric Burton
Really, really comprehensive. What stage would you say your work is at today? On 10/5/08, John LaMuth [EMAIL PROTECTED] wrote: Greetings All I am pleased to announce the upcoming expanded edition of my original reference work A Diagnostic Classification of the Emotions of which I serve as

Re: AW: [agi] I Can't Be In Two Places At Once.

2008-10-12 Thread Eric Burton
I think it's normal for tempers to flare during a depression. This kind of technology really pays for itself. The only thing that matters is the code Eric B On 10/12/08, Ben Goertzel [EMAIL PROTECTED] wrote: No idea, Mentifex ... I haven't filtered out any of your messages (or anyone's) ...

[agi] Hugh Loebner talks AI

2008-10-12 Thread Eric Burton
Hugh Loebner talks AI http://developers.slashdot.org/article.pl?sid=08/10/11/2137200 I may have written my signature twice on the OpenCog list, earlier today. I'm going to try to not do that. Otherwise I have nothing to report

Re: [agi] Updated AGI proposal (CMR v2.1)

2008-10-13 Thread Eric Burton
On Mon, Oct 13, 2008 at 1:29 PM, Matt Mahoney [EMAIL PROTECTED] wrote: That's odd. Maybe you should run Windows :-( No. You should not run Windows

Re: [agi] Updated AGI proposal (CMR v2.1)

2008-10-14 Thread Eric Burton
An AI that is twice as smart as a human can make no more progress than 2 humans. Actually I'll argue that we can't make predictions about what a greater-than-human intelligence would do. Maybe the summed intelligence of 2 humans would be sufficient to do the work of a dozen. Maybe

Re: [agi] Advocacy Is no Excuse for Exaggeration

2008-10-15 Thread Eric Burton
I suppose it's a bit ambiguous. There's computer modelling of mind, and then there's the implementation of an actual mind using actual computation, then there's the implementation of a brain using computation, in which a mind may be said to be operating. All sorts of misdirection. I think IBM is

Re: [agi] Advocacy Is no Excuse for Exaggeration

2008-10-15 Thread Eric Burton
On Wed, Oct 15, 2008 at 6:26 AM, Colin Hales [EMAIL PROTECTED] wrote: Hi, I am aware of 'blue brain'. It, and the distributed processor in the other link are still COMP and therefore subject to all the arguments I have been making, and therefore not on the path to real AGI. It's interesting

Re: [agi] META: A possible re-focusing of this list

2008-10-15 Thread Eric Burton
I also agree with Vladimir, mailing list format is more convenient and more fun. On 10/15/08, Eric Burton [EMAIL PROTECTED] wrote: I also agree the list should focus on specific approaches and not on hifalutin denials of achievability. I don't know why non-human, specifically electronic

Re: [agi] META: A possible re-focusing of this list

2008-10-15 Thread Eric Burton
I also agree the list should focus on specific approaches and not on hifalutin denials of achievability. I don't know why non-human, specifically electronic intelligence is such a hot button issue for some folks. It's like they'd be happier if it never happened. But why? On 10/15/08, Terren

Re: [agi] META: A possible re-focusing of this list

2008-10-15 Thread Eric Burton
would compel you to use such a tone? For my part I'd like to see fewer trolls fed, and more bugs squished On 10/15/08, Eric Burton [EMAIL PROTECTED] wrote: I also agree with Vladimir, mailing list format is more convenient and more fun. On 10/15/08, Eric Burton [EMAIL PROTECTED] wrote: I also

Re: COMP = false? (was Re: [agi] Advocacy Is no Excuse for Exaggeration)

2008-10-15 Thread Eric Burton
but I don't want to discuss the details about the algorithms until I have gotten a chance to see if they work or not, Hearing this makes my teeth gnash. GO AND IMPLEMENT THEM. THEN TELL US On 10/15/08, Colin Hales [EMAIL PROTECTED] wrote: David Hart wrote: On Wed, Oct 15, 2008 at 5:52 PM,

Re: [agi] META: A possible re-focusing of this list

2008-10-15 Thread Eric Burton
One day the process of discovery will be automated, and all we'll have to deal with will be graphs and charts and other abstract representations of aggregated data, not reams and reams of undigested text. Until that point I guess it's wise to do whatever you can. I for one welcome our

Re: [agi] META: A possible re-focusing of this list

2008-10-15 Thread Eric Burton
Honestly, if the idea is to wave our hands at one another's ideas then let's at least see something on the table. I'm happy to discuss my work with natural language parsing and mood evaluation for low-bandwidth human mimicry, for instance, because it has amounted to thousands of lines of

Re: [agi] Who is smart enough to answer this question?

2008-10-15 Thread Eric Burton
Is anybody on this list smart and/or knowledgeable enough to come up with a formula for the following (I am not): I don't think I'm the person to answer this for you. But I do have some insights. Given N neural net nodes, what is the number A of unique node assemblies (i.e., separate subsets of
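
One reading of the question has a closed-form answer. If an assembly is any subset of at least two nodes (the message is cut off here, so this may not be the intended definition), then A = 2^N - N - 1: all 2^N subsets minus the empty set and the N singletons. A quick Python check:

    from math import comb

    def assemblies(n, min_size=2):
        # subsets of n nodes with at least min_size members
        return 2**n - sum(comb(n, k) for k in range(min_size))

    print(assemblies(10))   # 1013 = 2**10 - 1 (empty set) - 10 (singletons)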

Re: [agi] mailing-list / forum software

2008-10-15 Thread Eric Burton
Is the agiri.org forum a PHPBB? That's what mail2forum is for. It doesn't seem worthwhile reimplementing both the forum and the list if not. It looks kind of like ultimatebb to me ._. On 10/15/08, Ben Goertzel [EMAIL PROTECTED] wrote: This widget seems to integrate mailing lists and forums in

Re: [agi] Who is smart enough to answer this question?

2008-10-15 Thread Eric Burton
Even if he wants fault tolerance (from cell damage) through redundancy? Why model neuron attrition? These kinds of calculations are normally done in production mode, that is, within computing setups not prone to component failure. Maybe you're thinking of neural nets that map onto a large number

Re: [agi] Advocacy Is no Excuse for Consciousness

2008-10-17 Thread Eric Burton
Good to see someone is still up. Can you link your paper again? I can't find the URL. Eric B

[agi] Will Wright's Five Artificial Intelligence Prophecies

2008-10-18 Thread Eric Burton
http://www.popularmechanics.com/technology/industry/4287680.html?series=60 :O

[agi] Re: Will Wright's Five Artificial Intelligence Prophecies

2008-10-18 Thread Eric Burton
It looks like if Sim City isn't a lie then machines -will- bootstrap themselves to sentience but -will not- reach human intelligence. I'm not too sure what this means. Maybe that we'll never see a faithful duplication of a characteristically human distribution of abilities in a machine. But I

Re: AW: [agi] Re: Defining AGI

2008-10-19 Thread Eric Burton
I've been on some message boards where people only ever came back with a formula or a correction. I didn't contribute a great deal but it is a sight for sore eyes. We could have an agi-tech and an agi-philo list and maybe they'd merit further recombination (more lists) after that.

Re: AW: [agi] Re: Defining AGI

2008-10-19 Thread Eric Burton
I've been thinking. agi-phil might suffice. Although it isn't as explicit. On Sun, Oct 19, 2008 at 6:52 PM, Eric Burton [EMAIL PROTECTED] wrote: I've been on some message boards where people only ever came back with a formula or a correction. I didn't contribute a great deal but it is a sight

Re: AW: AW: [agi] Re: Defining AGI

2008-10-19 Thread Eric Burton
No, surely this is mostly outside the purview of the AGI list. I'm reading some of this material and not getting a lot out of it. There are channels on freenode for this stuff. But we have got to agree on something if we are going to do anything. Can animals do science? They cannot.

Re: AW: AW: [agi] Re: Defining AGI

2008-10-20 Thread Eric Burton
Ben Goertzel says that there is no true defined method to the scientific method (and Mark Waser is clueless for thinking that there is). This is pretty profound. I never saw Ben Goertzel abolish the scientific method. I think he explained that its implementation is intractable, with reference

Re: AW: AW: [agi] Re: Defining AGI

2008-10-20 Thread Eric Burton
You and MW are clearly as philosophically ignorant, as I am in AI. But MW and I have not agreed on anything. Hence the wiki entry on scientific method: Scientific method is not a recipe: it requires intelligence, imagination, and creativity http://en.wikipedia.org/wiki/Scientific_method This

Re: AW: AW: [agi] Re: Defining AGI

2008-10-20 Thread Eric Burton
I could have conveyed the nuances of the argument better as I understood them. s/as I/inasmuch as I/ ,_,

Re: AW: AW: [agi] Re: Defining AGI

2008-10-21 Thread Eric Burton
My apologies if I've misconstrued you. Regardless of any fault, the basic point was/is important. Even if a considerable percentage of science's conclusions are v. hard, there is no definitive scientific method for reaching them . I think I understand.

Re: AW: AW: [agi] Re: Defining AGI

2008-10-21 Thread Eric Burton
I think I see what's on the table here. Does all this mean a Bayes net, properly motivated, could be capable of performing scientific inquiry? Maybe in combination with a GA that tunes itself to maximize adaptive mutations in the input based on scores from the net, which seeks superior product

Re: AW: AW: [agi] Re: Defining AGI

2008-10-21 Thread Eric Burton
Post #101 :V Somehow this hit the wrong thread :|

Re: AW: AW: [agi] Re: Defining AGI

2008-10-21 Thread Eric Burton
Post #101 :V

Re: [agi] constructivist issues

2008-10-24 Thread Eric Burton
Forget consensus! I don't even see a discussion forming. This is all quite long and impenetrable. What have we learned here? If possible I want to catch up. Eric B

Re: [agi] On programming languages

2008-10-24 Thread Eric Burton
Due to its characteristic paucity of datatypes (all of them powerful) and a terse, readable syntax, I usually recommend Python for any project that is just out of the gate. It's my favourite way by far at present to mangle huge tables. By far!

Re: [agi] On programming languages

2008-10-24 Thread Eric Burton
Just a great way to deal with data. I'm barely into list comprehension yet and I still usually can't believe what I can squirt through a single line of Python code. Just big big transforms that would be whole blocks in most languages. In many instances it's v. handy
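
For instance (illustrative data, not from the thread), a filter-transform-restructure that would take a loop and several temporaries in C fits in one comprehension:

    rows = [("alice", 3, 9.5), ("bob", 7, 2.1), ("carol", 7, 8.8)]
    best = [(name, round(score * 2, 1)) for name, group, score in rows
            if group == 7 and score > 5]
    print(best)   # [('carol', 17.6)]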

Re: [agi] constructivist issues

2008-10-24 Thread Eric Burton
I know I've expressed frustration with this thread in the past. But I don't want to discourage its development. If someone wants to hit me with a summary off-list maybe I can contribute something _

Re: [agi] On programming languages

2008-10-25 Thread Eric Burton
I'll even go so far as to use myself as an example. I can easily do C++ (since I've done so in the past) but all the baggage around it makes me consider it not worth my while. I certainly won't hesitate to use what is learned on that architecture but I'll be totally shocked if you aren't

Re: [agi] On programming languages

2008-10-25 Thread Eric Burton
MW, mine was an editorial reply to what struck me as a superficial pronouncement on a subject not amenable to treatment so cursory. But I like it less now, and I apologize. Eric B

Re: AIXI (was Re: [agi] If your AGI can't learn to play chess it is no AGI)

2008-10-26 Thread Eric Burton
Cause is a time-bound notion. These processes work both ways in time -- does a virus cause a disease? Or is the existence of a host a more significant factor?

Re: AIXI (was Re: [agi] If your AGI can't learn to play chess it is no AGI)

2008-10-26 Thread Eric Burton
(Note, I also am unfamiliar with the absence of formal causation from rigorous scientific fields. So I guessed)

Re: [agi] the universe is computable ..PS

2008-10-30 Thread Eric Burton
I actually emailed a gentleman at Sandia one time asking why they don't use their molecular dynamics setup to extrapolate novel instances and classes of high-temperature superconductor etc. What I came away with is you really want to be simulating sub-molecular interactions in order to extrapolate

Re: [agi] General musings on AI, humans and empathy...

2008-11-08 Thread Eric Burton
This was a good read, bgoertzel! Blog well done :D On 11/8/08, Ben Goertzel [EMAIL PROTECTED] wrote: http://multiverseaccordingtoben.blogspot.com/2008/11/in-search-of-machines-of-loving-grace.html -- Ben Goertzel, PhD CEO, Novamente LLC and Biomind LLC Director of Research, SIAI [EMAIL

Re: [agi] Chaogate chips: Yum!

2008-11-14 Thread Eric Burton
HATED IT On 11/14/08, Olie Lamb [EMAIL PROTECTED] wrote: Mmmm... Chaoglate-chip cookie processing! On 11/6/08, Richard Loosemore [EMAIL PROTECTED] wrote: A report about research to build chaotic logic: http://technology.newscientist.com/article/mg20026801.800

Re: FW: [agi] A paper that actually does solve the problem of consciousness--correction

2008-11-17 Thread Eric Burton
There are procedures in place for experimenting on humans. And the biologies of people and animals are orthogonal! Much of this will be simulated soon On 11/17/08, Trent Waddington [EMAIL PROTECTED] wrote: On Tue, Nov 18, 2008 at 7:44 AM, Matt Mahoney [EMAIL PROTECTED] wrote: I mean that
