[agi] Re: [agi] P≠NP

2010-08-12 Thread Kaj Sotala
2010/8/12 John G. Rose johnr...@polyplexic.com: BTW here is the latest one: http://www.win.tue.nl/~gwoegi/P-versus-NP/Deolalikar.pdf See also http://www.ugcs.caltech.edu/~stansife/pnp.html for a brief summary of the proof. Discussion about whether it's correct:

Re: [agi] [Science Daily] Our Unconscious Brain Makes The Best Decisions Possible

2008-12-29 Thread Kaj Sotala
On Mon, Dec 29, 2008 at 10:15 PM, Lukasz Stafiniak lukst...@gmail.com wrote: http://www.sciencedaily.com/releases/2008/12/081224215542.htm Nothing surprising ;-) So they have a result saying that we're good at subconsciously estimating the direction in which dots on a screen are moving.

[agi] Creativity and Rationality (was: Re: Should I get a PhD?)

2008-12-19 Thread Kaj Sotala
On Fri, Dec 19, 2008 at 1:47 AM, Mike Tintner tint...@blueyonder.co.uk wrote: Ben, I radically disagree. Human intelligence involves both creativity and rationality, certainly. But rationality - and the rational systems of logic/maths and formal languages, [on which current AGI depends] -

Re: [agi] Remembering Caught in the Act

2008-09-05 Thread Kaj Sotala
On Fri, Sep 5, 2008 at 11:21 AM, Brad Paulsen [EMAIL PROTECTED] wrote: http://www.nytimes.com/2008/09/05/science/05brain.html?_r=3&partner=rssnyt&emc=rss&oref=slogin See http://www.sciencemag.org/cgi/content/short/1164685 for the original study.

[agi] Coin-flipping duplicates (was: Breaking Solomonoff induction (really))

2008-06-23 Thread Kaj Sotala
On 6/23/08, Matt Mahoney [EMAIL PROTECTED] wrote: --- On Sun, 6/22/08, Kaj Sotala [EMAIL PROTECTED] wrote: On 6/21/08, Matt Mahoney [EMAIL PROTECTED] wrote: Eliezer asked a similar question on SL4. If an agent flips a fair quantum coin and is copied 10 times if it comes up heads
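
For anyone who hasn't read the SL4 thread: the anthropic-principle answer usually given is that, counting observer-copies after the flip, ten of the eleven possible successors remember heads, so the subjective probability comes out as 10/11 rather than the naive 1/2. Whether counting copies this way is legitimate is exactly what is in dispute. Here is a minimal Monte Carlo sketch of that counting argument in Python; the copy count of 10 comes from the quote above, everything else is illustrative.

    import random

    def heads_fraction(trials=100000, copies_if_heads=10):
        """Fraction of post-flip observer-copies who remember heads."""
        seeing_heads = 0
        total = 0
        for _ in range(trials):
            heads = random.random() < 0.5        # fair quantum coin
            n = copies_if_heads if heads else 1  # copied only on heads
            total += n
            if heads:
                seeing_heads += n
        return seeing_heads / total

    print(heads_fraction())  # ~0.909, i.e. 10/11
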

Re: [agi] Breaking Solomonoff induction (really)

2008-06-22 Thread Kaj Sotala
On 6/21/08, Matt Mahoney [EMAIL PROTECTED] wrote: Eliezer asked a similar question on SL4. If an agent flips a fair quantum coin and is copied 10 times if it comes up heads, what should be the agent's subjective probability that the coin will come up heads? By the anthropic principle, it

Re: [agi] Goal Driven Systems and AI Dangers [WAS Re: Singularity Outcomes...]

2008-05-07 Thread Kaj Sotala
On 5/7/08, Steve Richfield [EMAIL PROTECTED] wrote: Story: I recently attended an SGI Buddhist meeting with a friend who was a member there. After listening to their discussions, I asked if there was anyone there (from ~30 people) who had ever found themselves in a position of having to

Re: [agi] Goal Driven Systems and AI Dangers [WAS Re: Singularity Outcomes...]

2008-05-07 Thread Kaj Sotala
On 5/7/08, Kaj Sotala [EMAIL PROTECTED] wrote: Certainly a rational AGI may find it useful to appear irrational, but that doesn't change the conclusion that it'll want to think rationally at the bottom, does it? Oh - and see also http://www.saunalahti.fi/~tspro1/reasons.html , especially

Re: [agi] Evaluating Conference Quality [WAS Re: Symbol Grounding ...]

2008-05-07 Thread Kaj Sotala
On 5/7/08, Stefan Pernar [EMAIL PROTECTED] wrote: What follows are wild speculations and grand pie-in-the-sky plans without substance with a letter to investors attached. Oh, come on! Um, people, is this list really the place for fielding personal insults? For what it's worth, my two cents:

Re: [agi] Goal Driven Systems and AI Dangers [WAS Re: Singularity Outcomes...]

2008-05-05 Thread Kaj Sotala
to messages in what seem to be blinks of an eye to me. :-) On 3/11/08, Richard Loosemore [EMAIL PROTECTED] wrote: Kaj Sotala wrote: On 3/3/08, Richard Loosemore [EMAIL PROTECTED] wrote: Kaj Sotala wrote: Alright. But previously, you said that Omohundro's paper, which to me seemed

Re: [agi] Instead of an AGI Textbook

2008-03-26 Thread Kaj Sotala
On 3/26/08, Ben Goertzel [EMAIL PROTECTED] wrote: A lot of students email me asking me what to read to get up to speed on AGI. Ben, while we're on the topic, could you elaborate a bit on what kind of prerequisite knowledge the books you've written/edited require? For instance, I've been

Re: [agi] Goal Driven Systems and AI Dangers [WAS Re: Singularity Outcomes...]

2008-03-11 Thread Kaj Sotala
On 3/3/08, Richard Loosemore [EMAIL PROTECTED] wrote: Kaj Sotala wrote: Alright. But previously, you said that Omohundro's paper, which to me seemed to be a general analysis of the behavior of *any* minds with (more or less) explicit goals, looked like it was based on a 'goal-stack

Re: [agi] Goal Driven Systems and AI Dangers [WAS Re: Singularity Outcomes...]

2008-03-02 Thread Kaj Sotala
On 2/16/08, Richard Loosemore [EMAIL PROTECTED] wrote: Kaj Sotala wrote: Well, the basic gist was this: you say that AGIs can't be constructed with built-in goals, because a newborn AGI hasn't yet built up the concepts needed to represent the goal. Yet humans seem to tend to have

Re: [agi] Goal Driven Systems and AI Dangers [WAS Re: Singularity Outcomes...]

2008-02-15 Thread Kaj Sotala
Gah, sorry for the awfully late response. Studies aren't leaving me the energy to respond to e-mails more often than once in a blue moon... On Feb 4, 2008 8:49 PM, Richard Loosemore [EMAIL PROTECTED] wrote: They would not operate at the proposition level, so whatever difficulties they have,

Re: [agi] Goal Driven Systems and AI Dangers [WAS Re: Singularity Outcomes...]

2008-02-03 Thread Kaj Sotala
On 1/30/08, Richard Loosemore [EMAIL PROTECTED] wrote: Kaj, [This is just a preliminary answer: I am composing a full essay now, which will appear in my blog. This is such a complex debate that it needs to be unpacked in a lot more detail than is possible here. Richard]. Richard,

Re: [agi] Goal Driven Systems and AI Dangers [WAS Re: Singularity Outcomes...]

2008-01-30 Thread Kaj Sotala
On Jan 29, 2008 6:52 PM, Richard Loosemore [EMAIL PROTECTED] wrote: Okay, sorry to hit you with incomprehensible technical detail, but maybe there is a chance that my garbled version of the real picture will strike a chord. The message to take home from all of this is that: 1) There are

Re: [agi] Goal Driven Systems and AI Dangers [WAS Re: Singularity Outcomes...]

2008-01-29 Thread Kaj Sotala
On 1/29/08, Richard Loosemore [EMAIL PROTECTED] wrote: Summary of the difference: 1) I am not even convinced that an AI driven by a GS will ever actually become generally intelligent, because of the self-contradictions built into the idea of a goal stack. I am fairly sure that whenever anyone
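
For context, "GS" above is short for "goal stack": an architecture whose motivation is an explicit last-in-first-out stack of goals, with behavior at each moment driven by whatever goal sits on top. Below is a minimal Python sketch of that idea, assuming only what the name implies; the example goal and the decomposition table are illustrative stubs, not anyone's actual design.

    # Minimal goal-stack loop: pop the top goal; if it decomposes,
    # push its subgoals (first subgoal ends up on top), otherwise act.
    def decompose(goal):
        """Illustrative stub; a real system would run a planner here."""
        table = {
            "make humans happy": ["work out what 'happy' means",
                                  "act on that understanding"],
        }
        return table.get(goal, [])

    goal_stack = ["make humans happy"]  # built-in top-level goal
    while goal_stack:
        goal = goal_stack.pop()
        subgoals = decompose(goal)
        if subgoals:
            goal_stack.extend(reversed(subgoals))
        else:
            print("pursuing:", goal)    # primitive goal: act on it directly

The objection quoted earlier in this thread is that a newborn AGI has no concepts with which to represent the built-in goal at the top of the stack in the first place, which is why Loosemore argues the goal-stack picture is self-contradictory as a foundation for general intelligence.
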

Re: Singularity Outcomes [WAS Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-28 Thread Kaj Sotala
On 1/24/08, Richard Loosemore [EMAIL PROTECTED] wrote: Theoretically yes, but behind my comment was a deeper analysis (which I have posted before, I think) according to which it will actually be very difficult for a negative-outcome singularity to occur. I was really trying to make the point

Re: [agi] Ben's Definition of Intelligence

2008-01-12 Thread Kaj Sotala
On 1/12/08, Mike Tintner [EMAIL PROTECTED] wrote: The primary motivation behind the Novamente AI Engine is to build a system that can achieve complex goals in complex environments, a synopsis of the definition of intelligence given in (Goertzel 1993). The emphasis is on the This is not just

Re: [agi] What best evidence for fast AI?

2007-11-10 Thread Kaj Sotala
On 11/10/07, Bryan Bishop [EMAIL PROTECTED] wrote: On Saturday 10 November 2007 09:29, Derek Zahn wrote: On such a chart I think we're supposed to be at something like mouse level right now -- and in fact we have seen supercomputers beginning to take a shot at simulating mouse-brain-like

Re: [agi] What best evidence for fast AI?

2007-11-10 Thread Kaj Sotala
On 11/10/07, Robin Hanson [EMAIL PROTECTED] wrote: skeptical. Specifically, after ten years as an AI researcher, my inclination has been to see progress as very slow toward an explicitly-coded AI, and so to guess that the whole brain emulation approach would succeed first if, as it seems,

Re: [agi] Religion-free technical content

2007-09-30 Thread Kaj Sotala
On 9/29/07, Russell Wallace [EMAIL PROTECTED] wrote: On 9/29/07, Kaj Sotala [EMAIL PROTECTED] wrote: I'd be curious to see these, and I suspect many others would, too. (Even though they're probably from lists I am on, I haven't followed them nearly as actively as I could've.) http

Re: [agi] Religion-free technical content

2007-09-30 Thread Kaj Sotala
On 9/30/07, Don Detrich - PoolDraw [EMAIL PROTECTED] wrote: So, let's look at this from a technical point of view. AGI has the potential to become a very powerful technology, and if misused or out of control it could possibly be dangerous. However, at this point we have little idea of how these

Re: [agi] Religion-free technical content

2007-09-29 Thread Kaj Sotala
On 9/29/07, Russell Wallace [EMAIL PROTECTED] wrote: I've been through the specific arguments at length on lists where they're on topic, let me know if you want me to dig up references. I'd be curious to see these, and I suspect many others would, too. (Even though they're probably from lists I