2010/8/12 John G. Rose johnr...@polyplexic.com
BTW here is the latest one:
http://www.win.tue.nl/~gwoegi/P-versus-NP/Deolalikar.pdf
See also:
http://www.ugcs.caltech.edu/~stansife/pnp.html - a brief summary of the proof
Discussion about whether it's correct:
On Mon, Dec 29, 2008 at 10:15 PM, Lukasz Stafiniak lukst...@gmail.com wrote:
http://www.sciencedaily.com/releases/2008/12/081224215542.htm
Nothing surprising ;-)
So they have a result saying that we're good at subconsciously
estimating the direction in which dots on a screen are moving.
On Fri, Dec 19, 2008 at 1:47 AM, Mike Tintner tint...@blueyonder.co.uk wrote:
Ben,
I radically disagree. Human intelligence involves both creativity and
rationality, certainly. But rationality - and the rational systems of
logic/maths and formal languages, [on which current AGI depends] -
On Fri, Sep 5, 2008 at 11:21 AM, Brad Paulsen [EMAIL PROTECTED] wrote:
http://www.nytimes.com/2008/09/05/science/05brain.html?_r=3&partner=rssnyt&emc=rss&oref=slogin
http://www.sciencemag.org/cgi/content/short/1164685 for the original study.
On 6/23/08, Matt Mahoney [EMAIL PROTECTED] wrote:
--- On Sun, 6/22/08, Kaj Sotala [EMAIL PROTECTED] wrote:
On 6/21/08, Matt Mahoney [EMAIL PROTECTED] wrote:
Eliezer asked a similar question on SL4. If an agent
flips a fair quantum coin and is copied 10 times if it
comes up heads
On 6/21/08, Matt Mahoney [EMAIL PROTECTED] wrote:
Eliezer asked a similar question on SL4. If an agent flips a fair quantum
coin and is copied 10 times if it comes up heads, what should be the agent's
subjective probability that the coin will come up heads? By the anthropic
principle, it
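On the observer-counting reading of the anthropic principle, ten of the eleven observers who exist after the flip see heads, so the answer works out to 10/11. Below is a minimal Monte Carlo sketch of that arithmetic, under the assumption that subjective probability is the fraction of observer-copies who see each outcome; the function and parameter names are illustrative, not from the thread:

    # Illustrative sketch (assumption: subjective probability is the
    # fraction of observer-copies who see each outcome, per the
    # observer-counting reading of the anthropic principle).
    import random

    def observer_weighted_heads(trials=1_000_000, copies_on_heads=10):
        heads_observers = 0
        total_observers = 0
        for _ in range(trials):
            if random.random() < 0.5:       # fair coin lands heads
                heads_observers += copies_on_heads
                total_observers += copies_on_heads
            else:                           # tails: the single original agent
                total_observers += 1
        return heads_observers / total_observers

    print(observer_weighted_heads())  # prints roughly 0.909, i.e. 10/11

The simulation is just bookkeeping: each trial contributes ten heads-observers or one tails-observer, so the ratio converges to 10/11 regardless of the trial count.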
On 5/7/08, Steve Richfield [EMAIL PROTECTED] wrote:
Story: I recently attended an SGI Buddhist meeting with a friend who was a
member there. After listening to their discussions, I asked if there was
anyone there (from ~30 people) who had ever found themselves in a position of
having to
On 5/7/08, Kaj Sotala [EMAIL PROTECTED] wrote:
Certainly a rational AGI may find it useful to appear irrational, but
that doesn't change the conclusion that it'll want to think rationally
at the bottom, does it?
Oh - and see also http://www.saunalahti.fi/~tspro1/reasons.html ,
especially
On 5/7/08, Stefan Pernar [EMAIL PROTECTED] wrote:
What follows are wild speculations and grand pie-in-the-sky plans without
substance, with a letter to investors attached. Oh, come on!
Um, people, is this list really the place for fielding personal insults?
For what it's worth, my two cents:
to messages in what seem to be blinks of an eye to me. :-)
On 3/11/08, Richard Loosemore [EMAIL PROTECTED] wrote:
Kaj Sotala wrote:
On 3/3/08, Richard Loosemore [EMAIL PROTECTED] wrote:
Kaj Sotala wrote:
Alright. But previously, you said that Omohundro's paper, which to me
seemed
On 3/26/08, Ben Goertzel [EMAIL PROTECTED] wrote:
A lot of students email me asking me what to read to get up to speed on AGI.
Ben,
while we're on the topic, could you elaborate a bit on what kind of
prerequisite knowledge the books you've written/edited require? For
instance, I've been
On 3/3/08, Richard Loosemore [EMAIL PROTECTED] wrote:
Kaj Sotala wrote:
Alright. But previously, you said that Omohundro's paper, which to me
seemed to be a general analysis of the behavior of *any* minds with
(more or less) explicit goals, looked like it was based on a
'goal-stack
On 2/16/08, Richard Loosemore [EMAIL PROTECTED] wrote:
Kaj Sotala wrote:
Well, the basic gist was this: you say that AGIs can't be constructed
with built-in goals, because a newborn AGI hasn't yet built up
the concepts needed to represent the goal. Yet humans seem to tend to
have
Gah, sorry for the awfully late response. Studies aren't leaving me
the energy to respond to e-mails more often than once in a blue
moon...
On Feb 4, 2008 8:49 PM, Richard Loosemore [EMAIL PROTECTED] wrote:
They would not operate at the proposition level, so whatever
difficulties they have,
On 1/30/08, Richard Loosemore [EMAIL PROTECTED] wrote:
Kaj,
[This is just a preliminary answer: I am composing a full essay now,
which will appear in my blog. This is such a complex debate that it
needs to be unpacked in a lot more detail than is possible here. Richard].
Richard,
On Jan 29, 2008 6:52 PM, Richard Loosemore [EMAIL PROTECTED] wrote:
Okay, sorry to hit you with incomprehensible technical detail, but maybe
there is a chance that my garbled version of the real picture will
strike a chord.
The message to take home from all of this is that:
1) There are
On 1/29/08, Richard Loosemore [EMAIL PROTECTED] wrote:
Summary of the difference:
1) I am not even convinced that an AI driven by a GS will ever actually
become generally intelligent, because of the self-contradictions built
into the idea of a goal stack. I am fairly sure that whenever anyone
On 1/24/08, Richard Loosemore [EMAIL PROTECTED] wrote:
Theoretically yes, but behind my comment was a deeper analysis (which I
have posted before, I think) according to which it will actually be very
difficult for a negative-outcome singularity to occur.
I was really trying to make the point
On 1/12/08, Mike Tintner [EMAIL PROTECTED] wrote:
The primary motivation behind the Novamente AI Engine
is to build a system that can achieve complex goals in
complex environments, a synopsis of the definition of intelligence
given in (Goertzel 1993). The emphasis is on the
This is not just
On 11/10/07, Bryan Bishop [EMAIL PROTECTED] wrote:
On Saturday 10 November 2007 09:29, Derek Zahn wrote:
On such a chart I think we're supposed to be at something like mouse
level right now -- and in fact we have seen supercomputers beginning
to take a shot at simulating mouse-brain-like
On 11/10/07, Robin Hanson [EMAIL PROTECTED] wrote:
skeptical. Specifically, after ten years as an AI researcher, my
inclination has been to see progress as very slow toward an explicitly-coded
AI, and so to guess that the whole brain emulation approach would succeed
first if, as it seems,
On 9/29/07, Russell Wallace [EMAIL PROTECTED] wrote:
On 9/29/07, Kaj Sotala [EMAIL PROTECTED] wrote:
I'd be curious to see these, and I suspect many others would, too.
(Even though they're probably from lists I am on, I haven't followed
them nearly as actively as I could've.)
http
On 9/30/07, Don Detrich - PoolDraw [EMAIL PROTECTED] wrote:
So, let's look at this from a technical point of view. AGI has the potential
of becoming a very powerful technology, and if misused or out of control it
could possibly be dangerous. However, at this point we have little idea of how
these
On 9/29/07, Russell Wallace [EMAIL PROTECTED] wrote:
I've been through the specific arguments at length on lists where
they're on topic, let me know if you want me to dig up references.
I'd be curious to see these, and I suspect many others would, too.
(Even though they're probably from lists I