On Fri, Sep 5, 2008 at 11:21 AM, Brad Paulsen <[EMAIL PROTECTED]> wrote:
> http://www.nytimes.com/2008/09/05/science/05brain.html?_r=3&partner=rssnyt&emc=rss&oref=slogin&oref=slogin&oref=slogin
http://www.sciencemag.org/cgi/content/short/1164685 for the original study.
--
On Fri, Dec 19, 2008 at 1:47 AM, Mike Tintner wrote:
> Ben,
>
> I radically disagree. Human intelligence involves both creativity and
> rationality, certainly. But rationality - and the rational systems of
> logic/maths and formal languages, [on which current AGI depends] - are
> fundamentall
On Mon, Dec 29, 2008 at 10:15 PM, Lukasz Stafiniak wrote:
> http://www.sciencedaily.com/releases/2008/12/081224215542.htm
>
> Nothing surprising ;-)
So they have a result saying that we're good at subconsciously
estimating the direction in which dots on a screen are moving.
Apparently this can
On 11/10/07, Bryan Bishop <[EMAIL PROTECTED]> wrote:
> On Saturday 10 November 2007 09:29, Derek Zahn wrote:
> > On such a chart I think we're supposed to be at something like mouse
> > level right now -- and in fact we have seen supercomputers beginning
> > to take a shot at simulating mouse-brain
On 11/10/07, Robin Hanson <[EMAIL PROTECTED]> wrote:
> skeptical. Specifically, after ten years as an AI researcher, my
> inclination has been to see progress as very slow toward an explicitly-coded
> AI, and so to guess that the whole brain emulation approach would succeed
> first if, as it seem
On 12/24/07, Bryan Bishop <[EMAIL PROTECTED]> wrote:
> The entire idea of mindreading is peculiar. Haven't you ever had a
> moment when you've wondered if you like somebody? When you realize that
> such simple separations just don't matter or apply, that you can't
> even read your own mind in that
On 1/12/08, Mike Tintner <[EMAIL PROTECTED]> wrote:
> "The primary motivation behind the Novamente AI Engine
> is to build a system that can achieve complex goals in
> complex environments, a synopsis of the definition of intelligence
> given in (Goertzel 1993). The emphasis is on the
This is not
On 1/24/08, Richard Loosemore <[EMAIL PROTECTED]> wrote:
> Theoretically yes, but behind my comment was a deeper analysis (which I
> have posted before, I think) according to which it will actually be very
> difficult for a negative-outcome singularity to occur.
>
> I was really trying to make the
On 1/29/08, Richard Loosemore <[EMAIL PROTECTED]> wrote:
> Summary of the difference:
>
> 1) I am not even convinced that an AI driven by a GS will ever actually
> become generally intelligent, because of the self-contradictions built
> into the idea of a goal stack. I am fairly sure that whenever
On Jan 29, 2008 6:52 PM, Richard Loosemore <[EMAIL PROTECTED]> wrote:
> Okay, sorry to hit you with incomprehensible technical detail, but maybe
> there is a chance that my garbled version of the real picture will
> strike a chord.
>
> The message to take home from all of this is that:
>
> 1) There
On 1/30/08, Richard Loosemore <[EMAIL PROTECTED]> wrote:
> Kaj,
>
> [This is just a preliminary answer: I am composing a full essay now,
> which will appear in my blog. This is such a complex debate that it
> needs to be unpacked in a lot more detail than is possible here. Richard].
Richard,
Gah, sorry for the awfully late response. Studies aren't leaving me
the energy to respond to e-mails more often than once in a blue
moon...
On Feb 4, 2008 8:49 PM, Richard Loosemore <[EMAIL PROTECTED]> wrote:
> They would not operate at the "proposition level", so whatever
> difficulties they have
On 2/16/08, Richard Loosemore <[EMAIL PROTECTED]> wrote:
> Kaj Sotala wrote:
> > Well, the basic gist was this: you say that AGIs can't be constructed
> > with built-in goals, because a "newborn" AGI hasn't yet built up
> > the concepts nee
On 3/3/08, Richard Loosemore <[EMAIL PROTECTED]> wrote:
> Kaj Sotala wrote:
> > Alright. But previously, you said that Omohundro's paper, which to me
> > seemed to be a general analysis of the behavior of *any* minds with
> > (more or less) explicit goal
On 3/26/08, Ben Goertzel <[EMAIL PROTECTED]> wrote:
> A lot of students email me asking me what to read to get up to speed on AGI.
Ben,
while we're on the topic, could you elaborate a bit on what kind of
prerequisite knowledge the books you've written/edited require? For
instance, I've been putt
ong, well-written
replies to messages in what seem to be blinks of an eye to me. :-)
On 3/11/08, Richard Loosemore <[EMAIL PROTECTED]> wrote:
> Kaj Sotala wrote:
>
> > On 3/3/08, Richard Loosemore <[EMAIL PROTECTED]> wrote:
> >
> > > Kaj Sotala wrote:
> >
On 5/7/08, Steve Richfield <[EMAIL PROTECTED]> wrote:
> Story: I recently attended an SGI Buddhist meeting with a friend who was a
> member there. After listening to their discussions, I asked if there was
> anyone there (from ~30 people) who had ever found themselves in a position of
> having t
On 5/7/08, Kaj Sotala <[EMAIL PROTECTED]> wrote:
> Certainly a rational AGI may find it useful to appear irrational, but
> that doesn't change the conclusion that it'll want to think rationally
> at the bottom, does it?
Oh - and see also http://www.saunalaht
On 5/7/08, Stefan Pernar <[EMAIL PROTECTED]> wrote:
> What follows are wild speculations and grand pie-in-the-sky plans without
> substance with a letter to investors attached. Oh, come on!
Um, people, is this list really the place for fielding personal insults?
For what it's worth, my two cen
On 6/21/08, Matt Mahoney <[EMAIL PROTECTED]> wrote:
>
> Eliezer asked a similar question on SL4. If an agent flips a fair quantum
> coin and is copied 10 times if it comes up heads, what should be the agent's
> subjective probability that the coin will come up heads? By the anthropic
> principle
On 6/23/08, Matt Mahoney <[EMAIL PROTECTED]> wrote:
> --- On Sun, 6/22/08, Kaj Sotala <[EMAIL PROTECTED]> wrote:
> > On 6/21/08, Matt Mahoney <[EMAIL PROTECTED]> wrote:
> > >
> > > Eliezer asked a similar question on SL4. If an agent
> > fli
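(The quoted question gets cut off here, so as an aside: the arithmetic behind
the observer-counting reading of the anthropic argument is easy to make
concrete. The Python sketch below is only my own illustration of that one
interpretation, not Matt's or Eliezer's stated answer; it assumes a fair coin,
10 copies created on heads and a single copy on tails, and that every
post-flip copy counts equally as an observer. Under those assumptions a
randomly chosen copy sees heads with probability 10/11 rather than 1/2.)

import random

def simulate(trials=100000, copies_on_heads=10):
    observers = []  # one entry per agent-copy, recording what that copy sees
    for _ in range(trials):
        heads = random.random() < 0.5            # fair coin, per-flip P(heads) = 1/2
        n_copies = copies_on_heads if heads else 1
        observers.extend([heads] * n_copies)     # each copy counts as a separate observer
    return sum(observers) / len(observers)       # fraction of copies that see heads

print(simulate())   # ~0.909, i.e. 10/11, versus the per-flip probability of 1/2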
On 4/26/07, Richard Loosemore <[EMAIL PROTECTED]> wrote:
> Can you point to an objective definition that is clear about which
> things are more intelligent than others, and which does not accidentally
> include things that manifestly conflict with the commonsense definition
> (by false negatives or false
sults. But why is this a problem? Do we
even *need* a definition of intelligence that works even when it's
applied to thermostats, or is it enough if we have a definition that
works when we apply it to things that are actually *minds*?
On 4/27/07, Richard Loosemore <[EMAIL PROTECTED]>
On 9/29/07, Russell Wallace <[EMAIL PROTECTED]> wrote:
> I've been through the specific arguments at length on lists where
> they're on topic, let me know if you want me to dig up references.
I'd be curious to see these, and I suspect many others would, too.
(Even though they're probably from list
On 9/29/07, Russell Wallace <[EMAIL PROTECTED]> wrote:
> On 9/29/07, Kaj Sotala <[EMAIL PROTECTED]> wrote:
> > I'd be curious to see these, and I suspect many others would, too.
> > (Even though they're probably from lists I am on, I haven't followed
On 9/30/07, Don Detrich - PoolDraw <[EMAIL PROTECTED]> wrote:
> So, let's look at this from a technical point of view. AGI has the potential
> of becoming a very powerful technology and, misused or out of control, could
> possibly be dangerous. However, at this point we have little idea of how
> thes
2010/8/12 John G. Rose
>
> BTW here is the latest one:
>
> http://www.win.tue.nl/~gwoegi/P-versus-NP/Deolalikar.pdf
See also:
http://www.ugcs.caltech.edu/~stansife/pnp.html - brief summary of the proof
Discussion about whether it's correct:
http://rjlipton.wordpress.com/2010/08/08/a-proof-that