Vladimir,
On 5/7/08, Vladimir Nesov [EMAIL PROTECTED] wrote:
See http://www.overcomingbias.com/2008/01/newcombs-proble.html
This is a PERFECT illustration of the central point that I have been
trying to make. Belief in the Omega discussed early in that article is
essentially a religious
Steve,
I suspect I'll regret asking, but...
Does this rational belief make a difference to intelligence? (For the
moment confining the idea of intelligence to making good choices.)
If the AGI rationalized the existence of a higher power, what ultimate
bad choice do you see as a result?
Brad Paulsen wrote:
I happened to catch a program on the National Geographic Channel today
entitled "Accidental Genius". It was quite interesting from an AGI
standpoint.
One of the researchers profiled has invented a device that, by sending
electromagnetic pulses through a person's skull to the
Stefan Pernar wrote:
Richard, there is no substance behind your speculations - zero. Zip. And
all the fantasy and imagination you so clearly demonstrated here on the
board won't make up for that. You make stuff up as you go along and as
you need it, and you clearly have enough time on your hands
You may want to check out the background material on this issue. Harnad
invented the idea that there is a 'symbol grounding problem', so that is
why I quoted him. His usage of the word 'symbol' is the one that is
widespread in cognitive science, but it appears that you are missing
this,
Richard, there is no substance behind your speculations - zero. Zip. And all
the fantasy and imagination you so clearly demonstrated here on the board
won't make up for that. You make stuff up as you go along and as you need it,
and you clearly have enough time on your hands to do so.
http://www.dundee.ac.uk/psychology/taharley/pcgn_harley_review.pdf
Richard's co-writer above reviews the state of cognitive neuropsychology
[and the Handbook of Cognitive Neuropsychology], painting a picture of very
considerable disagreement in the discipline. I'd be interested if anyone can
On Fri, May 9, 2008 at 12:44 AM, Mark Waser [EMAIL PROTECTED] wrote:
Richard, there is no substance behind your speculations - zero. Zip.
And all the fantasy and imagination you so clearly demonstrated here on the
board won't make up for that. You make stuff up as you go along and as you
[EMAIL PROTECTED] wrote:
Hello
I am writing a literature review on AGI and I am mentioning the
definition of pattern as explained by Ben in his work.
A pattern is a representation of an object on a simpler scale. For
example, a pattern in a drawing of a mathematical curve could be a
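Ben's definition can be illustrated with run-length encoding: the encoded form is a representation of the object (here, a string) that is simpler, i.e. shorter, than the object itself, yet suffices to reconstruct it. This is only an illustrative sketch of the idea, not Ben's formal definition; the function names are mine.

```python
from itertools import groupby

def rle_encode(s):
    # Run-length encode: represent runs of repeated characters as (char, count).
    return [(ch, len(list(group))) for ch, group in groupby(s)]

def rle_decode(pairs):
    # Reconstruct the original object from its pattern.
    return "".join(ch * count for ch, count in pairs)

obj = "aaaaaaaaaabbbbbbbbbb"
pattern = rle_encode(obj)        # [('a', 10), ('b', 10)]
assert rle_decode(pattern) == obj
assert len(pattern) < len(obj)   # the representation is simpler than the object
```

On a random string with no repetition, the encoding would be no shorter than the original, so by this criterion no pattern would be found, which matches the intuition that patternless data is incompressible.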
Stefan,
I would prefer that you not remain quiet. I would prefer that you pick
*specific* points and argue them -- that's the way that science is done. The
problem is that AGI is an extremely complex subject and mailing lists are a
horrible forum for discussing such unless all
Something similar with respect to Social Neuroscience would also be
interesting, since, being an emerging field, it is bound to be heavily
criticized. It is definitely still in a very nascent stage but growing
rapidly.
http://www.dundee.ac.uk/psychology/taharley/pcgn_harley_review.pdf
Richard's
Radhika Tibrewal wrote:
Something similar with respect to Social Neuroscience would also be
interesting, since, being an emerging field, it is bound to be heavily
criticized. It is definitely still in a very nascent stage but growing
rapidly.
I am actually not familiar with Social Neuroscience:
Mike Tintner wrote:
http://www.dundee.ac.uk/psychology/taharley/pcgn_harley_review.pdf
Richard's co-writer above reviews the state of cognitive neuropsychology
[and the Handbook of Cognitive Neuropsychology], painting a picture of very
considerable disagreement in the discipline. I'd be
Here are a few,
http://serendip.brynmawr.edu/exchange/morton/socialneuroscience
http://www.psypress.com/socialneuroscience/introduction.asp
Radhika Tibrewal wrote:
Something similar with respect to Social Neuroscience would also be
interesting, since, being an emerging field, it is bound to
On 5/7/08, Mike Tintner [EMAIL PROTECTED] wrote:
YKY : Logic can deal with almost everything, depending on how much effort
you put into it =)
Les sanglots longs / Des violons / De l'automne
Blessent mon cœur / D'une langueur / Monotone.
(Verlaine: "The long sobs of the violins of autumn wound my heart with a monotonous languor.")
You don't just read those words, (and most words), you hear
On Fri, May 9, 2008 at 3:02 AM, Richard Loosemore [EMAIL PROTECTED] wrote:
I have a vague memory of coming across this research to duplicate savant
behavior, and I seem to remember thinking that the conclusion seems to be
that there is a part of the brain that is responsible for 'damping down'
--- Steve Richfield [EMAIL PROTECTED] wrote:
On 5/7/08, Vladimir Nesov [EMAIL PROTECTED] wrote:
See http://www.overcomingbias.com/2008/01/newcombs-proble.html
After many postings on this subject, I still assert that
ANY rational AGI would be religious.
Not necessarily. You execute a
Actually, the sound of language isn't just a subtle thing - it's
foundational. Language is sounds first, and letters second (or third/fourth
historically).
And the sounds aren't just sounds - they express emotions about what is
being said, not just the emphases mentioned in an earlier post.
You could
A nice analogy occurs to me for NLP - processing language without the
sounds.
It's like processing songs without the music.
---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
On Fri, May 9, 2008 at 2:13 AM, Matt Mahoney [EMAIL PROTECTED] wrote:
A rational agent only has to know that there are some things it cannot
compute. In particular, it cannot understand its own algorithm.
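Matt's claim echoes the standard diagonalization argument: an agent that can consult any claimed predictor of its own behavior can simply do the opposite, so no such predictor can be correct. A minimal sketch of that argument (the function names are mine, not Matt's):

```python
def contrarian(predictor):
    # An agent that asks a claimed predictor what it will output,
    # then deliberately outputs the opposite.
    prediction = predictor()
    return not prediction

# No predictor can be right about the contrarian's behavior:
for predictor in (lambda: True, lambda: False):
    assert contrarian(predictor) != predictor()
```

The same self-reference is what blocks an agent from fully computing its own algorithm: any complete self-model could be fed back in and contradicted.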
Matt,
(I don't really expect you to give an answer to this question, as you
didn't on
On Thu, May 8, 2008 at 3:21 PM, Mike Tintner [EMAIL PROTECTED] wrote:
It oughtn't to be all neuro-, though. There is a need for some kind of
corporate science - one that studies whole-body simulation and not just the
cerebral end. After all, a lot of the simulations being talked about are very
Hi Mike,
I've spent some time working with the CMU Sphinx automatic speech recognition
software, as well as the Festival text-to-speech software. From the Texai
SourceForge source code repository, anyone interested can inspect and download
an echo application that recognizes a spoken
No, a symbol is simply anything abstract that stands for an object - word
sounds, alphabetic words, numbers, logical variables etc. The earliest
proto-symbols may well have been emotions.
My point is that Harnad clearly talks of two intermediate visual/sensory
levels of processing - the
- Original Message
From: Mike Tintner [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Thursday, May 8, 2008 8:16:32 PM
Subject: Re: Symbol Grounding [WAS Re: [agi] AGI-08 videos]
No, a symbol is simply anything abstract that stands for an object - word
sounds, alphabetic words,
- Original Message
From: Matt Mahoney [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Thursday, May 8, 2008 8:29:02 PM
Subject: Re: Newcomb's Paradox (was Re: [agi] Goal Driven Systems and AI
Dangers)
--- Vladimir Nesov [EMAIL PROTECTED] wrote:
Matt,
(I don't really expect you to
Hi Jim,
Funny, I was just thinking re the reply to your point, the second before I
read it. What I was going to say was: I read a lot of Harnad many years ago,
and I was a bit confused then about exactly what he was positing re the
intermediate levels of processing - iconic/categorical.
On Thu, May 8, 2008 at 10:02 AM, Richard Loosemore [EMAIL PROTECTED] wrote:
Anyhow it is very interesting. Perhaps savantism is an attention mechanism
disorder? Like, too much attention.
Yes.
Autism is a devastating neurodevelopmental disorder with a
polygenetic predisposition that seems to
Entities must not be multiplied unnecessarily. - William of Ockham.
A pattern is a set of matching inputs.
A match is a partial identity of the comparands.
The comparands for general intelligence must scale indefinitely in
complexity, in incremental steps.
The scaling must start from the bottom:
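The "partial identity" notion above can be sketched as positional agreement between two comparands; a pattern is then the set of inputs whose agreement with a prototype clears a threshold. The threshold value and function names here are my own illustrative choices, not the poster's.

```python
def similarity(a, b):
    # Fraction of aligned positions at which the comparands are identical.
    if not a and not b:
        return 1.0
    agreements = sum(x == y for x, y in zip(a, b))
    return agreements / max(len(a), len(b))

def matches(a, b, threshold=0.75):
    # Two inputs "match" when their partial identity clears a threshold.
    return similarity(a, b) >= threshold

assert similarity("abcd", "abcx") == 0.75
assert matches("abcd", "abcx")
assert not matches("abcd", "wxyz")
```

Scaling "from the bottom" would then mean matching short, simple comparands first and composing matched units into longer ones.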
--- Jim Bromer [EMAIL PROTECTED] wrote:
I don't want to get into a quibble fest, but understanding is not
necessarily constrained to prediction.
What would be a good test for understanding an algorithm?
-- Matt Mahoney, [EMAIL PROTECTED]
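One operational answer to Matt's question, in the prediction-centric spirit of this thread: ask the claimant to reproduce the algorithm's outputs on inputs they have not previously seen. This is a hypothetical sketch of such a test, not anything proposed in the thread; all names are mine.

```python
def understands(claimed_model, algorithm, novel_inputs):
    # Operational test: the claimant "understands" the algorithm if their
    # model predicts its output on inputs held out from any prior study.
    return all(claimed_model(x) == algorithm(x) for x in novel_inputs)

# Example: testing two candidate models of a sorting algorithm.
algorithm = sorted
good_model = sorted
bad_model = lambda xs: list(reversed(xs))
held_out = [[3, 1, 2], [5, 4], [1]]
assert understands(good_model, algorithm, held_out)
assert not understands(bad_model, algorithm, held_out)
```

Jim's objection still applies: passing this test shows predictive adequacy, and whether that exhausts "understanding" is exactly what is in dispute.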
Matt,
On 5/8/08, Matt Mahoney [EMAIL PROTECTED] wrote:
--- Steve Richfield [EMAIL PROTECTED] wrote:
On 5/7/08, Vladimir Nesov [EMAIL PROTECTED] wrote:
See http://www.overcomingbias.com/2008/01/newcombs-proble.html
After many postings on this subject, I still assert that
ANY
On Fri, May 9, 2008 at 1:51 AM, Jim Bromer [EMAIL PROTECTED] wrote:
I don't want to get into a quibble fest, but understanding is not
necessarily constrained to prediction.
Indeed, understanding is a fuzzy word that means lots of different
things in different contexts. In the context of