Thanks for the more specific answer. It was the most illuminating of the
ones I've gotten. I realize that this isn't really the right list for
questions about human subjects experiments; just thought I'd give it a try.
Richard Loosemore wrote:
Harry Chesley wrote:
On 1/9/2009 9:45 AM, Richard Loosemore wrote:
On 1/9/2009 9:28 AM, Vladimir Nesov wrote:
You need to name those parameters in a sentence only because it's
linear, in a graph they can correspond to unnamed nodes. Abstractions
can have structure, and their applicability can depend on how their
structure matches the current scene. If you
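A minimal sketch of that idea in Python (the node ids, edge labels, and example scene are all my own invented illustration, not anything from Vladimir's message): an abstraction is a small graph over unnamed nodes, and it applies to a scene when some assignment of scene objects to those nodes preserves every edge.

    from itertools import permutations

    # A graph is a set of edges (a, label, b) over node ids.
    # Abstract nodes are unnamed: only relational structure matters.
    abstraction = {(0, "above", 1), (1, "touches", 2)}

    scene = {("lamp", "above", "table"),
             ("table", "touches", "floor"),
             ("cat", "touches", "floor")}

    def applies(abstraction, scene):
        """The abstraction applies if some assignment of scene objects
        to its unnamed nodes preserves every edge."""
        nodes = sorted({n for (a, _, b) in abstraction for n in (a, b)})
        objects = sorted({n for (a, _, b) in scene for n in (a, b)})
        for chosen in permutations(objects, len(nodes)):
            binding = dict(zip(nodes, chosen))
            if all((binding[a], lbl, binding[b]) in scene
                   for (a, lbl, b) in abstraction):
                return binding
        return None

    print(applies(abstraction, scene))  # {0: 'lamp', 1: 'table', 2: 'floor'}

Brute-force matching like this is exponential in the number of abstract nodes, of course; the point is only that applicability can be a structural test rather than a named-parameter lookup.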
On 1/9/2009 9:45 AM, Richard Loosemore wrote:
There are certainly experiments that might address some of your
concerns, but I am afraid you will have to acquire a general
knowledge of what is known, first, to be able to make sense of what
they might tell you. There is nothing that can be
On 12/3/2008 8:11 AM, Richard Loosemore wrote:
Am I right in thinking that what these people:
http://www.newscientist.com/article/mg20026845.000-memories-may-be-stored-on-your-dna.html
are saying is that memories can be stored as changes in the DNA
inside neurons?
If so, that would
Ben Goertzel wrote:
...my own belief that consciousness is the underlying
reality, and physical and computational systems merely *focus* this
consciousness in particular ways, is also not something that can be
proven empirically or logically...
For what it's worth, let me throw out a random
Richard Loosemore wrote:
Harry Chesley wrote:
Richard Loosemore wrote:
I completed the first draft of a technical paper on consciousness
the other day. It is intended for the AGI-09 conference, and it
can be found at:
http://susaro.com/wp-content/uploads/2008/11/draft_consciousness_rpwl.pdf
Trent Waddington wrote:
As I believe the "is that consciousness?" debate could go on forever,
I think I should make an effort here to save this thread.
Setting aside the objections of vegetarians and animal lovers, many
hard-nosed scientists decided long ago that jamming things into the
brains
Mark Waser wrote:
My problem is if qualia are atomic, with no differentiable details,
why do some feel different than others -- shouldn't they all be
separate but equal? Red is relatively neutral, while searing
hot is not. Part of that is certainly lower brain function, below
the level of
On 11/14/2008 9:27 AM, Richard Loosemore wrote:
I completed the first draft of a technical paper on consciousness the
other day. It is intended for the AGI-09 conference, and it can be
found at:
http://susaro.com/wp-content/uploads/2008/11/draft_consciousness_rpwl.pdf
Good paper.
A
Richard Loosemore wrote:
Harry Chesley wrote:
A related question: How do you explain the fact that we sometimes
are aware of qualia and sometimes not? You can perform the same
actions while paying attention or on autopilot. In one case, qualia
manifest, while in the other they do not. Why
Richard Loosemore wrote:
I completed the first draft of a technical paper on consciousness the
other day. It is intended for the AGI-09 conference, and it can be
found at:
http://susaro.com/wp-content/uploads/2008/11/draft_consciousness_rpwl.pdf
One other point: Although this is a
This thread has gone back and forth several times concerning the reality
of consciousness. So at the risk of extending it further unnecessarily,
let me give my view, which seems self-evident to me, but I'm sure isn't
to others (meaning they may reasonably disagree with me, not that
they're
Matt Mahoney wrote:
2) It is real, as it clearly influences our thoughts. On the other
hand, though it feels subjectively like it is qualitatively
different from other aspects of the world, it probably isn't (but
I'm open to being wrong here).
The correct statement is that you believe it is
Matt Mahoney wrote:
If you don't define consciousness in terms of an objective test, then
you can say anything you want about it.
We don't entirely disagree about that. An objective test is absolutely
crucial. I believe where we disagree is that I expect there to be such a
test one day, while
On 11/4/2008 2:53 PM, YKY (Yan King Yin) wrote:
Personally, I'm not making an AGI that has emotions...
So you take the view that, despite our minimal understanding of the
basis of emotions, they will only arise if designed in, never
spontaneously as an emergent property? So you can safely
On 11/4/2008 3:31 PM, Matt Mahoney wrote:
To answer your (modified) question, consciousness is detected by the
activation of a large number of features associated with living
humans. The more of these features are activated, the greater the
tendency to apply ethical guidelines to the target
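Matt's criterion reads like a weighted feature detector. A toy sketch of that reading in Python (the feature names and weights are placeholders I made up, not his):

    # Hypothetical features associated with living humans, with made-up
    # weights; the real list and weights are exactly what's in question.
    FEATURES = {
        "has_face": 0.30,
        "uses_language": 0.25,
        "reacts_to_pain": 0.25,
        "moves_autonomously": 0.10,
        "remembers_interactions": 0.10,
    }

    def ethical_tendency(activated):
        """The more features the target activates, the stronger the
        tendency to apply ethical guidelines to it."""
        return sum(w for f, w in FEATURES.items() if f in activated)

    print(ethical_tendency({"uses_language", "remembers_interactions"}))  # 0.35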
The question of when it's ethical to do AGI experiments has bothered me
for a while. It's something that every AGI creator has to deal with
sooner or later if you believe you're actually going to create real
intelligence that might be conscious. The following link is a blog essay
on the subject,
On 10/18/2008 9:27 AM, Mike Tintner wrote:
What rational computers can't do is find similarities between
disparate, irregular objects - via fluid transformation - the essence
of imagination.
So you don't believe that this is possible by finding combinations of
abstract shapes (lines,
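What I have in mind, roughly, is decomposing each object into abstract primitives and measuring overlap. A toy sketch (the decompositions below are invented placeholders):

    # Similarity as overlap of shared abstract primitives (Jaccard).
    DECOMPOSITION = {
        "comb":  {"line", "parallel_lines", "short_handle"},
        "rake":  {"line", "parallel_lines", "long_shaft"},
        "cloud": {"blob", "irregular_outline"},
    }

    def similarity(a, b):
        fa, fb = DECOMPOSITION[a], DECOMPOSITION[b]
        return len(fa & fb) / len(fa | fb)

    print(similarity("comb", "rake"))   # 0.5: two of four primitives shared
    print(similarity("comb", "cloud"))  # 0.0: nothing shared

The interesting question is where the decompositions come from; a fixed table like this obviously dodges it.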
I find myself needing to more thoroughly understand reasoning by
analogy. (I've read/thought about it to a degree, but would like more.)
Anyone have any recommendation for books and/or papers on the subject?
Thanks.
On 10/15/2008 8:01 AM, Ben Goertzel wrote:
What are your thoughts on this?
A narrower focus of the list would be better for me personally.
I've been convinced for a long time that computer-based AGI is possible,
and am working toward it. As such, I'm no longer interested in arguments
about
I think we would all agree that context is crucial to understanding.
"Kill them!" means something quite different if you're at a soccer game,
in a military battle, or playing an FPS video game.
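The crudest possible sketch of that fact in Python is a lookup keyed on both the utterance and the context (the contexts and glosses are my own, purely for illustration):

    # Meaning as a lookup keyed on (utterance, context).
    INTERPRETATIONS = {
        ("Kill them!", "soccer game"):     "root harder for your team",
        ("Kill them!", "military battle"): "a literal, lethal order",
        ("Kill them!", "FPS video game"):  "shoot the on-screen characters",
    }

    def interpret(utterance, context):
        return INTERPRETATIONS.get(
            (utterance, context), "unresolved: context not modeled")

    print(interpret("Kill them!", "soccer game"))

A lookup table like this just relabels the problem, which is the point of the question that follows.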
But in a pragmatic, "let's implement it" sense, I'm not as clear what
context means. Let me try to
On 8/9/2008 12:43 AM, Brad Paulsen wrote:
Mike Tintner wrote:
That illusion is partly the price of using language, which
fragments into pieces what is actually a continuous common sense,
integrated response to the world.
Excellent observation. I've said it many times before: language is
James Ratcliff wrote:
Every AGI, even the simplest AI, must run in a simulated
environment of some sort.
Not necessarily, but in most cases yes. To give a counter example, a
human scholar reads Plato and publishes an analysis of what he has read.
There is no interaction with the
Terren Suydam wrote:
Harry,
--- On Wed, 8/6/08, Harry Chesley [EMAIL PROTECTED] wrote:
I'll take a stab at both of these...
The Chinese Room to me simply states that understanding cannot be
decomposed into sub-understanding pieces. I don't see it as
addressing grounding, unless you
Terren Suydam wrote:
Harry,
--- On Wed, 8/6/08, Harry Chesley [EMAIL PROTECTED] wrote:
But it's a preaching-to-the-choir argument: Is there anything more
to the argument than the intuition that automatic manipulation
cannot create understanding? I think it can, though I have yet to
show
Terren Suydam wrote:
Unfortunately, I have to take a break from the list (why are people
cheering??).
No cheering here.
Actually, I'd like to say thanks to everyone. This thread has been very
interesting. I realize that much of it is old hat and boring to some of
you, but it's been useful
(as in an AI program), rote can conceivably provide everything.
On Wed, Aug 6, 2008 at 11:36 PM, Harry Chesley [EMAIL PROTECTED]
wrote:
I'm not at all sure that understanding must be active. It may be
that a textbook on physics understands physics. But it doesn't do
anything
Mark Waser wrote:
The critical point that most people miss -- and what is really
important for this list (and why people shouldn't blindly dismiss
Searle) is that it is *intentionality* that defines understanding.
If a system has goals/intentions and its actions are modified by the
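The smallest runnable reading I can make of "actions modified by goals" is a feedback loop, sketched in Python below; all the names and numbers are invented, and this is my gloss on Mark's criterion, not his.

    def pursue(goal, steps=4):
        action = 0.0
        for _ in range(steps):
            outcome = 0.8 * action            # stand-in for the world
            action += 0.5 * (goal - outcome)  # modify action toward goal
            print(f"outcome={outcome:.2f}  next action={action:.2f}")

    pursue(goal=1.0)

Whether a thermostat-grade loop like this deserves the word "intentionality" is, of course, the whole dispute.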
On 8/5/2008 6:53 AM, YKY (Yan King Yin) wrote:
On 8/5/08, Mike Tintner [EMAIL PROTECTED] wrote:
Jeez, there is NO concept that is not dependent on context. There is
NO concept that is not infinitely fuzzy and open-ended in itself,
period - which is the principal
I'll take a stab at both of these...
The Chinese Room to me simply states that understanding cannot be
decomposed into sub-understanding pieces. I don't see it as addressing
grounding, unless you believe that understanding can only come from the
outside world, and must become part of the
As I've come out of the closet over the list tone issues, I guess I
should post something AI-related as well -- at least that will make me
net neutral between relevant and irrelevant postings. :-)
One of the classic current AI issues is grounding, the argument being
that a dictionary cannot
Terren Suydam wrote:
...
Without an internal
sense of meaning, symbols passed to the AI are simply arbitrary data
to be manipulated. John Searle's Chinese Room (see Wikipedia)
argument effectively shows why manipulation of ungrounded symbols is
nothing but raw computation with no
Vladimir Nesov wrote:
It's too fuzzy an argument.
You're right, of course. I'm not being precise, and though I'll try to
improve on that here, I probably still won't be. But here's my attempt:
There are essentially three types of grounding: embodiment, hierarchy
base nodes, and
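To pin down what I mean by hierarchy base nodes, here is a toy sketch in Python (every symbol and sensor string is invented): higher symbols are defined in terms of lower ones, and the recursion bottoms out in base nodes tied directly to sensor data.

    # Toy symbol hierarchy: each symbol is defined by other symbols,
    # except the base nodes, which tie directly to sensor readings.
    DEFINITIONS = {
        "hot":    ["temperature_high"],
        "stove":  ["hot", "metal_surface"],
        "danger": ["stove", "hot"],
    }
    BASE_NODES = {
        "temperature_high": "thermometer reading > 45 C",
        "metal_surface":    "touch sensor: hard and smooth",
    }

    def ground(symbol):
        """Expand a symbol down to the sensor-level base nodes it rests on."""
        if symbol in BASE_NODES:
            return {symbol}
        return set().union(*(ground(s) for s in DEFINITIONS.get(symbol, [])))

    print(ground("danger"))  # {'temperature_high', 'metal_surface'}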
or on-topic-ness. Assuming the rules are spelled out and warnings are given and
behavior is enforced fairly and consistently, moderation can help. But it takes
a fairly proactive moderator to do all that.
Terren
--- On Sun, 8/3/08, Harry Chesley [EMAIL PROTECTED] wrote:
From: Harry Chesley