On 5/6/07, Mike Tintner [EMAIL PROTECTED] wrote:
YKY: Consciousness is not central to AGI.
The human mind consists of a two-tier structure. On top, you have this
conscious, executive mind that takes most of the decisions about which way
the system will go - basically does the steering. On
to find a word in a big list you should really use a dictionary / hash
table instead of binary search... ;-)
(ok, I know that wasn't the point you were trying to make :)
Jean-Paul
PS: [META] - people pls to cut off long message includes - some of us
don't enjoy always on high bandwidth :(
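Jean-Paul's aside about lookup structures can be sketched as follows (a toy illustration, not from the thread; the names `contains_bisect` and `word_set` are my own): a sorted list gives O(log n) binary-search lookups, while a hash table gives O(1) average-case lookups.

```python
import bisect

words = sorted(["apple", "banana", "cherry", "date", "elderberry"])

# Binary search on a sorted list: O(log n) comparisons per lookup.
def contains_bisect(sorted_words, target):
    i = bisect.bisect_left(sorted_words, target)
    return i < len(sorted_words) and sorted_words[i] == target

# Hash table (Python set): O(1) average-case lookup.
word_set = set(words)

assert contains_bisect(words, "cherry")
assert "cherry" in word_set
assert not contains_bisect(words, "fig")
```

For a big static word list, either works; the hash table wins on lookup cost, the sorted list wins if you also need ordered traversal or prefix ranges.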
Mike,
The extent to which there is a rigid distinction between these two tiers in
the human brain/mind is not entirely clear. The human brain seems to have
some distinct memory subsystems associated with various sorts of short term
memory or working memory, but the notion of executive
Well, there obviously IS a conscious, executive mind, separate from the
unconscious mind, whatever the enormous difficulties cognitive scientists had
in first admitting its existence and now in identifying its correlates! And you
still seem to be sharing some of those old difficulties in
Mike,
The conscious mind thinks literally, freely. How long it will spend on any
given decision, and what course of thought it will pursue in reaching that
decision are definitely NOT set, but free.
Ah, well, I'm glad to see the age-old problem of free will versus
determinism is solved now!
On Saturday 05 May 2007 23:29, Matt Mahoney wrote:
About programming languages. I do most of my programming in C++ with a
little bit of assembler. AGI needs some heavy duty number crunching. You
really need assembler to do most any kind of vector processing, especially
if you use a
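The kind of inner loop Matt has in mind can be illustrated with a dense dot product (my own toy example, not his code). In C++ this is exactly where SIMD intrinsics or hand-written assembler pay off; the pure-Python version below only shows the computation itself, not the performance technique.

```python
from array import array

# Dense dot product: the archetypal vector-processing inner loop.
# A C++ compiler with SSE/AVX intrinsics (or hand assembler) would
# process several elements per instruction; Python shows only the math.
def dot(a, b):
    assert len(a) == len(b)
    total = 0.0
    for x, y in zip(a, b):
        total += x * y
    return total

v1 = array("d", [1.0, 2.0, 3.0])
v2 = array("d", [4.0, 5.0, 6.0])
assert dot(v1, v2) == 32.0
```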
Consider a ship. From one point of view, you could separate the people aboard
into two groups: the captain and the crew. But another just as reasonable
point of view is that the captain is just one member of the crew, albeit one
distinguished in some ways.
One could reasonably take the point of
As Nietzsche put it, from a functional point of view, consciousness is like
the general who, after the fact, takes responsibility for the largely
autonomous actions of his troops ;-)
However, none of these metaphors addresses the issue of first vs. third
person perspectives
I hate to trumpet
PS: [META] - people pls to cut off long message includes - some of us
don't enjoy always on high bandwidth :(
[META] Yes, that is a very important point for me as well. As this list is
getting more and more active I'm wasting more and more time scrolling through
messages (often top-posted) to
On Sunday 06 May 2007 07:49, Benjamin Goertzel wrote:
As Nietzsche put it, from a functional point of view, consciousness is like
the general who, after the fact, takes responsibility for the largely
autonomous actions of his troops ;-)
That's actually pretty close to the way (I think) it
Consider a ship. From one point of view, you could separate the people
aboard
into two groups: the captain and the crew. But another just as reasonable
point of view is that the captain is just one member of the crew, albeit one
distinguished in some ways.
Really? Bush? Browne [BP, just
Mike,
Since you mentioned me and NARS, I feel the need to clarify my
position on the related issues.
*. I agree with you that in many situations, the decision-making
procedure doesn't follow a predetermined algorithm, which gives people
the feeling of free will. On the other hand, at a deeper
I find that freedom is one of those folk-psychology/philosophy concepts
that isn't really much use for scientific and engineering thinking about
either
human or machine intelligence...
As for concentration, this gets into what I call attention allocation --
an
area we've paid a lot of attention
J Storrs Hall, PhD. writes:
As long as the trumpets are blaring, Beyond AI is coming out this month,
with
the coolest cover I've seen on any non-fiction book (he says modestly):
http://www.amazon.com/Beyond-AI-Creating-Conscience-Machine/dp/1591025117
Cool! I just pre-ordered my copy!
Look
Mike,
Bit of confusion here. Consciousness is best used to refer to what
Chalmers calls the Hard Problem issues.
The thing you are mainly referring to is what cog psych people would
talk about as executive processing (as opposed to automatic
processing). Big literature
Mike Tintner wrote:
There is a crashingly obvious difference between a rational computer and
a human mind - and the only way cognitive science has managed not to
see it is by resolutely refusing to look at it, just as it resolutely
refused to look at the conscious mind in the first place. The
Mike Tintner wrote:
Now to the rational philosopher and scientist and to the classical AI
person, this is all terrible (as well as flatly contradicting one of the
most fundamental assumptions of cognitive science, i.e. that humans
think rationally). We are indeed only human not [rational,
Mike Tintner wrote:
And if you're a betting man, pay attention to Dennett. He wrote about
Consciousness in the early 90's, together with Crick helped make it
scientifically respectable. About five years later, consciousness
studies swept science and philosophy.
Nonsense.
Dennett's approach
Er nonsense to you too. :}
Part of my asserting myself boldly here, was to say: look, I may be a
schmuck on AI but I know a lot, here (in fact I'll stand by the rest of my
claims, - although if you guys can't recognize, for example, that free
thinking opens up a new dimension on free will,
Cognitive science treats humans as thinking like computers - rationally, if
boundedly rationally.
Which part of cognitive science treats humans as thinking irrationally, as I
have described? (There may be some misunderstandings here which have to be
ironed out, but I don't think my claim at
If you are a nondeterminist - i.e. a believer in nondeterministic
programming - je t'embrasse ("I embrace you"). (see my forthcoming reply to Pei).
However, having been thoroughly attacked by AI-ers, including Minsky on his
group, for adopting such a position - on the basis that nondeterministic
programs can
On 5/6/07, Mark Waser [EMAIL PROTECTED] wrote:
Yes, I'll match my understanding and knowledge of, and ideas on, the
free will issue against anyone's.
Arrogant much?
I just introduced an entirely new dimension to the free will debate. You
literally won't find it anywhere. Including Dennett.
Pei,
Thanks for stating your position (which I simply didn't know about before -
NARS just looked at a glance as if it MIGHT be nondeterministic).
Basically, and very briefly, my position is that any AGI that is to deal
with problematic decisions, where there is no right answer, will have to
Well I will go with the high level of intelligence condition,
and I would think it is pretty obvious.
We know already that among humans there are gradings or levels of intelligence,
so unless there is some specific thing you must have to be intelligent,
I would consider a 20 yr old, a 10, and
Without getting into what consciousness is in humans, and how that works,
some type of controller or attention module must be done in an AGI, because
given a wide range of options and goals, it must allocate its time and energy
into what it should be doing at any one point in time.
The design
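The controller/attention idea above can be sketched minimally (my own hypothetical names, `Controller` and `add_task`, not anyone's actual design): tasks carry priorities, and the controller always hands out the most urgent one.

```python
import heapq

# Minimal attention-allocation sketch: a priority queue of pending tasks.
# A real AGI controller would also adjust priorities over time, interrupt
# running tasks, and budget energy; this only shows the core dispatch.
class Controller:
    def __init__(self):
        self._queue = []  # min-heap of (-priority, task_name)

    def add_task(self, name, priority):
        heapq.heappush(self._queue, (-priority, name))

    def next_task(self):
        # Pop the highest-priority task, or None when idle.
        if not self._queue:
            return None
        return heapq.heappop(self._queue)[1]

c = Controller()
c.add_task("plan route", 0.3)
c.add_task("answer query", 0.9)
c.add_task("background learning", 0.1)
assert c.next_task() == "answer query"
assert c.next_task() == "plan route"
```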
Mike,
I believe many of the confusions on this topic are caused by the
following self-evident belief: A system is fundamentally either
deterministic or non-deterministic. The human mind, with free will, is
fundamentally non-deterministic; a conventional computer, being a Turing
Machine, is
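Pei's point about how determinism can coexist with apparent freedom can be sketched with a toy (entirely hypothetical, not NARS code): a decision procedure driven by a seeded pseudo-random generator is fully determined, yet its choices look free to an observer who cannot see the seed.

```python
import random

# Deterministic "choice": the decision depends only on the seed and the
# history so far. Two runs with identical histories make identical
# choices, even though each single choice looks unpredictable.
def decide(history, seed=42):
    rng = random.Random(seed + len(history))
    return rng.choice(["eat", "abstain"])

run1, run2 = [], []
for _ in range(5):
    run1.append(decide(run1))
for _ in range(5):
    run2.append(decide(run2))
assert run1 == run2  # same history -> same "free" decisions, every time
```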
One goal or project I was considering (for profit) is a research tool,
basically a KB that scans in the newspapers and articles and extracts pertinent
information for others to query against and use.
This would help build up a large world knowledge base, and would also be
salable to research
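The proposed research tool could start from something as simple as an inverted index over the scanned articles (a hypothetical sketch; `build_index` is my own name, and real extraction would need NLP far beyond keyword indexing):

```python
import re
from collections import defaultdict

# Toy inverted index: map each word to the set of article IDs containing it,
# so researchers can query which documents mention a term.
def build_index(articles):
    index = defaultdict(set)
    for doc_id, text in articles.items():
        for word in re.findall(r"[a-z]+", text.lower()):
            index[word].add(doc_id)
    return index

articles = {
    "a1": "Robot ship sails the harbor",
    "a2": "Captain of the ship retires",
}
idx = build_index(articles)
assert idx["ship"] == {"a1", "a2"}
assert idx["captain"] == {"a2"}
```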
Mike Tintner wrote:
Cognitive science treats humans as thinking like computers - rationally,
if boundedly rationally.
Which part of cognitive science treats humans as thinking irrationally,
as I have described? (There may be some misunderstandings here which
have to be ironed out, but I
Mike Tintner wrote:
Er nonsense to you too. :}
Part of my asserting myself boldly here, was to say: look, I may be a
schmuck on AI but I know a lot, here (in fact I'll stand by the rest
of my claims, - although if you guys can't recognize, for example, that
free thinking opens up a new
What about a brain-damaged person with Alzheimer's?
At the risk of being politically incorrect, on a bad day -- pretty much
unintelligent (though still capable to some degree)
A savant that can be trained to water the flowers in a garden? Eh, can't do
anything else but this one function, but
On Sunday 06 May 2007 10:18, Mike Tintner wrote:
Consider a ship. From one point of view, you could separate the people
aboard into two groups: the captain and the crew. But another just as
reasonable point of view is that the captain is just one member of the crew,
albeit one distinguished in
On Sunday 06 May 2007 09:47, Mike Tintner wrote:
And if you're a betting man, pay attention to Dennett. He wrote about
Consciousness in the early 90's, together with Crick helped make it
scientifically respectable.
Actually, the serious study of consciousness was made respectable by Julian
On Sunday 06 May 2007 09:47, Mike Tintner wrote:
For example - and this is the real issue that concerns YOU and AGI - I just
introduced an entirely new dimension to the free will debate.
Everybody and his dog, especially the philosophers, thinks that they have some
special insight into free
Richard,
I don't think I'm not getting it at all.
What you have here is a lot of good questions about how the graphics level
of processing that I am proposing, might work. And I don't have the answers,
and haven't really thought about them yet. What I have proposed is a general
idea loosely
Pei,
I don't think there's any confusion here. Your system as you describe it IS
deterministic. Whether an observer might be confused by it is irrelevant.
Equally the fact that it is determined by a complex set of algorithms
applying to various tasks and domains and not by one task-specific
If there's any confusion, think about many women and dieting. They will be
confronted by much the same decisions about whether to eat or not to eat
on
possibly thousands of occasions throughout their lives. And over and over,
throughout their entire lives, they will - freely - decide now this
On May 6, 2007, at 2:27 PM, J. Storrs Hall, PhD. wrote:
The only person, for my money, who has really seen through it is Drew
McDermott, Yale CS prof (former student of Minsky). He points out
that almost
any straightforward mental architecture for a robot that models the
world for
planning
Mark,
Indeed. Many confusions are caused by the ambiguity and context
dependency of terms in natural languages.
For this reason, it is not a good idea to simply label a system as
deterministic or non-deterministic without clarifying the sense of
the term.
Pei
On 5/6/07, Mark Waser [EMAIL
On 5/6/07, Mike Tintner [EMAIL PROTECTED] wrote:
Pei,
I don't think there's any confusion here. Your system as you describe it IS
deterministic. Whether an observer might be confused by it is irrelevant.
Equally the fact that it is determined by a complex set of algorithms
applying to various
On Sunday 06 May 2007 17:59, J. Andrew Rogers wrote:
On May 6, 2007, at 2:27 PM, J. Storrs Hall, PhD. wrote:
The only person, for my money, who has really seen through it is Drew
McDermott, Yale CS prof (former student of Minsky). ...
Eh? Unless McDermott first came up with that idea long
J. Storrs Hall, PhD. writes:
I'm intending to do low-level vision on (one) 8800 and everything else on my
(dual) Clovertowns.
Do you have any particular architectures / algorithms you're working on?
Your
approach and mine sound like there could be valuable shared effort...
First I'm going
On May 6, 2007, at 4:08 PM, J. Storrs Hall, PhD. wrote:
On Sunday 06 May 2007 17:59, J. Andrew Rogers wrote:
On May 6, 2007, at 2:27 PM, J. Storrs Hall, PhD. wrote:
The only person, for my money, who has really seen through it is
Drew
McDermott, Yale CS prof (former student of Minsky). ...
Mike,
I really don't know what to say any more.
Too much of what you suggest has been considered in great depth by other
people. It is an insult to them, if you ignore what they did.
You need to learn about cognitive science, THEN come back and argue
about it.
Richard Loosemore.
My comment stemmed from my experience as a professional cognitive
scientist. Please don't pull this kind of stunt.
Mike Tintner wrote:
Richard,
Welcome to the Virtual Home for
the NCSU Cognitive Science Program!
Cognitive Science is an exciting area of interdisciplinary research that
Richard,
I have taken your point that you are pissed off with me and do not wish to
talk to me. However, you are being unwarrantedly insulting to me, if you
think I am pulling a stunt. I was making a genuinely meant point - it is
no problem to produce an endless series of cognitive science
Pei,
I assumed your system is deterministic from your posts, not your papers. So
I'm still really, genuinely confused by your position. You didn't actually
answer my question (unless I've missed something in all these posts) re how
your system could have a choice and yet not be arbitrary at
--- YKY (Yan King Yin) [EMAIL PROTECTED] wrote:
On 5/6/07, Matt Mahoney [EMAIL PROTECTED] wrote:
YKY, what do you mean by scruffie? Is that anyone who doesn't think
FOPL
should be the core of an AGI?
Scruffies tend to think AGI consists of a large number of
heterogeneous modules.