David Butler [EMAIL PROTECTED] wrote: Would two AGI's with the same initial
learning program, same hardware in a controlled environment (same access to a
specific learning base - something like an encyclopedia) learn at different rates
and excel in different tasks?
How would an AGI choose
Mike Tintner,
If you really do not think that digital computers can, by definition, be
creative, I do not understand why you would want to join a mailing list
with AGI researchers. Computers operate by running software; thus, they need
to be programmed. It just seems to me that you do not understand
On 07/01/2008, Robert Wensman [EMAIL PROTECTED] wrote:
I think what you really want to use is the
concept of adaptability, or maybe you could say you want an AGI system that
is programmed in an indirect way (meaning that the program instructions are
very far away from what the system actually
On Jan 7, 2008 9:12 AM, Mike Tintner [EMAIL PROTECTED] wrote:
Robert,
Look, the basic reality is that computers have NOT yet been creative in any
significant way, and have NOT yet achieved AGI - general intelligence - or
indeed any significant rule-breaking adaptivity. (If you disagree,
Mike,
This discussion is just another repetition of a common fallacy, namely
that computers cannot be creative (or flexible, adaptive, original, etc.)
because they are programmed.
The fallacy can be illustrated by considering the following set of
situations.
1) If I tell a child how to
Robert,
Look, the basic reality is that computers have NOT yet been creative in any
significant way, and have NOT yet achieved AGI - general intelligence - or
indeed any significant rule-breaking adaptivity. (If you disagree, please
provide examples. Ben keeps claiming/implying he's solved
Mike,
Let me clarify further. What I and other computer scientists mean by a
program is probably something like *a formal and non-ambiguous description
of a deterministic system that operates over time*. Thus, if you can
describe something in nature with enough detail, your description is a
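Wensman's definition above can be made concrete with a toy sketch (my own illustration, not from any poster in the thread): once the update rule and the initial state are fixed, the system's entire behaviour over time is fixed with it.

```python
def step(state):
    # One fixed, non-ambiguous update rule: this IS the "program" in
    # Wensman's sense - a deterministic system described over time.
    return (state * 3 + 1) % 17

def run(state, steps):
    # Given the rule and an initial state, the whole trajectory is
    # determined in advance; running it twice can never differ.
    trajectory = [state]
    for _ in range(steps):
        state = step(state)
        trajectory.append(state)
    return trajectory

print(run(2, 5))  # the same call always prints the same trajectory
```

The rule and constants here are arbitrary; the point is only that "programmed" means the trajectory is entailed by the description, not that it must be simple or obvious to a human reader.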
Mike,
To put my question in another way. Would you like to understand
intelligence? Understand it to such a degree, that you can give a detailed
and non-ambiguous description of how an intelligent system operates over
time? Well, if you do want that, then you want, using standard
terminology, to
Would two AGI's with the same initial learning program, same hardware
in a controlled environment (same access to a specific learning base -
something like an encyclopedia) learn at different rates and excel in
different tasks?
Mike,
To put my question in another way. Would you like to
-Original Message-
From: Richard Loosemore [mailto:[EMAIL PROTECTED]
Sent: Monday, January 07, 2008 10:09 AM
To: agi@v2.listbox.com
Subject: Can Computers Be Creative? [WAS Re: [agi] A Simple Mathematical
Test of Cog Sci.]
Mike,
This discussion is just another repetition of a common
On Jan 7, 2008 12:08 PM, David Butler [EMAIL PROTECTED] wrote:
Would two AGI's with the same initial learning program, same hardware in a
controlled environment (same access to a specific learning base - something
like an encyclopedia) learn at different rates and excel in different tasks?
Yes
How would an AGI choose which things to learn first if given enough
data so that it would have to make a choice? If two AGI's (again-same
hardware, learning programs and controlled environment) were given
the same data would they make different choices?
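Ben's "Yes" above hinges on where variation enters. A toy sketch (hypothetical, not from any poster) makes the boundary case sharp: two runs of the same deterministic learning program, with identical data and an identical random seed, make identical choices; any divergence must come from a difference in seed, timing, hardware, or environment.

```python
import random

def choose_study_order(seed, topics):
    # Toy "learner": picks an exploration order using a seeded RNG.
    # Same program + same data + same seed => same choices, always.
    rng = random.Random(seed)
    order = list(topics)
    rng.shuffle(order)
    return order

topics = ["math", "music", "biology", "history"]
agi_a = choose_study_order(42, topics)
agi_b = choose_study_order(42, topics)
print(agi_a == agi_b)  # True: identical runs cannot diverge
```

So two AGIs as described can only "make different choices" to the extent that some ingredient - a seed, sensor noise, scheduling order - actually differs between them.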
On Jan 7, 2008, at 11:15 AM,
2008/1/7, David Butler [EMAIL PROTECTED]:
How would an AGI choose which things to learn first if given enough
data so that it would have to make a choice?
This is a simple question that demands a complex answer. It is like asking
"How can a commercial airliner fly across the Atlantic?" Well,
Robert,
Thank you for your time. I am not a scientist, nor do I have an
opinion or agenda on whether a successful AGI can be built. I am
just really curious and excited about the prospects.
On Jan 7, 2008, at 12:39 PM, Robert Wensman wrote:
2008/1/7, David Butler [EMAIL PROTECTED]:
Mike,
You have mischaracterized cog sci. It does not say the things you
claim it does.
What you are actually trying to attack was a particular view of AI (not
cog sci) in which everything is symbolic in a particular kind of way.
That stuff is just a straw man.
Cog sci in general
David Butler wrote:
I would say that the best way to simulate human intelligence with
diversity and creativity is to create not one AGI but many. The only way
to ensure diversity and natural selection like our own evolution is to
simultaneously create multiple AGI's so that we have a better
On Jan 5, 2008 10:52 PM, Mike Tintner [EMAIL PROTECTED] wrote:
I think I've found a simple test of cog. sci.
I take the basic premise of cog. sci. to be that the human mind - and
therefore its every activity, or sequence of action - is programmed.
No. This is one perspective taken by some
I don't really understand what you mean by "programmed" ... nor by "creative"
You say that, according to your definitions, a GA is programmed and
ergo cannot be creative...
How about, for instance, a computer simulation of a human brain? That
would be operated via program code, hence it would be
Benjamin Goertzel wrote:
I don't really understand what you mean by "programmed" ... nor by "creative"
You say that, according to your definitions, a GA is programmed and
ergo cannot be creative...
How about, for instance, a computer simulation of a human brain? That
would be operated via program
On Jan 6, 2008 3:07 PM, a [EMAIL PROTECTED] wrote:
Creativity is a byproduct of analogical reasoning, or abstraction. It
has nothing to do with symbols or genetic algorithms! GA is too
computationally complex to generate creative solutions.
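The thread keeps returning to genetic algorithms as the test case, so a minimal GA sketch may help (my own illustration, with made-up parameters, not anyone's actual system): every step below is fixed in advance by the programmer, yet the particular solution a run converges on is written nowhere in the code - which is exactly the distinction the "programmed, therefore not creative" argument glosses over.

```python
import random

def onemax_ga(length=20, pop_size=30, generations=60, seed=0):
    """Minimal GA: evolve bitstrings toward all-ones (the OneMax toy task)."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]

    def fitness(ind):
        return sum(ind)

    def pick():
        # Tournament selection of size 2: fitter of two random individuals.
        a, b = rng.sample(pop, 2)
        return a if fitness(a) >= fitness(b) else b

    for _ in range(generations):
        nxt = []
        for _ in range(pop_size):
            p1, p2 = pick(), pick()
            cut = rng.randrange(1, length)   # one-point crossover
            child = p1[:cut] + p2[cut:]
            i = rng.randrange(length)        # one point mutation per child
            child[i] ^= 1
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

best = onemax_ga()
print(sum(best), "ones out of", len(best))
```

Whether this counts as "creative" is precisely what the thread disputes; the sketch only shows that "fully programmed" and "solution explicitly authored" are not the same thing.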
care to explain what sounds so absolute as to
Ben,
Sounds like you may have missed the whole point of the test - though I mean
no negative comment by that - it's all a question of communication.
A *program* is a prior series or set of instructions that shapes and
determines an agent's sequence of actions. A precise itinerary for a
On Jan 6, 2008 4:00 PM, Mike Tintner [EMAIL PROTECTED] wrote:
Ben,
Sounds like you may have missed the whole point of the test - though I mean
no negative comment by that - it's all a question of communication.
A *program* is a prior series or set of instructions that shapes and
determines
Benjamin Goertzel wrote:
So, is your argument that digital computer programs can never be creative,
since you have asserted that programmed AI's can never be creative
Hard-wired AI (such as knowledge bases, NLP, and symbol systems) cannot be creative.
-
This list is sponsored by AGIRI:
Mike,
The short answer is that I don't believe that computer *programs* can be
creative in the hard sense, because they presuppose a line of enquiry, a
predetermined approach to a problem -
...
But I see no reason why computers couldn't be briefed rather than
programmed, and freely associate
Well, we (Penrose & co.) are all headed in roughly the same direction, but
we're taking different routes.
If you really want the discussion to continue, I think you have to put out
something of your own approach here to spontaneous creativity (your terms)
as requested.
Yes, I still see the
If you believe in principle that no digital computer program can ever
be creative, then there's no point in me or anyone else rambling on at
length about their own particular approach to digital-computer-program
creativity...
One question I have is whether you would be convinced that digital
I think I've found a simple test of cog. sci.
I take the basic premise of cog. sci. to be that the human mind - and
therefore its every activity, or sequence of action - is programmed. Eric
Baum epitomises cog. sci. Baum proposes [in What Is Thought] that
underlying mind is a complex but
I would say that the best way to simulate human intelligence with
diversity and creativity is to create not one AGI but many. The only
way to ensure diversity and natural selection like our own evolution
is to simultaneously create multiple AGI's so that we have a better
chance of the