Sorry about the late reply.
[snip: some stuff sorted out]
2008/6/30 Vladimir Nesov [EMAIL PROTECTED]:
On Tue, Jul 1, 2008 at 2:02 AM, William Pearson [EMAIL PROTECTED] wrote:
2008/6/30 Vladimir Nesov [EMAIL PROTECTED]:
If internals are programmed by humans, why do you need an automatic
system to
Terren,
This is going too far. We can reconstruct to a considerable extent how
humans think about problems - their conscious thoughts. Artists have been
doing this reasonably well for hundreds of years. Science has so far avoided
this, just as it avoided studying first the mind, with
On Wed, Jul 2, 2008 at 2:48 PM, William Pearson [EMAIL PROTECTED] wrote:
Okay let us clear things up. There are two things that need to be
designed, a computer architecture or virtual machine and programs that
form the initial set of programs within the system. Let us call the
internal
John G. Rose wrote:
[snip]
Building a complexity-based intelligence much different from the human brain
design, but still basically dependent on complexity, is not impossible, just
formidable. Working with software systems that have designed complexity and
getting predicted emergence and in this case
So yes, I think there are perfectly fine, rather simple
definitions for computing machines that can (it seems
like) perform calculations that Turing machines cannot.
It should really be noted that quantum computers fall
into this class.
This is very interesting. Previously, I had heard (but not
Mike,
This is going too far. We can reconstruct to a considerable
extent how humans think about problems - their conscious thoughts.
Why is it going too far? I agree with you that we can reconstruct thinking, to
a point. I notice you didn't say we can completely reconstruct how humans
From: Richard Loosemore [mailto:[EMAIL PROTECTED]
Ah, but now you are stating the Standard Reply, and what you have to
understand is that the Standard Reply boils down to this: We are so
smart that we will figure a way around this limitation, without having
to do anything so crass as just
Terren,
Obviously, as I indicated, I'm not suggesting that we can easily construct a
total model of human cognition. But it ain't that hard to reconstruct
reasonable and highly informative, if imperfect, models of how humans
consciously think about problems. As I said, artists have been
The standard model of quantum computation as defined by Feynman and
Deutsch is Turing computable (based on the concept of qubits). As
proven by Deutsch, they compute the same set of functions as Turing
machines, but faster (if they are feasible).
Non-standard models of quantum computation are not
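Zenil's point that standard quantum computation is Turing computable can be illustrated by classically simulating a qubit: a classical program can track the state vector exactly, which is all a Turing machine needs (at exponential cost as qubits are added). A minimal sketch in plain Python, with names of my own choosing:

```python
import math

def apply_hadamard(state):
    """Apply the Hadamard gate to a single-qubit state [a, b]."""
    a, b = state
    s = 1 / math.sqrt(2)
    return [s * (a + b), s * (a - b)]

def probabilities(state):
    """Born-rule measurement probabilities for |0> and |1>."""
    return [abs(amp) ** 2 for amp in state]

qubit = [1.0, 0.0]             # start in |0>
qubit = apply_hadamard(qubit)  # equal superposition
probs = probabilities(qubit)   # both outcomes ~0.5
```

Applying the Hadamard gate a second time returns the state to |0>, as it should, since the simulation is an exact classical computation of the quantum evolution, not an approximation.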
2008/7/2 Terren Suydam [EMAIL PROTECTED]:
Mike,
This is going too far. We can reconstruct to a considerable
extent how humans think about problems - their conscious thoughts.
Why is it going too far? I agree with you that we can reconstruct thinking,
to a point. I notice you didn't say
Mike,
That's a rather weak reply. I'm open to the possibility that my ideas are
incorrect or need improvement, but calling what I said nonsense without further
justification is just hand-waving.
Unless you mean this as your justification:
Your conscious, inner thoughts are not that different
Will,
My plan is to go for 3) Usefulness. Cognition is useful from an
evolutionary point of view; if we try to create systems that are
useful in the same situations (social, building world models), then we
might one day stumble upon cognition.
Sure, that's a valid approach for creating
2008/7/2 Vladimir Nesov [EMAIL PROTECTED]:
On Wed, Jul 2, 2008 at 2:48 PM, William Pearson [EMAIL PROTECTED] wrote:
Okay let us clear things up. There are two things that need to be
designed, a computer architecture or virtual machine and programs that
form the initial set of programs within
Hector Zenil said:
and that is one of the many issues of hypercomputation: each time one
comes up with a standard model of hypercomputation there is always
another not equivalent model of hypercomputation that computes a
different set of functions, i.e. there is no convergence in models
unlike
On Wed, Jul 2, 2008 at 1:30 PM, Abram Demski [EMAIL PROTECTED] wrote:
Hector Zenil said:
and that is one of the many issues of hypercomputation: each time one
comes up with a standard model of hypercomputation there is always
another not equivalent model of hypercomputation that computes a
Yes, I was not claiming that there was just one type of hypercomputer,
merely that some initially very different-looking types do turn out to
be equivalent.
You seem quite knowledgeable about the subject. Can you recommend any
books or papers?
On Wed, Jul 2, 2008 at 1:42 PM, Hector Zenil [EMAIL
How do you assign credit to programs that are good at generating good
children? Particularly, could a program specialize in this, so that it
doesn't do anything useful directly but always through making highly
useful children?
On Wed, Jul 2, 2008 at 1:09 PM, William Pearson [EMAIL PROTECTED]
2008/7/2 Abram Demski [EMAIL PROTECTED]:
How do you assign credit to programs that are good at generating good
children?
I never directly assign credit, apart from the first stage. The rest
of the credit assignment is handled by the vmprograms, er,
programming.
Particularly, could a program
2008/7/2 Vladimir Nesov [EMAIL PROTECTED]:
On Wed, Jul 2, 2008 at 9:09 PM, William Pearson [EMAIL PROTECTED] wrote:
They would get less credit from the human supervisor. Let me expand on
what I meant about the economic competition. Let us say vmprogram A
makes a copy of itself, called A', with
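Pearson's description of A spawning A' suggests a toy model of the credit economy. The sketch below is purely hypothetical (the class and function names are mine, not from the actual vmprogram design): a parent pays part of its own credit to endow a child, so the supervisor never credits parents directly for good children; credit reaches them only through the children they funded.

```python
class VMProgram:
    """Toy stand-in for a vmprogram holding transferable credit."""

    def __init__(self, name, credit):
        self.name = name
        self.credit = credit

    def spawn(self, child_name, endowment):
        """Create a child, transferring part of this program's credit to it."""
        assert endowment <= self.credit, "cannot endow more than we hold"
        self.credit -= endowment
        return VMProgram(child_name, endowment)

def reward(program, amount):
    """Supervisor grants credit for directly useful behaviour."""
    program.credit += amount

a = VMProgram("A", credit=100)
a_prime = a.spawn("A'", endowment=40)  # A pays 40 to create A'
reward(a_prime, 25)                    # supervisor rewards the child, not A
```

Under this sketch a specialist parent that only makes children survives exactly when its children, on average, return more credit than they cost to endow, which is one possible answer to Demski's question.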
On Wed, Jul 2, 2008 at 3:39 PM, Abram Demski [EMAIL PROTECTED] wrote:
Yes, I was not claiming that there was just one type of hypercomputer,
merely that some initially very different-looking types do turn out to
be equivalent.
You seem quite knowledgeable about the subject. Can you recommend
On Thu, Jul 3, 2008 at 12:59 AM, William Pearson [EMAIL PROTECTED] wrote:
2008/7/2 Vladimir Nesov [EMAIL PROTECTED]:
On Wed, Jul 2, 2008 at 9:09 PM, William Pearson [EMAIL PROTECTED] wrote:
They would get less credit from the human supervisor. Let me expand on
what I meant about the economic
WHAT PORTION OF CORTICAL PROCESSES ARE BOUND BY THE BINDING PROBLEM?
Here is an important practical, conceptual problem I am having trouble with.
In an article entitled "Are Cortical Models Really Bound by the 'Binding
Problem'?", Tomaso Poggio's group at MIT takes the position that there