On Wed, Dec 3, 2008 at 1:51 AM, Ben Goertzel <[EMAIL PROTECTED]> wrote:
> Hector,
>
> Yes, it's possible that the brain uses uncomputable neurons to predict
> uncomputable physical dynamics in the observed world
>
> However, even if this is the case, **there is no possible way to
> verify or falsify this hypothesis using science**, if science is
> construed to involve evaluation of theories based on finite sets of
> finite-precision data ...
>
> So, this hypothesis has much the same status as the hypothesis that
> the brain has an ineffable soul inside it, which can never be
> measured.  This is certainly possible too, but we have no way to
> verify or falsify it using science.
>
> You may say the hypothesis of neural hypercomputing valid in the sense
> that it helps guide you to interesting, falsifiable theories.  That's
> fine.  But, then you  must admit that the hypothesis of souls could be
> valid in the same sense, right?  It could guide some other people to
> interesting, falsifiable theories -- even though, in itself, it stands
> outside the domain of scientific validation/falsification.
>

I understand the point, but I insist that it is not that trivial. You
could apply the same argument against the automated proof of the
four-color theorem: since no human is capable of verifying it in a
lifetime (and even if a group of people tried to verify it, no single
mind would ever have the intellectual capacity to become convinced on
its own), the four-color proof would not be science. And I, for one, am
pretty convinced that it is, as are computer science and proof theory.
Actually, I think that kind of proof and approach to science will occur
more and more often, as we can already witness.

Just as the four-color theorem was proved and then verified by another
computer program, the outcome of a hypercomputer could be verified by
another hypercomputer. And just as in the finite case of the
four-color theorem, you would not be able to verify it except by
trusting another system.
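The trust need not be blind, by the way: in the four-color case, once a coloring is produced, checking it is a trivial computation that anyone can re-implement independently, even though finding the proof required massive search. A minimal sketch of such an independent checker (the graph and the claimed coloring below are made-up toy examples, not anything from the actual proof):

```python
def is_valid_coloring(edges, coloring, max_colors=4):
    """Check that a claimed coloring uses at most `max_colors` colors
    and that no edge joins two vertices of the same color."""
    if any(c not in range(max_colors) for c in coloring.values()):
        return False
    return all(coloring[u] != coloring[v] for u, v in edges)

# A tiny example graph (a square with one diagonal) and a claimed 4-coloring.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
claimed = {0: 0, 1: 1, 2: 2, 3: 1}
print(is_valid_coloring(edges, claimed))  # True
```

The asymmetry is the point: the search that produced the result may be beyond any single mind, while the checker stays small enough to audit by hand.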

I am not a hypercomputationalist, quite the opposite! But closed
definitions of what science is, and people trying to possess the one
right definition of science, look pretty narrow to me. If I were
director of a computer science department, I probably wouldn't put any
money into hypercomputation research. But even if it is "just"
philosophy, that doesn't make it less valid or less plausible. On the
other hand, the "scientific" arguments against it often sound very
weak, perhaps just as weak as the arguments in favor, and sometimes
even weaker.

What if a hypercomputer provided you, each time you asked, with the
answer to whether a given Turing machine halts? You effectively cannot
verify that it works for all cases (this is, of course, a problem of
induction widespread in science in general), but I am pretty sure you
would believe that it is what it claims to be if, for any Turing
machine, as complicated as you like, it told you whether it halts and
when (you could argue, for example, that it is just simulating the
Turing machine extremely fast, but let's suppose it does it
instantaneously). How would this predictive power make it less
scientific than, say, quantum mechanics? To me, that would be much more
scientific than people doing string theory...
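Note that such a device would at least be one-sidedly testable: every "never halts" verdict is refuted the moment you catch the machine halting. A sketch of that falsification loop, where `oracle` and `CountdownMachine` are hypothetical stand-ins of my own invention for the claimed device and for a machine whose behavior we can observe step by step:

```python
def check_oracle(oracle, machine, max_steps):
    """Run `machine` for up to `max_steps` steps and compare against the
    oracle's verdict. Returns 'refuted' only on an observed contradiction."""
    verdict = oracle(machine)        # True = "halts", False = "never halts"
    halted = machine.run(max_steps)  # True if it halted within the budget
    if halted and not verdict:
        return "refuted"             # it halted; the oracle said it never would
    return "consistent so far"       # no contradiction observable yet

class CountdownMachine:
    """Toy 'Turing machine' that halts after a fixed number of steps."""
    def __init__(self, steps):
        self.steps = steps
    def run(self, budget):
        return self.steps <= budget

good_oracle = lambda m: True   # happens to agree with this machine
bad_oracle = lambda m: False   # claims the machine never halts

print(check_oracle(good_oracle, CountdownMachine(10), 100))  # consistent so far
print(check_oracle(bad_oracle, CountdownMachine(10), 100))   # refuted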

The same goes for noise. People tend to think of it as a constraint,
but some recent results in computational complexity, and serious
interpretations of them, suggest that, as I was saying before, if
nature is indeterministic, noise is actually a computation carried out
by something more powerful than a universal Turing machine (even if it
seems meaningless). So by itself, rather than subtracting
computational power, it might add some! One would of course need to
reconcile this with thermodynamics, but there are actually some
interpretations that would easily allow this view of noise.
However, I don't think I will pursue that thread of discussion.
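A much weaker, entirely conventional cousin of "noise as a resource" is already standard in computational complexity: randomized algorithms trade a tiny error probability for large speedups, all within the Turing limit. Freivalds' algorithm is the textbook example; it checks a claimed matrix product in O(n^2) time per trial instead of re-multiplying in O(n^3) (the matrices below are just an illustrative toy case):

```python
import random

def freivalds(A, B, C, trials=20):
    """Probabilistically check whether A x B == C using random 0/1 vectors.
    A wrong C is detected in each trial with probability >= 1/2, so after
    `trials` rounds the error probability is at most 2**-trials."""
    n = len(A)
    for _ in range(trials):
        r = [random.randint(0, 1) for _ in range(n)]
        # Compare A(Br) with Cr: three matrix-vector products, no full A x B.
        Br = [sum(B[i][j] * r[j] for j in range(n)) for i in range(n)]
        ABr = [sum(A[i][j] * Br[j] for j in range(n)) for i in range(n)]
        Cr = [sum(C[i][j] * r[j] for j in range(n)) for i in range(n)]
        if ABr != Cr:
            return False  # definitely not equal
    return True  # equal with high probability

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
C_good = [[19, 22], [43, 50]]
C_bad = [[19, 22], [43, 51]]
print(freivalds(A, B, C_good))  # True
print(freivalds(A, B, C_bad))   # False, with overwhelming probability
```

This shows only that randomness helps in practice; whether physical noise goes further and exceeds Turing computability is exactly the speculative question above.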

Along with the bibliography I provided before, I also recommend a very
recent paper by Karl Svozil in the Complex Systems journal on whether
hypercomputation is falsifiable.


> It is possible that the essence of intelligence lies in something that
> can't be scientifically addressed.  If so, no matter how many
> finite-precision measurements of the brain we record and analyze,
> we'll never get at the core of intelligence that way.  So, in that
> hypothesis, if we succeed at making AGI, it will be due to some
> non-scientific, non-computable force somehow guiding us.  However, I
> doubt this is the case.  I strongly suspect the essence of
> intelligence lies in properties of systems that can be measured, and
> therefore *not* in hypercomputing.
>
> Consciousness is another issue -- I do happen to think there is an
> aspect of consciousness that, like hypercomputing, lies outside the
> realm of science.  However, I don't fall for the argument that X and Y
> must be equal just because they're both outside the realm of
> science...
>
> -- Ben G
>
> On Tue, Dec 2, 2008 at 6:54 PM, Hector Zenil <[EMAIL PROTECTED]> wrote:
>> Suppose that the gravitational constant is a non-computable number (it
>> might be, we don't know because as you say, we can only measure with
>> finite precision). Planets compute G as part of the law of gravitation
>> that rules their movement (you can of course object, that G is part of
>> a model that has been replaced by a another theory --General
>> Relativity-- and that neither one nor the other can be taken as full
>> and ultimate descriptions, but then I can change my argument to
>> whichever theory turns out to be the ultimate and true, even if we
>> never have access to it). Planets don't necessarily have to encode and
>> decode G, because it is taken as given: it is already naturally
>> encoded, and they just follow the law in which it appears. Likewise, if
>> a non-computable number is already encoded in the brain, then to compute
>> with such a real number a neuron would not necessarily need to encode or
>> decode the number. The neuron could then carry out a non-computable
>> computation (no measurement involved) and give a "no"/"yes"
>> answer, just as a planet hits or misses another planet by
>> following a non-computable gravitational constant.
>>
>> But even in the case where measurement is needed, only the most
>> significant part relevant to the computation being performed is
>> actually needed, since we are not interested in infinitely long
>> computations; that is also why, even though noise is of course a
>> practical problem, it is not an insurmountable one. Now you can argue
>> that if only a finite part (the most significant part) of the real
>> number is necessary to perform the computation, it would have sufficed
>> to store only a rational (computable) number from the beginning, rather
>> than a non-computable number. However, it is this potential access to
>> an infinite number that makes the system more powerful, not the
>> ability to make infinite-precision measurements.
>>
>> For more about these results you can take a look at Hava Siegelmann's
>> work on analog recurrent neural networks, which, more than a work
>> on hypercomputation, I consider a work on computational complexity
>> with pretty nice scientific results. On the other hand, I would say
>> that I may have many objections, mainly those pointed out by Davis in
>> his paper "The Myth of Hypercomputation", which I also recommend in
>> case you haven't read it. The only thing that, from my point of view,
>> Davis trivializes is that whether there are non-computable numbers
>> in nature whose computational power could be exploited is an open
>> question, so the idea is still plausible.
>>
>>
>> On Wed, Dec 3, 2008 at 12:17 AM, Ed Porter <[EMAIL PROTECTED]> wrote:
>>> Hector,
>>>
>>>
>>>
>>> Thank you for your reply saying my description of your paper was much better
>>> than clueless.
>>>
>>>
>>>
>>> I am, however, clueless about how to interpret the second paragraph of your
>>> reply (all of which is copied below).
>>>
>>>
>>>
>>> For example, I am confused by your statements that:
>>>
>>>
>>>
>>> "such a power beyond the computational power of Turing machines, does not
>>> require to communicate, encode or decode any infinite value in order to
>>> compute a non-computable function."
>>>
>>>
>>>
>>> considering that you then state:
>>>
>>>
>>>
>>> "A characteristic function is one of the type "yes" or "no", so it only
>>> needs to transmit a finite amount of information even if the answer required
>>> an infinite amount."
>>>
>>>
>>>
>>> What I don't understand is how a system
>>>
>>>
>>>
>>> "does not require to communicate, encode or decode any infinite value in
>>> order to compute a non-computable function"
>>>
>>>
>>>
>>> if its
>>>
>>>
>>>
>>> "answer required an infinite amount [of information]."
>>>
>>>
>>>
>>> It seems like the computing of an infinite amount of information was
>>> required somewhere, even if not in communicating the answer, so how does
>>> such a system not, as you said,
>>>
>>>
>>>
>>> "require to communicate, encode or decode any infinite value in order to
>>> compute a non-computable function"
>>>
>>>
>>>
>>> even if only internally?
>>>
>>>
>>>
>>> Ed Porter
>>>
>>>
>>>
>>>
>>>
>>> -----Original Message-----
>>> From: Hector Zenil [mailto:[EMAIL PROTECTED]
>>> Sent: Tuesday, December 02, 2008 5:14 PM
>>> To: [email protected]
>>> Subject: Re: >> RE: FW: [agi] A paper that actually does solve the problem
>>> of consciousness
>>>
>>>
>>>
>>> Hi Ed,
>>>
>>>
>>>
>>> I am glad you have read the paper with such detail. You have
>>> summarized quite well what it is about. I have no objection to the
>>> points you make. It is only important to bear in mind that the paper
>>> is about studying the possible computational power of the mind by
>>> using the model of an artificial neural network. The question of
>>> whether the mind is something else was not in the scope of that paper.
>>> Assuming that the brain is a neural network, we wanted to see what
>>> features might allow the neural network to achieve a certain
>>> computational power. We found that it would require an encoding either
>>> at the level of the neuron (in space, e.g. a natural encoding of a real
>>> number) or in the neuron firing times. In both cases, to reach any
>>> computational power beyond the Turing limit one would need either
>>> infinite or infinitesimal space or time, assuming finite brain
>>> resources (number of neurons and connections). My personal opinion
>>> (perhaps not reflected in the paper itself) is that such
>>> super-capabilities do not really hold, but the idea was to explore all
>>> the possibilities.
>>>
>>> It is also very important to highlight that such a power beyond the
>>> computational power of Turing machines does not require communicating,
>>> encoding or decoding any infinite value in order to compute a
>>> non-computable function. It suffices to posit a natural encoding
>>> either in the space or time in which the neurons work, and then ask
>>> questions in the form of characteristic functions encoding a
>>> non-computable function. A characteristic function is one of the type
>>> "yes" or "no", so it only needs to transmit a finite amount of
>>> information even if the answer required an infinite amount. So a set
>>> of neurons may be capable of taking advantage of infinitesimals and
>>> answering yes or no to a non-computable function; even if I think that
>>> is not actually the case, it might be. That seems perhaps compatible
>>> with your ideas about consciousness.
>>>
>>>
>>>
>>> - Hector
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>> On Tue, Dec 2, 2008 at 5:31 PM, Ed Porter <[EMAIL PROTECTED]> wrote:
>>>
>>>> Hector,
>>>
>>>>
>>>
>>>>
>>>
>>>>
>>>
>>>> I skimmed your paper linked to in the post below.
>>>
>>>>
>>>
>>>>
>>>
>>>>
>>>
>>>> From my quick read it appears the only meaningful way it suggests a brain
>>>> might be infinite was that, since the brain uses analogue values --- such as
>>>> synaptic weights, or variable time intervals between spikes (and presumably
>>>> since those analogue values would be determined by so many factors, each of
>>>> which might modify their values slightly) --- the brain would be capable of
>>>> computing many values, each of which could arguably have infinite gradation
>>>> in value.  So arguably its computations would be infinitely complex, in
>>>> terms of the number of bits that would be required to describe them exactly.
>>>>
>>>> Of course, it is not clear the universe itself supports infinitely fine
>>>> gradation in values, which your paper admits is a question.
>>>>
>>>> But even if the universe and the brain did support infinitely fine
>>>> gradations in value, it is not clear that computing with weights or signals
>>>> capable of such infinitely fine gradations necessarily yields computing
>>>> that is meaningfully much more powerful, in terms of the sense of experience
>>>> it can provide --- unless it has mechanisms that can meaningfully encode and
>>>> decode much more information in such infinite variability.  You can only
>>>> communicate over a very broad-bandwidth communication medium as much as your
>>>> transmitting and receiving mechanisms can encode and decode.
>>>>
>>>> For example, it is not clear that a high-definition TV capable of providing
>>>> an infinite degree of variation in its colors, rather than only say 8, 16,
>>>> 32, or 64 bits for each primary color, would provide any significantly
>>>> greater degree of visual experience, even though one could claim the TV was
>>>> sending out a signal of infinite complexity.
>>>>
>>>> I have read and been told by neural net designers that typical neural nets
>>>> operate by dividing a high-dimensional space into subspaces.  If this is
>>>> true, then it is not clear that merely increasing the resolution at which
>>>> such neural nets were computed, say beyond 64 bits, would change the number
>>>> of subspaces that could be represented with a given number, say 100 billion,
>>>> of nodes --- or that the minute changes in boundaries, or the occasional
>>>> difference in tipping points that might result from infinite-precision math,
>>>> if it were possible, would be of that great a significance with regard to
>>>> the overall capabilities of the system.  Thus, it is not clear that infinite
>>>> resolution in neural weights and spike timing would greatly increase the
>>>> meaningful (i.e., having grounding), rememberable, and actionable number of
>>>> states the brain could represent.
>>>>
>>>> My belief --- and it is only a belief at this point in time --- is that the
>>>> complexity a finite human brain could deliver is so great --- arguably equal
>>>> to 1000 million simultaneous DVD signals that interact with each other and
>>>> memories --- that such a finite computation is enough to create the sense of
>>>> experiential awareness we humans call consciousness.
>>>>
>>>> I am not aware of anything that modern science says with authority about
>>>> external reality --- or that I have sensed from my own experiences of my own
>>>> consciousness --- that would seem to require infinite resources.
>>>>
>>>> Something can have a complexity far beyond human comprehension, far beyond
>>>> even the most hyperspeed altered imaginings of a drugged mind, arguably far
>>>> beyond the complexity of the observable universe, without requiring for its
>>>> representation more than an infinitesimal fraction of anything that could be
>>>> accurately called infinite.
>>>
>>>>
>>>
>>>>
>>>
>>>>
>>>
>>>> Ed Porter
>>>
>>>>
>>>
>>>>
>>>
>>>>
>>>
>>>> -----Original Message-----
>>>
>>>> From: Hector Zenil [mailto:[EMAIL PROTECTED]
>>>
>>>> Sent: Sunday, November 30, 2008 10:42 PM
>>>
>>>> To: [email protected]
>>>
>>>> Subject: Re: >> RE: FW: [agi] A paper that actually does solve the problem
>>>
>>>> of consciousness
>>>
>>>>
>>>
>>>>
>>>
>>>>
>>>
>>>> On Mon, Dec 1, 2008 at 3:09 AM, Ben Goertzel <[EMAIL PROTECTED]> wrote:
>>>
>>>>
>>>
>>>>>> But quantum theory does appear to be directly related to limits of the
>>>>>> computations of physical reality.  The uncertainty principle and the
>>>>>> quantization of quantum states are limitations on what can be computed
>>>>>> by physical reality.
>>>>>
>>>>> Not really.  They're limitations on what measurements of physical
>>>>> reality can be simultaneously made.
>>>>>
>>>>> Quantum systems can compute *exactly* the class of Turing computable
>>>>> functions ... this has been proved according to standard quantum
>>>>> mechanics math.  However, there are some things they can compute
>>>>> faster than any Turing machine, in the average case but not the worst
>>>>> case.
>>>>
>>>> Sorry, I am not really following the discussion, but I just read that
>>>> there is some misinterpretation here. It is the standard model of
>>>> quantum computation that effectively computes exactly the Turing
>>>> computable functions, but it was almost hand-tailored to do so,
>>>> perhaps because adding to the theory an assumption of continuum
>>>> measurability was already too much (i.e. distinguishing infinitely
>>>> close quantum states). But that is far from the claim that quantum
>>>> systems can compute exactly the class of Turing computable functions.
>>>> Actually, the Hilbert space and the superposition of particles in an
>>>> infinite number of states would suggest exactly the opposite, while
>>>> the standard model of quantum computation only considers a
>>>> superposition of 2 states (the so-called qubit, capable of
>>>> entanglement in 0 and 1). But even if you stick to the standard model
>>>> of quantum computation, the "proof" that it computes exactly the set
>>>> of recursive functions [Feynman, Deutsch] can be put in jeopardy very
>>>> easily: Turing machines are unable to produce non-deterministic
>>>> randomness, something that quantum computers do as an intrinsic
>>>> property of quantum mechanics (not only because of measurement
>>>> limitations of the kind of the Heisenberg principle, but because of
>>>> quantum non-locality, i.e. the violation of Bell's inequalities). I
>>>> just exhibited a non-Turing computable function that standard quantum
>>>> computers compute... [Calude, Casti]
>>>
>>>>
>>>
>>>>
>>>
>>>>
>>>
>>>>
>>>
>>>>
>>>
>>>>>> But, I am old-fashioned enough to be more interested in things about the
>>>>>> brain and AGI that are supported by what would traditionally be considered
>>>>>> "scientific evidence" or by what can be reasoned or designed from such
>>>>>> evidence.
>>>>>>
>>>>>> If there is anything that would fit under those headings to support the
>>>>>> notion of the brain either being infinite, or being an antenna that
>>>>>> receives decodable information from some infinite-information-content
>>>>>> source, I would love to hear it.
>>>>
>>>> You and/or other people might be interested in a paper of mine
>>>> published some time ago on the possible computational power of the
>>>> human mind and the way to encode infinite information in the brain:
>>>>
>>>> http://arxiv.org/abs/cs/0605065
>>>>
>>>>> the key point of the blog post you didn't fully grok was a careful
>>>>> argument that (under certain, seemingly reasonable assumptions)
>>>>> science can never provide evidence in favor of infinite mechanisms...
>>>>>
>>>>> ben g
>>>
>>>>> -------------------------------------------
>>>>> agi
>>>>> Archives: https://www.listbox.com/member/archive/303/=now
>>>>> RSS Feed: https://www.listbox.com/member/archive/rss/303/
>>>>> Modify Your Subscription: https://www.listbox.com/member/?&;
>>>>> Powered by Listbox: http://www.listbox.com
>>>
>>>> --
>>>> Hector Zenil                        http://www.mathrix.org
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>> --
>>> Hector Zenil                        http://www.mathrix.org
>>
>>
>>
>> --
>> Hector Zenil                            http://www.mathrix.org
>>
>>
>>
>
>
>
> --
> Ben Goertzel, PhD
> CEO, Novamente LLC and Biomind LLC
> Director of Research, SIAI
> [EMAIL PROTECTED]
>
> "I intend to live forever, or die trying."
> -- Groucho Marx
>
>
>



-- 
Hector Zenil                            http://www.mathrix.org

