Roger, do you think that humans do not function
in accord with pre-ordained hardware and software?
Richard

On Wed, Aug 29, 2012 at 7:31 AM, Roger Clough <rclo...@verizon.net> wrote:

>  ROGER: Hi Bruno Marchal
>
> I don't agree. Machines must function according to their software and
> hardware, neither of which is their own.
> BRUNO: A robot can already answer questions, and talk, about its own
> software and hardware. The language Smalltalk makes this explicit with the
> keyword "self", but it can be done in any programming language by the use
> of a famous diagonalization trick, which I often sum up as: if Dx gives
> x"x", then D"D" gives D"D". D"D" gives a description of itself.
> You get self-duplicators and other self-referential constructs by
> generalization of that constructive diagonal. A famous theorem by Kleene
> justifies their existence for all universal systems.
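To make the diagonal trick Bruno describes concrete, here is a minimal Python sketch of my own (the names D and d_text are illustrative, not Bruno's code): D turns a piece of program text x into the text of x applied to its own quotation, and feeding D its own text yields an expression whose value is its own source.

    # Sketch of the diagonal trick: D maps program text x to the text of
    # x applied to the quotation of x, i.e. x"x".
    def D(x: str) -> str:
        return x + "(" + repr(x) + ")"

    # The text of a function that behaves exactly like D.
    d_text = '(lambda x: x + "(" + repr(x) + ")")'

    # D applied to the quotation of D: an expression that evaluates to
    # its own text, i.e. a description of itself.
    self_describing = D(d_text)
    assert eval(self_describing) == self_describing
    print(self_describing)

Kleene's second recursion theorem generalizes this: in any universal system, every computable transformation of programs has a program whose behavior equals that of its own transform, which is what licenses self-duplicators and the other self-referential constructs Bruno mentions.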
>
> ROGER: Either the operation follows pre-established rules or it does not.
>
> If an operation follows rules, then it cannot come up with anything new; it
> is merely following instructions, so any such result can be traced back in
> principle to some algorithm.
>
> If an operation does not follow rules, it can only generate gibberish.
> Which is to say that synthetic statements cannot be generated by analytic
> thought.
>
> More below, but I will stop here for now.
>
> ------------------------------------------------------------------------------------------------------------------------------------------------------
> Did the robot design its hardware? No. So it is constrained by its
> hardware.
> Did the robot write the original software that can self-construct
> (presumably according to some rules of construction)? No.
> And so machines cannot do anything not intended by the author of their
> software, and they remain constrained by their hardware.
>
> What you are missing here is the aspect of free will, or at least partly
> free will.
> Intelligence is the ability to make choices on one's own. That means
> freely, of its own free will: following no rules of logic, transcending
> logic rather than being limited by it.
>
>
> BRUNO:  Do you really believe that Mandelbrot expected the Mandelbrot set?
> He said himself that it came as a surprise, despite years of observing
> fractals in nature.
>
> ROGER:  OK, it came intuitively, freely; he did not arrive at it by
> logic, although it no doubt has its own logic.
>
> BRUNO: Very simple programs ("simple" meaning a few KB) can lead to
> tremendously complex behavior. If you understand the basics of computer
> science, you understand that by building universal machines, we just don't
> know what we are doing. Keeping them slaves will be the hard work, and the
> wrong work.
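Bruno's point about a few kilobytes of code yielding rich behavior can be seen directly with the Mandelbrot set mentioned above. The sketch below is my own toy rendering (the grid size, the window on the complex plane, and the escape bound are arbitrary choices): a couple of hundred bytes of logic whose output already traces the intricate boundary that surprised Mandelbrot himself.

    # Toy escape-time rendering of the Mandelbrot set in ASCII.
    def mandelbrot(width=80, height=24, max_iter=50):
        for row in range(height):
            line = ""
            for col in range(width):
                # Map the character grid onto a window of the complex plane.
                c = complex(-2.0 + 3.0 * col / width, -1.2 + 2.4 * row / height)
                z = 0j
                for _ in range(max_iter):
                    z = z * z + c
                    if abs(z) > 2.0:      # escaped: c lies outside the set
                        line += " "
                        break
                else:
                    line += "*"           # never escaped: c is (probably) inside
            print(line)

    mandelbrot()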
>
> This was the issue you brought up before, which at that time I thought was
> miraculous, the Holy Grail I had been seeking.
> But on reflection, I no longer believe that.  IMHO anything
> that a computer does still must follow its own internal logic,
> constrained by its hardware and by the constraints of its language,
> even if those calculations are of infinite complexity.
> Nothing magical can happen. There ought to be a theorem showing that this
> must be true.
>
> So machines cannot make autonomous decisions; they can only
> make decisions intended by the software programmer.
>
>
> BRUNO: You hope.
>
>
> Bruno
>
>
>
>
>
>
>
>
> Roger Clough, rclo...@verizon.net
> 8/28/2012
> Leibniz would say, "If there's no God, we'd have to invent him so
> everything could function."
> ----- Receiving the following content -----
> From: Bruno Marchal
> Receiver: everything-list
> Time: 2012-08-27, 09:52:32
> Subject: Re: Two reasons why computers IMHO cannot exhibit intelligence
>
>
>
>
> On 27 Aug 2012, at 13:07, Roger Clough wrote:
>
>
> Hi meekerdb
>
> IMHO I don't think that computers can have intelligence,
> because intelligence consists of at least one ability:
> the ability to make autonomous choices (choices completely
> one's own). Computers can do nothing on their own;
> they can only do what software and hardware tell them to do.
>
> Another, closely related, reason is that there must be an agent that does
> the choosing,
> and IMHO the agent has to be separate from the system.
> Gödel, perhaps, I speculate.
>
>
> I will never insist on this enough. All of Gödel's stuff shows that
> machines are very well suited for autonomy. In a sense, most of applied
> computer science is used to help control what can really become
> uncontrollable and too autonomous, a bit like children's education.
>
>
> Computers are not stupid; we work a lot at making them so.
>
>
> Bruno
>
>
>
>
>
>
>
>
> Roger Clough, rclo...@verizon.net
> 8/27/2012
> Leibniz would say, "If there's no God, we'd have to invent him so
> everything could function."
> ----- Receiving the following content -----
> From: meekerdb
> Receiver: everything-list
> Time: 2012-08-26, 14:56:29
> Subject: Re: Simple proof that our intelligence transcends that of
> computers
>
>
> On 8/26/2012 10:25 AM, Bruno Marchal wrote:
> >
> > On 25 Aug 2012, at 12:35, Jason Resch wrote:
> >
> >>
> >> I agree different implementations of intelligence have different
> capabilities and
> >> roles, but I think computers are general enough to replicate any
> intelligence (so long
> >> as infinities or true randomness are not required).
> >
> > And now a subtle point. Perhaps.
> >
> > The point is that computers are general enough to replicate intelligence
> EVEN if
> > infinities and true randomness are required for it.
> >
> > Imagine that our consciousness requires some ORACLE, for example in the
> > form of some non-compressible sequence
> > 11101000011101100011111101010110100001... (say).
> >
> > Being incompressible, that sequence cannot be part of my brain at my
> > substitution level, because this would make it impossible for the doctor
> > to copy my brain into a finite string. So such a sequence operates
> > "outside my brain", and if the doctor copies me at the right comp level,
> > he will reconstitute me with the right "interface" to the oracle, so I
> > will survive and stay conscious, even though my consciousness depends on
> > that oracle.
> >
> > Will the UD, just alone, or in arithmetic, be able to copy me in front of
> > that oracle?
> >
> > Yes, as the UD dovetails on all programs, but also on all inputs, and in
> > this case it will generate me successively (with large delays in between)
> > in front of all finite approximations of the oracle, and (key point) the
> > first-person indeterminacy will have as its domain, by definition of the
> > first person, all the UD computations where my virtual brain uses the
> > relevant (for my consciousness) part of the oracle.
> >
> > A machine can only access finite parts of an oracle in the course of a
> > computation requiring the oracle, and so everything is fine.
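To make the dovetailing step concrete, here is a toy Python sketch of my own (the short list of lambdas stands in for the UD's enumeration of all programs, and oracle_prefix for the finite approximations of the oracle; a real UD would also dovetail on step counts so that non-halting programs cannot block the rest).

    # Toy dovetailer: interleave every "program" with every finite prefix of
    # an oracle, so each combination is eventually reached.
    def oracle_prefix(n):
        # Stand-in for the first n bits of a (possibly non-computable) oracle;
        # the dovetailer only ever asks for finite prefixes.
        return [(i * i + i // 3) % 2 for i in range(n)]

    programs = [
        lambda bits: sum(bits),        # toy "program" 0
        lambda bits: bits.count(0),    # toy "program" 1
        lambda bits: len(bits) * 2,    # toy "program" 2
    ]

    def dovetail(max_stage=5):
        # Stage s covers all pairs (program index, prefix length) bounded by s,
        # so every program eventually runs with every finite prefix.
        for s in range(1, max_stage + 1):
            for p in range(min(s, len(programs))):
                for n in range(s):
                    result = programs[p](oracle_prefix(n))
                    print(f"stage {s}: program {p} on {n}-bit prefix -> {result}")

    dovetail()

The staging is the whole point: no single run needs the infinite oracle, yet every program is eventually paired with every finite approximation of it, which is what Bruno relies on above.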
>
> That's how I imagine COMP instantiates the relation between the physical
> world and
> consciousness; that the physical world acts like the oracle and provides
> essential
> interactions with consciousness as a computational process. Of course that
> doesn't
> require that the physical world be an oracle - it may be computable too.
>
> Brent
>
> >
> > Of course, if we need the whole oracular sequence in one step, then comp
> > would be just false, and the brain would need an infinite interface.
> >
> > The UD really dovetails on all programs, with all possible inputs, even
> > infinite non-computable ones.
> >
> > Bruno
> >
> > http://iridia.ulb.ac.be/~marchal/
> >
> >
> >
>
>
>
>
>
> http://iridia.ulb.ac.be/~marchal/
>
>
>
>
>
>
>
>
>
>
>
>
> http://iridia.ulb.ac.be/~marchal/
>
>
