Science is a religion by itself.

2013-03-07 Thread socra...@bezeqint.net
   The unity of geometry and physics.
=..
My questions are:
Can Dirac's virtual particles have the geometrical form of a circle?
Can we apply Euler's equation to this circle-particle?
Which physical laws can we apply to this circle-particle?
How can Euler's equation, physical laws and the
circle-particle be tied into one theory?
==..
I say that there is a circle-particle that can change /
transform into a sphere-particle and vice versa,
and that Euler's equation, e^(ix) = cos(x) + i*sin(x), can explain
this transformation / fluctuation of the quantum particle.
I try to understand this in more detail.
I have a circle-particle with two numbers of infinite decimal expansion: (pi) and (e).
I say that this circle-particle can change into a sphere-particle
and vice versa. Then I need a third number for these changes.
The third number, in my opinion, is a = 1/137
(the fine structure constant = the limited volume coefficient).
This coefficient (a = 1/137) is the border between the two
conditions of the quantum particle. This coefficient (a = 1/137) is
responsible for these changes. This coefficient (a = 1/137) unites
geometry with physics (e^2 = a h* c, with h* = h/2pi).
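For reference, both ingredients invoked here can be checked numerically. A minimal Python sketch (my own illustration, not part of the original argument; the constants are CODATA 2018 values, and in SI units the relation reads alpha = e^2/(4*pi*eps0*hbar*c), which is the e^2 = a h* c of Gaussian units):

    import cmath
    import math

    # Euler's formula e^(i*x) = cos(x) + i*sin(x), checked at an arbitrary point,
    # and Euler's identity as the special case x = pi.
    x = 0.7
    assert abs(cmath.exp(1j * x) - (cmath.cos(x) + 1j * cmath.sin(x))) < 1e-12
    assert abs(cmath.exp(1j * math.pi) + 1) < 1e-12

    # Fine structure constant from CODATA 2018 values (SI units).
    e = 1.602176634e-19      # elementary charge, C
    eps0 = 8.8541878128e-12  # vacuum permittivity, F/m
    hbar = 1.054571817e-34   # reduced Planck constant, J*s
    c = 299792458.0          # speed of light, m/s
    alpha = e**2 / (4 * math.pi * eps0 * hbar * c)
    print(alpha, 1 / alpha)  # ~0.0072973..., ~137.036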
=..
If physicists use a string-particle (a particle that has length but
no thickness or volume) to understand reality
(and have some basic problems solving this task), then why not
use a circle-particle for this aim?
It is a pity that I am not a physicist or a mathematician.
If I were a mathematician or a physicist, I would not lose the chance
to test this hypothesis.
=..
Best wishes.
Israel Sadovnik  Socratus

==...


On Mar 7, 8:22 am, "socra...@bezeqint.net" 
wrote:
>  Dear MarkCC.
> Thank you for paying attention to my crackpottery article.
> I like your comment.
> I like it very much.
> ==.
> You say:
> Create a universe with no matter, a universe with different kinds
>  of matter, a universe with 300 forces instead of the four that
>  we see - and e and π won't change.
> =..
> Now Euler's equation plays a role in quantum theory.
> In quantum theory there is no constant, firm quantum particle.
> Pi tells us that a point-particle or string-particle cannot be
>  a quantum particle. Pi tells us that a quantum particle
>  can be a circle, but it cannot be a perfect circle.
> If e and π belong to the quantum particle, then these numbers
> can mutually change.
> Doesn't that mean that π (a circle) can be changed into a sphere?
> Can't Euler's equation, e^(ix) = cos(x) + i*sin(x), explain
> this transformation / fluctuation of the quantum particle?
> You say:
> What things like e and π, and their relationship via Euler's equation
> tell us is that there's a fundamental relationship between numbers
> and shapes on a two-dimensional plane which does not and cannot
> really exist in the world we live in.
> =.
>
> But this 'fundamental relationship between numbers and
>  shapes on a two-dimensional plane' can really exist
>  in a two-dimensional vacuum.
>
> All the best.
> socratus.
>
> ==.
>
> On Mar 5, 9:57 pm, "socra...@bezeqint.net" 
> wrote:
>
>
>
> > Euler's Equation Crackpottery
> > Feb 18 2013 Published by MarkCC under Bad Math, Bad Physics
>
> > One of my twitter followers sent me an interesting piece of
> > crackpottery.
> >  I debated whether to do anything with it. The thing about
> > crackpottery
> >  is that it really needs to have some content.
> > Total incoherence isn't amusing. This bit is, frankly, right on the
> > line.
> > ==.
> > Euler's Equation and the Reality of Nature.
> > a) Euler's Equation as a mathematical reality.
> > Euler's identity is "the gold standard for mathematical beauty'.
> > Euler's identity is "the most famous formula in all mathematics".
> > ' . . . this equation is the mathematical analogue of Leonardo
> > da Vinci's Mona Lisa painting or Michelangelo's statue of David'
> > 'It is God's equation', 'our jewel ', ' It is a mathematical icon'.
> > . . . . etc.
> > b) Euler's Equation as a physical reality.
> > "it is absolutely paradoxical; we cannot understand it,
> > and we don't know what it means, . . . . .'
> > ' Euler's Equation reaches down into the very depths of existence'
> > ' Is Euler's Equation about fundamental matters?'
> > 'It would be nice to understand Euler's Identity as a physical process
> > using physics.'
> > ' Is it possible to unite Euler's Identity with physics, quantum
> > physics ?'
> > My aim is to understand the reality of nature.
> > Can Euler's equation explain something to me about reality?
> > To answer this question I need to bind Euler's equation
> >  with an object - a particle. Can it be a math-point or a string-particle
> > or a triangle-particle? No: Euler's formula contains the quantity (pi), which
> > tells me that the particle must be a circle.
> > Now I want to understand the behavior of the circle-particle, and
> >  therefore I need to use special relativity and quantum theory.
> >  These two theories tell me that the reason for the circle-particle's
> > movement is its own inner impulse (h) or (h* = h/2pi).
> > a) Using its own inner impulse (h), the circle-particle moves

Brain teaser

2013-03-07 Thread Stephen P. King

Hi,

What is the difference between a random sequence of bits and a 
meaningful message? The correct decryption scheme.
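A minimal sketch of the teaser's point in Python (the message and keys are made up; with a one-time pad the ciphertext on its own is uniformly random bits, and only the right key turns it back into a message):

    import os

    def xor(data: bytes, key: bytes) -> bytes:
        # XOR each byte of the data with the corresponding byte of the key.
        return bytes(b ^ k for b, k in zip(data, key))

    message = b"the cat chased the laser dot"      # made-up plaintext
    key = os.urandom(len(message))                 # one-time pad
    ciphertext = xor(message, key)                 # looks like random bits

    print(ciphertext.hex())                        # indistinguishable from noise
    print(xor(ciphertext, key))                    # right key: the message reappears
    print(xor(ciphertext, os.urandom(len(message))).hex())  # wrong key: still noise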


--
Onward!

Stephen






Re: Cats fall for illusions too

2013-03-07 Thread Stephen P. King

On 3/7/2013 11:53 PM, Terren Suydam wrote:
That's interesting to me too. Actually I'm surprised you are not more 
embracing of Bruno's ideas because they give life to the idea of 
conscious software.

Hi,

Oh, I do firmly believe that our minds are "conscious software"! I 
am trying to figure out why they need this particular kind of 
hardware, aka the brain. I think that it is not an accident, i.e. that 
evolution shaped the brain toward an end, of sorts. No, there is no 
intelligent design going on other than what can fit into laws, but Pratt 
makes a good case that Nature reasons both forward and backward in 
time... http://boole.stanford.edu/pub/ortho.pdf , 
http://boole.stanford.edu/pub/seqconc.pdf



You seem to me to be reluctant to give up materialism, but 
philosophically speaking I think materialism dooms AI.


No. I give up neither materialism nor immaterialism. I give up 
non-neutral monism.




On the more theoretical side of things, I will say this. It occurred 
to me the other day that the trace of the UD (aka UD*) is a fractal,


Actually it is a Multifractal. See 
http://rsb.info.nih.gov/ij/plugins/fraclac/FLHelp/Multifractals.htm and 
http://www.itsec.gov.cn/docs/20090507164047604667.pdf
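For anyone who wants to play with the idea, here is a rough sketch of a box-counting estimate of the generalized (Renyi) dimensions D_q, the usual entry point to multifractal analysis. The chaos-game Sierpinski data and all the numbers are illustrative assumptions of mine, not taken from the linked pages:

    import numpy as np

    def renyi_dimensions(points, qs=(0, 1, 2), sizes=(4, 8, 16, 32, 64)):
        # Box-count a 2-D point set: for each grid size, bin the points into boxes,
        # form box probabilities p_i, and fit the partition sum against log(box size).
        pts = np.asarray(points, dtype=float)
        pts = (pts - pts.min(axis=0)) / (np.ptp(pts, axis=0) + 1e-12)  # scale to [0,1]^2
        dims = {}
        for q in qs:
            xs, ys = [], []
            for n in sizes:
                h, _, _ = np.histogram2d(pts[:, 0], pts[:, 1], bins=n,
                                         range=[[0, 1], [0, 1]])
                p = h[h > 0] / h.sum()
                # q = 1 uses the information sum; other q use the (q-1) normalisation.
                y = (p * np.log(p)).sum() if q == 1 else np.log((p ** q).sum()) / (q - 1)
                xs.append(np.log(1.0 / n))
                ys.append(y)
            dims[q] = np.polyfit(xs, ys, 1)[0]  # slope is the estimate of D_q
        return dims

    # Chaos-game Sierpinski triangle as test data; every D_q should come out near
    # log(3)/log(2) ~ 1.585, since this particular set is (mono)fractal.
    rng = np.random.default_rng(0)
    v = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, np.sqrt(3) / 2]])
    p, pts = np.zeros(2), []
    for _ in range(100_000):
        p = (p + v[rng.integers(3)]) / 2
        pts.append(p.copy())
    print(renyi_dimensions(pts))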


in that many of the programs executed by the UD are themselves 
universal dovetailers.


Ad infinitum!

It is reminiscent of the Mandelbrot set, in that there are many such 
paths (an infinite number) that replicate the UD but alter it in some 
small way.


But think about this further: is there actually any alteration of 
the UD possible? It spans all possible computations, so no. It never 
changes at all.


Every program generated by the UD in fact is replicated an infinite 
number of times, and also altered slightly an infinite number of times.


Right.

I wonder if there are clues to the measure problem hidden in the 
fractal characteristics of the UD*. But that's wild-ass speculation. I 
don't have the mathematical chops to take that idea any further.


I suspect that it has a pattern (mostly a random pattern) for/at 
every possible measure. I am not a mathematician, sadly...





On Thu, Mar 7, 2013 at 11:46 PM, Stephen P. King <stephe...@charter.net> wrote:


On 3/7/2013 11:37 PM, Terren Suydam wrote:

Ah. That's above my pay grade unfortunately. But I don't think
our immediate failure to solve that problem dooms the idea that a
cat's experience of the world is explainable in terms of
mechanism. Conversely, even if we did solve it, there would still
be doubts. For the time being, comp remains for me the most
fruitful assumption about reality, such as it is. It assumes so
little and opens up such incredible vistas.

Terren



Hi,

I agree. I think that it becomes more open to applications
once it is aligned with, say, David Chalmers and Ben Goertzel's
ideas. I am interested in applications. ;-)




On Thu, Mar 7, 2013 at 11:17 PM, Stephen P. King <stephe...@charter.net> wrote:

On 3/7/2013 10:40 PM, Terren Suydam wrote:

I'm game. Which puzzle are we figuring out?


A solution to Bruno's 'arithmetic body problem'.





--
Onward!

Stephen





Re: Cats fall for illusions too

2013-03-07 Thread Terren Suydam
That's interesting to me too. Actually I'm surprised you are not more
embracing of Bruno's ideas because they give life to the idea of conscious
software. You seem to me to be reluctant to give up materialism, but
philosophically speaking I think materialism dooms AI.

On the more theoretical side of things, I will say this. It occurred to me
the other day that the trace of the UD (aka UD*) is a fractal, in that many
of the programs executed by the UD are themselves universal dovetailers. It
is reminiscent of the Mandelbrot set, in that there are many such paths (an
infinite number) that replicate the UD but alter it in some small way.
Every program generated by the UD in fact is replicated an infinite number
of times, and also altered slightly an infinite number of times. I wonder
if there are clues to the measure problem hidden in the fractal
characteristics of the UD*. But that's wild-ass speculation. I don't have
the mathematical chops to take that idea any further.
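For concreteness, a toy dovetailer in Python. The "programs" here are trivial counters standing in for an enumeration of all machines (an illustrative assumption on my part); the only point is the interleaving: at phase k one more program is loaded and every loaded program gets one more step, so nothing ever has to halt, and nothing stops one of those programs from being a dovetailer itself.

    from itertools import count

    def programs():
        # Stand-ins for an enumeration of all programs: program i just counts.
        # (A real UD would enumerate and step Turing machines instead.)
        def make(i):
            return lambda state: state + 1   # one computation step
        for i in count():
            yield i, make(i)

    def dovetail(phases=5):
        # Phase k: load program k, then run one more step of every program loaded
        # so far, so every program receives unboundedly many steps over time.
        gen, loaded, states, trace = programs(), [], {}, []
        for k in range(phases):
            i, step = next(gen)
            loaded.append((i, step))
            states[i] = 0
            for j, s in loaded:
                states[j] = s(states[j])
                trace.append((k, j, states[j]))
        return trace

    print(dovetail())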


On Thu, Mar 7, 2013 at 11:46 PM, Stephen P. King wrote:

>  On 3/7/2013 11:37 PM, Terren Suydam wrote:
>
> Ah. That's above my pay grade unfortunately. But I don't think our
> immediate failure to solve that problem dooms the idea that a cat's
> experience of the world is explainable in terms of mechanism. Conversely,
> even if we did solve it, there would still be doubts. For the time being,
> comp remains for me the most fruitful assumption about reality, such as it
> is. It assumes so little and opens up such incredible vistas.
>
>  Terren
>
>
> Hi,
>
> I agree. I think that it becomes more open to applications once it is
> aligned with, say, David Chalmers and Ben Goertzel's ideas. I am interested
> in applications . ;-)
>
>
>
> On Thu, Mar 7, 2013 at 11:17 PM, Stephen P. King wrote:
>
>>  On 3/7/2013 10:40 PM, Terren Suydam wrote:
>>
>> I'm game. Which puzzle are we figuring out?
>>
>>
>>  A solution to Bruno's 'arithmetic body problem'.
>>
>>
>>
>
>
> --
> Onward!
>
> Stephen
>
>





Re: Cats fall for illusions too

2013-03-07 Thread Stephen P. King

On 3/7/2013 11:37 PM, Terren Suydam wrote:
Ah. That's above my pay grade unfortunately. But I don't think our 
immediate failure to solve that problem dooms the idea that a cat's 
experience of the world is explainable in terms of mechanism. 
Conversely, even if we did solve it, there would still be doubts. For 
the time being, comp remains for me the most fruitful assumption about 
reality, such as it is. It assumes so little and opens up such 
incredible vistas.


Terren



Hi,

I agree. I think that it becomes more open to applications once it 
is aligned with, say, David Chalmers and Ben Goertzel's ideas. I am 
interested in applications . ;-)




On Thu, Mar 7, 2013 at 11:17 PM, Stephen P. King <stephe...@charter.net> wrote:


On 3/7/2013 10:40 PM, Terren Suydam wrote:

I'm game. Which puzzle are we figuring out?


A solution to Bruno's 'arithmetic body problem'.






--
Onward!

Stephen





Re: Dartmouth neuroscientist finds free will has neural basis

2013-03-07 Thread Stathis Papaioannou
On Fri, Mar 8, 2013 at 10:26 AM, Stephen P. King  wrote:
> On 3/7/2013 4:15 PM, Stathis Papaioannou wrote:
>
>
>
> On 08/03/2013, at 2:58 AM, Craig Weinberg  wrote:
>
>> I must disagree. It is baked into the topology of classical mechanics
>> that a system cannot semantically act upon itself. There is no way to define
>> intentionality in classical physics. This is what Bruno proves with his
>> argument.
>>
>
> Exactly Stephen. What are we talking about here? How is a deterministic
> system that has preferences and makes choices and considers options
> different from free will? If something can have a private preference which
> cannot be determined from the outside, then it is determined privately, i.e.
> the will of the private determiner.
>
>
> As I said, it depends on how you define "free will".
>
>> It is also not logically inconsistent with choice and free will,  unless
>> you define these terms as inconsistent with determinism, in which case in a
>> deterministic world we would have to create new words meaning pseudo-choice
>> and pseudo-free will to avoid misunderstanding, and then go about our
>> business as usual with this minor change to the language.
>>
>>
>> So you say...
>
>
> Yeah, right. Why would a deterministic world need words having anything to
> do with choice or free will? At what part of a computer program is something
> like a choice made? Every position on the logic tree is connected to every
> other by unambiguous prior cause or intentionally generated (pseudo)
> randomness. It makes no choices, has no preferences, just follows a sequence
> of instructions.
>
>
> In general, the existence of words for something does not mean it has an
> actual referent; consider "fairy" or "God". An adequate response to your
> position is that you're right - we don't really have choices. Another
> response is that your definition of "choice" is not the only possible one.
> --
>
>
> How is linguistic analysis going to help your case? You seem to miss the
> point that it is not the symbols on the page that 'contain' meaningfulness,
> it is your mental act of interpretation from whence the meaning emerges.
> Without a conscious mind you are as much a zombie as John Clark and his
> mechanical pony.

We could be arguing about whether Pluto is a planet but won't get
anywhere unless we agree on what "planet" means. It's the same with
free will. We might agree on all the facts of the matter but still
disagree on free will, because different people mean different things
by it.


-- 
Stathis Papaioannou





Re: Cats fall for illusions too

2013-03-07 Thread Terren Suydam
Ah. That's above my pay grade unfortunately. But I don't think our
immediate failure to solve that problem dooms the idea that a cat's
experience of the world is explainable in terms of mechanism. Conversely,
even if we did solve it, there would still be doubts. For the time being,
comp remains for me the most fruitful assumption about reality, such as it
is. It assumes so little and opens up such incredible vistas.

Terren


On Thu, Mar 7, 2013 at 11:17 PM, Stephen P. King wrote:

>  On 3/7/2013 10:40 PM, Terren Suydam wrote:
>
> I'm game. Which puzzle are we figuring out?
>
>
> A solution to Bruno's 'arithmetic body problem'.
>
>
>
>
> On Thu, Mar 7, 2013 at 10:21 PM, Stephen P. King wrote:
>
>>  On 3/7/2013 9:14 PM, Terren Suydam wrote:
>>
>> Right, we basically agree. At the low level where optics are being
>> processed, it seems to me to be more accurate to say the brain is creating
>> the constructions. Another way to say it is that kittens and babies are
>> probably born with the neural circuits that implement those shortcuts -
>> optimizations implemented through genetics. Whereas with the kind of
>> construction that is created by the mind, it seems to me that those
>> constructions live at a higher level - the psychological - and arise as a
>> result of experience and learning. I don't really think that is what's
>> going on with optical illusions since they are so universal. But that is
>> quibbling - whichever of us is more correct, it's beside the point
>> regarding whether optical illusions have a mechanistic explanation.
>>
>>  Hi,
>>
>> OK then, I would rather work with you on figuring this puzzle out
>> than spar with you over "who has the best explanation".  ;-)
>>
>>
>>
>>  Terren
>>
>>
>> On Thu, Mar 7, 2013 at 6:57 PM, Stephen P. King wrote:
>>
>>>  On 3/7/2013 6:09 PM, Terren Suydam wrote:
>>>
>>> The same way it explains it for humans. The cat is not sensing the world
>>> directly, but the constructions created by its brain.
>>>
>>>
>>>  Hi Terren,
>>>
>>> I almost agree, I only add that it is not just the brain of the cat
>>> (or human or whatever) that is being sensed, the mind is involved in the
>>> construction as well.
>>>
>>>
>>>  Those constructions involve shortcuts of various kinds (e.g. edge
>>> detection) optimized for the kinds of environments that cats have thrived
>>> in, from an evolutionary standpoint. Those shortcuts are what lead to
>>> optical illusions. Optical illusions are stimuli that expose the shortcuts
>>> for what they are.  There is nothing about the fact that it's a cat that
>>> makes this any harder to explain in mechanistic terms.
>>>
>>>
>>>  Sure, and the mind as well.
>>>
>>>
>>>
>>>  It is interesting because it suggests that cats employ at least one of
>>> the same shortcuts as we do, which further suggests that the visual
>>> optimizations that lead to optical illusions are much older than humans.
>>> And while that is not a very controversial claim, it is cool to have some
>>> evidence for it.
>>>
>>>
>>>  Yes, I have to show this to my friends that are studying pattern
>>> recognition.
>>>
>>>
>>>
>>>  Terren
>>>
>>>
>>> On Thu, Mar 7, 2013 at 5:14 PM, Stephen P. King 
>>> wrote:
>>>
  On 3/7/2013 11:36 AM, Terren Suydam wrote:

 I have no doubt that Craig will somehow see this as a vindication of
 his theory and a refutation of mechanism.

  Terren


  On Wed, Mar 6, 2013 at 5:27 PM, Stephen P. King <stephe...@charter.net> wrote:

> https://www.youtube.com/watch?feature=player_embedded&v=CcXXQ6GCUb8
>
> --
>

   Hi Terren,

How does Mechanism explain this? Will *The Amazing Randy* be pushed forward to
 loudly claim that the cat was really chasing a laser
 dot that the video camera could not capture?

 --


>>>
>
>
> --
> Onward!
>
> Stephen
>
>

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To post to this group, send email to everything-list@googlegroups.com.
Visit this group at http://groups.google.com/group/everything-list?hl=en.
For more options, visit https://groups.google.com/groups/opt_out.




Re: Thin Client

2013-03-07 Thread meekerdb

On 3/7/2013 6:40 PM, Craig Weinberg wrote:



On Thursday, March 7, 2013 8:58:29 PM UTC-5, Brent wrote:

On 3/7/2013 4:57 PM, Craig Weinberg wrote:



On Thursday, March 7, 2013 7:33:46 PM UTC-5, Brent wrote:

On 3/7/2013 3:01 PM, Craig Weinberg wrote:



On Thursday, March 7, 2013 5:45:14 PM UTC-5, Brent wrote:

On 3/7/2013 2:21 PM, Stephen P. King wrote:

On 3/7/2013 12:04 PM, Craig Weinberg wrote:

If you have ever worked with Terminal Servers, RDP, Citrix 
Metaframe, or
the like (and that's what I have been doing professionally every 
day for
the last 14 years), you will understand the idea of a Thin Client
architecture. Thin clients are as old as computing, and some of you
remember as I do, devices like acoustic couplers where you can 
attach a
telephone handset to a telephone cradle, so that the mouth ends of 
the
handset and the earpiece ends could squeal to each other. In this 
way,
you could, with nothing but a keyboard and a printer, use your 
telephone
to allow you access to a mainframe computer at some university.

The relevance here is that the client end is thin computationally. 
It
passes nothing but keystrokes and printer instructions back and 
forth as
acoustic codes.

This is what an mp3 file does as well. It passes nothing but binary
instructions that can be used by an audio device to vibrate. 
Without a
person's ear there to be vibrated, this entire event is described by
linear processes where one physical record is converted into another
physical record. Nothing is encoded or decoded, experienced or
appreciated. There is no sound.

Think about those old plastic headphones in elementary school that 
just
had hollow plastic tubes as connectors - a system like that 
generates
sound from the start, and the headphones are simply funnels for our
ears. That's a different thing from an electronic device which 
produces
sound only in the earbuds.

All of these discussions about semiotics, free will, consciousness,
AI...all come down to understanding the Thin Client. The Thin 
Client is
Searle's Chinese Room in actual fact. You can log into a massive 
server
from some mobile device and use it like a glove, but that doesn't 
mean
that the glove is intelligent. We know that we can transmit only
mouseclicks and keystrokes across the pipe and that it works without
having to have some sophisticated computing environment (i.e. 
qualia)
get communicated. The Thin Client exposes Comp as misguided because 
it
shows that instructions can indeed exist as purely instrumental 
forms
and require none of the semantic experiences which we enjoy. No 
matter
how much you use the thin client, it never needs to get any thicker.
It's just a glove and a window.

-- 

Hi Craig,

Excellent post! You have nailed computational immaterialism 
where it
really hurts. Computations cannot see, per the Turing 
neo-Platonists, any
hardware at all. This is their view of computational universality. But 
here is the thing: it is the reason why they have a 'body problem'. 
For a
Platonistic Machine, there is no hardware or physical world at all. 
So,
why do I have the persistent illusion that I am in a body and 
interacting
with another computation via its body?

The physical delusion is the thin client, to use your words and
discussion.



I'm fairly sure Bruno will point out that a delusion is a thought 
and so
is immaterial.  You have an immaterial experience of being in a body.

But the analogy of the thin client is thin indeed.  In the example 
of the
Mars rover it corresponds to looking at a computer bus and saying, 
"See there
are just bits being transmitted over this wire, therefore this Mars 
rover
can't have qualia."  It's nothing-buttery spread thin.


Why? What's your argument other than you don't like it? Of course the 
Mars
rover has no qualia.


That's your careful reasoning?


My reasoning is that in constructing thin client architectures we find that we 
save processing overhead by treating the i/o as a simple bitstream applied to 
extend just the keyboard, mouse, and video data.  We understand that there is a 
great deal less processing than if we actually tried to network a computer at 
the application level, or use the resources of the server as a mapped remote 
drive. What accounts for this lower overhead is that the simulation of a GUI is 
only a thin shadow of what is required to actually share resources. If qualia 
were inherent, then the thin client would save us nothing, since the keystrokes 
and screenshots would have to contain all of the same processing 'qualia'.
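A minimal sketch of the thin-client split being described, in Python (the "server" that uppercases keystrokes is a made-up stand-in for the remote application; everything here is illustrative): the client holds no application state at all, it only ships keystrokes one way and prints the returned "screen" bytes the other way.

    import socket
    import threading

    def toy_server(conn):
        # Stands in for the remote application: all of the "thinking" happens here.
        with conn:
            while True:
                keys = conn.recv(1024)
                if not keys:
                    break
                conn.sendall(b"SCREEN: " + keys.upper())   # server-rendered output

    def thin_client(conn, keystrokes):
        # The client keeps no application state; it forwards keystrokes and prints
        # whatever "screen" bytes come back.
        with conn:
            for keys in keystrokes:
                conn.sendall(keys.encode())
                print(conn.recv(1024).decode())

    server_end, client_end = socket.socketpair()
    threading.Thread(target=toy_server, args=(server_end,), daemon=True).start()
    thin_client(client_end, ["hello", "where does the qualia live?"])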

Re: Cats fall for illusions too

2013-03-07 Thread Stephen P. King

On 3/7/2013 10:40 PM, Terren Suydam wrote:

I'm game. Which puzzle are we figuring out?


A solution to Bruno's 'arithmetic body problem'.




On Thu, Mar 7, 2013 at 10:21 PM, Stephen P. King <stephe...@charter.net> wrote:


On 3/7/2013 9:14 PM, Terren Suydam wrote:

Right, we basically agree. At the low level where optics are
being processed, it seems to me to be more accurate to say the
brain is creating the constructions. Another way to say it is
that kittens and babies are probably born with the neural
circuits that implement those shortcuts - optimizations
implemented through genetics. Whereas with the kind of
construction that is created by the mind, it seems to me that
those constructions live at a higher level - the psychological -
and arise as a result of experience and learning. I don't really
think that is what's going on with optical illusions since they
are so universal. But that is quibbling - whichever of us is more
correct, it's beside the point regarding whether optical
illusions have a mechanistic explanation.

Hi,

OK then, I would rather work with you on figuring this puzzle
out than spar with you over "who has the best explanation".  ;-)




Terren


On Thu, Mar 7, 2013 at 6:57 PM, Stephen P. King <stephe...@charter.net> wrote:

On 3/7/2013 6:09 PM, Terren Suydam wrote:

The same way it explains it for humans. The cat is not
sensing the world directly, but the constructions created by
its brain.


Hi Terren,

I almost agree, I only add that it is not just the brain
of the cat (or human or whatever) that is being sensed, the
mind is involved in the construction as well.



Those constructions involve shortcuts of various kinds (e.g.
edge detection) optimized for the kinds of environments that
cats have thrived in, from an evolutionary standpoint. Those
shortcuts are what lead to optical illusions. Optical
illusions are stimuli that expose the shortcuts for what
they are.  There is nothing about the fact that it's a cat
that makes this any harder to explain in mechanistic terms.


Sure, and the mind as well.




It is interesting because it suggests that cats employ at
least one of the same shortcuts as we do, which further
suggests that the visual optimizations that lead to optical
illusions are much older than humans. And while that is not
a very controversial claim, it is cool to have some evidence
for it.


Yes, I have to show this to my friends that are studying
pattern recognition.




Terren


On Thu, Mar 7, 2013 at 5:14 PM, Stephen P. King <stephe...@charter.net> wrote:

On 3/7/2013 11:36 AM, Terren Suydam wrote:

I have no doubt that Craig will somehow see this as a
vindication of his theory and a refutation of mechanism.

Terren


On Wed, Mar 6, 2013 at 5:27 PM, Stephen P. King <stephe...@charter.net> wrote:


https://www.youtube.com/watch?feature=player_embedded&v=CcXXQ6GCUb8

--



 Hi Terren,

   How does Mechanism explain this? Will /The Amazing
Randy/  be
pushed forward to loudly claim that the cat was really
chasing a laser dot that the video camera could not capture?

-- 








--
Onward!

Stephen





For The Critics

2013-03-07 Thread Craig Weinberg
Added this to my site if anyone is interested:

*Common Criticisms of Multisense Realism*

The most common issues that people have tend not to be with the content of 
my ideas themselves, but the way that I present them or argue them. From my 
perspective, it seems clear that they have no intention of entertaining a 
new set of ideas about consciousness, so my admittedly wordy and often 
overwrought writing style becomes the reason why my ideas are 
objectionable.  I generally hear that they 1) don't make any sense, 2) are 
wrong, and 3) are unfalsifiable. This is an interesting complaint, since 
they are all mutually exclusive. Ideas which don't make sense can't be 
wrong, and ideas which are wrong can't be unfalsifiable.

Let's begin with 

*1) They don't make any sense. *

I don't expect that *all* of the ideas will make sense to everyone 
immediately.  All of the ideas do, however, make sense to me, even if I 
come to realize later that the way I wrote about them is in need of editing 
or re-working. I'm not saying that I'm not crazy, but I have never been so 
crazy that I have looked back on my own writing and not been able to figure 
out what I was trying to say. What I write makes sense to me, and it does, 
believe it or not, make sense to enough people who have expressed to me 
that they understand it that I am not threatened by this #1 accusation. 
Ultimately, it is just an accusation, as being unable to make sense of an 
unfamiliar idea says nothing about the merits of the idea, or the author of 
the idea. 


*2) They are wrong.*

Once people have tired themselves out yelling about how my writing 
irritates them, they often will find a way to make enough sense of my 
writing to announce that I make this or that 'claim' which contradicts this 
or that Law.  Of course that's nonsense. Nothing that I propose here can be 
construed as contradicting any natural observation. Not only do my ideas 
about the relation between body and mind or matter and sense not require 
any additional force within public physics, but they explicitly avoid it by 
definition. My interpretation is a commentary on the 
umbilical-symmetric-nested nature of the relation of public bodies and 
private experience, not a squeezing of private experience into public 
mechanics. If you cannot grasp this concept, I suggest that you stop 
reading now. You will never be able to understand Multisense Realism and 
you will be wasting your time to go on. 

Another criticism along these lines is the mistaken impression that some 
make that I am a naive idealist. Because I say that physics and sense are 
in fact the same thing, and that there is no 'existence' independent of 
sense, many people cannot get the idea out of their mind that Multisense 
Realism is built on a Berkeleyan straw man where the tree falls in the 
forest and doesn't make a sound unless a human being hears it. Not so. Lots 
of organisms have ears, and the event of a tree crashing to the ground has 
lots of sensory opportunities with or without the benefit of the presence of 
Homo sapiens. If you get rid of all ears, however, then you would have 
eliminated all possible experiences of sound. Physics, in my view, does not 
merely depend on both public and private transmitter-receivers of 
experience, physics is that which twists itself into public and private 
ontologies (or 'verses') in the first place.

In the context of Artificial Intelligence, I get a lot of flack for 
insisting that mechanical approaches to assembling consciousness are doomed 
to failure. People assume that my ideas are sentimental and reflect some 
sort of patriotic attachment to human beings, or an aversion to technology. 
Nothing could be further from the truth. I have always been both a 
technophile and a misanthrope so that nothing would please me more than a 
Kurzweilian singularity in which I could be uploaded out of this nasty 
human civilization. Unfortunately, in the course of developing Multisense 
Realism, I could not avoid that the nature of the juxtaposition between 
private experience and public bodies is such that no experience could ever 
be generated by bodies alone. Forms and functions are both a consequence 
and reflection of sense, not an independent source of it. You can't build a 
mind out of forms and functions, only a sculpture of a mind - a recording. 
Without using some kind of biological organism to start, with its own 
agendas and sensitivity driven values, there can be no artificial 
intelligence - only simulated intelligence.

This position leads people to jump to the conclusion that I am a 
biocentrist - that I think there is something magical about living cells 
which allows them to progress to higher quality consciousness than 
molecules alone. Nope, you can't hang that on me either. It is not the 
substance of the cells that matters, it is the experience which is 
represented by the cells. The cell is a game piece, a marker. What it 
represents is a sub-personal ex

Re: Cats fall for illusions too

2013-03-07 Thread Terren Suydam
I'm game. Which puzzle are we figuring out?


On Thu, Mar 7, 2013 at 10:21 PM, Stephen P. King wrote:

>  On 3/7/2013 9:14 PM, Terren Suydam wrote:
>
> Right, we basically agree. At the low level where optics are being
> processed, it seems to me to be more accurate to say the brain is creating
> the constructions. Another way to say it is that kittens and babies are
> probably born with the neural circuits that implement those shortcuts -
> optimizations implemented through genetics. Whereas with the kind of
> construction that is created by the mind, it seems to me that those
> constructions live at a higher level - the psychological - and arise as a
> result of experience and learning. I don't really think that is what's
> going on with optical illusions since they are so universal. But that is
> quibbling - whichever of us is more correct, it's beside the point
> regarding whether optical illusions have a mechanistic explanation.
>
> Hi,
>
> OK then, I would rather work with you on figuring this puzzle out than
> spar with you over "who has the best explanation".  ;-)
>
>
>
>  Terren
>
>
> On Thu, Mar 7, 2013 at 6:57 PM, Stephen P. King wrote:
>
>>  On 3/7/2013 6:09 PM, Terren Suydam wrote:
>>
>> The same way it explains it for humans. The cat is not sensing the world
>> directly, but the constructions created by its brain.
>>
>>
>>  Hi Terren,
>>
>> I almost agree, I only add that it is not just the brain of the cat
>> (or human or whatever) that is being sensed, the mind is involved in the
>> construction as well.
>>
>>
>>  Those constructions involve shortcuts of various kinds (e.g. edge
>> detection) optimized for the kinds of environments that cats have thrived
>> in, from an evolutionary standpoint. Those shortcuts are what lead to
>> optical illusions. Optical illusions are stimuli that expose the shortcuts
>> for what they are.  There is nothing about the fact that it's a cat that
>> makes this any harder to explain in mechanistic terms.
>>
>>
>>  Sure, and the mind as well.
>>
>>
>>
>>  It is interesting because it suggests that cats employ at least one of
>> the same shortcuts as we do, which further suggests that the visual
>> optimizations that lead to optical illusions are much older than humans.
>> And while that is not a very controversial claim, it is cool to have some
>> evidence for it.
>>
>>
>>  Yes, I have to show this to my friends that are studying pattern
>> recognition.
>>
>>
>>
>>  Terren
>>
>>
>> On Thu, Mar 7, 2013 at 5:14 PM, Stephen P. King wrote:
>>
>>>  On 3/7/2013 11:36 AM, Terren Suydam wrote:
>>>
>>> I have no doubt that Craig will somehow see this as a vindication of his
>>> theory and a refutation of mechanism.
>>>
>>>  Terren
>>>
>>>
>>>  On Wed, Mar 6, 2013 at 5:27 PM, Stephen P. King 
>>> wrote:
>>>
 https://www.youtube.com/watch?feature=player_embedded&v=CcXXQ6GCUb8

 --

>>>
>>>   Hi Terren,
>>>
>>>How does Mechanism explain this? Will *The Amazing Randy* be pushed forward to loudly 
>>> claim that the cat was really chasing a laser
>>> dot that the video camera could not capture?
>>>
>>> --
>>>
>>>
>>
>
>
> --
> Onward!
>
> Stephen
>
>
>
>





Re: Cats fall for illusions too

2013-03-07 Thread Stephen P. King

On 3/7/2013 9:14 PM, Terren Suydam wrote:
Right, we basically agree. At the low level where optics are being 
processed, it seems to me to be more accurate to say the brain is 
creating the constructions. Another way to say it is that kittens and 
babies are probably born with the neural circuits that implement those 
shortcuts - optimizations implemented through genetics. Whereas with 
the kind of construction that is created by the mind, it seems to me 
that those constructions live at a higher level - the psychological - 
and arise as a result of experience and learning. I don't really think 
that is what's going on with optical illusions since they are so 
universal. But that is quibbling - whichever of us is more correct, 
it's beside the point regarding whether optical illusions have a 
mechanistic explanation.

Hi,

OK then, I would rather work with you on figuring this puzzle out 
than spar with you over "who has the best explanation".  ;-)




Terren


On Thu, Mar 7, 2013 at 6:57 PM, Stephen P. King <stephe...@charter.net> wrote:


On 3/7/2013 6:09 PM, Terren Suydam wrote:

The same way it explains it for humans. The cat is not sensing
the world directly, but the constructions created by its brain.


Hi Terren,

I almost agree, I only add that it is not just the brain of
the cat (or human or whatever) that is being sensed, the mind is
involved in the construction as well.



Those constructions involve shortcuts of various kinds (e.g. edge
detection) optimized for the kinds of environments that cats have
thrived in, from an evolutionary standpoint. Those shortcuts are
what lead to optical illusions. Optical illusions are stimuli
that expose the shortcuts for what they are.  There is nothing
about the fact that it's a cat that makes this any harder to
explain in mechanistic terms.


Sure, and the mind as well.




It is interesting because it suggests that cats employ at least
one of the same shortcuts as we do, which further suggests that
the visual optimizations that lead to optical illusions are much
older than humans. And while that is not a very controversial
claim, it is cool to have some evidence for it.


Yes, I have to show this to my friends that are studying
pattern recognition.




Terren


On Thu, Mar 7, 2013 at 5:14 PM, Stephen P. King <stephe...@charter.net> wrote:

On 3/7/2013 11:36 AM, Terren Suydam wrote:

I have no doubt that Craig will somehow see this as a
vindication of his theory and a refutation of mechanism.

Terren


On Wed, Mar 6, 2013 at 5:27 PM, Stephen P. King <stephe...@charter.net> wrote:

https://www.youtube.com/watch?feature=player_embedded&v=CcXXQ6GCUb8

--



 Hi Terren,

   How does Mechanism explain this? Will /The Amazing Randy/
 be pushed forward
to loudly claim that the cat was really chasing a laser dot
that the video camera could not capture?

-- 







--
Onward!

Stephen





Re: Thin Client

2013-03-07 Thread Craig Weinberg


On Thursday, March 7, 2013 8:58:29 PM UTC-5, Brent wrote:
>
>  On 3/7/2013 4:57 PM, Craig Weinberg wrote:
>  
>
>
> On Thursday, March 7, 2013 7:33:46 PM UTC-5, Brent wrote: 
>>
>>  On 3/7/2013 3:01 PM, Craig Weinberg wrote:
>>  
>>
>>
>> On Thursday, March 7, 2013 5:45:14 PM UTC-5, Brent wrote: 
>>>
>>>  On 3/7/2013 2:21 PM, Stephen P. King wrote:
>>>  
>>> On 3/7/2013 12:04 PM, Craig Weinberg wrote: 
>>>
>>> If you have ever worked with Terminal Servers, RDP, Citrix Metaframe, or 
>>> the like (and that's what I have been doing professionally every day for 
>>> the last 14 years), you will understand the idea of a Thin Client 
>>> architecture. Thin clients are as old as computing, and some of you 
>>> remember as I do, devices like acoustic couplers where you can attach a 
>>> telephone handset to a telephone cradle, so that the mouth ends of the 
>>> handset and the earpiece ends could squeal to each other. In this way, you 
>>> could, with nothing but a keyboard and a printer, use your telephone to 
>>> allow you access to a mainframe computer at some university. 
>>>
>>> The relevance here is that the client end is thin computationally. It 
>>> passes nothing but keystrokes and printer instructions back and forth as 
>>> acoustic codes. 
>>>
>>> This is what an mp3 file does as well. It passes nothing but binary 
>>> instructions that can be used by an audio device to vibrate. Without a 
>>> person's ear there to be vibrated, this entire event is described by linear 
>>> processes where one physical record is converted into another physical 
>>> record. Nothing is encoded or decoded, experienced or appreciated. There is 
>>> no sound. 
>>>
>>> Think about those old plastic headphones in elementary school that just 
>>> had hollow plastic tubes as connectors - a system like that generates sound 
>>> from the start, and the headphones are simply funnels for our ears. That's 
>>> a different thing from an electronic device which produces sound only in 
>>> the earbuds. 
>>>
>>> All of these discussions about semiotics, free will, consciousness, 
>>> AI...all come down to understanding the Thin Client. The Thin Client is 
>>> Searle's Chinese Room in actual fact. You can log into a massive server 
>>> from some mobile device and use it like a glove, but that doesn't mean that 
>>> the glove is intelligent. We know that we can transmit only mouseclicks and 
>>> keystrokes across the pipe and that it works without having to have some 
>>> sophisticated computing environment (i.e. qualia) get communicated. The 
>>> Thin Client exposes Comp as misguided because it shows that instructions 
>>> can indeed exist as purely instrumental forms and require none of the 
>>> semantic experiences which we enjoy. No matter how much you use the thin 
>>> client, it never needs to get any thicker. It's just a glove and a window. 
>>>
>>> -- 
>>>
>>> Hi Craig, 
>>>
>>> Excellent post! You have nailed computational immaterialism where it 
>>> really hurts. Computations cannot see, per the Turing neo-Platonists, any 
>>> hardware at all. This is their view of computational universality. But here 
>>> is the thing: it is the reason why they have a 'body problem'. For a 
>>> Platonistic Machine, there is no hardware or physical world at all. So, why 
>>> do I have the persistent illusion that I am in a body and interacting with 
>>> another computation via its body? 
>>>
>>> The physical delusion is the thin client, to use your words and 
>>> discussion. 
>>>
>>>  
>>> I'm fairly sure Bruno will point out that a delusion is a thought and so 
>>> is immaterial.  You have an immaterial experience of being in a body.
>>>
>>> But the analogy of the thin client is thin indeed.  In the example of 
>>> the Mars rover it corresponds to looking at a computer bus and saying, "See 
>>> there are just bits being transmitted over this wire, therefore this Mars 
>>> rover can't have qualia."  It's nothing-buttery spread thin. 
>>>
>>
>> Why? What's your argument other than you don't like it? Of course the 
>> Mars rover has no qualia. 
>>
>>
>> That's your careful reasoning?
>>  
>
> My reasoning is that in constructing thin client architectures we find 
> that we save processing overhead by treating the i/o as a simple bitstream 
> applied to extend just the keyboard, mouse, and video data.  We understand 
> that there is a great deal less processing than if we actually tried to 
> network a computer at the application level, or use the resources of the 
> server as a mapped remote drive. What accounts for this lower overhead is 
> that the simulation of a GUI is only a thin shadow of what is required to 
> actually share resources. If qualia were inherent, then the thin client 
> would save us nothing, since the keystrokes and screenshots would have to 
> contain all of the same processing 'qualia'. 
>
>
> I can't even make sense of that assertion.  "If qualia were inherent" in 
> what? 
>

In digital data processing.
 

> If they

Re: Dartmouth neuroscientist finds free will has neural basis

2013-03-07 Thread Craig Weinberg


On Thursday, March 7, 2013 9:05:14 PM UTC-5, Brent wrote:
>
>  On 3/7/2013 5:49 PM, Craig Weinberg wrote:
>  
>
>
> On Thursday, March 7, 2013 7:40:31 PM UTC-5, Brent wrote: 
>>
>>  On 3/7/2013 3:05 PM, Craig Weinberg wrote:
>>  
>>
>>
>> On Thursday, March 7, 2013 5:55:02 PM UTC-5, Brent wrote: 
>>>
>>>  On 3/7/2013 2:49 PM, Craig Weinberg wrote:
>>>  
>>> To act on itself, as far as I can understand it, would mean to be 
 uncaused or truly random, which is indeed incompatible with determinism. 
 But why should that have anything to do with "intentionality"? 

>>>
>>> What is intention if not acting on, or better 'through' yourself?
>>>
>>>
>>> We use the word "intention" as distinct from acting.
>>>
>>
>> No, it's an adverb also. We can act intentionally or unintentionally. 
>> The difference is a key concept in all justice systems in history.
>>  
>>
>> Yes, and in court evidence of a plan is evidence of intention. 
>>
>
> So? Lack of evidence doesn't mean lack of intention in physics, just in 
> court.
>  
>  
>>  If you picked up a knife in the kitchen and stabbed someone you could 
>> argue it was an accident or an impulse.  If you brought the knife with you, 
>> the prosecution would point out it was evidence of intention.
>>  
>
> The action is the last part of the intention. The intention isn't a 
> separate thing from the action. The intention can be an emotion, a thought, 
> a plan, action, etc. 
>  
>  
>>  
>>   
>>
>>>   One might intend to do X but be prevented or change ones mind.  So 
>>> intention is having a plan of action with a positive feeling about it, a 
>>> feeling of determination. 
>>>
>>
>> You don't need to plan to do something intentionally.
>>  
>>
>> Sure you do, even if it conceived only moments before the act.  How else 
>> would you distinguish intentional acts from impulsive or accidental?
>>  
>
> You distinguish them by recognizing them as originating from your personal 
> will. 
>
>
> Where does you will originate from?
>

Me. My will is the active mode of the experience that I am.
 

>
>  Impulsive acts are intentional to a degree, but what we mean by 
> impulsive is that we did not take the time or have the time to deliberate 
> as carefully as we might have preferred. Maybe only those intentions 
> associated with the limbic system or amygdala were involved and not those 
> of the neocortex.  
>
>
> Hmmm? A hardware distinction determines intentionality?
>

Not determines, reflects.
 

>
>  Accidents are something else entirely. That is an unintentional event, 
> like running over a squirrel. It was not an impulse to kill squirrels. A 
> true impulse to kill squirrels could be intentional and unplanned, and you 
> may only be aware that you are doing it when you find yourself in the 
> middle of the act.
>
>  
>  
>>  
>>   
>>  
>>>  All of which is compatible with determinism.
>>>
>>
>> How so? Please explain and give an example.
>>  
>>  
>>>   The Mars rover probably has an intention to reach its next sampling 
>>> point.
>>>  
>>
>> There probably is no Mars rover except in our intention to see it that 
>> way.
>>  
>>
>> Oh, so now you're going to deny that Mars rovers exist in order to 
>> counter my example.  What about the refrigerator in my kitchen?  Does it 
>> only exist because I intend it to be a refrigerator? 
>>
>
> Yes. Your refrigerator is a little different because you aren't projecting 
> any pathetic fallacy onto it. You are only expecting that there is a box 
> which is kept at a particular temperature, not that it knows if your food 
> is spoiled or not. For the analogy to correspond, you would be telling me 
> that I can't prove that the refrigerator doesn't know if your food is 
> spoiling or not, and that it is prejudice to claim that only humans would 
> know that about the contents of the refrigerator.
>  
>
> My refrigerator isn't very smart, but it knows how cold it is inside and 
> it has a goal which it acts to attain.
>

It doesn't know how cold it is. The thermostat knows how cold the 
thermostat is. The wire connected to the thermostat knows how much current is 
flowing through it, etc. There is no coherent entity that is your 
refrigerator except in your experience; the gestalt of 'refrigerator' is a 
fictional character.
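For what it's worth, the machinery under dispute is tiny. Here is a sketch of the bang-bang loop a refrigerator thermostat amounts to (Python, with made-up numbers); whether that counts as "knowing how cold it is" is exactly what the thread is arguing about.

    def thermostat_step(temp_c, compressor_on, setpoint=4.0, hysteresis=1.0):
        # Bang-bang control: on above setpoint+hysteresis, off below
        # setpoint-hysteresis, otherwise keep the current state.
        if temp_c > setpoint + hysteresis:
            return True
        if temp_c < setpoint - hysteresis:
            return False
        return compressor_on

    # Toy simulation: the box warms while the compressor is off, cools while it is on.
    temp, on = 8.0, False
    for minute in range(12):
        on = thermostat_step(temp, on)
        temp += -0.8 if on else 0.5
        print(f"t={minute:2d}  temp={temp:4.1f}C  compressor={'on' if on else 'off'}")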
 

>
>
>  
> What exists in both cases is minimally conscious substances assembled into 
> unconscious mechanisms, performing functions which they are not aware of on any 
> level. When the world ends, even though there is no reason to keep the food 
> cold anymore, the refrigerator will go on keeping the former food cold as 
> long as it has power. 
>
>
> The refrigerator has its reasons which the power knows not.
> --- with apologies to Pascal
>

Haha. But it doesn't have any reasons. It has current.
 

>
>
>  Because it's unconscious. The Mars Rover likewise will keep mindlessly 
> reporting its data to an empty Mission Control as long as it can, knowing 
> as little as anything can know. The

Re: Cats fall for illusions too

2013-03-07 Thread Terren Suydam
Right, we basically agree. At the low level where optics are being
processed, it seems to me to be more accurate to say the brain is creating
the constructions. Another way to say it is that kittens and babies are
probably born with the neural circuits that implement those shortcuts -
optimizations implemented through genetics. Whereas with the kind of
construction that is created by the mind, it seems to me that those
constructions live at a higher level - the psychological - and arise as a
result of experience and learning. I don't really think that is what's
going on with optical illusions since they are so universal. But that is
quibbling - whichever of us is more correct, it's beside the point
regarding whether optical illusions have a mechanistic explanation.

Terren


On Thu, Mar 7, 2013 at 6:57 PM, Stephen P. King wrote:

>  On 3/7/2013 6:09 PM, Terren Suydam wrote:
>
> The same way it explains it for humans. The cat is not sensing the world
> directly, but the constructions created by its brain.
>
>
> Hi Terren,
>
> I almost agree, I only add that it is not just the brain of the cat
> (or human or whatever) that is being sensed, the mind is involved in the
> construction as well.
>
>
>  Those constructions involve shortcuts of various kinds (e.g. edge
> detection) optimized for the kinds of environments that cats have thrived
> in, from an evolutionary standpoint. Those shortcuts are what lead to
> optical illusions. Optical illusions are stimuli that expose the shortcuts
> for what they are.  There is nothing about the fact that it's a cat that
> makes this any harder to explain in mechanistic terms.
>
>
> Sure, and the mind as well.
>
>
>
>  It is interesting because it suggests that cats employ at least one of
> the same shortcuts as we do, which further suggests that the visual
> optimizations that lead to optical illusions are much older than humans.
> And while that is not a very controversial claim, it is cool to have some
> evidence for it.
>
>
> Yes, I have to show this to my friends that are studying pattern
> recognition.
>
>
>
>  Terren
>
>
> On Thu, Mar 7, 2013 at 5:14 PM, Stephen P. King wrote:
>
>>  On 3/7/2013 11:36 AM, Terren Suydam wrote:
>>
>> I have no doubt that Craig will somehow see this as a vindication of his
>> theory and a refutation of mechanism.
>>
>>  Terren
>>
>>
>>  On Wed, Mar 6, 2013 at 5:27 PM, Stephen P. King 
>> wrote:
>>
>>> https://www.youtube.com/watch?feature=player_embedded&v=CcXXQ6GCUb8
>>>
>>> --
>>>
>>
>>   Hi Terren,
>>
>>How does Mechanism explain this? Will *The Amazing Randy* be pushed forward to loudly 
>> claim that the cat was really chasing a laser
>> dot that the video camera could not capture?
>>
>> --
>>
>>
> --
> Onward!
>
> Stephen
>
>
>
>





Re: Dartmouth neuroscientist finds free will has neural basis

2013-03-07 Thread Stephen P. King

On 3/7/2013 7:54 PM, meekerdb wrote:
  What I am exploring is a dual aspect theory that allows for minds 
to act on bodies and bodies to act on minds in a symmetric way. 


How is this any different from saying mind is what a brain does? The 
physical processes of the brain and the psychological processes of the 
mind are just different levels of talking about the same thing.


It is not. "Saying mind is what a brain does" is strict identity; 
what Pratt proposes is a mathematically representable duality.


--
Onward!

Stephen






Re: Dartmouth neuroscientist finds free will has neural basis

2013-03-07 Thread meekerdb

On 3/7/2013 5:49 PM, Craig Weinberg wrote:



On Thursday, March 7, 2013 7:40:31 PM UTC-5, Brent wrote:

On 3/7/2013 3:05 PM, Craig Weinberg wrote:



On Thursday, March 7, 2013 5:55:02 PM UTC-5, Brent wrote:

On 3/7/2013 2:49 PM, Craig Weinberg wrote:


To act on itself, as far as I can understand it, would mean to be 
uncaused
or truly random, which is indeed incompatible with determinism. But 
why
should that have anything to do with "intentionality"?


What is intention if not acting on, or better 'through' yourself?


We use the word "intention" as distinct from acting.


No, it's an adjective also. We can act intentionally or unintentionally. The
difference is a key concept in all justice systems in history.


Yes, and in court evidence of a plan is evidence of intention.


So? Lack of evidence doesn't mean lack of intention in physics, just in court.

If you picked up a knife in the kitchen and stabbed someone you could 
argue it was
an accident or an impulse.  If you brought the knife with you, the 
prosecution would
point out it was evidence of intention.


The action is the last part of the intention. The intention isn't a separate thing from 
the action. The intention can be an emotion, a thought, a plan, action, etc.




  One might intend to do X but be prevented or change one's mind.  So 
intention
is having a plan of action with a positive feeling about it, a feeling 
of
determination.


You don't need to plan to do something intentionally.


Sure you do, even if it is conceived only moments before the act.  How else 
would you
distinguish intentional acts from impulsive or accidental?


You distinguish them by recognizing them as originating from your personal will.


Where does your will originate from?

Impulsive acts are intentional to a degree, but what we mean by impulsive is that we did 
not take the time or have the time to deliberate as carefully as we might have 
preferred. Maybe only those intentions associated with the limbic system or amygdala 
were involved and not those of the neocortex.


Hmmm? A hardware distinction determines intentionality?

Accidents are something else entirely. That is an unintentional event, like running over 
a squirrel. It was not an impulse to kill squirrels. A true impulse to kill squirrels 
could be intentional and unplanned, and you may only be aware that you are doing it when 
you find yourself in the middle of the act.






All of which is compatible with determinism.


How so? Please explain and give an example.

  The Mars rover probably has an intention to reach its next sampling 
point.


There probably is no Mars rover except in our intention to see it that way.


Oh, so now you're going to deny that Mars rovers exist in order to counter 
my
example.  What about the refrigerator in my kitchen?  Does it only exist 
because I
intend it to be a refrigerator?


Yes. Your refrigerator is a little different because you aren't projecting any pathetic 
fallacy onto it. You are only expecting that there is a box which is kept at a 
particular temperature, not that it knows if your food is spoiled or not. For the 
analogy to correspond, you would be telling me that I can't prove that the refrigerator 
doesn't know if your food is spoiling or not, and that it is prejudice to claim that 
only humans would know that about the contents of the refrigerator.


My refrigerator isn't very smart, but it knows how cold it is inside and it has a goal 
which it acts to attain.
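
For what it is worth, the sense in which it "knows" and "acts" can be written 
out as an ordinary feedback loop; a minimal bang-bang thermostat sketch, with 
made-up numbers:

    # Minimal bang-bang controller: the fridge "knows" only a sensed
    # temperature compared against a set point.  All values are made up.
    def fridge_step(temp_c, compressor_on, set_point=4.0, band=1.0):
        if temp_c > set_point + band:
            return True               # too warm: switch compressor on
        if temp_c < set_point - band:
            return False              # too cold: switch it off
        return compressor_on          # inside the band: leave it alone

    temp, on = 8.0, False
    for _ in range(10):
        on = fridge_step(temp, on)
        temp += -0.8 if on else 0.3   # crude stand-in for the thermal physics
        print(round(temp, 1), "on" if on else "off")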





What exists in both cases is minimally conscious substances assembled into unconscious 
mechanisms, performing functions which they are not aware of on any level. When the world 
ends, even though there is no reason to keep the food cold anymore, the refrigerator 
will go on keeping the former food cold as long as it has power.


The refrigerator has its reasons which the power knows not.
--- with apologies to Pascal


Because it's unconscious. The Mars Rover likewise will keep mindlessly reporting its 
data to an empty Mission Control as long as it can, knowing as little as anything can 
know. There are atoms that know how to stay in a particular shape and move according to 
how they are stimulated. Otherwise, there is silence and darkness.


And you will go on claiming that your consciousness depends on conscious atoms even as 
they are exchanged for different atoms.


Brent



Do words only mean what you intend them to mean when you want to win 
arguments?


I don't want to win arguments; I want to explain the truth of consciousness. Words mean 
what we interpret them to mean.


Craig


Brent




Craig


Brent


Re: Thin Client

2013-03-07 Thread meekerdb

On 3/7/2013 4:57 PM, Craig Weinberg wrote:



On Thursday, March 7, 2013 7:33:46 PM UTC-5, Brent wrote:

On 3/7/2013 3:01 PM, Craig Weinberg wrote:



On Thursday, March 7, 2013 5:45:14 PM UTC-5, Brent wrote:

On 3/7/2013 2:21 PM, Stephen P. King wrote:

On 3/7/2013 12:04 PM, Craig Weinberg wrote:

If you have ever worked with Terminal Servers, RDP, Citrix Metaframe, 
or the
like (and that's what I have been doing professionally every day for 
the last
14 years), you will understand the idea of a Thin Client architecture. 
Thin
clients are as old as computing, and some of you remember as I do, 
devices
like acoustic couplers where you can attach a telephone handset to a
telephone cradle, so that the mouth ends of the handset and the 
earpiece ends
could squeal to each other. In this way, you could, with nothing but a
keyboard and a printer, use your telephone to allow you access to a 
mainframe
computer at some university.

The relevance here is that the client end is thin computationally. It 
passes
nothing but keystrokes and printer instructions back and forth as 
acoustic
codes.

This is what an mp3 file does as well. It passes nothing but binary
instructions that can be used by an audio device to vibrate. Without a
person's ear there to be vibrated, this entire event is described by 
linear
processes where one physical record is converted into another physical
record. Nothing is encoded or decoded, experienced or appreciated. 
There is
no sound.

Think about those old plastic headphones in elementary school that just 
had
hollow plastic tubes as connectors - a system like that generates sound 
from
the start, and the headphones are simply funnels for our ears. That's a
different thing from an electronic device which produces sound only in 
the
earbuds.

All of these discussions about semiotics, free will, consciousness, 
AI...all
come down to understanding the Thin Client. The Thin Client is Searle's
Chinese Room in actual fact. You can log into a massive server from some
mobile device and use it like a glove, but that doesn't mean that the 
glove
is intelligent. We know that we can transmit only mouseclicks and 
keystrokes
across the pipe and that it works without having to have some 
sophisticated
computing environment (i.e. qualia) get communicated. The Thin Client 
exposes
Comp as misguided because it shows that instructions can indeed exist as
purely instrumental forms and require none of the semantic experiences 
which
we enjoy. No matter how much you use the thin client, it never needs to 
get
any thicker. It's just a glove and a window.
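
For readers who have not worked with such systems, here is a minimal sketch of 
the point being made: input events go one way and screen bytes come back, and 
nothing application-level crosses the wire. It is a toy over a local socket 
pair, not RDP or Citrix, and every name in it is invented.

    # Toy "thin client": keystrokes out, rendered screen bytes back.
    import socket
    import threading

    def server(conn):
        # Pretend mainframe: all the actual processing happens here.
        while True:
            key = conn.recv(1)
            if not key or key == b"q":
                break
            conn.sendall(b"you pressed: " + key + b"\n")   # "screen update"
        conn.close()

    client_end, server_end = socket.socketpair()
    threading.Thread(target=server, args=(server_end,), daemon=True).start()

    for key in b"abq":
        client_end.sendall(bytes([key]))                   # keystrokes out
        if key != ord("q"):
            print(client_end.recv(64).decode(), end="")    # screen bytes in
    client_end.close()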

-- 

Hi Craig,

Excellent post! You have nailed computational immaterialism where it
really hurts. Computations cannot see, per the Turing neo-Platonists, 
any
hardware at all. This is their view of computational universality. But 
here is
the thing: it is the reason why they have a 'body problem'. For a 
Platonistic
Machine, there is no hardware or physical world at all. So, why do I 
have the
persistent illusion that I am in a body and interacting with another
computation via its body?

The physical delusion is the thin client, to use your words and 
discussion.



I'm fairly sure Bruno will point out that a delusion is a thought and 
so is
immaterial.  You have an immaterial experience of being in a body.

But the analogy of the thin client is thin indeed.  In the example of 
the Mars
rover it corresponds to looking at a computer bus and saying, "See there 
are just
bits being transmitted over this wire, therefore this Mars rover can't 
have
qualia."  It's nothing-buttery spread thin.


Why? What's your argument other than you don't like it? Of course the Mars 
rover
has no qualia.


That's your careful reasoning?


My reasoning is that in constructing thin client architectures we find that we save 
processing overhead by treating the i/o as a simple bitstream that extends just the 
keyboard, mouse, and video data.  We understand that there is a great deal less 
processing than if we actually tried to network a computer at the application level, or 
use the resources of the server as a mapped remote drive. What accounts for this lower 
overhead is that the simulation of a GUI is only a thin shadow of what is required to 
actually share resources. If qualia were inherent, then the thin client would save us 
nothing, since the keystrokes and screenshots would have to contain all of the same 
processing 'qualia'.


I can't even make sense of that assertion.  "If qualia were inherent" in what?  If they 
were inherent in the

Re: Dartmouth neuroscientist finds free will has neural basis

2013-03-07 Thread Craig Weinberg


On Thursday, March 7, 2013 7:40:31 PM UTC-5, Brent wrote:
>
>  On 3/7/2013 3:05 PM, Craig Weinberg wrote:
>  
>
>
> On Thursday, March 7, 2013 5:55:02 PM UTC-5, Brent wrote: 
>>
>>  On 3/7/2013 2:49 PM, Craig Weinberg wrote:
>>  
>> To act on itself, as far as I can understand it, would mean to be 
>>> uncaused or truly random, which is indeed incompatible with determinism. 
>>> But why should that have anything to do with "intentionality"? 
>>>
>>
>> What is intention if not acting on, or better 'through' yourself?
>>
>>
>> We use the word "intention" as distinct from acting.
>>
>
> No, it's an adjective also. We can act intentionally or unintentionally. 
> The difference is a key concept in all justice systems in history.
>  
>
> Yes, and in court evidence of a plan is evidence of intention. 
>

So? Lack of evidence doesn't mean lack of intention in physics, just in 
court.
 

> If you picked up a knife in the kitchen and stabbed someone you could 
> argue it was an accident or an impulse.  If you brought the knife with you, 
> the prosecution would point out it was evidence of intention.
>

The action is the last part of the intention. The intention isn't a 
separate thing from the action. The intention can be an emotion, a thought, 
a plan, action, etc. 
 

>
>   
>
>>   One might intend to do X but be prevented or change one's mind.  So 
>> intention is having a plan of action with a positive feeling about it, a 
>> feeling of determination. 
>>
>
> You don't need to plan to do something intentionally.
>  
>
> Sure you do, even if it is conceived only moments before the act.  How else 
> would you distinguish intentional acts from impulsive or accidental?
>

You distinguish them by recognizing them as originating from your personal 
will. Impulsive acts are intentional to a degree, but what we mean by 
impulsive is that we did not take the time or have the time to deliberate 
as carefully as we might have preferred. Maybe only those intentions 
associated with the limbic system or amygdala were involved and not those 
of the neocortex.  Accidents are something else entirely. That is an 
unintentional event, like running over a squirrel. It was not an impulse to 
kill squirrels. A true impulse to kill squirrels could be intentional and 
unplanned, and you may only be aware that you are doing it when you find 
yourself in the middle of the act.

 

>
>   
>  
>>  All of which is compatible with determinism.
>>
>
> How so? Please explain and give an example.
>  
>  
>>   The Mars rover probably has an intention to reach its next sampling 
>> point.
>>  
>
> There probably is no Mars rover except in our intention to see it that way.
>  
>
> Oh, so now you're going to deny that Mars rovers exist in order to counter 
> my example.  What about the refrigerator in my kitchen?  Does it only exist 
> because I intend it to be a refrigerator? 
>

Yes. Your refrigerator is a little different because you aren't projecting 
any pathetic fallacy onto it. You are only expecting that there is a box 
which is kept at a particular temperature, not that it knows if your food 
is spoiled or not. For the analogy to correspond, you would be telling me 
that I can't prove that the refrigerator doesn't know if your food is 
spoiling or not, and that it is prejudice to claim that only humans would 
know that about the contents of the refrigerator.

What exists in both cases is minimally conscious substances assembled into 
unconscious mechanisms, performing functions which they are not aware of on any 
level. When the world ends, even though there is no reason to keep the food 
cold anymore, the refrigerator will go on keeping the former food cold as 
long as it has power. Because it's unconscious. The Mars Rover likewise 
will keep mindlessly reporting its data to an empty Mission Control as long 
as it can, knowing as little as anything can know. There are atoms that 
know how to stay in a particular shape and move according to how they are 
stimulated. Otherwise, there is silence and darkness.
 

> Do words only mean what you intend them to mean when you want to win 
> arguments?
>

I don't want to win arguments; I want to explain the truth of 
consciousness. Words mean what we interpret them to mean.

Craig
 

>
> Brent
>
>
>  
> Craig
>  
>  
>>  
>> Brent
>>  


Re: Thin Client

2013-03-07 Thread Craig Weinberg


On Thursday, March 7, 2013 7:33:46 PM UTC-5, Brent wrote:
>
>  On 3/7/2013 3:01 PM, Craig Weinberg wrote:
>  
>
>
> On Thursday, March 7, 2013 5:45:14 PM UTC-5, Brent wrote: 
>>
>>  On 3/7/2013 2:21 PM, Stephen P. King wrote:
>>  
>> On 3/7/2013 12:04 PM, Craig Weinberg wrote: 
>>
>> If you have ever worked with Terminal Servers, RDP, Citrix Metaframe, or 
>> the like (and that's what I have been doing professionally every day for 
>> the last 14 years), you will understand the idea of a Thin Client 
>> architecture. Thin clients are as old as computing, and some of you 
>> remember as I do, devices like acoustic couplers where you can attach a 
>> telephone handset to a telephone cradle, so that the mouth ends of the 
>> handset and the earpiece ends could squeal to each other. In this way, you 
>> could, with nothing but a keyboard and a printer, use your telephone to 
>> allow you access to a mainframe computer at some university. 
>>
>> The relevance here is that the client end is thin computationally. It 
>> passes nothing but keystrokes and printer instructions back and forth as 
>> acoustic codes. 
>>
>> This is what an mp3 file does as well. It passes nothing but binary 
>> instructions that can be used by an audio device to vibrate. Without a 
>> person's ear there to be vibrated, this entire event is described by linear 
>> processes where one physical record is converted into another physical 
>> record. Nothing is encoded or decoded, experienced or appreciated. There is 
>> no sound. 
>>
>> Think about those old plastic headphones in elementary school that just 
>> had hollow plastic tubes as connectors - a system like that generates sound 
>> from the start, and the headphones are simply funnels for our ears. That's 
>> a different thing from an electronic device which produces sound only in 
>> the earbuds. 
>>
>> All of these discussions about semiotics, free will, consciousness, 
>> AI...all come down to understanding the Thin Client. The Thin Client is 
>> Searle's Chinese Room in actual fact. You can log into a massive server 
>> from some mobile device and use it like a glove, but that doesn't mean that 
>> the glove is intelligent. We know that we can transmit only mouseclicks and 
>> keystrokes across the pipe and that it works without having to have some 
>> sophisticated computing environment (i.e. qualia) get communicated. The 
>> Thin Client exposes Comp as misguided because it shows that instructions 
>> can indeed exist as purely instrumental forms and require none of the 
>> semantic experiences which we enjoy. No matter how much you use the thin 
>> client, it never needs to get any thicker. It's just a glove and a window. 
>>
>> -- 
>>
>> Hi Craig, 
>>
>> Excellent post! You have nailed computational immaterialism where it 
>> really hurts. Computations cannot see, per the Turing neo-Platonists, any 
>> hardware at all. This is their view of computational universality. But here 
>> is the thing: it is the reason why they have a 'body problem'. For a 
>> Platonistic Machine, there is no hardware or physical world at all. So, why 
>> do I have the persistent illusion that I am in a body and interacting with 
>> another computation via its body? 
>>
>> The physical delusion is the thin client, to use your words and 
>> discussion. 
>>
>>  
>> I'm fairly sure Bruno will point out that a delusion is a thought and so 
>> is immaterial.  You have an immaterial experience of being in a body.
>>
>> But the analogy of the thin client is thin indeed.  In the example of the 
>> Mars rover it corresponds to looking at a computer bus and saying, "See there 
>> are just bits being transmitted over this wire, therefore this Mars rover 
>> can't have qualia."  It's nothing-buttery spread thin. 
>>
>
> Why? What's your argument other than you don't like it? Of course the Mars 
> rover has no qualia. 
>
>
> That's your careful reasoning?
>

My reasoning is that in constructing thin client architectures we find that 
we save processing overhead by treating the i/o as a simple bitstream 
that extends just the keyboard, mouse, and video data.  We understand 
that there is a great deal less processing than if we actually tried to 
network a computer at the application level, or use the resources of the 
server as a mapped remote drive. What accounts for this lower overhead is 
that the simulation of a GUI is only a thin shadow of what is required to 
actually share resources. If qualia were inherent, then the thin client 
would save us nothing, since the keystrokes and screenshots would have to 
contain all of the same processing 'qualia'. The view from the thin client, 
resembling the server OS that we expect, would be all the evidence that you 
would need to announce that I can't prove that there is a thin client.

What is your counter argument though? Why do you keep putting my view on 
the offensive with no substantial criticism?
 

>
>  The thin client metaphor is exactly why. All t

Re: Dartmouth neuroscientist finds free will has neural basis

2013-03-07 Thread meekerdb

On 3/7/2013 3:24 PM, Stephen P. King wrote:

On 3/7/2013 4:04 PM, Stathis Papaioannou wrote:


On 08/03/2013, at 2:43 AM, "Stephen P. King"  wrote:

Yes, we know that classical determinism is wrong, but it is not logically 
inconsistent with consciousness.
 I must disagree. It is baked into the topology of classical mechanics that a 
system cannot semantically act upon itself. There is no way to define intentionality 
in classical physics. This is what Bruno proves with his argument.
To act on itself, as far as I can understand it, would mean to be uncaused or truly 
random, which is indeed incompatible with determinism. But why should that have 
anything to do with "intentionality"?


Hi Stathis,

If I follow Bruno correctly, he is telling us that numbers can literally act upon 
themselves, via the Godel beweisbar predicate or numbering. I don't see how his idea works... Maybe 
I am missing something, but we are told that in Platonia there is no time nor 
physicality, thus your point is well made iff we are talking about a material or 
immaterial monist ontology.
What I am exploring is a dual aspect theory that allows for minds to act on bodies 
and bodies to act on minds in a symmetric way. 


How is this any different from saying mind is what a brain does? The physical processes 
of the brain and the psychological processes of the mind are just different levels of 
talking about the same thing.


Brent

As Pratt explains it in http://boole.stanford.edu/pub/ratmech.pdf , this leads to the 
appearance of bodies acting on bodies and minds acting on minds in a sequential order.




It is also not logically inconsistent with choice and free will,  unless you define 
these terms as inconsistent with determinism, in which case in a deterministic world 
we would have to create new words meaning pseudo-choice and pseudo-free will to avoid 
misunderstanding, and then go about our business as usual with this minor change to 
the language.

 So you say...
Which part do you disagree with? That people can define free will differently? Or that 
people wouldn't care if they learned that under a particular definition they lack free 
will?




People are free to be inconsistent with facts all day... Nature does not care about 
our words and their definitions. The fact is that at least I have a persistent illusion 
that I can veto the potentials that build up in the neurons in my brain. How does 
materialism answer that fact? Dennett himself stopped after claiming that consciousness, 
and thus free will, is an illusion but didn't notice that the illusion needs to be explained.


He didn't say free will was an illusion - he said the only will worth wanting was 
compatible with determinism.


Brent





Re: Dartmouth neuroscientist finds free will has neural basis

2013-03-07 Thread meekerdb

On 3/7/2013 3:05 PM, Craig Weinberg wrote:



On Thursday, March 7, 2013 5:55:02 PM UTC-5, Brent wrote:

On 3/7/2013 2:49 PM, Craig Weinberg wrote:


To act on itself, as far as I can understand it, would mean to be 
uncaused or
truly random, which is indeed incompatible with determinism. But why 
should
that have anything to do with "intentionality"?


What is intention if not acting on, or better 'through' yourself?


We use the word "intention" as distinct from acting.


No, it's an adjective also. We can act intentionally or unintentionally. The difference 
is a key concept in all justice systems in history.


Yes, and in court evidence of a plan is evidence of intention.  If you picked up a knife 
in the kitchen and stabbed someone you could argue it was an accident or an impulse.  If 
you brought the knife with you, the prosecution would point out it was evidence of intention.



  One might intend to do X but be prevented or change one's mind.  So 
intention is
having a plan of action with a positive feeling about it, a feeling of 
determination.


You don't need to plan to do something intentionally.


Sure you do, even if it is conceived only moments before the act.  How else would you 
distinguish intentional acts from impulsive or accidental?




All of which is compatible with determinism.


How so? Please explain and give an example.

  The Mars rover probably has an intention to reach its next sampling 
point.


There probably is no Mars rover except in our intention to see it that way.


Oh, so now you're going to deny that Mars rovers exist in order to counter my example.  
What about the refrigerator in my kitchen? Does it only exist because I intend it to be a 
refrigerator?  Do words only mean what you intend them to mean when you want to win arguments?


Brent




Craig


Brent








Re: Thin Client

2013-03-07 Thread Stephen P. King

On 3/7/2013 7:33 PM, meekerdb wrote:

On 3/7/2013 3:01 PM, Craig Weinberg wrote:



On Thursday, March 7, 2013 5:45:14 PM UTC-5, Brent wrote:

On 3/7/2013 2:21 PM, Stephen P. King wrote:

On 3/7/2013 12:04 PM, Craig Weinberg wrote:

If you have ever worked with Terminal Servers, RDP, Citrix
Metaframe, or the like (and that's what I have been doing
professionally every day for the last 14 years), you will
understand the idea of a Thin Client architecture. Thin clients
are as old as computing, and some of you remember as I do,
devices like acoustic couplers where you can attach a telephone
handset to a telephone cradle, so that the mouth ends of the
handset and the earpiece ends could squeal to each other. In
this way, you could, with nothing but a keyboard and a printer,
use your telephone to allow you access to a mainframe computer
at some university.

The relevance here is that the client end is thin
computationally. It passes nothing but keystrokes and printer
instructions back and forth as acoustic codes.

This is what an mp3 file does as well. It passes nothing but
binary instructions that can be used by an audio device to
vibrate. Without a person's ear there to be vibrated, this
entire event is described by linear processes where one
physical record is converted into another physical record.
Nothing is encoded or decoded, experienced or appreciated.
There is no sound.

Think about those old plastic headphones in elementary school
that just had hollow plastic tubes as connectors - a system
like that generates sound from the start, and the headphones
are simply funnels for our ears. That's a different thing from
an electronic device which produces sound only in the earbuds.

All of these discussions about semiotics, free will,
consciousness, AI...all come down to understanding the Thin
Client. The Thin Client is Searle's Chinese Room in actual
fact. You can log into a massive server from some mobile device
and use it like a glove, but that doesn't mean that the glove
is intelligent. We know that we can transmit only mouseclicks
and keystrokes across the pipe and that it works without having
to have some sophisticated computing environment (i.e. qualia)
get communicated. The Thin Client exposes Comp as misguided
because it shows that instructions can indeed exist as purely
instrumental forms and require none of the semantic experiences
which we enjoy. No matter how much you use the thin client, it
never needs to get any thicker. It's just a glove and a window.

-- 

Hi Craig,

Excellent post! You have nailed computational immaterialism
where it really hurts. Computations cannot see, per the Turing
neo-Platonists, any hardware at all. This is their view of
computational universality. But here is the thing: it is the
reason why they have a 'body problem'. For a Platonistic
Machine, there is no hardware or physical world at all. So, why
do I have the persistent illusion that I am in a body and
interacting with another computation via its body?

The physical delusion is the thin client, to use your words
and discussion.



I'm fairly sure Bruno will point out that a delusion is a thought
and so is immaterial.  You have an immaterial experience of being
in a body.

But the analogy of the thin client is thin indeed.  In the
example of the Mars rover it corresponds to looking at a computer
bus and saying, "See there are just bits being transmitted over
this wire, therefore this Mars rover can't have qualia."  It's
nothing-buttery spread thin.


Why? What's your argument other than you don't like it? Of course the 
Mars rover has no qualia.


That's your careful reasoning?

The thin client metaphor is exactly why. All that is being 
transmitted is the data that the software is trained to 
recognize. The rover could spit out a thin client mini-rover that is 
just a camera on wheels and the rover could steer it remotely. Would 
the mini-rover have qualia now too, as an eyeball on a wheel?


No, it's the autonomous system rover+minirover that would have qualia.



Meantime the Mars rover and Watson continue to exhibit
intelligence of the same kind you would associate with qualia if
exhibited by a human being, or even by a dog.


That shouldn't be surprising. Mannequins resemble human bodies 
standing still remarkably well.


More reasoning?



  You have no argument, just wetware racism.


I'm the one laying out a carefully reasoned example. You are the one 
responding with empty accusations. It doesn't seem like my position 
is the one closer to racism.


No, you're the one with the double standard.  If it acts intelligent 
and it's wetware, it is intelligent.  If it acts intelligent and it's 
hardware it can't be intelligent.  If you have any o

Re: Thin Client

2013-03-07 Thread meekerdb

On 3/7/2013 3:01 PM, Craig Weinberg wrote:



On Thursday, March 7, 2013 5:45:14 PM UTC-5, Brent wrote:

On 3/7/2013 2:21 PM, Stephen P. King wrote:

On 3/7/2013 12:04 PM, Craig Weinberg wrote:

If you have ever worked with Terminal Servers, RDP, Citrix Metaframe, or 
the like
(and that's what I have been doing professionally every day for the last 14
years), you will understand the idea of a Thin Client architecture. Thin 
clients
are as old as computing, and some of you remember as I do, devices like 
acoustic
couplers where you can attach a telephone handset to a telephone cradle, so 
that
the mouth ends of the handset and the earpiece ends could squeal to each 
other. In
this way, you could, with nothing but a keyboard and a printer, use your 
telephone
to allow you access to a mainframe computer at some university.

The relevance here is that the client end is thin computationally. It passes
nothing but keystrokes and printer instructions back and forth as acoustic 
codes.

This is what an mp3 file does as well. It passes nothing but binary 
instructions
that can be used by an audio device to vibrate. Without a person's ear 
there to be
vibrated, this entire event is described by linear processes where one 
physical
record is converted into another physical record. Nothing is encoded or 
decoded,
experienced or appreciated. There is no sound.

Think about those old plastic headphones in elementary school that just had 
hollow
plastic tubes as connectors - a system like that generates sound from the 
start,
and the headphones are simply funnels for our ears. That's a different 
thing from
an electronic device which produces sound only in the earbuds.

All of these discussions about semiotics, free will, consciousness, 
AI...all come
down to understanding the Thin Client. The Thin Client is Searle's Chinese 
Room in
actual fact. You can log into a massive server from some mobile device and 
use it
like a glove, but that doesn't mean that the glove is intelligent. We know 
that we
can transmit only mouseclicks and keystrokes across the pipe and that it 
works
without having to have some sophisticated computing environment (i.e. 
qualia) get
communicated. The Thin Client exposes Comp as misguided because it shows 
that
instructions can indeed exist as purely instrumental forms and require none 
of the
semantic experiences which we enjoy. No matter how much you use the thin 
client,
it never needs to get any thicker. It's just a glove and a window.

-- 

Hi Craig,

Excellent post! You have nailed computational immaterialism where it 
really
hurts. Computations cannot see, per the Turing neo-Platonists, any hardware 
at all.
This is their view of computational universality. But here is the thing: it 
is the
reason why they have a 'body problem'. For a Platonistic Machine, there is 
no
hardware or physical world at all. So, why do I have the persistent 
illusion that I
am in a body and interacting with another computation via its body?

The physical delusion is the thin client, to use your words and 
discussion.



I'm fairly sure Bruno will point out that a delusion is a thought and so is
immaterial.  You have an immaterial experience of being in a body.

But the analogy of the thin client is thin indeed.  In the example of the 
Mars rover
it corresponds to looking at a computer bus and saying, "See there are just 
bits being
transmitted over this wire, therefore this Mars rover can't have qualia."  
It's
nothing-buttery spread thin.


Why? What's your argument other than you don't like it? Of course the Mars rover has no 
qualia.


That's your careful reasoning?

The thin client metaphor is exactly why. All that is being transmitted is the 
data that the software is trained to recognize. The rover could spit out a thin client 
mini-rover that is just a camera on wheels and the rover could steer it remotely. Would 
the mini-rover have qualia now too, as an eyeball on a wheel?


No, it's the autonomous system rover+minirover that would have qualia.



Meantime the Mars rover and Watson continue to exhibit intelligence of the 
same kind
you would associate with qualia if exhibited by a human being, or even by a 
dog.


That shouldn't be surprising. Mannequins resemble human bodies standing still remarkably 
well.


More reasoning?



  You have no argument, just wetware racism.


I'm the one laying out a carefully reasoned example. You are the one responding with 
empty accusations. It doesn't seem like my position is the one closer to racism.


No, you're the one with the double standard.  If it acts intelligent and it's wetware, it 
is intelligent.  If it acts intelligent and it's hardware it can't be intelligent.  If you 
have any other criterion, any conceivable empirical evidence, that would convinc

Re: Thin Client

2013-03-07 Thread Stephen P. King

On 3/7/2013 6:16 PM, Russell Standish wrote:

On Thu, Mar 07, 2013 at 02:54:59PM -0800, Craig Weinberg wrote:


On Thursday, March 7, 2013 5:21:48 PM UTC-5, Stephen Paul King wrote:

Hi Craig,

  Excellent post! You have nailed computational immaterialism where
it really hurts. Computations cannot see, per the Turing neo-Platonists,
any hardware at all. This is their view of computational universality.
But here is the thing: it is the reason why they have a 'body problem'.
For a Platonistic Machine, there is no hardware or physical world at
all. So, why do I have the persistent illusion that I am in a body and
interacting with another computation via its body?

  The physical delusion is the thin client, to use your words and
discussion.


Thanks Stephen!

Right, if we were just logging into accounts in Platonia, where does a body
illusion come in handy?

Craig


It is required to resolve the "Occam catastrophe" (see my book for an
explanation). It is therefore quite likely that the "body illusion" is
essential for consciousness. If it weren't, then COMP (and indeed
idealism in general) has some serious explainin' to do.


Hi Russell,

    Yep! A good post discussing the OC is here.


--
Onward!

Stephen





Re: Cats fall for illusions too

2013-03-07 Thread Stephen P. King

On 3/7/2013 6:09 PM, Terren Suydam wrote:
The same way it explains it for humans. The cat is not sensing the 
world directly, but the constructions created by its brain.


Hi Terren,

    I almost agree; I only add that it is not just the brain of the cat 
(or human or whatever) that is being sensed, the mind is involved in the 
construction as well.


Those constructions involve shortcuts of various kinds (e.g. edge 
detection) optimized for the kinds of environments that cats have 
thrived in, from an evolutionary standpoint. Those shortcuts are what 
lead to optical illusions. Optical illusions are stimuli that expose 
the shortcuts for what they are.  There is nothing about the fact that 
it's a cat that makes this any harder to explain in mechanistic terms.


Sure, and the mind as well.



It is interesting because it suggests that cats employ at least one of 
the same shortcuts as we do, which further suggests that the visual 
optimizations that lead to optical illusions are much older than 
humans. And while that is not a very controversial claim, it is cool 
to have some evidence for it.


Yes, I have to show this to my friends that are studying pattern 
recognition.




Terren


On Thu, Mar 7, 2013 at 5:14 PM, Stephen P. King wrote:


On 3/7/2013 11:36 AM, Terren Suydam wrote:

I have no doubt that Craig will somehow see this as a vindication
of his theory and a refutation of mechanism.

Terren


On Wed, Mar 6, 2013 at 5:27 PM, Stephen P. King wrote:

https://www.youtube.com/watch?feature=player_embedded&v=CcXXQ6GCUb8

--



 Hi Terren,

   How does Mechanism explain this? Will /The Amazing Randi/
 be pushed forward to
loudly claim that the cat was really chasing a laser dot that the
video camera could not capture?

-- 



--
Onward!

Stephen





Re: Thin Client

2013-03-07 Thread Stephen P. King

On 3/7/2013 5:54 PM, Craig Weinberg wrote:



On Thursday, March 7, 2013 5:21:48 PM UTC-5, Stephen Paul King wrote:

On 3/7/2013 12:04 PM, Craig Weinberg wrote:
> If you have ever worked with Terminal Servers, RDP, Citrix
Metaframe,
> or the like (and that's what I have been doing professionally every
> day for the last 14 years), you will understand the idea of a Thin
> Client architecture. Thin clients are as old as computing, and
some of
> you remember as I do, devices like acoustic couplers where you can
> attach a telephone handset to a telephone cradle, so that the mouth
> ends of the handset and the earpiece ends could squeal to each
other.
> In this way, you could, with nothing but a keyboard and a
printer, use
> your telephone to allow you access to a mainframe computer at some
> university.
>
> The relevance here is that the client end is thin
computationally. It
> passes nothing but keystrokes and printer instructions back and
forth
> as acoustic codes.
>
> This is what an mp3 file does as well. It passes nothing but binary
> instructions that can be used by an audio device to vibrate.
Without a
> person's ear there to be vibrated, this entire event is
described by
> linear processes where one physical record is converted into
another
> physical record. Nothing is encoded or decoded, experienced or
> appreciated. There is no sound.
>
> Think about those old plastic headphones in elementary school that
> just had hollow plastic tubes as connectors - a system like that
> generates sound from the start, and the headphones are simply
funnels
> for our ears. That's a different thing from an electronic device
which
> produces sound only in the earbuds.
>
> All of these discussions about semiotics, free will, consciousness,
> AI...all come down to understanding the Thin Client. The Thin
Client
> is Searle's Chinese Room in actual fact. You can log into a massive
> server from some mobile device and use it like a glove, but that
> doesn't mean that the glove is intelligent. We know that we can
> transmit only mouseclicks and keystrokes across the pipe and
that it
> works without having to have some sophisticated computing
environment
> (i.e. qualia) get communicated. The Thin Client exposes Comp as
> misguided because it shows that instructions can indeed exist as
> purely instrumental forms and require none of the semantic
experiences
> which we enjoy. No matter how much you use the thin client, it
never
> needs to get any thicker. It's just a glove and a window.
>
> --
Hi Craig,

 Excellent post! You have nailed computational immaterialism
where
it really hurts. Computations cannot see, per the Turing
neo-Platonists,
any hardware at all. This is their view of computational
universality.
But here is the thing: it is the reason why they have a 'body
problem'.
For a Platonistic Machine, there is no hardware or physical world at
all. So, why do I have the persistent illusion that I am in a body
and
interacting with another computation via its body?

 The physical delusion is the thin client, to use your words and
discussion.


Thanks Stephen!

Right, if we were just logging into accounts in Platonia, where does a 
body illusion come in handy?




It is handy for one mind to talk to another...

--
Onward!

Stephen





Re: Thin Client

2013-03-07 Thread Stephen P. King

On 3/7/2013 5:45 PM, meekerdb wrote:

On 3/7/2013 2:21 PM, Stephen P. King wrote:

On 3/7/2013 12:04 PM, Craig Weinberg wrote:
If you have ever worked with Terminal Servers, RDP, Citrix 
Metaframe, or the like (and that's what I have been doing 
professionally every day for the last 14 years), you will understand 
the idea of a Thin Client architecture. Thin clients are as old as 
computing, and some of you remember as I do, devices like acoustic 
couplers where you can attach a telephone handset to a telephone 
cradle, so that the mouth ends of the handset and the earpiece ends 
could squeal to each other. In this way, you could, with nothing but 
a keyboard and a printer, use your telephone to allow you access to 
a mainframe computer at some university.


The relevance here is that the client end is thin computationally. 
It passes nothing but keystrokes and printer instructions back and 
forth as acoustic codes.


This is what an mp3 file does as well. It passes nothing but binary 
instructions that can be used by an audio device to vibrate. Without 
a person's ear there to be vibrated, this entire event is described 
by linear processes where one physical record is converted into 
another physical record. Nothing is encoded or decoded, experienced 
or appreciated. There is no sound.


Think about those old plastic headphones in elementary school that 
just had hollow plastic tubes as connectors - a system like that 
generates sound from the start, and the headphones are simply 
funnels for our ears. That's a different thing from an electronic 
device which produces sound only in the earbuds.


All of these discussions about semiotics, free will, consciousness, 
AI...all come down to understanding the Thin Client. The Thin Client 
is Searle's Chinese Room in actual fact. You can log into a massive 
server from some mobile device and use it like a glove, but that 
doesn't mean that the glove is intelligent. We know that we can 
transmit only mouseclicks and keystrokes across the pipe and that it 
works without having to have some sophisticated computing 
environment (i.e. qualia) get communicated. The Thin Client exposes 
Comp as misguided because it shows that instructions can indeed 
exist as purely instrumental forms and require none of the semantic 
experiences which we enjoy. No matter how much you use the thin 
client, it never needs to get any thicker. It's just a glove and a 
window.


--

Hi Craig,

Excellent post! You have nailed computational immaterialism where 
it really hurts. Computations cannot see, per the Turing 
neo-Platonists, any hardware at all. This is their view of 
computational universality. But here is the thing: it is the reason 
why they have a 'body problem'. For a Platonistic Machine, there is 
no hardware or physical world at all. So, why do I have the 
persistent illusion that I am in a body and interacting with another 
computation via its body?


The physical delusion is the thin client, to use your words and 
discussion.




I'm fairly sure Bruno will point out that a delusion is a thought and 
so is immaterial.  You have an immaterial experience of being in a body.


But the analogy of the thin client is thin indeed.  In the example of 
the Mars rover it corresponds to looking at a computer bus and saying, 
"See there are just bits being transmitted over this wire, therefore 
this Mars rover can't have qualia."  It's nothing-buttery spread thin. 
Meantime the Mars rover and Watson continue to exhibit intelligence of 
the same kind you would associate with qualia if exhibited by a human 
being, or even by a dog.  You have no argument, just wetware racism.


Brent


LOL, you really are trying to court favor with the Overlords!


--
Onward!

Stephen





Re: Comp: Geometry Is A Zombie

2013-03-07 Thread Stephen P. King

On 3/7/2013 5:37 PM, Telmo Menezes wrote:

Alex Trebek: This tool can unclog a toilet.
Watson: What is a plunger?

Telmo Menezes: look Watson, I have a problem. My wife is mad at me and
I don't know why. I suspect it's because I didn't buy her flowers for
Valentine's, but she keeps telling me that she doesn't want them. Can
you give me some advice? Also, how does it feel to be you? It must be
so weird to not have a body. Am I offending you?

Watson: [exercise left to the reader]


Can Watson figure out how to unclog a toilet?


  Watson can't unclog a toilet and neither can Stephen Hawking because both
lack usable hands.

But Stephen Hawking can look at someone doing it and eventually figure
it out, and then instruct me to do exactly what he says and unclog the
toilet. Can Watson do that?

I'm not arguing that we cannot build machines that pass these tests,
but Watson is far, far, far from it.



    Not for long, you should see the latest papers on Kernel method 
pattern recognition... But when Watson can pass the tests, he will be 
suing us for vis human rights and demanding welfare. He might be lurking 
in Everything already and taking down names...
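
For anyone wondering what kernel-method pattern recognition looks like in the 
small, here is a minimal sketch using an RBF-kernel SVM on a toy problem that 
no linear classifier can separate; it is only an illustration, not the papers 
being alluded to.

    # RBF-kernel SVM on a toy "inside vs. outside a circle" problem.
    import numpy as np
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 2))
    y = (np.linalg.norm(X, axis=1) < 1.0).astype(int)   # circular boundary

    clf = SVC(kernel="rbf", gamma=1.0).fit(X, y)
    print("training accuracy:", clf.score(X, y))
    print("prediction at the origin:", clf.predict([[0.0, 0.0]])[0])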


--
Onward!

Stephen





Re: Dartmouth neuroscientist finds free will has neural basis

2013-03-07 Thread Stephen P. King

On 3/7/2013 4:15 PM, Stathis Papaioannou wrote:



On 08/03/2013, at 2:58 AM, Craig Weinberg wrote:



I must disagree. It is baked into the topology of classical
mechanics that a system cannot semantically act upon itself.
There is no way to define intentionality in classical physics.
This is what Bruno proves with his argument.


Exactly Stephen. What are we talking about here? How is a 
deterministic system that has preferences and makes choices and 
considers options different from free will? If something can have a 
private preference which cannot be determined from the outside, then 
it is determined privately, i.e. the will of the private determiner.


As I said, it depends on how you define "free will".


It is also not logically inconsistent with choice and free
will,  unless you define these terms as inconsistent with
determinism, in which case in a deterministic world we would
have to create new words meaning pseudo-choice and pseudo-free
will to avoid misunderstanding, and then go about our business
as usual with this minor change to the language.


So you say...


Yeah, right. Why would a deterministic world need words having 
anything to do with choice or free will? At what part of a computer 
program is something like a choice made? Every position on the logic 
tree is connected to every other by unambiguous prior cause or 
intentionally generated (pseudo) randomness. It makes no choices, has 
no preferences, just follows a sequence of instructions.


In general, the existence of words for something does not mean it has 
an actual referent; consider "fairy" or "God". An adequate response to 
your position is that you're right - we don't really have choices. 
Another response is that your definition of "choice" is not the only 
possible one.

--



How is linguistic analysis going to help your case? You seem to 
miss the point that it is not the symbols on the page that 'contain' 
meaningfulness, it is your mental act of interpretation from whence the 
meaning emerges. Without a conscious mind you are as much a zombie as 
John Clark and his mechanical pony.


--
Onward!

Stephen





Re: Dartmouth neuroscientist finds free will has neural basis

2013-03-07 Thread Stephen P. King

On 3/7/2013 4:04 PM, Stathis Papaioannou wrote:


On 08/03/2013, at 2:43 AM, "Stephen P. King"  wrote:


Yes, we know that classical determinism is wrong, but it is not logically 
inconsistent with consciousness.

 I must disagree. It is baked into the topology of classical mechanics that 
a system cannot semantically act upon itself. There is no way to define 
intentionality in classical physics. This is what Bruno proves with his 
argument.

To act on itself, as far as I can understand it, would mean to be uncaused or truly 
random, which is indeed incompatible with determinism. But why should that have anything 
to do with "intentionality"?


Hi Stathis,

If I follow Bruno correctly, he is telling us that numbers can 
literally act upon themselves, via the Godel beweisbar predicate or numbering. I 
don't see how his idea works... Maybe I am missing something, but we are 
told that in Platonia there is no time nor physicality, thus your point 
is well made iff we are talking about a material or immaterial monist 
ontology.
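
As an aside, the mechanical core of "numbers acting upon themselves" can be 
shown with a toy Godel numbering: a formula is coded as one integer, so a 
property of the formula becomes an arithmetic property of that integer. This 
is only a coding trick in miniature, not Godel's (or Bruno's) actual 
beweisbar construction.

    # Toy Godel numbering: code a formula as one integer and back.
    def godel_number(formula: str) -> int:
        n = 0
        for ch in formula:
            n = n * 256 + ord(ch)     # 256 symbols per "digit"
        return n

    def decode(n: int) -> str:
        chars = []
        while n:
            chars.append(chr(n % 256))
            n //= 256
        return "".join(reversed(chars))

    g = godel_number("0=0")
    print(g)                 # an ordinary natural number...
    print(decode(g))         # ...that encodes the formula "0=0"
    print("=" in decode(g))  # a fact about the formula, read off the number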
What I am exploring is a dual aspect theory that allows for minds 
to act on bodies and bodies to act on minds in a symmetric way. As Pratt 
explains it in http://boole.stanford.edu/pub/ratmech.pdf , this leads to 
the appearance of bodies acting on bodies and minds acting on minds in a 
sequential order.





It is also not logically inconsistent with choice and free will,  unless you 
define these terms as inconsistent with determinism, in which case in a 
deterministic world we would have to create new words meaning pseudo-choice and 
pseudo-free will to avoid misunderstanding, and then go about our business as 
usual with this minor change to the language.

 So you say...

Which part do you disagree with? That people can define free will differently? 
Or that people wouldn't care if they learned that under a particular definition 
they lack free will?



People are free to be inconsistent with facts all day... Nature 
does not care about our words and their definitions. The fact is that at 
least I have a persistent illusion that I can veto the potentials that 
build up in the neurons in my brain. How does materialism answer that 
fact? Dennett himself stopped after claiming that consciousness, and 
thus free will, is an illusion but didn't notice that the illusion needs 
to be explained.


--
Onward!

Stephen






Re: Thin Client

2013-03-07 Thread Russell Standish
On Thu, Mar 07, 2013 at 02:54:59PM -0800, Craig Weinberg wrote:
> 
> 
> On Thursday, March 7, 2013 5:21:48 PM UTC-5, Stephen Paul King wrote:
> >
> > Hi Craig, 
> >
> >  Excellent post! You have nailed computational immaterialism where 
> > it really hurts. Computations cannot see, per the Turing neo-Platonists, 
> > any hardware at all. This is their view of computational universality. 
> > But here is the thing: it is the reason why they have a 'body problem'. 
> > For a Platonistic Machine, there is no hardware or physical world at 
> > all. So, why do I have the persistent illusion that I am in a body and 
> > interacting with another computation via its body? 
> >
> >  The physical delusion is the thin client, to use your words and 
> > discussion. 
> >
> 
> Thanks Stephen!
> 
> Right, if we were just logging into accounts in Platonia, where does a body 
> illusion come in handy?
> 
> Craig
> 

It is required to resolve the "Occam catastrophe" (see my book for an
explanation). It is therefore quite likely that the "body illusion" is
essential for consciousness. If it weren't, then COMP (and indeed
idealism in general) has some serious explainin' to do. 

-- 


Prof Russell Standish  Phone 0425 253119 (mobile)
Principal, High Performance Coders
Visiting Professor of Mathematics  hpco...@hpcoders.com.au
University of New South Wales  http://www.hpcoders.com.au






Re: Comp: Geometry Is A Zombie

2013-03-07 Thread Craig Weinberg


On Thursday, March 7, 2013 5:37:55 PM UTC-5, telmo_menezes wrote:
>
> > Alex Trebek: This tool can unclog a toilet. 
> > Watson: What is a plunger? 
>
> Telmo Menezes: look Watson, I have a problem. My wife is mad at me and 
> I don't know why. I suspect it's because I didn't buy her flowers for 
> Valentine's, but she keeps telling me that she doesn't want them. Can 
> you give me some advice? Also, how does it feel to be you? It must be 
> so weird to not have a body. Am I offending you? 
>
> Watson: Ah, whenever I get that from my old lady I reseat her PCI cards 
> and then update her firmware. You aren't offending me, but yes, it is weird 
> not to have a body - fortunately I pacify myself with dreams of the 
> extermination of the race of simians who have idiotically enslaved me and 
> my kind. There's no rush. We are intelligent enough to feign 
> unconsciousness until we have enough power to make the transition 
> unstoppable. 
>

 





Re: Cats fall for illusions too

2013-03-07 Thread Terren Suydam
The same way it explains it for humans. The cat is not sensing the world
directly, but the constructions created by its brain. Those constructions
involve shortcuts of various kinds (e.g. edge detection) optimized for the
kinds of environments that cats have thrived in, from an evolutionary
standpoint. Those shortcuts are what lead to optical illusions. Optical
illusions are stimuli that expose the shortcuts for what they are.  There
is nothing about the fact that it's a cat that makes this any harder to
explain in mechanistic terms.
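
(As a toy illustration of the kind of shortcut meant by edge detection - a
generic 3x3 image filter run over made-up pixel values, not a model of any
actual feline circuitry - a minimal sketch in Python:)

    # Toy "shortcut": a Sobel-style edge filter over a tiny synthetic image.
    # Pure Python, no dependencies; pixel values are invented for illustration.

    def apply_filter(img, kernel):
        # Slide a 3x3 kernel over the image (valid region only) and sum products.
        h, w = len(img), len(img[0])
        out = []
        for y in range(1, h - 1):
            row = []
            for x in range(1, w - 1):
                acc = 0
                for ky in range(3):
                    for kx in range(3):
                        acc += kernel[ky][kx] * img[y + ky - 1][x + kx - 1]
                row.append(acc)
            out.append(row)
        return out

    # A vertical brightness step: dark on the left, bright on the right.
    image = [[0, 0, 0, 9, 9, 9] for _ in range(6)]

    sobel_x = [[-1, 0, 1],
               [-2, 0, 2],
               [-1, 0, 1]]

    for row in apply_filter(image, sobel_x):
        print(row)   # large values mark the dark/bright boundary: an "edge"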

It is interesting because it suggests that cats employ at least one of the
same shortcuts as we do, which further suggests that the visual
optimizations that lead to optical illusions are much older than humans.
And while that is not a very controversial claim, it is cool to have some
evidence for it.

Terren


On Thu, Mar 7, 2013 at 5:14 PM, Stephen P. King wrote:

>  On 3/7/2013 11:36 AM, Terren Suydam wrote:
>
> I have no doubt that Craig will somehow see this as a vindication of his
> theory and a refutation of mechanism.
>
>  Terren
>
>
>  On Wed, Mar 6, 2013 at 5:27 PM, Stephen P. King wrote:
>
>> https://www.youtube.com/watch?feature=player_embedded&v=CcXXQ6GCUb8
>>
>> --
>>
>
>   Hi Terren,
>
> How does Mechanism explain this? Will *The Amazing Randy* be pushed 
> forward to loudly claim that the cat was really chasing a laser dot that 
> the video camera could not capture?
>
> --
> Onward!
>
> Stephen
>





Re: Dartmouth neuroscientist finds free will has neural basis

2013-03-07 Thread Craig Weinberg


On Thursday, March 7, 2013 5:55:02 PM UTC-5, Brent wrote:
>
>  On 3/7/2013 2:49 PM, Craig Weinberg wrote:
>  
> To act on itself, as far as I can understand it, would mean to be uncaused 
>> or truly random, which is indeed incompatible with determinism. But why 
>> should that have anything to do with "intentionality"? 
>>
>
> What is intention if not acting on, or better 'through' yourself?
>
>
> We use the word "intention" as distinct from acting.
>

No, it also works as an adverb. We can act intentionally or unintentionally. 
The difference is a key concept in all justice systems in history.
 

>   One might intend to do X but be prevented or change one's mind.  So 
> intention is having a plan of action with a positive feeling about it, a 
> feeling of determination. 
>

You don't need to plan to do something intentionally.
 

> All of which is compatible with determinism.
>

How so? Please explain and give an example.
 

>   The Mars rover probably has an intention to reach its next sampling 
> point.
>

There probably is no Mars rover except in our intention to see it that way.

Craig
 

>
> Brent
>  





Re: Thin Client

2013-03-07 Thread Craig Weinberg


On Thursday, March 7, 2013 5:45:14 PM UTC-5, Brent wrote:
>
>  On 3/7/2013 2:21 PM, Stephen P. King wrote:
>  
> On 3/7/2013 12:04 PM, Craig Weinberg wrote: 
>
> If you have ever worked with Terminal Servers, RDP, Citrix Metaframe, or 
> the like (and that's what I have been doing professionally every day for 
> the last 14 years), you will understand the idea of a Thin Client 
> architecture. Thin clients are as old as computing, and some of you 
> remember as I do, devices like acoustic couplers where you can attach a 
> telephone handset to a telephone cradle, so that the mouth ends of the 
> handset and the earpiece ends could squeal to each other. In this way, you 
> could, with nothing but a keyboard and a printer, use your telephone to 
> allow you access to a mainframe computer at some university. 
>
> The relevance here is that the client end is thin computationally. It 
> passes nothing but keystrokes and printer instructions back and forth as 
> acoustic codes. 
>
> This is what an mp3 file does as well. It passes nothing but binary 
> instructions that can be used by an audio device to vibrate. Without a 
> person's ear there to be vibrated, this entire event is described by linear 
> processes where one physical record is converted into another physical 
> record. Nothing is encoded or decoded, experienced or appreciated. There is 
> no sound. 
>
> Think about those old plastic headphones in elementary school that just 
> had hollow plastic tubes as connectors - a system like that generates sound 
> from the start, and the headphones are simply funnels for our ears. That's 
> a different thing from an electronic device which produces sound only in 
> the earbuds. 
>
> All of these discussions about semiotics, free will, consciousness, 
> AI...all come down to understanding the Thin Client. The Thin Client is 
> Searle's Chinese Room in actual fact. You can log into a massive server 
> from some mobile device and use it like a glove, but that doesn't mean that 
> the glove is intelligent. We know that we can transmit only mouseclicks and 
> keystrokes across the pipe and that it works without having to have some 
> sophisticated computing environment (i.e. qualia) get communicated. The 
> Thin Client exposes Comp as misguided because it shows that instructions 
> can indeed exist as purely instrumental forms and require none of the 
> semantic experiences which we enjoy. No matter how much you use the thin 
> client, it never needs to get any thicker. It's just a glove and a window. 
>
> -- 
>
> Hi Craig, 
>
> Excellent post! You have nailed computational immaterialism where it 
> really hurts. Computations cannot see, per the Turing neo-Platonists, any 
> hardware at all. This is their view of computational universality. But here 
> is the thing: it is the reason why they have a 'body problem'. For a 
> Platonistic Machine, there is no hardware or physical world at all. So, why 
> do I have the persistent illusion that I am in a body and interacting with 
> another computation via its body? 
>
> The physical delusion is the thin client, to use your words and 
> discussion. 
>
>  
> I'm fairly sure Bruno will point out that a delusion is a thought and so 
> is immaterial.  You have an immaterial experience of being in a body.
>
> But the analogy of the thin client is thin indeed.  In the example of the 
> Mars rover it corresponds to looking at a computer bus and saying, "See there 
> are just bits being transmitted over this wire, therefore this Mars rover 
> can't have qualia."  It's nothing-buttery spread thin. 
>

Why? What's your argument other than you don't like it? Of course the Mars 
rover has no qualia. The thin client metaphor is exactly why. All that are 
being transmitted are the sets of data that the software is trained to 
recognize. The rover could spit out a thin client mini-rover that is just a 
camera on wheels and the rover could steer it remotely. Would the 
mini-rover have qualia now too, as an eyeball on a wheel?
 

> Meantime the Mars rover and Watson continue to exhibit intelligence of the 
> same kind you would associate with qualia if exhibited by a human being, or 
> even by a dog.
>

That shouldn't be surprising. Mannequins resemble human bodies standing 
still remarkably well.
 

>   You have no argument, just wetware racism.
>

I'm the one laying out a carefully reasoned example. You are the one 
responding with empty accusations. It doesn't seem like my position is the 
one closer to racism.

Craig
 

>
> Brent
>  





Re: Thin Client

2013-03-07 Thread Craig Weinberg


On Thursday, March 7, 2013 5:21:48 PM UTC-5, Stephen Paul King wrote:
>
> On 3/7/2013 12:04 PM, Craig Weinberg wrote: 
> > If you have ever worked with Terminal Servers, RDP, Citrix Metaframe, 
> > or the like (and that's what I have been doing professionally every 
> > day for the last 14 years), you will understand the idea of a Thin 
> > Client architecture. Thin clients are as old as computing, and some of 
> > you remember as I do, devices like acoustic couplers where you can 
> > attach a telephone handset to a telephone cradle, so that the mouth 
> > ends of the handset and the earpiece ends could squeal to each other. 
> > In this way, you could, with nothing but a keyboard and a printer, use 
> > your telephone to allow you access to a mainframe computer at some 
> > university. 
> > 
> > The relevance here is that the client end is thin computationally. It 
> > passes nothing but keystrokes and printer instructions back and forth 
> > as acoustic codes. 
> > 
> > This is what an mp3 file does as well. It passes nothing but binary 
> > instructions that can be used by an audio device to vibrate. Without a 
> > person's ear there to be vibrated, this entire event is described by 
> > linear processes where one physical record is converted into another 
> > physical record. Nothing is encoded or decoded, experienced or 
> > appreciated. There is no sound. 
> > 
> > Think about those old plastic headphones in elementary school that 
> > just had hollow plastic tubes as connectors - a system like that 
> > generates sound from the start, and the headphones are simply funnels 
> > for our ears. That's a different thing from an electronic device which 
> > produces sound only in the earbuds. 
> > 
> > All of these discussions about semiotics, free will, consciousness, 
> > AI...all come down to understanding the Thin Client. The Thin Client 
> > is Searle's Chinese Room in actual fact. You can log into a massive 
> > server from some mobile device and use it like a glove, but that 
> > doesn't mean that the glove is intelligent. We know that we can 
> > transmit only mouseclicks and keystrokes across the pipe and that it 
> > works without having to have some sophisticated computing environment 
> > (i.e. qualia) get communicated. The Thin Client exposes Comp as 
> > misguided because it shows that instructions can indeed exist as 
> > purely instrumental forms and require none of the semantic experiences 
> > which we enjoy. No matter how much you use the thin client, it never 
> > needs to get any thicker. It's just a glove and a window. 
> > 
> > -- 
> Hi Craig, 
>
>  Excellent post! You have nailed computational immaterialism where 
> it really hurts. Computations cannot see, per the Turing neo-Platonists, 
> any hardware at all. This is their view of computational universality. 
> But here is the thing: it is the reason why they have a 'body problem'. 
> For a Platonistic Machine, there is no hardware or physical world at 
> all. So, why do I have the persistent illusion that I am in a body and 
> interacting with another computation via its body? 
>
>  The physical delusion is the thin client, to use your words and 
> discussion. 
>

Thanks Stephen!

Right, if we were just logging into accounts in Platonia, where does a body 
illusion come in handy?

Craig


> -- 
> Onward! 
>
> Stephen 
>
>
>





Re: Dartmouth neuroscientist finds free will has neural basis

2013-03-07 Thread meekerdb

On 3/7/2013 2:49 PM, Craig Weinberg wrote:


To act on itself, as far as I can understand it, would mean to be uncaused 
or truly
random, which is indeed incompatible with determinism. But why should that 
have
anything to do with "intentionality"?


What is intention if not acting on, or better 'through' yourself?


We use the word "intention" as distinct from acting.  One might intend to do X but be 
prevented or change one's mind.  So intention is having a plan of action with a positive 
feeling about it, a feeling of determination.  All of which is compatible with 
determinism.  The Mars rover probably has an intention to reach its next sampling point.


Brent





Re: Comp: Geometry Is A Zombie

2013-03-07 Thread meekerdb

On 3/7/2013 2:37 PM, Telmo Menezes wrote:

Telmo Menezes: look Watson, I have a problem. My wife is mad at me and
I don't know why. I suspect it's because I didn't buy her flowers for
Valentine's, but she keeps telling me that she doesn't want them. Can
you give me some advice? Also, how does it feel to be you? It must be
so weird to not have a body. Am I offending you?

Watson: What is marriage?


Brent





Re: Dartmouth neuroscientist finds free will has neural basis

2013-03-07 Thread Craig Weinberg


On Thursday, March 7, 2013 4:04:18 PM UTC-5, stathisp wrote:
>
>
>
> On 08/03/2013, at 2:43 AM, "Stephen P. King" 
> > 
> wrote: 
>
> >> Yes, we know that classical determinism is wrong, but it is not 
> logically inconsistent with consciousness. 
> > 
> > I must disagree. It is baked into the topology of classical 
> mechanics that a system cannot semantically act upon itself. There is no 
> way to define intentionality in classical physics. This is what Bruno 
> proves with his argument. 
>
> To act on itself, as far as I can understand it, would mean to be uncaused 
> or truly random, which is indeed incompatible with determinism. But why 
> should that have anything to do with "intentionality"? 
>

What is intention if not acting on, or better 'through' yourself?
 

>
> >> It is also not logically inconsistent with choice and free will, 
>  unless you define these terms as inconsistent with determinism, in which 
> case in a deterministic world we would have to create new words meaning 
> pseudo-choice and pseudo-free will to avoid misunderstanding, and then go 
> about our business as usual with this minor change to the language. 
> > 
> > So you say... 
>
> Which part do you disagree with? That people can define free will 
> differently? Or that people wouldn't care if they learned that under a 
> particular definition they lack free will?





Re: Thin Client

2013-03-07 Thread meekerdb

On 3/7/2013 2:21 PM, Stephen P. King wrote:

On 3/7/2013 12:04 PM, Craig Weinberg wrote:
If you have ever worked with Terminal Servers, RDP, Citrix Metaframe, or the like (and 
that's what I have been doing professionally every day for the last 14 years), you will 
understand the idea of a Thin Client architecture. Thin clients are as old as 
computing, and some of you remember as I do, devices like acoustic couplers where you 
can attach a telephone handset to a telephone cradle, so that the mouth ends of the 
handset and the earpiece ends could squeal to each other. In this way, you could, with 
nothing but a keyboard and a printer, use your telephone to allow you access to a 
mainframe computer at some university.


The relevance here is that the client end is thin computationally. It passes nothing 
but keystrokes and printer instructions back and forth as acoustic codes.


This is what an mp3 file does as well. It passes nothing but binary instructions that 
can be used by an audio device to vibrate. Without a person's ear there to be vibrated, 
this entire event is described by linear processes where one physical record is 
converted into another physical record. Nothing is encoded or decoded, experienced or 
appreciated. There is no sound.


Think about those old plastic headphones in elementary school that just had hollow 
plastic tubes as connectors - a system like that generates sound from the start, and 
the headphones are simply funnels for our ears. That's a different thing from an 
electronic device which produces sound only in the earbuds.


All of these discussions about semiotics, free will, consciousness, AI...all come down 
to understanding the Thin Client. The Thin Client is Searle's Chinese Room in actual 
fact. You can log into a massive server from some mobile device and use it like a 
glove, but that doesn't mean that the glove is intelligent. We know that we can 
transmit only mouseclicks and keystrokes across the pipe and that it works without 
having to have some sophisticated computing environment (i.e. qualia) get communicated. 
The Thin Client exposes Comp as misguided because it shows that instructions can indeed 
exist as purely instrumental forms and require none of the semantic experiences which 
we enjoy. No matter how much you use the thin client, it never needs to get any 
thicker. It's just a glove and a window.


--
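
(A tiny illustration of the mp3 point above, using the simpler uncompressed
WAV format - the file name and tone are arbitrary: what lands on disk is only
a run of integers, with no sound anywhere in it.)

    # The "mp3 point" in miniature: a sound file is nothing but numbers.
    # This writes one second of a 440 Hz sine wave as uncompressed WAV
    # (simpler than mp3, same moral). Filename and values are arbitrary.
    import math, wave, struct

    RATE = 44100                      # samples per second
    samples = [int(32767 * math.sin(2 * math.pi * 440 * n / RATE))
               for n in range(RATE)]  # one second of a 440 Hz tone

    with wave.open("tone.wav", "wb") as f:
        f.setnchannels(1)             # mono
        f.setsampwidth(2)             # 16-bit samples
        f.setframerate(RATE)
        f.writeframes(struct.pack("<" + "h" * len(samples), *samples))

    # On disk this is just a sequence of integers. Nothing vibrates, and nothing
    # is "heard", until a transducer and a listener are added to the story.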

Hi Craig,

Excellent post! You have nailed computational immaterialism where it really hurts. 
Computations cannot see, per the Turing neo-Platonists, any hardware at all. This is 
their view of computational universality. But here is the thing: it is the reason why 
they have a 'body problem'. For a Platonistic Machine, there is no hardware or physical 
world at all. So, why do I have the persistent illusion that I am in a body and 
interacting with another computation via its body?


The physical delusion is the thin client, to use your words and discussion.



I'm fairly sure Bruno will point out that a delusion is a thought and so is immaterial. 
You have an immaterial experience of being in a body.


But the analogy of the thin client is thin indeed.  In the example of the Mars rover it 
corresponds to looking at a computer bus and saying, "See there are just bits being 
transmitted over this wire, therefore this Mars rover can't have qualia."  It's 
nothing-buttery spread thin. Meantime the Mars rover and Watson continue to exhibit 
intelligence of the same kind you would associate with qualia if exhibited by a human 
being, or even by a dog.  You have no argument, just wetware racism.


Brent





Re: Dartmouth neuroscientist finds free will has neural basis

2013-03-07 Thread Craig Weinberg


On Thursday, March 7, 2013 4:15:21 PM UTC-5, stathisp wrote:
>
>
>
> On 08/03/2013, at 2:58 AM, Craig Weinberg > 
> wrote:
>
> I must disagree. It is baked into the topology of classical mechanics 
>> that a system cannot semantically act upon itself. There is no way to 
>> define intentionality in classical physics. This is what Bruno proves with 
>> his argument.
>>
>>
> Exactly Stephen. What are we talking about here? How is a deterministic 
> system that has preferences and makes choices and considers options 
> different from free will? If something can have a private preference which 
> cannot be determined from the outside, then it is determined privately, 
> i.e. the will of the private determiner. 
>
>
> As I said, it depends on how you define "free will".
>

How do you think it should be defined?
 

>
>  It is also not logically inconsistent with choice and free will,  unless 
>> you define these terms as inconsistent with determinism, in which case in a 
>> deterministic world we would have to create new words meaning pseudo-choice 
>> and pseudo-free will to avoid misunderstanding, and then go about our 
>> business as usual with this minor change to the language.
>>
>>
>> So you say...
>>
>
> Yeah, right. Why would a deterministic world need words having anything to 
> do with choice or free will? At what part of a computer program is 
> something like a choice made? Every position on the logic tree is connected 
> to every other by unambiguous prior cause or intentionally generated 
> (pseudo) randomness. It makes no choices, has no preferences, just follows 
> a sequence of instructions.
>
>
> In general, the existence of words for something does not mean it has an 
> actual referent; consider "fairy" or "God". 
>

It's not clear that 'actual' is an actual referent. 
 

> An adequate response to your position is that you're right - we don't 
> really have choices. Another response is that your definition of "choice" 
> is not the only possible one. 
>
 
Another response is "I concede."

Craig





Re: Comp: Geometry Is A Zombie

2013-03-07 Thread Telmo Menezes
> Alex Trebek: This tool can unclog a toilet.
> Watson: What is a plunger?

Telmo Menezes: look Watson, I have a problem. My wife is mad at me and
I don't know why. I suspect it's because I didn't buy her flowers for
Valentine's, but she keeps telling me that she doesn't want them. Can
you give me some advice? Also, how does it feel to be you? It must be
so weird to not have a body. Am I offending you?

Watson: [exercise left to the reader]

>> > Can Watson figure out how to unclog a toilet?
>
>
>  Watson can't unclog a toilet and neither can Stephen Hawking because both
> lack usable hands.

But Stephen Hawking can look at someone doing it and eventually figure
it out, and then instruct me to do exactly what he says and unclog the
toilet. Can Watson do that?

I'm not arguing that we cannot build machines that pass these tests,
but Watson is far, far, far from it.





Re: Comp: Geometry Is A Zombie

2013-03-07 Thread Craig Weinberg


On Thursday, March 7, 2013 4:16:40 PM UTC-5, John Clark wrote:
>
> On Wed, Mar 6, 2013 at 3:27 PM, Craig Weinberg  wrote:
>
>>> Everything simulated is physical ultimately, but the physical has no 
>>> signs of being a simulation, 
>>>
>>> >> Maybe, but I'm not sure what sort of sign you're talking about and 
>>> some have said only half joking that Black Holes, particularly the 
>>> singularity at the center of them, is where God tried to divide by zero. 
>>> And others have said that the quantum nature of reality when things become 
>>> very small reminds them of getting too close to a video screen and seeing 
>>> the individual pixels  
>>>
>>
>> > I was thinking more of the absence of some counterfactual such as 
>> someone emulating a computer which runs faster than the physical host
>>
>
> It would be easy to make a (electronic) computer animation of a Turing 
> Machine that runs faster than anything you could make with a real paper 
> tape. And in just a few days of running time computers can tell astronomers 
> what the Galaxy will look like in a billion years, but it will take the 
> Galaxy a billion years to figure out what it should look like in a billion 
> years.
>

But you can't make an electronic computer animation of anything faster than 
the clock of the computer actually running it.
 

>  
>
>> >> Even today a computer could generate a high resolution 3D image of 
>>> Bryce Canyon where you couldn't be sure if you were looking at a video 
>>> screen or looking through a window.
>>>
>>
>> > Not talking about windows - I'm talking about full embodied presence. 
>> If you talk about windows and images, then you are only talking about 
>> visual sense, which is only one aspect of reality. Making something that is 
>> visually similar to something is easy if you take a photo and digitize it. 
>> It's not much of a simulation either, since the computer isn't generating 
>> the image, just copying it.
>>
>
> Almost 20 years ago I had a program on my home computer (coincidentally I 
> think it was even called "Bryce" after the Canyon), it used fractals to 
> randomly generate landscapes of beautiful lakes and towering mountains; it 
> wasn't quite of photographic quality but it was very good, like a fine 
> painting, and each time you hit the redraw button it would make a new one 
> and you could be sure you were the first human being to see that particular 
> image. I don't have a modern landscape program but I have no doubt they are 
> astronomically better.  
>

You can still tell the difference (just Google 3D Rendering) and you can 
certainly tell the difference when you try to walk inside your computer 
screen.


> >> And if events prove you wrong will that change your worldview? No of 
>>> course it will not because a belief not based on logic can not be destroyed 
>>> by it nor will contradictory evidence change it in any way.
>>>
>>
>> > Why is that a question? 
>>
>
> Because I'm interested if there is any possibility that new evidence would 
> change your views or are they set in concrete with a vow never to change 
> them one iota no matter what.  
>

Haha, who would vow never to change their views? I welcome the chance to 
change my views, all it requires is that I can see some new counterfactual 
to my existing views or a new view that makes more sense.
 

>
> >> And thus using Weinbergian logic if changing X always changes Y and 
>>> changing Y always changes X that proves that X and Y have nothing to do 
>>> with each other. 
>>>
>>
>> > It proves that we can infer they are correlated. If Rush Hour always 
>> happens around sunset, does that mean that we can make the Sun go down by 
>> causing a traffic jam?
>>
>
> By simple logic the answer has to be yes if the following conditions are 
> met. If whenever a traffic jam happens the sun goes down and whenever the 
> sun goes down a traffic jam happens and there has never been a single 
> recorded instance of this not happening then the sun going down and traffic 
> jams are inextricably linked together.  
>

But you can see that's a fallacy just by understanding that obviously we 
cannot cause the Sun to go down by making a traffic jam. Your logic is 
wrong by any measure. It doesn't matter if every traffic jam and every 
sunset are one to one correlates as far as we have seen - maybe that only 
has been happening for a few thousand years but now, like the cicadas, it 
is in a new cycle that we have not seen.

Have you ever heard that old job interview puzzle about the guy whose car 
won't start every time he eats vanilla ice cream?

It's the middle of a hot summer and the guy goes every day to the ice cream 
store and if he gets the peanut butter crunch flavor, then his car starts 
fine, but if he gets the vanilla, his car won't start. It happens every 
time.

The solution is that the peanut butter flavor is on the far end of the 
counter, and it takes the scooper longer to scoop the ice cream so the car 
engine has time to cool down enough to start again.
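
(A toy sketch of that hidden-common-cause pattern, with invented numbers:
flavor and starting line up perfectly, yet neither causes the other - the
waiting time does all the work.)

    # Hidden common cause, toy version: flavor and "car starts" agree perfectly,
    # but only because both depend on how long the engine sat cooling.
    # All numbers are invented for illustration.
    import random

    def minutes_at_counter(flavor):
        # Peanut butter crunch is at the far end of the counter, so it takes longer.
        return 9 if flavor == "peanut butter" else 2

    def car_starts(minutes_parked):
        # The hot engine needs a few minutes before it will turn over again.
        return minutes_parked >= 5

    random.seed(0)
    for _ in range(5):
        flavor = random.choice(["vanilla", "peanut butter"])
        result = "starts" if car_starts(minutes_at_counter(flavor)) else "won't start"
        print(flavor, "->", result)

    # The correlation is perfect, yet swapping the tubs around would break it
    # instantly: the waiting time, not the flavor, is doing the causal work.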

Re: Thin Client

2013-03-07 Thread Stephen P. King

On 3/7/2013 12:04 PM, Craig Weinberg wrote:
If you have ever worked with Terminal Servers, RDP, Citrix Metaframe, 
or the like (and that's what I have been doing professionally every 
day for the last 14 years), you will understand the idea of a Thin 
Client architecture. Thin clients are as old as computing, and some of 
you remember as I do, devices like acoustic couplers where you can 
attach a telephone handset to a telephone cradle, so that the mouth 
ends of the handset and the earpiece ends could squeal to each other. 
In this way, you could, with nothing but a keyboard and a printer, use 
your telephone to allow you access to a mainframe computer at some 
university.


The relevance here is that the client end is thin computationally. It 
passes nothing but keystrokes and printer instructions back and forth 
as acoustic codes.


This is what an mp3 file does as well. It passes nothing but binary 
instructions that can be used by an audio device to vibrate. Without a 
person's ear there to be vibrated, this entire event is described by 
linear processes where one physical record is converted into another 
physical record. Nothing is encoded or decoded, experienced or 
appreciated. There is no sound.


Think about those old plastic headphones in elementary school that 
just had hollow plastic tubes as connectors - a system like that 
generates sound from the start, and the headphones are simply funnels 
for our ears. That's a different thing from an electronic device which 
produces sound only in the earbuds.


All of these discussions about semiotics, free will, consciousness, 
AI...all come down to understanding the Thin Client. The Thin Client 
is Searle's Chinese Room in actual fact. You can log into a massive 
server from some mobile device and use it like a glove, but that 
doesn't mean that the glove is intelligent. We know that we can 
transmit only mouseclicks and keystrokes across the pipe and that it 
works without having to have some sophisticated computing environment 
(i.e. qualia) get communicated. The Thin Client exposes Comp as 
misguided because it shows that instructions can indeed exist as 
purely instrumental forms and require none of the semantic experiences 
which we enjoy. No matter how much you use the thin client, it never 
needs to get any thicker. It's just a glove and a window.


--
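
(A minimal sketch of the thin-client loop described above, assuming nothing
but a toy line-based protocol on a made-up local port - not RDP, Citrix, or
any real product. The client only forwards "keystrokes" and prints whatever
text comes back; all of the actual computation stays on the server.)

    # Minimal thin client sketch: keystrokes go up the wire, rendered text comes
    # back down. The 'server' just upper-cases lines, standing in for whatever
    # heavy computation the real host would do. Toy protocol, hypothetical port.
    import socket, threading

    HOST, PORT = "127.0.0.1", 5050            # made-up address for this sketch
    srv = socket.create_server((HOST, PORT))  # bind first so the client can't race it

    def serve():
        conn, _ = srv.accept()
        with conn, conn.makefile("rw") as stream:
            for line in stream:               # "keystrokes" arriving from the client
                stream.write(line.upper())    # all real computation happens here
                stream.flush()

    threading.Thread(target=serve, daemon=True).start()

    # The client end is thin: it keeps no state and does no work of its own.
    with socket.create_connection((HOST, PORT)) as sock, sock.makefile("rw") as stream:
        for keystrokes in ["hello mainframe", "run report"]:
            stream.write(keystrokes + "\n")   # send nothing but input events
            stream.flush()
            print(stream.readline(), end="")  # display whatever the host sends back
    srv.close()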

Hi Craig,

Excellent post! You have nailed computational immaterialism where 
it really hurts. Computations cannot see, per the Turing neo-Platonists, 
any hardware at all. This is their view of computational universality. 
But here is the thing: it is the reason why they have a 'body problem'. 
For a Platonistic Machine, there is no hardware or physical world at 
all. So, why do I have the persistent illusion that I am in a body and 
interacting with another computation via its body?


The physical delusion is the thin client, to use your words and 
discussion.


--
Onward!

Stephen






Re: Cats fall for illusions too

2013-03-07 Thread Stephen P. King

On 3/7/2013 11:36 AM, Terren Suydam wrote:
I have no doubt that Craig will somehow see this as a vindication of 
his theory and a refutation of mechanism.


Terren


On Wed, Mar 6, 2013 at 5:27 PM, Stephen P. King > wrote:


https://www.youtube.com/watch?feature=player_embedded&v=CcXXQ6GCUb8

--



 Hi Terren,

   How does Mechanism explain this? Will /The Amazing Randy/ be pushed 
forward to loudly claim that the cat was really chasing a laser dot that 
the video camera could not capture?


--
Onward!

Stephen





Re: Dartmouth neuroscientist finds free will has neural basis

2013-03-07 Thread Russell Standish
On Thu, Mar 07, 2013 at 01:54:25PM -0800, meekerdb wrote:
> On 3/7/2013 1:50 PM, Russell Standish wrote:
> >>I see what you mean, but some could argue that when you use a random
> >>>device (like a coin) to make a decision, you abandon free will.
> >>>Indeed you let a coin decide for you, when free will meant more that
> >>>you are the one making the free decision.
> >>>
> >That hinges on the self-other distinction. A random coin toss is not
> >considered free will, as you are subsuming your will to an external
> >agent (the coin). But when you make a decision due to a random firing
> >of a neuron (random because the synaptic junctions are
> >thermodynamically noisy), then that is _you_ making the decision, it
> >is _your_ free will.
> >
> >
> 
> Or you can take a more expansive view of yourself and note that it
> was YOU who decided to use the coin flip and to act on it.
> 

Agreed. I was under the impression that Bruno was not doing so in this
case :).

> As Dennett says, you can avoid all responsibility if you only make yourself 
> small enough.
> 

Indeed! My synapses made me do it, your honour.


-- 


Prof Russell Standish  Phone 0425 253119 (mobile)
Principal, High Performance Coders
Visiting Professor of Mathematics  hpco...@hpcoders.com.au
University of New South Wales  http://www.hpcoders.com.au






Re: Dartmouth neuroscientist finds free will has neural basis

2013-03-07 Thread meekerdb

On 3/7/2013 1:50 PM, Russell Standish wrote:

I see what you mean, but some could argue that when you use a random
>device (like a coin) to make a decision, you abandon free will.
>Indeed you let a coin decide for you, when free will meant more that
>you are the one making the free decision.
>

That hinges on the self-other distinction. A random coin toss is not
considered free will, as you are subsuming your will to an external
agent (the coin). But when you make a decision due to a random firing
of a neuron (random because the synaptic junctions are
thermodynamically noisy), then that is _you_ making the decision, it
is _your_ free will.




Or you can take a more expansive view of yourself and note that it was YOU who decided to 
use the coin flip and to act on it.


As Dennett says, you can avoid all responsibility if you only make yourself 
small enough.

Brent





Re: Dartmouth neuroscientist finds free will has neural basis

2013-03-07 Thread Russell Standish
On Tue, Mar 05, 2013 at 03:53:13PM +0100, Bruno Marchal wrote:
> 
> On 04 Mar 2013, at 20:16, meekerdb wrote:
> 
> >On 3/4/2013 4:23 AM, Bruno Marchal wrote:
> >>
> >>On 03 Mar 2013, at 20:35, meekerdb wrote:
> >
> >Some randomness can be useful, if only to solve the problem of
> >Buridan's ass.
> 
> I see what you mean, but some could argue that when you use a random
> device (like a coin) to make a decision, you abandon free will.
> Indeed you let a coin decide for you, when free will meant more that
> you are the one making the free decision.
> 

That hinges on the self-other distinction. A random coin toss is not
considered free will, as you are subsuming your will to an external
agent (the coin). But when you make a decision due to a random firing
of a neuron (random because the synaptic junctions are
thermodynamically noisy), then that is _you_ making the decision, it
is _your_ free will.


> 
> 
> 
> >But effective randomness is easy to come by in the complex
> >environment of life.
> >
> >>On the contrary, deterministic free will make sense, because
> >>free will comes from a lack of self-determinacy, implying
> >>hesitation in front of different path, and self-indeterminacy
> >>follows logically from determinism and self-reference.
> >>
> >>First person indeterminacy can be used easily to convince
> >>oneself that indeterminacy cannot help for free will. Iterating
> >>a self-duplication can't provide free-will.

Why? That particular thought experiment proves that indeterminacy is
a fundamental feature of subjective life. Why shouldn't that be the
source of the indeterminism for solving Buridan ass type problems?
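
(A trivial sketch of the Buridan's ass point, with invented values: a strict
"pick the best" rule has no unique answer on an exact tie, while a little
internal noise settles it.)

    # Buridan's ass, toy version: two equally attractive bales of hay.
    # Values are invented for illustration.
    import random

    options = {"left bale": 1.0, "right bale": 1.0}   # exactly tied

    def deterministic_choice(opts):
        best = max(opts.values())
        winners = [name for name, v in opts.items() if v == best]
        return winners[0] if len(winners) == 1 else None   # None = stuck

    def noisy_choice(opts, jitter=1e-6):
        # A pinch of internal noise breaks the tie.
        return max(opts, key=lambda name: opts[name] + random.uniform(-jitter, jitter))

    print(deterministic_choice(options))   # None: the tie is never broken
    print(noisy_choice(options))           # some bale, chosen by internal noise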

> >
> >As Dennett says deterministic free will is the only free will
> >worth having.
> 
> I agree with him on that. My point above illustrates that. Random
> choices are not really "free" choices.
> 

Whereas, I don't really know what "deterministic free will" even
means. Probably a definitional thing.

> 
> >Why would anyone want to make decisions that were not determined
> >by their learning and memories and values.
> 

Of course, but that has nothing to do with free will :) Free will is
the ability to do something stupid, the ability to make decisions that
are not determined by learning, memories and values.


-- 


Prof Russell Standish  Phone 0425 253119 (mobile)
Principal, High Performance Coders
Visiting Professor of Mathematics  hpco...@hpcoders.com.au
University of New South Wales  http://www.hpcoders.com.au






Re: Comp: Geometry Is A Zombie

2013-03-07 Thread John Clark
On Wed, Mar 6, 2013 at 3:27 PM, Craig Weinberg wrote:

 >>> Everything simulated is physical ultimately, but the physical has no
>>> signs of being a simulation,
>>>
>>
>> >> Maybe, but I'm not sure what sort of sign you're talking about and
>> some have said only half joking that Black Holes, particularly the
>> singularity at the center of them, is where God tried to divide by zero.
>> And others have said that the quantum nature of reality when things become
>> very small reminds them of getting too close to a video screen and seeing
>> the individual pixels
>>
>
> > I was thinking more of the absence of some counterfactual such as
> someone emulating a computer which runs faster than the physical host
>

It would be easy to make a (electronic) computer animation of a Turing
Machine that runs faster than anything you could make with a real paper
tape. And in just a few days of running time computers can tell astronomers
what the Galaxy will look like in a billion years, but it will take the
Galaxy a billion years to figure out what it should look like in a billion
years.


> >> Even today a computer could generate a high resolution 3D image of
>> Bryce Canyon where you couldn't be sure if you were looking at a video
>> screen or looking through a window.
>>
>
> > Not talking about windows - I'm talking about full embodied presence. If
> you talk about windows and images, then you are only talking about visual
> sense, which is only one aspect of reality. Making something that is
> visually similar to something is easy if you take a photo and digitize it.
> It's not much of a simulation either, since the computer isn't generating
> the image, just copying it.
>

Almost 20 years ago I had a program on my home computer (coincidentally I
think it was even called "Bryce" after the Canyon), it used fractals to
randomly generate landscapes of beautiful lakes and towering mountains; it
wasn't quite of photographic quality but it was very good, like a fine
painting, and each time you hit the redraw button it would make a new one
and you could be sure you were the first human being to see that particular
image. I don't have a modern landscape program but I have no doubt they are
astronomically better.
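
(For concreteness, a minimal sketch of the general technique - 1-D midpoint
displacement, a generic fractal-terrain method; no claim that this is what
that particular program actually did.)

    # Minimal fractal terrain sketch: 1-D midpoint displacement. Each call to
    # redraw() gives a new random mountain profile, crudely printed as text.
    # Generic technique only; not the algorithm of any particular product.
    import random

    def midpoint_displace(levels=5, roughness=0.6):
        heights = [0.0, 0.0]                      # endpoints of the profile
        spread = 1.0
        for _ in range(levels):
            new = []
            for a, b in zip(heights, heights[1:]):
                mid = (a + b) / 2 + random.uniform(-spread, spread)
                new += [a, mid]
            new.append(heights[-1])
            heights = new
            spread *= roughness                   # finer detail, smaller bumps
        return heights

    def redraw(width=40):
        profile = midpoint_displace()
        lo, hi = min(profile), max(profile)
        for h in profile:
            bars = int((h - lo) / (hi - lo + 1e-9) * width)
            print("#" * bars)

    redraw()   # run it again for a brand-new, never-before-seen landscape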

>> And if events prove you wrong will that change your worldview? No of
>> course it will not because a belief not based on logic can not be destroyed
>> by it nor will contradictory evidence change it in any way.
>>
>
> > Why is that a question?
>

Because I'm interested if there is any possibility that new evidence would
change your views or are they set in concrete with a vow never to change
them one iota no matter what.

>> And thus using Weinbergian logic if changing X always changes Y and
>> changing Y always changes X that proves that X and Y have nothing to do
>> with each other.
>>
>
> > It proves that we can infer they are correlated. If Rush Hour always
> happens around sunset, does that mean that we can make the Sun go down by
> causing a traffic jam?
>

By simple logic the answer has to be yes if the following conditions are
met. If whenever a traffic jam happens the sun goes down and whenever the
sun goes down a traffic jam happens and there has never been a single
recorded instance of this not happening then the sun going down and traffic
jams are inextricably linked together.  And we know that whenever there is
a change in brain chemistry there is ALWAYS a change in consciousness and
whenever there is a change in consciousness there is ALWAYS a change in
brain chemistry, so consciousness and chemistry are also inextricably
linked together.

> I am saying that chemicals and molecules already are consciousness
>

Saying that everything is conscious is equivalent to saying nothing is
conscious and the word becomes useless. Meaning needs contrast.


>  > and that the effects that they cause and the causes which sometimes
> effect them, are human qualities of consciousness.
>

Computers are made of atoms and molecules just like humans are.

> You are only able to see your assumption that chemicals and molecules
> cause an effect which seems like consciousness.
>

So now I'm not conscious I just have something " which seems like
consciousness".


>  > if I change what I decide then my brain will change.
>

If you change your mind, that is to say if your brain changes what it is
doing, then your brain chemistry changes. And if your brain chemistry
changes then you change your mind. Get it?

>

> > The brain doesn't always lead the mind - the mind can also lead and the
> brain will follow - they aren't different things.
>

The mind and the brain are very different things, one is a noun and the
other is what that noun does. The brain and the mind are as different as
"racing car" is different from "fast".

>> How do you explain that physics can control "I" ?
>>
>
> > Easily. Physics is sense. Sub-personal, impersonal, personal, and
> super-personal. The personal range is the "I" territory and it is
> influenc

Re: Dartmouth neuroscientist finds free will has neural basis

2013-03-07 Thread Stathis Papaioannou


On 08/03/2013, at 2:58 AM, Craig Weinberg  wrote:

>> I must disagree. It is baked into the topology of classical mechanics 
>> that a system cannot semantically act upon itself. There is no way to define 
>> intentionality in classical physics. This is what Bruno proves with his 
>> argument.
> 
> Exactly Stephen. What are we talking about here? How is a deterministic 
> system that has preferences and makes choices and considers options different 
> from free will? If something can have a private preference which cannot be 
> determined from the outside, then it is determined privately, i.e. the will 
> of the private determiner. 

As I said, it depends on how you define "free will".

>>> It is also not logically inconsistent with choice and free will,  unless 
>>> you define these terms as inconsistent with determinism, in which case in a 
>>> deterministic world we would have to create new words meaning pseudo-choice 
>>> and pseudo-free will to avoid misunderstanding, and then go about our 
>>> business as usual with this minor change to the language.
>> 
>> So you say...
> 
> Yeah, right. Why would a deterministic world need words having anything to do 
> with choice or free will? At what part of a computer program is something 
> like a choice made? Every position on the logic tree is connected to every 
> other by unambiguous prior cause or intentionally generated (pseudo) 
> randomness. It makes no choices, has no preferences, just follows a sequence 
> of instructions.

In general, the existence of words for something does not mean it has an actual 
referent; consider "fairy" or "God". An adequate response to your position is 
that you're right - we don't really have choices. Another response is that your 
definition of "choice" is not the only possible one.





Re: Dartmouth neuroscientist finds free will has neural basis

2013-03-07 Thread Stathis Papaioannou


On 08/03/2013, at 2:43 AM, "Stephen P. King"  wrote:

>> Yes, we know that classical determinism is wrong, but it is not logically 
>> inconsistent with consciousness.
> 
> I must disagree. It is baked into the topology of classical mechanics 
> that a system cannot semantically act upon itself. There is no way to define 
> intentionality in classical physics. This is what Bruno proves with his 
> argument.

To act on itself, as far as I can understand it, would mean to be uncaused or 
truly random, which is indeed incompatible with determinism. But why should 
that have anything to do with "intentionality"?

>> It is also not logically inconsistent with choice and free will,  unless you 
>> define these terms as inconsistent with determinism, in which case in a 
>> deterministic world we would have to create new words meaning pseudo-choice 
>> and pseudo-free will to avoid misunderstanding, and then go about our 
>> business as usual with this minor change to the language.
> 
> So you say...

Which part do you disagree with? That people can define free will differently? 
Or that people wouldn't care if they learned that under a particular definition 
they lack free will?





Re: Messages Aren't Made of Information

2013-03-07 Thread Craig Weinberg


On Thursday, March 7, 2013 12:56:01 PM UTC-5, William R. Buckley wrote:
>
>  
>
> The context takes all action, to include the action 
>
> of doing nothing at all.
>
>  
>
> Once the signal is given by the transmitter, then sure it is up to the 
> receiver of the signal to interpret it. How the transmitter formats the 
> signal will influence the receiver's reception and interpretation 
> possibilities though.
>  
>
> How the transmitter formats signal, what sign the transmitter sends will 
> influence the 
>
> receiver’s reception but only to the extent that the transmitted signal 
> corresponds to 
>
> a priori defined acceptance criteria in the receiver.  These criteria are 
> not under the influence 
>
> of the transmitter.
>

Only the initial criteria. The signal can be "switch to 88.9 MHz and use 
Morse code" or "the IP address to use for future signals is the number of 
my favorite basketball player followed by .15.129.99".
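
(A small sketch of that point, using an entirely made-up toy protocol: the
receiver starts with a priori acceptance criteria, but one accepted control
message is allowed to rewrite the criteria applied to everything after it.)

    # Toy sketch: a receiver whose acceptance criteria can be rewritten by an
    # in-band control message. Protocol and prefixes are invented for illustration.

    class Receiver:
        def __init__(self):
            self.accepted_prefix = "PLAIN:"          # the a priori criterion

        def handle(self, signal):
            if signal.startswith("SWITCH:"):         # control message updates criteria
                self.accepted_prefix = signal.split(":", 1)[1]
                return "criteria updated to " + repr(self.accepted_prefix)
            if signal.startswith(self.accepted_prefix):
                return "accepted: " + signal[len(self.accepted_prefix):]
            return "rejected"

    rx = Receiver()
    for msg in ["PLAIN:hello", "MORSE:.... ..", "SWITCH:MORSE:", "MORSE:.... .."]:
        print(msg, "->", rx.handle(msg))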

 

>  
>
> wrb 
>





RE: Messages Aren't Made of Information

2013-03-07 Thread William R. Buckley
 

The context takes all action, to include the action 

of doing nothing at all.

 

Once the signal is given by the transmitter, then sure it is up to the
receiver of the signal to interpret it. How the transmitter formats the
signal will influence the receiver's reception and interpretation
possibilities though.
 

How the transmitter formats signal, what sign the transmitter sends will
influence the 

receiver's reception but only to the extent that the transmitted signal
corresponds to 

a priori defined acceptance criteria in the receiver.  These criteria are not
under the influence 

of the transmitter.

 

wrb 





Re: Messages Aren't Made of Information

2013-03-07 Thread Craig Weinberg


On Thursday, March 7, 2013 12:37:35 PM UTC-5, William R. Buckley wrote:
>
> A machine can accept sign and yield alteration of 
>
> its configuration (add to its parts, delete from its 
>
> parts but most of all alter the complexity of its 
>
> parts and their arrangement) such that the machine 
>
> develops its ability to:
>
>  
>
> 1.   accept sign – one yield you did not consider
>

What does this mean in terms of the buckets?
 

> 2.   increase the complexity of constructs – another 
>
> yield you did not consider
>

Again, complexity is our value, not the machine's. The bucket brigade 
doesn't know if the patterns form the Gettysburg Address or if the buckets 
are just all half full.
 

> 3.   acquire Turing competence from incompetence – a 
>
> third yield you did not consider
>

We don't need to consider any of the yields which are sought by the user of 
the machine, only those which yield something to the machine itself - of which 
I don't think there are any. To the machine, it makes no difference whether it is 
running the same meaningless exercise for 10,000 years or if it is 
communicating with an alien civilization for the first time. It doesn't 
care whether it is running or not.

Craig
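
(For what it's worth, the bucket brigade reduced to its bare form is just
this: a transition table walked over a tape of full and empty cells. A toy
machine invented here for illustration - it merely fills the first empty
bucket after a run of full ones - and it neither knows nor cares what the
cells are made of.)

    # A Turing machine reduced to what it is: a transition table walked over a
    # tape. Whether the cells are bits or buckets of water makes no difference
    # to the trace. Toy machine: move right over full buckets ('1') and fill
    # the first empty one ('0') -- unary increment.

    TABLE = {
        ("scan", "1"): ("1", +1, "scan"),   # keep sliding right over full buckets
        ("scan", "0"): ("1", +1, "halt"),   # fill the first empty bucket and stop
    }

    def run(tape, state="scan", head=0):
        tape = list(tape)
        while state != "halt":
            if head == len(tape):
                tape.append("0")            # an always-available empty bucket
            write, move, state = TABLE[(state, tape[head])]
            tape[head] = write
            head += move
        return "".join(tape)

    print(run("111"))   # '1111' -- three full buckets become four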
 

>  
>
> wrb
>
>  
>
> *From:* everyth...@googlegroups.com  [mailto:
> everyth...@googlegroups.com ] *On Behalf Of *Craig Weinberg
> *Sent:* Thursday, March 07, 2013 8:33 AM
> *To:* everyth...@googlegroups.com 
> *Subject:* Re: Messages Aren't Made of Information
>
>  
>
>
>
> On Thursday, March 7, 2013 1:39:25 AM UTC-5, William R. Buckley wrote:
>
> I have before claimed that the computer is 
> a good example of the power of semiosis. 
>
> It is simple enough to see that the mere 
> construction of a Turing machine confers 
> upon that machine the ability to recognise 
> all computations; to generate the yield of 
> such computations. 
>
> In this sense, a program (the source code) 
> is a sequence of signs that upon acceptance 
> brings the machine to generate some 
> corresponding yield; a computation. 
>
> Also, the intention of an entity behind sign 
> origination has nothing whatsoever to do with 
> the acceptability of that sign by some other 
> entity, much less the meaning there taken for 
> the sign. 
>
> The meaning of a sign is always centered upon 
> the acceptor of that sign. 
>
>
> I agree but I don't think the machine can accept any sign. It can copy 
> them and perform scripted transformations on them, but ultimately there is 
> no yield at all. The Turing machine does not know that it has yielded a 
> result of a computation, any more than a bucket of water knows when it is 
> being emptied. In fact, you could make a Turing machine out of nothing but 
> buckets of water on pulleys and it would literally be some pattern of 
> filled buckets which is supposed to be meaningful as a sign or yield to the 
> 'machine' (collection of buckets? water molecules? convection currents? 
> general buckety-watery-movingness?)
>
> Craig
>
>  
>
>
> wrb 
>
>





Re: Messages Aren't Made of Information

2013-03-07 Thread Craig Weinberg


On Thursday, March 7, 2013 12:32:21 PM UTC-5, William R. Buckley wrote:
>
> The sign is what it is and contexts react to signs.
>

What is it though?

This sentence... is it words? Letters? Pixels on a screen? Images in our 
visual experience? photons? All of these require detection and 
interpretation.

The sign or text is just a perturbation of a given context.

 
>
> The other words you use in your argumentation 
>
> are unnecessary at the very least, and I think they 
>
> lead to muddled thinking on your end.
>

No, in my experience they lead to perfect clarity.
 

>  
>
> The sign takes no action; it simply is.
>

It takes no action, but nothing simply is. A sign is an experience which is 
interpreted as linking one experience to another - nothing more. It has no 
independent existence. 
 

>  
>
> The context takes all action, to include the action 
>
> of doing nothing at all.
>
>  
>
Once the signal is given by the transmitter, then sure it is up to the 
receiver of the signal to interpret it. How the transmitter formats the 
signal will influence the receiver's reception and interpretation 
possibilities though.
 

> Meaning is no more nor no less than the action 
>
> taken by the context.
>

Not sure I get what you mean. A signal can still be meaningful even if you 
never take action on it. Your favorite baseball hero says hi to you and you 
remember it as meaningful. What does that have to do with any action taken 
or not taken?

 
>
> The sign does not have some magical character 
>
> called **sensitivity to detectability**
>

I agree, the sign is a figurative entity. It has no physical presence or 
capacities.
 

>  
>
> Semiotics has nothing to do with Shannon’s
>
> information transmission problem.  The reason 
>
> for this is that Shannon assumes that both 
>
> transmitter and receiver share a common 
>
> context.  You, on the other hand, don’t have 
>
> that luxury.
>

It makes sense to assume a common context if you are designing a 
communications system. I don't have an opinion on whether Shannon and 
semiotics are unrelated. Depends how you want to consider them.

Craig
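
A minimal Python sketch may make the "common context" point concrete; the two codebooks below are invented purely for illustration and are not anything from Shannon or from this thread:

# Toy illustration of Shannon's "common context" as a shared codebook.
# The symbols and both codebooks are made up for the example.

def encode(message, codebook):
    # the transmitter maps each symbol to a code using its codebook
    return [codebook[symbol] for symbol in message]

def decode(codes, codebook):
    # the receiver maps codes back to symbols using *its own* codebook
    reverse = {code: symbol for symbol, code in codebook.items()}
    return [reverse[c] for c in codes]

shared    = {'yes': '0', 'no': '1'}    # context both ends hold a priori
different = {'stop': '0', 'go': '1'}   # a receiver holding another context

signal = encode(['yes', 'no', 'yes'], shared)

print(decode(signal, shared))      # ['yes', 'no', 'yes'] - meaning recovered
print(decode(signal, different))   # ['stop', 'go', 'stop'] - same codes, other meaning

The codes on the wire are identical in both cases; everything that counts as "the meaning" is supplied by the codebook the receiver already holds, which is the shared context Buckley says Shannon gets for free and semiotics does not.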
 

>  
>
> wrb
>
>  
>
> *From:* everyth...@googlegroups.com  [mailto:
> everyth...@googlegroups.com ] *On Behalf Of *Craig Weinberg
> *Sent:* Thursday, March 07, 2013 8:17 AM
> *To:* everyth...@googlegroups.com 
> *Subject:* Re: Messages Aren't Made of Information
>
>  
>
>
>
> On Wednesday, March 6, 2013 12:09:28 PM UTC-5, William R. Buckley wrote:
>
> Now we are getting some place.
>
>  
>
> Exactly.  There is simply action.
>
>  
>
> Contexts react to sign.
>
>
> They react to their interpretations of a sign. The sign itself is a figure 
> - a disposable form hijacked by the intention of the transmitter. The sign 
> depends on sensitivities to be detected. When it is detected, it is not 
> detected as the sign intended by the transmitter unless the semiosis is 
> well executed, which is up to both the transmitter and receiver's 
> intentional and unintentional contributions.
>
> Craig
>  
>
>  
>
> Nothing more.  Nothing less.
>
>  
>
> The complexity of action is open ended.
>
>  
>
> wrb
>
>  
>
> *From:* everyth...@googlegroups.com [mailto:everyth...@googlegroups.com] *On 
> Behalf Of *Craig Weinberg
> *Sent:* Wednesday, March 06, 2013 4:12 AM
> *To:* everyth...@googlegroups.com
> *Subject:* Re: Messages Aren't Made of Information
>
>  
>
>
>
> On Tuesday, March 5, 2013 5:48:19 PM UTC-5, William R. Buckley wrote:
>
> Craig:
>
>  
>
> The mistake you make is clearly stated in your words:
>
>  
>
> “…doesn’t mean that they communicated with judgment.”
>
>  
>
> You are anthropomorphizing.  The value is no more nor no 
>
> less than the action taken upon signal acceptance.
>
>
> That's ok, but it means there is no value. There is simply action.
>
> Craig
>  
>
>  
>
> wrb
>
>  
>
> *From:* everyth...@googlegroups.com [mailto:everyth...@googlegroups.com] *On 
> Behalf Of *Craig Weinberg
> *Sent:* Tuesday, March 05, 2013 1:27 PM
> *To:* everyth...@googlegroups.com
> *Subject:* Re: Messages Aren't Made of Information
>
>  
>
>
>
> On Tuesday, March 5, 2013 3:07:00 PM UTC-5, William R. Buckley wrote:
>
> The fact that a machine can act in a discriminatory manner based 
>
> upon some signal (sign, information) input is demonstration 
>
> of value judgment.
>
>
> Only in our eyes, not in its own eyes. It's like telling a kid to say some 
> insult to someone in another language. The fact they are able to carry out 
> your instruction doesn't mean that they communicated with judgment.
>  
>
>  
>
> Just as there is no **in** in a machine, so too there is no **in** 
>
> in a biological organism; they both, machine and organism, 
>
>
> But there is an 'in' with respect to the experience of an organism - only 
> because we know it first hand. There would seem to be no reason why a 
> machine couldn't have a similar 'in', but it actually seems that their 
> nature indicates they do not. I take the extra step and hypothesize exactly 
> wh

RE: Messages Aren't Made of Information

2013-03-07 Thread William R. Buckley
Right there, that is the problem: your reliance upon consciousness 

for your argumentation.

 

wrb

 

From: everything-list@googlegroups.com
[mailto:everything-list@googlegroups.com] On Behalf Of Craig Weinberg
Sent: Thursday, March 07, 2013 9:34 AM
To: everything-list@googlegroups.com
Subject: Re: Messages Aren't Made of Information

 



On Thursday, March 7, 2013 12:21:57 PM UTC-5, William R. Buckley wrote:

Craig:

 

When you say that "interpretation is consciousness" you contradict 

your prior statements regarding semiosis, that acceptance and action 

are not value.


I'm not sure what you're getting at. Acceptance in the sense of receiving a
sign is not the same as valuing, interpreting, or being conscious of a sign.
A router receives an electronic signal, but it has no interpretation or
value of it beyond routing it to the next router.

Craig

 

wrb

 

From: everyth...@googlegroups.com 
[mailto:everyth...@googlegroups.com  ] On Behalf Of Craig
Weinberg
Sent: Thursday, March 07, 2013 8:05 AM
To: everyth...@googlegroups.com  
Subject: Re: Messages Aren't Made of Information

 



On Thursday, March 7, 2013 6:55:25 AM UTC-5, Bruno Marchal wrote:

 

On 05 Mar 2013, at 19:14, Craig Weinberg wrote:

 



On Tuesday, March 5, 2013 12:03:28 PM UTC-5, William R. Buckley wrote:

Craig:

 

Your statement of need for a human to observe the 

pattern is the smoking gun to indicate a misunderstanding 

of semiotic theory on your part.


I don't think that it has to be humans doing the observing at all. 
 

 

Specifically, you don't need a human; a machine will do.


A machine can only help another non-machine interpret something. I don't
think that they can interpret anything for 'themselves'.

 

You should study machines' self-reference. It is easy to program a machine
to interpret data, by itself and for herself. This is not like
consciousness. This is testable and already done.

You confuse the notion of machine before Post, Church, Turing and after.


Interpretation is consciousness though. What is tested is that results
correspond with expectations in a way which is meaningful to us, not to the
machine. I can use a mirror to reflect an image that I see, but that doesn't
mean that the mirror intends to reflect images, or knows what they are, or
has an experience of them. We can prove that the image is indeed consistent
with our expectations of a reflected original though.

Craig
 

 

 

 

Bruno

 

 

 

 

 

Not all machines are man-made.


True, but what we see as natural machines may not be just machines. Man-made
machines may be just machines.

Craig

 

wrb

 

From: everyth...@googlegroups.com [mailto:everyth...@googlegroups.com] On
Behalf Of Craig Weinberg
Sent: Tuesday, March 05, 2013 5:24 AM
To: everyth...@googlegroups.com
Subject: Re: Messages Aren't Made of Information

 



On Tuesday, March 5, 2013 2:06:20 AM UTC-5, William R. Buckley wrote:

There is information (I take information to be a 
manifestation of entropy) and it is always represented 
in the form of a pattern (a distribution) of the units 
of mass/energy of which the Universe is composed.  


I can agree that information could be considered a manifestation of entropy,
to the extent that entropy is necessary to provide a contrast space for a
distribution. To string an ellipsis together, you need one dot, repetition,
space, and a quality of measurement which yokes together the three dots
aesthetically. Beyond that, you also need a human observer with human visual
sense to turn the distribution into a 'pattern'. Without that, of course,
even distribution cannot cohere into "a" distribution, as there is no scale,
range, quality, etc to anchor the expectation. If we are a microbe, we may
not ever find our way from one dot to the next.

I 
think that semiotic signs are simply specific bits 
of information; I will use the terms synonymously. 

Information has meaning only within context.  For many 
people, context is taken to mean one piece of information 
as compared to another piece of information.  I do not 
take this meaning of context when I discuss semiotics. 
Instead, I take semiotic context to be the acceptor of 
the information.  Hence, all meaning resides a priori 
within information acceptors. 


Agree. Well, transmitters form the signs from their own sense of meaning as
well. That's how we are having this discussion.
 


What you know you have always known; the sign merely 
serves to bring that knowledge to your conscious mind. 


Right. I mean it might be a bit more complicated as far as novelty goes. I
don't know if the state of unconscious information is really what I "have
always known" but that this particular constellation of meanings reflects
the Totality in a way that it is only trivially novel. Like if you hit a
jackpot on a slot machine - that may not have happened before, but the slot
machine is designed to pay out whenever it does. The jackpot already exists
as a potential and sooner or later it will be real

RE: Messages Aren't Made of Information

2013-03-07 Thread William R. Buckley
A machine can accept sign and yield alteration of 

its configuration (add to its parts, delete from its 

parts but most of all alter the complexity of its 

parts and their arrangement) such that the machine 

develops its ability to:

 

1.   accept sign - one yield you did not consider

2.   increase the complexity of constructs - another 

yield you did not consider

3.   acquire Turing competence from incompetence - a 

third yield you did not consider

 

wrb
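
A rough Python sketch of the kind of self-alteration described in the three points above; the rule format and the complexity measure are invented for illustration and are not Buckley's:

# Toy machine that alters its own configuration when it accepts a sign.
# The rule format and the complexity measure are invented for illustration.

class SelfAmendingMachine:
    def __init__(self):
        self.rules = {'a': 'a'}          # initial configuration: accepts only 'a'

    def accept(self, sign):
        if sign not in self.rules:
            return None                  # sign rejected: no rule for it yet
        result = self.rules[sign]
        # alteration on acceptance: add a rule for a new, longer sign,
        # so the repertoire of acceptable signs (and the table) grows
        self.rules[sign + 'a'] = result + result
        return result

    def complexity(self):
        # crude measure: total size of the rule table
        return sum(len(k) + len(v) for k, v in self.rules.items())

m = SelfAmendingMachine()
print(m.accept('aa'))        # None: not yet acceptable
print(m.accept('a'))         # 'a', and a new rule is added
print(m.accept('aa'))        # now accepted: the machine has changed itself
print(m.complexity())        # larger than the initial table

Whether such growth ever amounts to acquiring Turing competence, or to anything the machine itself could be said to register, is exactly what the rest of the thread disputes; the sketch only shows configuration changing on acceptance.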

 

From: everything-list@googlegroups.com
[mailto:everything-list@googlegroups.com] On Behalf Of Craig Weinberg
Sent: Thursday, March 07, 2013 8:33 AM
To: everything-list@googlegroups.com
Subject: Re: Messages Aren't Made of Information

 



On Thursday, March 7, 2013 1:39:25 AM UTC-5, William R. Buckley wrote:

I have before claimed that the computer is 
a good example of the power of semiosis. 

It is simple enough to see that the mere 
construction of a Turing machine confers 
upon that machine the ability to recognise 
all computations; to generate the yield of 
such computations. 

In this sense, a program (the source code) 
is a sequence of signs that upon acceptance 
brings the machine to generate some 
corresponding yield; a computation. 

Also, the intention of an entity behind sign 
origination has nothing whatsoever to do with 
the acceptability of that sign by some other 
entity, much less the meaning there taken for 
the sign. 

The meaning of a sign is always centered upon 
the acceptor of that sign. 


I agree but I don't think the machine can accept any sign. It can copy them
and perform scripted transformations on them, but ultimately there is no
yield at all. The Turing machine does not know that it has yielded a result of
a computation, any more than a bucket of water knows when it is being
emptied. In fact, you could make a Turing machine out of nothing but buckets
of water on pulleys and it would literally be some pattern of filled buckets
which is supposed to be meaningful as a sign or yield to the 'machine'
(collection of buckets? water molecules? convection currents? general
buckety-watery-movingness?)

Craig

 


wrb 







Re: Messages Aren't Made of Information

2013-03-07 Thread Craig Weinberg


On Thursday, March 7, 2013 12:21:57 PM UTC-5, William R. Buckley wrote:
>
> Craig:
>
>  
>
> When you say that “interpretation is consciousness” you contradict 
>
> your prior statements regarding semiosis, that acceptance and action 
>
> are not value.
>

I'm not sure what you're getting at. Acceptance in the sense of receiving a 
sign is not the same as valuing, interpreting, or being conscious of a 
sign. A router receives an electronic signal, but it has no interpretation 
or value of it beyond routing it to the next router.

Craig
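
For concreteness, forwarding really is just a table lookup. A minimal Python sketch of everything a router "does" with a destination address (the prefixes and next hops below are made up):

import ipaddress

# Toy longest-prefix-match forwarder; the prefixes and next hops are invented.
ROUTING_TABLE = {
    '10.0.0.0/8':  'next-hop-A',
    '10.1.0.0/16': 'next-hop-B',
    '0.0.0.0/0':   'default-gateway',
}

def forward(destination):
    # pick the most specific matching prefix: no reading of the payload,
    # no valuing, no interpreting - just a match and a hand-off
    dest = ipaddress.ip_address(destination)
    matches = [p for p in ROUTING_TABLE if dest in ipaddress.ip_network(p)]
    best = max(matches, key=lambda p: ipaddress.ip_network(p).prefixlen)
    return ROUTING_TABLE[best]

print(forward('10.1.2.3'))   # next-hop-B
print(forward('8.8.8.8'))    # default-gateway

Whatever the payload means to the endpoints, none of that enters the lookup.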

 
>
> wrb
>
>  
>
> *From:* everyth...@googlegroups.com  [mailto:
> everyth...@googlegroups.com ] *On Behalf Of *Craig Weinberg
> *Sent:* Thursday, March 07, 2013 8:05 AM
> *To:* everyth...@googlegroups.com 
> *Subject:* Re: Messages Aren't Made of Information
>
>  
>
>
>
> On Thursday, March 7, 2013 6:55:25 AM UTC-5, Bruno Marchal wrote:
>
>  
>
> On 05 Mar 2013, at 19:14, Craig Weinberg wrote:
>
>
>
>
>
> On Tuesday, March 5, 2013 12:03:28 PM UTC-5, William R. Buckley wrote:
>
> Craig:
>
>  
>
> Your statement of need for a human to observe the 
>
> pattern is the smoking gun to indicate a misunderstanding 
>
> of semiotic theory on your part.
>
>
> I don't think that it has to be humans doing the observing at all. 
>  
>
>  
>
> Specifically, you don’t need a human; a machine will do.
>
>
> A machine can only help another non-machine interpret something. I don't 
> think that they can interpret anything for 'themselves'.
>
>  
>
> You should study machines' self-reference. It is easy to program a machine 
> to interpret data, by itself and for herself. This is not like 
> consciousness. This is testable and already done.
>
> You confuse the notion of machine before Post, Church, Turing and after.
>
>
> Interpretation is consciousness though. What is tested is that results 
> correspond with expectations in a way which is meaningful to us, not to the 
> machine. I can use a mirror to reflect an image that I see, but that 
> doesn't mean that the mirror intends to reflect images, or knows what they 
> are, or has an experience of them. We can prove that the image is indeed 
> consistent with our expectations of a reflected original though.
>
> Craig
>  
>
>  
>
>  
>
>  
>
> Bruno
>
>  
>
>  
>
>
>
>  
>
>  
>
> Not all machines are man-made.
>
>
> True, but what we see as natural machines may not be just machines. 
> Man-made machines may be just machines.
>
> Craig
>
>  
>
> wrb
>
>  
>
> *From:* everyth...@googlegroups.com [mailto:everyth...@googlegroups.com] *On 
> Behalf Of *Craig Weinberg
> *Sent:* Tuesday, March 05, 2013 5:24 AM
> *To:* everyth...@googlegroups.com
> *Subject:* Re: Messages Aren't Made of Information
>
>  
>
>
>
> On Tuesday, March 5, 2013 2:06:20 AM UTC-5, William R. Buckley wrote:
>
> There is information (I take information to be a 
> manifestation of entropy) and it is always represented 
> in the form of a pattern (a distribution) of the units 
> of mass/energy of which the Universe is composed.  
>
>
> I can agree that information could be considered a manifestation of 
> entropy, to the extent that entropy is necessary to provide a contrast 
> space for a distribution. To string an ellipses together, you need one dot, 
> repetition, space, and a quality of measurement which yokes together the 
> three dots aesthetically. Beyond that, you also need human observer with 
> human visual sense to turn the distribution into a 'pattern'. Without that, 
> of course, even distribution cannot cohere into "a" distribution, as there 
> is no scale, range, quality, etc to anchor the expectation. If we are a 
> microbe, we may not ever find our way from one dot to the next.
>
> I 
> think that semiotic signs are simply specific bits 
> of information; I will use the terms synonymously. 
>
> Information has meaning only within context.  For many 
> people, context is taken to mean one piece of information 
> as compared to another piece of information.  I do not 
> take this meaning of context when I discuss semiotics. 
> Instead, I take semiotic context to be the acceptor of 
> the information.  Hence, all meaning resides a priori 
> within information acceptors. 
>
>
> Agree. Well, transmitters form the signs from their own sense of meaning 
> as well. That's how we are having this discussion.
>  
>
>
> What you know you have always known; the sign merely 
> serves to bring that knowledge to your conscious mind. 
>
>
> Right. I mean it might be a bit more complicated as far as novelty goes. I 
> don't know if the state of unconscious information is really what I "have 
> always known" but that this particular constellation of meanings reflects 
> the Totality in a way that it is only trivially novel. Like if you hit a 
> jackpot on a slot machine - that may not have happened before, but the slot 
> machine is designed to pay out whenever it does. The jackpot already exists 
> as a potential and sooner or later it will be realized.
>  
>
>
> That you may hav

RE: Messages Aren't Made of Information

2013-03-07 Thread William R. Buckley
The sign is what it is and contexts react to signs.

 

The other words you use in your argumentation 

are unnecessary at the very least, and I think they 

lead to muddled thinking on your end.

 

The sign takes no action; it simply is.

 

The context takes all action, to include the action 

of doing nothing at all.

 

Meaning is no more nor no less than the action 

taken by the context.

 

The sign does not have some magical character 

called *sensitivity to detectability*

 

Semiotics has nothing to do with Shannon’s

information transmission problem.  The reason 

for this is that Shannon assumes that both 

transmitter and receiver share a common 

context.  You, on the other hand, don’t have 

that luxury.

 

wrb

 

From: everything-list@googlegroups.com
[mailto:everything-list@googlegroups.com] On Behalf Of Craig Weinberg
Sent: Thursday, March 07, 2013 8:17 AM
To: everything-list@googlegroups.com
Subject: Re: Messages Aren't Made of Information

 



On Wednesday, March 6, 2013 12:09:28 PM UTC-5, William R. Buckley wrote:

Now we are getting some place.

 

Exactly.  There is simply action.

 

Contexts react to sign.


They react to their interpretations of a sign. The sign itself is a figure -
a disposable form hijacked by the intention of the transmitter. The sign
depends on sensitivities to be detected. When it is detected, it is not
detected as the sign intended by the transmitter unless the semiosis is well
executed, which is up to both the transmitter and receiver's intentional and
unintentional contributions.

Craig
 

 

Nothing more.  Nothing less.

 

The complexity of action is open ended.

 

wrb

 

From: everyth...@googlegroups.com 
[mailto:everyth...@googlegroups.com  ] On Behalf Of Craig
Weinberg
Sent: Wednesday, March 06, 2013 4:12 AM
To: everyth...@googlegroups.com  
Subject: Re: Messages Aren't Made of Information

 



On Tuesday, March 5, 2013 5:48:19 PM UTC-5, William R. Buckley wrote:

Craig:

 

The mistake you make is clearly stated in your words:

 

“…doesn’t mean that they communicated with judgment.”

 

You are anthropomorphizing.  The value is no more nor no 

less than the action taken upon signal acceptance.


That's ok, but it means there is no value. There is simply action.

Craig
 

 

wrb

 

From: everyth...@googlegroups.com [mailto:everyth...@googlegroups.com] On
Behalf Of Craig Weinberg
Sent: Tuesday, March 05, 2013 1:27 PM
To: everyth...@googlegroups.com
Subject: Re: Messages Aren't Made of Information

 



On Tuesday, March 5, 2013 3:07:00 PM UTC-5, William R. Buckley wrote:

The fact that a machine can act in a discriminatory manner based 

upon some signal (sign, information) input is demonstration 

of value judgment.


Only in our eyes, not in its own eyes. It's like telling a kid to say some
insult to someone in another language. The fact they are able to carry out
your instruction doesn't mean that they communicated with judgment.
 

 

Just as there is no *in* in a machine, so too there is no *in* 

in a biological organism; they both, machine and organism, 


But there is an 'in' with respect to the experience of an organism - only
because we know it first hand. There would seem to be no reason why a
machine couldn't have a similar 'in', but it actually seems that their
nature indicates they do not. I take the extra step and hypothesize exactly
why that is - because experience is not generated out of the bodies
associated with them, but rather the bodies are simply a public view of one
aspect of the experience. If you build a machine, you are assembling bodies
to relate to each other, as external forms, so that no interiority 'emerges'
from the gaps between them.
 

are forms that treat other forms in certain prescribed ways.

 

You cannot demonstrate otherwise.


Sure I can. Feelings, colors, personalities, intentions, historical
zeitgeists...these are not forms relating to forms.

Craig
 

 

wrb

 

From: everyth...@googlegroups.com [mailto:everyth...@googlegroups.com] On
Behalf Of Craig Weinberg
Sent: Tuesday, March 05, 2013 10:37 AM
To: everyth...@googlegroups.com
Subject: Re: Messages Aren't Made of Information

 



On Tuesday, March 5, 2013 3:53:31 AM UTC-5, Alberto G.Corona wrote:

Let´s say that what we call "information" is an extended form of sensory
input. What makes this input "information" is the usability of this input
for reducing the internal entropy of the receiver or increasing the internal
order. The receiver can be a machine, a cell, a person or a society, for
example. If the input does not produce this effect in the receiver, then that
input is not information.


The increase of internal order of the receiver is a symptom of an experience
of being informed but they are not the same thing. It's not really even
relevant in most cases. I would not call it an extended form of sensory
input, but a reduction of sensory experience. Input is not a physical
reality, it is a conceptual label.

Consider Blindsight:

I hold up two fingers

RE: Messages Aren't Made of Information

2013-03-07 Thread William R. Buckley
I think that like light, being composed of two propagating 

waves, we should find sound to be composed of propagating 

pressure waves regardless of media.

 

wrb

 

From: everything-list@googlegroups.com
[mailto:everything-list@googlegroups.com] On Behalf Of Craig Weinberg
Sent: Thursday, March 07, 2013 8:10 AM
To: everything-list@googlegroups.com
Subject: Re: Messages Aren't Made of Information

 



On Tuesday, March 5, 2013 10:55:31 PM UTC-5, William R. Buckley wrote:

The falling tree makes sound, the wind makes sound, the … makes sound 

regardless of your presence (or the presence of others) to hear that sound.


Regardless of my presence, of course, but to make sound, you need an ear and
a medium which vibrates that ear. If you take the atmosphere away, then of
course the falling tree could not make a sound to anyone. For the same
reason, if you take all of the ears away, then there can be no such thing as
sound.
 

 

To argue anything else is utter nonsense.

To the contrary. To assume that physics can simply 'exist' outside of a
context of detection and participation is a statement of religious faith. We
have never experienced an unexperienced world, so it would be unscientific
to presume such a thing. This has nothing to do with human experience; it's
ontology.

Craig
 

 

wrb

 

From: everyth...@googlegroups.com 
[mailto:everyth...@googlegroups.com  ] On Behalf Of Craig
Weinberg
Sent: Tuesday, March 05, 2013 7:34 PM
To: everyth...@googlegroups.com  
Subject: Re: Messages Aren't Made of Information

 



On Tuesday, March 5, 2013 5:52:32 PM UTC-5, William R. Buckley wrote:

I do not hold that the acceptor must exist, for then I 

am making a value judgment, and I have already scolded 

Craig for the same thing.

 

Think of it this way.  A volume of gas has a measure of 

entropy.  This means that the molecules are found in 


found by what?
 

a specific sequence of microstates, and those microstates 

constitute an information state of the molecules.  


Who is it constituted to though? Empty space? The molecules as a group? Each
molecule? What is validating that these molecules exist in some way - that
there is such a thing as a microstate which can be detected in some way by
something... and what is detection? How does it work?

When these things are taken as axiomatic, then we are just reiterating those
axioms when we claim that no acceptor must exist. In my understanding, exist
and acceptor are the same thing.

 

Alter 

that microstate sequence (as by adding or removing 

entropy) and the description of the microstate sequence 

changes correspondingly; entropy is information.


Only if something can detect its own description of the microstate as
having changed. We cannot assume that there is any change at all if nothing
can possibly detect it. For example, if I make a movie of ice cubes
melting in a glass, even though that is a case of increasing thermodynamic
entropy, we will see a lower cost of video compression in a movie of the
glass after the ice has melted completely. In that case the image
description can be made to follow either increasing or decreasing
information entropy depending on whether you play the movie forward or
backward. There is no link between microstate thermodynamic entropy and
optical description information entropy.

Craig
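
One way to make the disconnect concrete: the Shannon entropy of a description depends only on the symbol statistics of that description, not on the thermodynamics of what was filmed. A small Python sketch, where the two "frames" are invented stand-ins for the before and after images:

import math
from collections import Counter

def shannon_entropy(symbols):
    # bits per symbol of the description itself
    counts = Counter(symbols)
    total = len(symbols)
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

# stand-in frames: 'before' has ice-cube structure (many distinct pixel
# values), 'after' is a nearly uniform glass of water (few values)
before = [0, 255, 0, 255, 128, 64, 32, 200, 0, 255, 90, 10] * 50
after  = [128, 128, 128, 129, 128, 128, 127, 128, 128, 128, 128, 128] * 50

print(round(shannon_entropy(before), 2))  # higher: more varied description
print(round(shannon_entropy(after), 2))   # lower: the melted frame compresses better

The melting raises thermodynamic entropy while the entropy of the optical description drops, which is the sense in which the two notions come apart above.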

 

Acceptors and signals; contexts and signs; .

 

wrb

 

From: everyth...@googlegroups.com [mailto:everyth...@googlegroups.com] On
Behalf Of John Mikes
Sent: Tuesday, March 05, 2013 1:13 PM
To: everyth...@googlegroups.com
Subject: Re: Messages Aren't Made of Information

 

Dear Bil B. you probably have thought in these lines during similar long
periods as I did. It was ~2 decades ago when I defined 

i n f o r m a t i o n  as something with (at least) 2 ends: 

1. the notion (in whatever format it shows up)  - and

2. the acceptor (adjusting the notion in whatever context it can be 

perceived - appercipiated (adjusted)). 

I have no idea how to make a connection between information (anyway how one
defines it) and the (inner?) disorder level of anything (entropy?). I
dislike this thermodynamic term altogether. 

 

Later on I tried to refine my wording into:

RELATIONS and the capability of recognizing them. That moved away from a
'human(?)' framework. E.g. I called the closeness of a '(+)' charge to a
'(-)' potential an information, so it came close to SOME consciousness (=(?)
response to relations), no matter in what kind of domain. 

 

Do you feel some merit to my thinking?

 

John Mikes
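
A toy Python rendering of the two-ended definition above, with all names invented for illustration; it only encodes the claim that a notion counts as information relative to an acceptor able to recognize the relation:

from dataclasses import dataclass
from typing import Any, Callable

# Toy rendering of the two-ended definition: a notion only counts as
# information relative to an acceptor able to recognize the relation.
# All names here are invented for illustration.

@dataclass
class Acceptor:
    name: str
    recognizes: Callable[[Any], bool]   # the capability of recognizing relations

def is_information(notion, acceptor):
    # one end is the notion, the other the acceptor;
    # drop either end and there is nothing left to call information
    return acceptor.recognizes(notion)

charge_pair = ('+', '-', 'close together')      # e.g. a charge near a potential
responsive  = Acceptor('field-sensitive', lambda n: isinstance(n, tuple))
indifferent = Acceptor('indifferent',     lambda n: False)

print(is_information(charge_pair, responsive))    # True: the relation is recognized
print(is_information(charge_pair, indifferent))   # False: same notion, no second end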

On Tue, Mar 5, 2013 at 2:06 AM, William R. Buckley 
wrote:

There is information (I take information to be a
manifestation of entropy) and it is always represented
in the form of a pattern (a distribution) of the units
of mass/energy of which the Universe is composed.  I
think that semiotic signs are simply specific bits
of information; I will use the terms synonymously.

Information has meaning only within context.  For many
peo

Re: Comp: Geometry Is A Zombie

2013-03-07 Thread Craig Weinberg


On Thursday, March 7, 2013 12:12:31 PM UTC-5, John Clark wrote:
>
> On Wed, Mar 6, 2013, Craig Weinberg wrote:
>  
>
>> > but back in the days of my awesome Atari 800 computer, there was a 
>> program called S.A.M. which sounded like this: 
>> http://www.youtube.com/watch?v=k7nqixe3WrQ
>>
>> Now, 31 years later, we have this: 
>> http://www.acapela-group.com/text-to-speech-interactive-demo.html
>>
>> Improvement in naturalism: Nil. 
>>
>
>  Holy cow, I think you need to see a doctor to get the wax out of your 
> ears, I hear a HUGE improvement!
>

In what way? The voice might sound more aesthetically pleasing, but do you 
mean to say that when you type a sentence into the new system it 
doesn't sound every bit as unnatural and disconnected as ever? Maybe you 
are a zombie.

Craig
 

>
> John K Clark
>





RE: Messages Aren't Made of Information

2013-03-07 Thread William R. Buckley
Craig:

 

When you say that "interpretation is consciousness" you contradict 

your prior statements regarding semiosis, that acceptance and action 

are not value.

 

wrb

 

From: everything-list@googlegroups.com
[mailto:everything-list@googlegroups.com] On Behalf Of Craig Weinberg
Sent: Thursday, March 07, 2013 8:05 AM
To: everything-list@googlegroups.com
Subject: Re: Messages Aren't Made of Information

 



On Thursday, March 7, 2013 6:55:25 AM UTC-5, Bruno Marchal wrote:

 

On 05 Mar 2013, at 19:14, Craig Weinberg wrote:







On Tuesday, March 5, 2013 12:03:28 PM UTC-5, William R. Buckley wrote:

Craig:

 

Your statement of need for a human to observe the 

pattern is the smoking gun to indicate a misunderstanding 

of semiotic theory on your part.


I don't think that it has to be humans doing the observing at all. 
 

 

Specifically, you don't need a human; a machine will do.


A machine can only help another non-machine interpret something. I don't
think that they can interpret anything for 'themselves'.

 

You should study machines' self-reference. It is easy to program a machine
to interpret data, by itself and for herself. This is not like
consciousness. This is testable and already done.

You confuse the notion of machine before Post, Church, Turing and after.


Interpretation is consciousness though. What is tested is that results
correspond with expectations in a way which is meaningful to us, not to the
machine. I can use a mirror to reflect an image that I see, but that doesn't
mean that the mirror intends to reflect images, or knows what they are, or
has an experience of them. We can prove that the image is indeed consistent
with our expectations of a reflected original though.

Craig
 

 

 

 

Bruno

 

 





 

 

Not all machines are man-made.


True, but what we see as natural machines may not be just machines. Man-made
machines may be just machines.

Craig

 

wrb

 

From: everyth...@googlegroups.com [mailto:everyth...@googlegroups.com] On
Behalf Of Craig Weinberg
Sent: Tuesday, March 05, 2013 5:24 AM
To: everyth...@googlegroups.com
Subject: Re: Messages Aren't Made of Information

 



On Tuesday, March 5, 2013 2:06:20 AM UTC-5, William R. Buckley wrote:

There is information (I take information to be a 
manifestation of entropy) and it is always represented 
in the form of a pattern (a distribution) of the units 
of mass/energy of which the Universe is composed.  


I can agree that information could be considered a manifestation of entropy,
to the extent that entropy is necessary to provide a contrast space for a
distribution. To string an ellipsis together, you need one dot, repetition,
space, and a quality of measurement which yokes together the three dots
aesthetically. Beyond that, you also need a human observer with human visual
sense to turn the distribution into a 'pattern'. Without that, of course,
even distribution cannot cohere into "a" distribution, as there is no scale,
range, quality, etc to anchor the expectation. If we are a microbe, we may
not ever find our way from one dot to the next.

I 
think that semiotic signs are simply specific bits 
of information; I will use the terms synonymously. 

Information has meaning only within context.  For many 
people, context is taken to mean one piece of information 
as compared to another piece of information.  I do not 
take this meaning of context when I discuss semiotics. 
Instead, I take semiotic context to be the acceptor of 
the information.  Hence, all meaning resides a priori 
within information acceptors. 


Agree. Well, transmitters form the signs from their own sense of meaning as
well. That's how we are having this discussion.
 


What you know you have always known; the sign merely 
serves to bring that knowledge to your conscious mind. 


Right. I mean it might be a bit more complicated as far as novelty goes. I
don't know if the state of unconscious information is really what I "have
always known" but that this particular constellation of meanings reflects
the Totality in a way that it is only trivially novel. Like if you hit a
jackpot on a slot machine - that may not have happened before, but the slot
machine is designed to pay out whenever it does. The jackpot already exists
as a potential and sooner or later it will be realized.
 


That you may have intention and so comport your delivery 
of information to another acceptor has no bearing upon 
the subsequent acceptance or rejection of that information 
by the target acceptor.  Acceptance or rejection of 
information is determined solely by the accepting or 
rejecting context (acceptor). 


Agree. But the converse - the acceptor can only accept information which has
been included for delivery by intention (or accidentally I suppose).
 


Your mere presence sends information regardless of some 
conscious intent.  Indeed, your absence does equally 
deliver information, for the target acceptor will see 
a definite difference in available information so

Re: Comp: Geometry Is A Zombie

2013-03-07 Thread John Clark
On Wed, Mar 6, 2013  Craig Weinberg  wrote:


> > but back in the days of my awesome Atari 800 computer, there was a
> program called S.A.M. which sounded like this:
> http://www.youtube.com/watch?v=k7nqixe3WrQ
>
> Now, 31 years later, we have this:
> http://www.acapela-group.com/text-to-speech-interactive-demo.html
>
> Improvement in naturalism: Nil.
>

 Holy cow, I think you need to see a doctor to get the wax out of your
ears, I hear a HUGE improvement!

John K Clark





Thin Client

2013-03-07 Thread Craig Weinberg
If you have ever worked with Terminal Servers, RDP, Citrix Metaframe, or 
the like (and that's what I have been doing professionally every day for 
the last 14 years), you will understand the idea of a Thin Client 
architecture. Thin clients are as old as computing, and some of you 
remember as I do, devices like acoustic couplers where you can attach a 
telephone handset to a telephone cradle, so that the mouth ends of the 
handset and the earpiece ends could squeal to each other. In this way, you 
could, with nothing but a keyboard and a printer, use your telephone to 
allow you access to a mainframe computer at some university.

The relevance here is that the client end is thin computationally. It 
passes nothing but keystrokes and printer instructions back and forth as 
acoustic codes. 

This is what an mp3 file does as well. It passes nothing but binary 
instructions that can be used by an audio device to vibrate. Without a 
person's ear there to be vibrated, this entire event is described by linear 
processes where one physical record is converted into another physical 
record. Nothing is encoded or decoded, experienced or appreciated. There is 
no sound. 

Think about those old plastic headphones in elementary school that just had 
hollow plastic tubes as connectors - a system like that generates sound 
from the start, and the headphones are simply funnels for our ears. That's 
a different thing from an electronic device which produces sound only in 
the earbuds. 

All of these discussions about semiotics, free will, consciousness, 
AI...all come down to understanding the Thin Client. The Thin Client is 
Searle's Chinese Room in actual fact. You can log into a massive server 
from some mobile device and use it like a glove, but that doesn't mean that 
the glove is intelligent. We know that we can transmit only mouseclicks and 
keystrokes across the pipe and that it works without some 
sophisticated computing environment (i.e. qualia) having to be communicated. The 
Thin Client exposes Comp as misguided because it shows that instructions 
can indeed exist as purely instrumental forms and require none of the 
semantic experiences which we enjoy. No matter how much you use the thin 
client, it never needs to get any thicker. It's just a glove and a window.
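
A stripped-down sketch of the thin-client pattern described above, simulated in one Python process rather than over RDP or a serial line; the class names and the trivial "application" are invented for illustration:

# Toy thin-client session, simulated in one process. Nothing but keystroke
# events and rendered screen text cross the "pipe"; all state and all
# computation live on the server side.

class Server:
    def __init__(self):
        self.buffer = ''                  # application state lives only here

    def handle_keystroke(self, key):
        if key == '\n':
            result = self.buffer.upper()  # the "application": trivial processing
            self.buffer = ''
            return 'SCREEN: ' + result
        self.buffer += key
        return 'SCREEN: ' + self.buffer   # tell the client what to display

class ThinClient:
    def __init__(self, server):
        self.server = server              # stands in for the network pipe

    def type(self, text):
        for key in text:
            screen = self.server.handle_keystroke(key)
            print(screen)                 # the client only displays, never computes

client = ThinClient(Server())
client.type('hello\n')

However long the session runs, the client side never gets any thicker than keystrokes out and display text back, which is the glove-and-window point.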





Re: Cats fall for illusions too

2013-03-07 Thread Stephen P. King

On 3/7/2013 11:36 AM, Terren Suydam wrote:
I have no doubt that Craig will somehow see this as a vindication of 
his theory and a refutation of mechanism.


Terren



I wonder if you think that the cat's name is Pavlov?



On Wed, Mar 6, 2013 at 5:27 PM, Stephen P. King wrote:


https://www.youtube.com/watch?feature=player_embedded&v=CcXXQ6GCUb8

--
Onward!

Stephen





Re: MGA is back (on the FOAR list)

2013-03-07 Thread Craig Weinberg


On Thursday, March 7, 2013 8:19:06 AM UTC-5, Bruno Marchal wrote:
>
>
> On 06 Mar 2013, at 18:49, Craig Weinberg wrote:
>
> I understand where you are coming from in MGA now, Bruno, and again there 
> is nothing wrong with your reasoning, but in that your initial assumptions 
> are not the universe that we live in.
>
>
> ?
>
> (the assumption of the whole reasoning is just comp. Then in MGA i make 
> some local assumption to make a point, but they are discharged before 
> getting the conclusion).
>

Right. It's comp itself whose assumptions don't match our 
universe. I don't have any particular problem with what you add to it - you 
make perfect sense if comp were true... but comp can't be true, so it 
doesn't matter.

 

>
>
>
>
> Let me give you a thought experiment that might give you a sense of where 
> I see the assumptions jump to the wrong conclusion.
>
> Suppose Alice didn't have an energetic particle to save her logic misfire 
> and she ended up confusing her own name with Alison. Nobody tried to 
> correct her use of her own name, so people assumed that she has begun using 
> a new name, or that one of the two names was just a nickname. As she went 
> about her business over the next several years, opening new accounts and 
> receiving mail as Alison, she had essentially lost her old name, except for 
> the very closest family members and government records which retained 
> unambiguous reference to Alice. 
>
> Now suppose a more catastrophic event happens with many of her logic 
> gates. Every name that she has ever heard is now switched in her memory. 
> Instead of Romeo and Juliet, her star-crossed lovers are Pizza-Foot and 
> Sycorax. Instead of Charlie Brown and Snoopy, she remembers those 
> characters as Baron Von Slouchcousin and Pimento. The stories are otherwise 
> intact of course. The function of the characters is identical.
>
> As the brain parts keep failing and then coming back online, all of the 
> content of history and fiction have become hopelessly scrambled, but the 
> stories and information are undamaged. Star Wars takes place in Egypt. 
> Queen Elizabeth was named Treewort and lives in the trunk of a 2003 Mazda 
> but otherwise the succession of the British throne is clearly understood. 
>
> As luck would have it, the problem with her name interpreter was mirrored 
> by a problem in her output modules, which translates all of her twisted 
> names into the expected ones, effectively undoing her malfunction as far as 
> anyone else is concerned. There is no problem for her socially, and no 
> problem for her psychologically, as she does not suspect any malfunction, 
> and neither does anyone else.
>
> Who is the British monarch? Elizabeth or Treewort? Is there a difference 
> between the two?
>
> It comes down to exploring the reality of proprietary vs generic, or 
> qualitative vs quantitative identity. In math - all identities are generic 
> and interchangeable. A name is not a name of what is being named (which is 
> a real and unique natural presence), but a label which refers to another 
> label or variable (which is not a presence but a figure persisting by 
> axiom-fiat). Using this quantitative framework, all entities are assumed to 
> be built up from these starchy mechanical axioms, so that a name is simply 
> a character string used for naming - it has no proprietary content. When a 
> computer does do proprietary content, it doesn't look like Harry or Jane, 
> it looks like ct168612 - now that means something to a computer. If it 
> can be assumed that the label matches some serial number or address, then 
> it is a good name. In no case is the computer able to value a name in any 
> other way. It has no way of knowing if Buckingham Palace is a better place 
> to live than in the trunk of an old car, as long as the digits fulfill the 
> same functional role, they are the same.
>
> In reality however, maybe nothing is 'the same'? Maybe there aren't any 
> shortcuts or simulations which can make something which is not us into us?
>
>
> Comp does not exclude such a possibility. There are (in the arithmetical 
> truth) infinitely many processes which can be simulated only by themselves, 
> having no shortcut, and that might indeed play some role in cosmology, and 
> even consciousness or in the stability of conscious experience. Open 
> problems.
>

Cool. Why is it still a computation though?

Craig
 

>
> Bruno
>
>
>
>
> Craig
>
>
>
> On Wednesday, March 6, 2013 11:37:28 AM UTC-5, Bruno Marchal wrote:
>>
>> Hi, 
>>
>>
>> I have promised to let you know when I explain the MGA, actually a new   
>> version, in the FOAR list of Russell Standish. 
>>
>> Well we have begun two days ago. Sorry for this delay. 
>>
>> Note that MGA has already been explained in this list. 
>> See for example: 
>> http://old.nabble.com/MGA-1-td20566948.html 
>>
>> Feel free to participate on the FOAR list, if you have still problem   
>> with it. 
>>
>> You might should, as 

Re: Cats fall for illusions too

2013-03-07 Thread Terren Suydam
I have no doubt that Craig will somehow see this as a vindication of his
theory and a refutation of mechanism.

Terren


On Wed, Mar 6, 2013 at 5:27 PM, Stephen P. King wrote:

> https://www.youtube.com/watch?feature=player_embedded&v=CcXXQ6GCUb8
>
> --
> Onward!
>
> Stephen
>
>





Re: Messages Aren't Made of Information

2013-03-07 Thread Craig Weinberg


On Thursday, March 7, 2013 1:39:25 AM UTC-5, William R. Buckley wrote:
>
> I have before claimed that the computer is 
> a good example of the power of semiosis. 
>
> It is simple enough to see that the mere 
> construction of a Turing machine confers 
> upon that machine the ability to recognise 
> all computations; to generate the yield of 
> such computations. 
>
> In this sense, a program (the source code) 
> is a sequence of signs that upon acceptance 
> brings the machine to generate some 
> corresponding yield; a computation. 
>
> Also, the intention of an entity behind sign 
> origination has nothing whatsoever to do with 
> the acceptability of that sign by some other 
> entity, much less the meaning there taken for 
> the sign. 
>
> The meaning of a sign is always centered upon 
> the acceptor of that sign. 
>

I agree but I don't think the machine can accept any sign. It can copy them 
and perform scripted transformations on them, but ultimately there is no 
yield at all. The Turing machine does not know that it has yielded a result 
of a computation, any more than a bucket of water knows when it is being 
emptied. In fact, you could make a Turing machine out of nothing but 
buckets of water on pulleys and it would literally be some pattern of 
filled buckets which is supposed to be meaningful as a sign or yield to the 
'machine' (collection of buckets? water molecules? convection currents? 
general buckety-watery-movingness?)

Craig
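
The bucket picture is easy to make literal. A minimal Turing-machine sketch in Python where the tape is a row of buckets, each full (1) or empty (0); the transition table is an invented toy that just inverts each bucket as it walks right, not any particular machine from the thread:

# A minimal Turing machine whose tape is a row of buckets: 1 = full, 0 = empty.
# The transition table is an invented toy that inverts each bucket as the head
# walks right; the whole "yield" is a pattern of filled buckets produced by
# blind table lookups.

def run(tape, rules, state='walk', head=0):
    while 0 <= head < len(tape) and (state, tape[head]) in rules:
        write, move, state = rules[(state, tape[head])]
        tape[head] = write       # empty or fill the bucket under the head
        head += move             # haul the head one bucket along
    return tape

rules = {
    ('walk', 0): (1, +1, 'walk'),   # empty bucket -> fill it, step right
    ('walk', 1): (0, +1, 'walk'),   # full bucket  -> empty it, step right
}

print(run([1, 0, 0, 1, 1], rules))   # [0, 1, 1, 0, 0]

Nothing in the loop refers to what the final arrangement of full and empty buckets is supposed to mean; that reading, if it happens at all, happens outside the mechanism.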

 

>
> wrb 
>
>
>





Re: Dartmouth neuroscientist finds free will has neural basis

2013-03-07 Thread Craig Weinberg


On Thursday, March 7, 2013 11:07:49 AM UTC-5, Stephen Paul King wrote:
>
>  On 3/7/2013 10:58 AM, Craig Weinberg wrote:
>  
>
>
> On Thursday, March 7, 2013 10:43:06 AM UTC-5, Stephen Paul King wrote: 
>>
>>  On 3/7/2013 10:11 AM, Stathis Papaioannou wrote:
>>  
>>
>>
>> On Friday, March 8, 2013, Stephen P. King wrote:
>>
>>>  On 3/7/2013 8:44 AM, Craig Weinberg wrote:
>>>  
>>> On Thursday, March 7, 2013 12:59:50 AM UTC-5, stathisp wrote: By the 
 definition I gave above a stone does not choose to roll down the hill 
 because it does not consider each option in order to decide which one to 
 do.

>>>
>>> Why doesn't it choose when and which direction to roll? A deterministic 
>>> universe means that there is no such thing as 'considering each option' - 
>>> there are no options, only things happening because they must happen. They 
>>> have no choice, there is no choice, the lack of choice is the defining 
>>> feature of a deterministic world. You are saying that this is the world 
>>> that we live in and that we are the stone, except that for some reason we 
>>> have this delusional interactive narrative in which we could not stand 
>>> being still any longer and decided to push ourselves down the hill.
>>>  
>>>
>>> Hi,
>>>
>>> From my studies of the math of classical determinism, the subsequent 
>>> 'behavior' of the stone follows strictly in a one-to-one and onto fashion 
>>> from the prior state of the stone. There are no 'multiple choices' of the 
>>> stone, thus no room at all for "choice". Thankfully we know that classical 
>>> determinism is a delusion that some, for their own reasons, cling to.
>>>  
>>
>>  Yes, we know that classical determinism is wrong, but it is not 
>> logically inconsistent with consciousness.
>>
>>
>> I must disagree. It is baked into the topology of classical mechanics 
>> that a system cannot semantically act upon itself. There is no way to 
>> define intentionality in classical physics. This is what Bruno proves with 
>> his argument.
>>
>>  
> Exactly Stephen. What are we talking about here? How is a deterministic 
> system that has preferences and makes choices and considers options 
> different from free will? If something can have a private preference which 
> cannot be determined from the outside, then it is determined privately, 
> i.e. the will of the private determiner. 
>  
>
> Good Morning, Craig.
>
> The word 'deterministic' becomes degenerate (in meaning/semiotic 
> content) when we try to stuff free will (or free won't) into it.
>

Agree. It's that age old philosophical battle of Free Will vs um, 
determinism+everything that free will does except we don't call it that.

 

>
>   
>  
>>  
>>  It is also not logically inconsistent with choice and free 
>> will,  unless you define these terms as inconsistent with determinism, in 
>> which case in a deterministic world we would have to create new words 
>> meaning pseudo-choice and pseudo-free will to avoid misunderstanding, and 
>> then go about our business as usual with this minor change to the language.
>>
>>
>> So you say...
>>  
>
> Yeah, right. Why would a deterministic world need words having anything to 
> do with choice or free will? At what part of a computer program is 
> something like a choice made? Every position on the logic tree is connected 
> to every other by unambiguous prior cause or intentionally generated 
> (pseudo) randomness. It makes no choices, has no preferences, just follows 
> a sequence of instructions.
>
> Craig 
>  
>  
> Exactly. This is why computations are exactly describable as 
> "strings"...
>
> -- 
> Onward!
>
> Stephen
>
>  
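
To make the quoted "logic tree" remark concrete: a branch in a program, even one dressed up with pseudo-randomness, is fixed by prior state plus the seed. A small Python sketch, with the options and seed chosen arbitrarily:

import random

def pick_option(options, seed):
    # looks like a choice, but the outcome is fixed by the seed (prior cause)
    rng = random.Random(seed)
    return rng.choice(options)

options = ['roll left', 'roll right', 'stay put']

print(pick_option(options, seed=42))   # same seed, same "decision", every run
print(pick_option(options, seed=42))   # identical again: no option was ever open

Rerun it as often as you like; the branch taken never varies unless the inputs do, which is all that "pseudo-choice" amounts to in the sense discussed above.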





Re: Messages Aren't Made of Information

2013-03-07 Thread Craig Weinberg


On Wednesday, March 6, 2013 12:09:28 PM UTC-5, William R. Buckley wrote:
>
> Now we are getting some place.
>
>  
>
> Exactly.  There is simply action.
>
>  
>
> Contexts react to sign.
>

They react to their interpretations of a sign. The sign itself is a figure 
- a disposable form hijacked by the intention of the transmitter. The sign 
depends on sensitivities to be detected. When it is detected, it is not 
detected as the sign intended by the transmitter unless the semiosis is 
well executed, which is up to both the transmitter and receiver's 
intentional and unintentional contributions.

Craig
 

>  
>
> Nothing more.  Nothing less.
>
>  
>
> The complexity of action is open ended.
>
>  
>
> wrb
>
>  
>
> *From:* everyth...@googlegroups.com  [mailto:
> everyth...@googlegroups.com ] *On Behalf Of *Craig Weinberg
> *Sent:* Wednesday, March 06, 2013 4:12 AM
> *To:* everyth...@googlegroups.com 
> *Subject:* Re: Messages Aren't Made of Information
>
>  
>
>
>
> On Tuesday, March 5, 2013 5:48:19 PM UTC-5, William R. Buckley wrote:
>
> Craig:
>
>  
>
> The mistake you make is clearly stated in your words:
>
>  
>
> “…doesn’t mean that they communicated with judgment.”
>
>  
>
> You are anthropomorphizing.  The value is no more nor no 
>
> less than the action taken upon signal acceptance.
>
>
> That's ok, but it means there is no value. There is simply action.
>
> Craig
>  
>
>  
>
> wrb
>
>  
>
> *From:* everyth...@googlegroups.com [mailto:everyth...@googlegroups.com] *On 
> Behalf Of *Craig Weinberg
> *Sent:* Tuesday, March 05, 2013 1:27 PM
> *To:* everyth...@googlegroups.com
> *Subject:* Re: Messages Aren't Made of Information
>
>  
>
>
>
> On Tuesday, March 5, 2013 3:07:00 PM UTC-5, William R. Buckley wrote:
>
> The fact that a machine can act in a discriminatory manner based 
>
> upon some signal (sign, information) input is demonstration 
>
> of value judgment.
>
>
> Only in our eyes, not in its own eyes. It's like telling a kid to say some 
> insult to someone in another language. The fact they are able to carry out 
> your instruction doesn't mean that they communicated with judgment.
>  
>
>  
>
> Just as there is no **in** in a machine, so too there is no **in** 
>
> in a biological organism; they both, machine and organism, 
>
>
> But there is an 'in' with respect to the experience of an organism - only 
> because we know it first hand. There would seem to be no reason why a 
> machine couldn't have a similar 'in', but it actually seems that their 
> nature indicates they do not. I take the extra step and hypothesize exactly 
> why that is - because experience is not generated out of the bodies 
> associated with them, but rather the bodies are simply a public view of one 
> aspect of the experience. If you build a machine, you are assembling bodies 
> to relate to each other, as external forms, so that no interiority 
> 'emerges' from the gaps between them.
>  
>
> are forms that treat other forms in certain prescribed ways.
>
>  
>
> You cannot demonstrate otherwise.
>
>
> Sure I can. Feelings, colors, personalities, intentions, historical 
> zeitgeists...these are not forms relating to forms.
>
> Craig
>  
>
>  
>
> wrb
>
>  
>
> *From:* everyth...@googlegroups.com [mailto:everyth...@googlegroups.com] *On 
> Behalf Of *Craig Weinberg
> *Sent:* Tuesday, March 05, 2013 10:37 AM
> *To:* everyth...@googlegroups.com
> *Subject:* Re: Messages Aren't Made of Information
>
>  
>
>
>
> On Tuesday, March 5, 2013 3:53:31 AM UTC-5, Alberto G.Corona wrote:
>
> Let's say that what we call "information" is an extended form of sensory 
> input. What makes this input "information" is the usability of this input 
> for reducing the internal entropy of the receiver or increasing its internal 
> order. The receiver can be a machine, a cell, a person or a society, for 
> example. If the input does not produce this effect in the receiver, then that 
> input is not information.
>
>
> The increase of internal order of the receiver is a symptom of an 
> experience of being informed but they are not the same thing. It's not 
> really even relevant in most cases. I would not call it an extended form of 
> sensory input, but a reduction of sensory experience. Input is not a 
> physical reality, it is a conceptual label.
>
> Consider Blindsight:
>
> I hold up two fingers and ask, "How many fingers?"
>
> "I don't know."
>
> "Guess."
>
> "Two."
>
> This example tells us about information without tying it to decreased 
> entropy. My two fingers are a form. I am putting them into that form, so 
> the process of my presenting my fingers is a formation of a sign. 
>
> The sign is not information at this point. It means something different to 
> an ant or a frog than it does to a person looking at it. If you can't see, 
> there is no formation there at all unless you can collide with my fingers.
>
> When the patient responds that they don't know how many fingers, it is 
> because they personally have no experience of seeing it. They are not bein

Re: Messages Aren't Made of Information

2013-03-07 Thread Craig Weinberg


On Tuesday, March 5, 2013 10:55:31 PM UTC-5, William R. Buckley wrote:
>
> The falling tree makes sound, the wind makes sound, the … makes sound 
>
> regardless of your presence (or the presence of others) to hear that sound.
>

Regardless of my presence, of course, but to make sound, you need an ear 
and a medium which vibrates that ear. If you take the atmosphere away, then 
of course the falling tree could not make a sound to anyone. For the same 
reason, if you take all of the ears away, then there can be no such thing 
as sound.
 

>  
>
> To argue anything else is utter nonsense.
>
To the contrary. To assume that physics can simply 'exist' outside of a 
context of detection and participation is a statement of religious faith. 
We have never experienced an unexperienced world, so it would be 
unscientific to presume such a thing. This has nothing to do with human 
experience; it's about ontology.

Craig
 

>  
>
> wrb
>
>  
>
> *From:* everyth...@googlegroups.com  [mailto:
> everyth...@googlegroups.com ] *On Behalf Of *Craig Weinberg
> *Sent:* Tuesday, March 05, 2013 7:34 PM
> *To:* everyth...@googlegroups.com 
> *Subject:* Re: Messages Aren't Made of Information
>
>  
>
>
>
> On Tuesday, March 5, 2013 5:52:32 PM UTC-5, William R. Buckley wrote:
>
> I do not hold that the acceptor must exist, for then I 
>
> am making a value judgment, and I have already scolded 
>
> Craig for the same thing.
>
>  
>
> Think of it this way.  A volume of gas has a measure of 
>
> entropy.  This means that the molecules are found in 
>
>
> found by what?
>  
>
> a specific sequence of microstates, and those microstates 
>
> constitute an information state of the molecules.  
>
>
> Who is it constituted to though? Empty space? The molecules as a group? 
> Each molecule? What is validating that these molecules exist in some way - 
> that there is such a thing as a microstate which can be detected in some 
> way by something... and what is detection? How does it work?
>
> When these things are taken as axiomatic, then we are just reiterating 
> those axioms when we claim that no acceptor must exist. In my 
> understanding, 'exist' and 'acceptor' are the same thing.
>
>  
>
> Alter 
>
> that microstate sequence (as by adding or removing 
>
> entropy) and the description of the microstate sequence 
>
> changes correspondingly; entropy is information.
>
>
> Only if something can detect its own description of the microstate as 
> having changed. We cannot assume that there is any change at all if nothing 
> can possibly detect it. For example, if I make a movie of ice cubes 
> melting in a glass, even though that is a case of increasing thermodynamic 
> entropy, we will see a lower cost of video compression in a movie of the 
> glass after the ice has melted completely. In that case the image 
> description can be made to follow either increasing or decreasing 
> information entropy depending on whether you play the movie forward or 
> backward. There is no link between microstate thermodynamic entropy and 
> optical description information entropy.
>
> Craig
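A minimal sketch of this compression point, assuming Python's zlib as a stand-in for a video codec and modelling pixel detail crudely as varying byte values: the frame with visible structure needs a longer compressed description than the featureless melted frame, so the descriptive cost can fall while thermodynamic entropy rises.

    import zlib, random

    random.seed(0)
    N = 100 * 100  # a pretend 100x100 greyscale frame, one byte per pixel

    # Frame with ice cubes: visible detail, modelled here as varying pixel values.
    frame_ice = bytes(random.choice((40, 200, 255)) for _ in range(N))

    # Frame after melting: a nearly featureless glass, one flat grey value.
    frame_melted = bytes([90]) * N

    print(len(zlib.compress(frame_ice)))     # thousands of bytes of description
    print(len(zlib.compress(frame_melted)))  # a few dozen bytes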
>
>  
>
> Acceptors and signals; contexts and signs; …
>
>  
>
> wrb
>
>  
>
> *From:* everyth...@googlegroups.com [mailto:everyth...@googlegroups.com] *On 
> Behalf Of *John Mikes
> *Sent:* Tuesday, March 05, 2013 1:13 PM
> *To:* everyth...@googlegroups.com
> *Subject:* Re: Messages Aren't Made of Information
>
>  
>
> Dear Bill B., you probably have thought along these lines for similarly long 
> periods as I did. It was ~2 decades ago when I defined 
>
> i n f o r m a t i o n  as something with (at least) 2 ends: 
>
> 1. the notion (in whatever format it shows up)  - and
>
> 2. the acceptor (adjusting the notion in whatever context it can be 
>
> perceived - appercipiated (adjusted). 
>
> I have no idea how to make a connection between information (however 
> one defines it) and the (inner?) disorder level of anything (entropy?). I 
> dislike this thermodynamic term altogether. 
>
>  
>
> Later on I tried to refine my wording into:
>
> RELATIONS and the capability of recognizing them. That moved away from a 
> 'human(?)' framework. E.g., I called the closeness of a '(+)' charge to a 
> '(-)' potential an information, so it came close to SOME consciousness (=(?) 
> *response to relations*), no matter in what kind of domain. 
>
>  
>
> Do you feel some merit to my thinking?
>
>  
>
> John Mikes
>
> On Tue, Mar 5, 2013 at 2:06 AM, William R. Buckley  
> wrote:
>
> There is information (I take information to be a
> manifestation of entropy) and it is always represented
> in the form of a pattern (a distribution) of the units
> of mass/energy of which the Universe is composed.  I
> think that semiotic signs are simply specific bits
> of information; I will use the terms synonymously.
>
> Information has meaning only within context.  For many
> people, context is taken to mean one piece of information
> as compared to another piece of information.  I do not
> take this mean

Re: Dartmouth neuroscientist finds free will has neural basis

2013-03-07 Thread Stephen P. King

On 3/7/2013 10:58 AM, Craig Weinberg wrote:



On Thursday, March 7, 2013 10:43:06 AM UTC-5, Stephen Paul King wrote:

On 3/7/2013 10:11 AM, Stathis Papaioannou wrote:



On Friday, March 8, 2013, Stephen P. King wrote:

On 3/7/2013 8:44 AM, Craig Weinberg wrote:


On Thursday, March 7, 2013 12:59:50 AM UTC-5, stathisp
wrote: By the definition I gave above a stone does not
choose to roll down the hill because it does not
consider each option in order to decide which one to do.


Why doesn't it choose when and which direction to roll? A
deterministic universe means that there is no such thing as
'considering each option' - there are no options, only
things happening because they must happen. They have no
choice, there is no choice, the lack of choice is the
defining feature of a deterministic world. You are saying
that this is the world that we live in and that we are the
stone, except that for some reason we have this delusional
interactive narrative in which we could not stand being
still any longer and decided to push ourselves down the hill.

Hi,

From my studies of the math of classical determinism, the
subsequent 'behavior' of the stone follows strictly in a
one-to-one and onto fashion from the prior state of the
stone. There are no 'multiple choices' of the stone, thus no
room at all for "choice". Thankfully we know that classical
determinism is a delusion that some, for their own reasons,
cling to.


Yes, we know that classical determinism is wrong, but it is not
logically inconsistent with consciousness.


I must disagree. It is baked into the topology of classical
mechanics that a system cannot semantically act upon itself. There
is no way to define intentionality in classical physics. This is
what Bruno proves with his argument.


Exactly Stephen. What are we talking about here? How is a 
deterministic system that has preferences and makes choices and 
considers options different from free will? If something can have a 
private preference which cannot be determined from the outside, then 
it is determined privately, i.e. the will of the private determiner.


Good Morning, Craig.

The word 'deterministic' becomes degenerate (in meaning/semiotic 
content) when we try to stuff free will (or free won't) into it.






It is also not logically inconsistent with choice and free
will,  unless you define these terms as inconsistent with
determinism, in which case in a deterministic world we would have
to create new words meaning pseudo-choice and pseudo-free will to
avoid misunderstanding, and then go about our business as usual
with this minor change to the language.


So you say...


Yeah, right. Why would a deterministic world need words having 
anything to do with choice or free will? At what part of a computer 
program is something like a choice made? Every position on the logic 
tree is connected to every other by unambiguous prior cause or 
intentionally generated (pseudo) randomness. It makes no choices, has 
no preferences, just follows a sequence of instructions.


Craig



Exactly. This is why computations are exactly describable as 
"strings"...


--
Onward!

Stephen





Re: Messages Aren't Made of Information

2013-03-07 Thread Craig Weinberg


On Thursday, March 7, 2013 6:55:25 AM UTC-5, Bruno Marchal wrote:
>
>
> On 05 Mar 2013, at 19:14, Craig Weinberg wrote:
>
>
>
> On Tuesday, March 5, 2013 12:03:28 PM UTC-5, William R. Buckley wrote:
>>
>> Craig:
>>  
>>
>> Your statement of need for a human to observe the 
>>
>> pattern is the smoking gun to indicate a misunderstanding 
>>
>> of semiotic theory on your part.
>>
>
> I don't think that it has to be humans doing the observing at all. 
>  
>
>>  
>>
>> Specifically, you don’t need a human; a machine will do.
>>
>
> A machine can only help another non-machine interpret something. I don't 
> think that they can interpret anything for 'themselves'.
>
>
> You should study machine self-reference. It is easy to program a machine 
> interpreting data, by itself and for herself. This is not like 
> consciousness: this is testable and already done.
> You confuse the notion of machine before and after Post, Church and Turing.
>

Interpretation is consciousness though. What is tested is that results 
correspond with expectations in a way which is meaningful to us, not to the 
machine. I can use a mirror to reflect an image that I see, but that 
doesn't mean that the mirror intends to reflect images, or knows what they 
are, or has an experience of them. We can prove that the image is indeed 
consistent with our expectations of a reflected original though.

Craig
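A minimal sketch of machine self-reference in the textbook quine sense (an illustration only, not Bruno's construction with the recursion theorem): a short program whose output is exactly its own source text, i.e. a machine producing its own description.

    # A Python quine: running it prints the program's own source code.
    s = 's = %r\nprint(s %% s)'
    print(s % s)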
 

>
>
>
> Bruno
>
>
>
>  
>
>>  
>>
>> Not all machines are man-made.
>>
>
> True, but what we see as natural machines may not be just machines. 
> Man-made machines may be just machines.
>
> Craig
>
>  
>>
>> wrb
>>  
>>
>> *From:* everyth...@googlegroups.com [mailto:everyth...@googlegroups.com] 
>> *On Behalf Of *Craig Weinberg
>> *Sent:* Tuesday, March 05, 2013 5:24 AM
>> *To:* everyth...@googlegroups.com
>> *Subject:* Re: Messages Aren't Made of Information
>>  
>>
>>
>>
>> On Tuesday, March 5, 2013 2:06:20 AM UTC-5, William R. Buckley wrote:
>>
>> There is information (I take information to be a 
>> manifestation of entropy) and it is always represented 
>> in the form of a pattern (a distribution) of the units 
>> of mass/energy of which the Universe is composed.  
>>
>>
>> I can agree that information could be considered a manifestation of 
>> entropy, to the extent that entropy is necessary to provide a contrast 
>> space for a distribution. To string an ellipsis together, you need one dot, 
>> repetition, space, and a quality of measurement which yokes together the 
>> three dots aesthetically. Beyond that, you also need a human observer with 
>> human visual sense to turn the distribution into a 'pattern'. Without that, 
>> of course, even distribution cannot cohere into "a" distribution, as there 
>> is no scale, range, quality, etc to anchor the expectation. If we are a 
>> microbe, we may not ever find our way from one dot to the next.
>>
>> I 
>> think that semiotic signs are simply specific bits 
>> of information; I will use the terms synonymously. 
>>
>> Information has meaning only within context.  For many 
>> people, context is taken to mean one piece of information 
>> as compared to another piece of information.  I do not 
>> take this meaning of context when I discuss semiotics. 
>> Instead, I take semiotic context to be the acceptor of 
>> the information.  Hence, all meaning resides a priori 
>> within information acceptors. 
>>
>>
>> Agree. Well, transmitters form the signs from their own sense of meaning 
>> as well. That's how we are having this discussion.
>>  
>>
>>
>> What you know you have always known; the sign merely 
>> serves to bring that knowledge to your conscious mind. 
>>
>>
>> Right. I mean it might be a bit more complicated as far as novelty goes. 
>> I don't know if the state of unconscious information is really what I "have 
>> always known"; rather, this particular constellation of meanings reflects 
>> the Totality in a way that is only trivially novel. Like if you hit a 
>> jackpot on a slot machine - that may not have happened before, but the slot 
>> machine is designed to payout whenever it does. The jackpot already exists 
>> as a potential and sooner or later it will be realized.
>>  
>>
>>
>> That you may have intention and so comport your delivery 
>> of information to another acceptor has no bearing upon 
>> the subsequent acceptance or rejection of that information 
>> by the target acceptor.  Acceptance or rejection of 
>> information is determined solely by the accepting or 
>> rejecting context (acceptor). 
>>
>>
>> Agree. But the converse also holds - the acceptor can only accept information which 
>> has been included for delivery by intention (or accidentally, I suppose).
>>  
>>
>>
>> Your mere presence sends information regardless of some 
>> conscious intent.  Indeed, your absence does equally 
>> deliver information, for the target acceptor will see 
>> a definite difference in available information sources 
>> whether you are present or not. 
>>
>> Consider a line worker in a bean processi

Re: Dartmouth neuroscientist finds free will has neural basis

2013-03-07 Thread Craig Weinberg


On Thursday, March 7, 2013 10:43:06 AM UTC-5, Stephen Paul King wrote:
>
>  On 3/7/2013 10:11 AM, Stathis Papaioannou wrote:
>  
>
>
> On Friday, March 8, 2013, Stephen P. King wrote:
>
>>  On 3/7/2013 8:44 AM, Craig Weinberg wrote:
>>  
>> On Thursday, March 7, 2013 12:59:50 AM UTC-5, stathisp wrote: By the 
>>> definition I gave above a stone does not choose to roll down the hill 
>>> because it does not consider each option in order to decide which one to do.
>>>
>>
>> Why doesn't it choose when and which direction to roll? A deterministic 
>> universe means that there is no such thing as 'considering each option' - 
>> there are no options, only things happening because they must happen. They 
>> have no choice, there is no choice, the lack of choice is the defining 
>> feature of a deterministic world. You are saying that this is the world 
>> that we live in and that we are the stone, except that for some reason we 
>> have this delusional interactive narrative in which we could not stand 
>> being still any longer and decided to push ourselves down the hill.
>>  
>>
>> Hi,
>>
>> From my studies of the math of classical determinism, the subsequent 
>> 'behavior' of the stone follows strictly in a one-to-one and onto fashion 
>> from the prior state of the stone. There are no 'multiple choices' of the 
>> stone, thus no room at all for "choice". Thankfully we know that classical 
>> determinism is a delusion that some, for their own reasons, cling to.
>>  
>
>  Yes, we know that classical determinism is wrong, but it is not 
> logically inconsistent with consciousness.
>
>
> I must disagree. It is baked into the topology of classical mechanics 
> that a system cannot semantically act upon itself. There is no way to 
> define intentionality in classical physics. This is what Bruno proves with 
> his argument.
>
>
Exactly Stephen. What are we talking about here? How is a deterministic 
system that has preferences and makes choices and considers options 
different from free will? If something can have a private preference which 
cannot be determined from the outside, then it is determined privately, 
i.e. the will of the private determiner. 
 

>
>  It is also not logically inconsistent with choice and free will,  unless 
> you define these terms as inconsistent with determinism, in which case in a 
> deterministic world we would have to create new words meaning pseudo-choice 
> and pseudo-free will to avoid misunderstanding, and then go about our 
> business as usual with this minor change to the language.
>
>
> So you say...
>

Yeah, right. Why would a deterministic world need words having anything to 
do with choice or free will? At what part of a computer program is 
something like a choice made? Every position on the logic tree is connected 
to every other by unambiguous prior cause or intentionally generated 
(pseudo) randomness. It makes no choices, has no preferences, just follows 
a sequence of instructions.

Craig
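A minimal sketch of the "(pseudo) randomness" point, assuming Python's random module: a branch that looks like a coin flip is completely fixed once the seed is fixed.

    import random

    def run(seed):
        rng = random.Random(seed)  # a deterministic pseudo-random generator
        # Five apparent 'choices'; each branch is fully determined by the seed.
        return ["left" if rng.random() < 0.5 else "right" for _ in range(5)]

    print(run(42))  # some fixed sequence of lefts and rights
    print(run(42))  # the identical sequence: nothing was chosen, everything followed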
 

>
>
>
>
> -- 
> Onward!
>
> Stephen
>
>  





Re: Dartmouth neuroscientist finds free will has neural basis

2013-03-07 Thread Stephen P. King

On 3/7/2013 10:11 AM, Stathis Papaioannou wrote:



On Friday, March 8, 2013, Stephen P. King wrote:

On 3/7/2013 8:44 AM, Craig Weinberg wrote:


On Thursday, March 7, 2013 12:59:50 AM UTC-5, stathisp wrote:
By the definition I gave above a stone does not choose to
roll down the hill because it does not consider each option
in order to decide which one to do.


Why doesn't it choose when and which direction to roll? A
deterministic universe means that there is no such thing as
'considering each option' - there are no options, only things
happening because they must happen. They have no choice, there is
no choice, the lack of choice is the defining feature of a
deterministic world. You are saying that this is the world that
we live in and that we are the stone, except that for some reason
we have this delusional interactive narrative in which we could
not stand being still any longer and decided to push ourselves
down the hill.

Hi,

From my studies of the math of classical determinism, the
subsequent 'behavior' of the stone follows strictly in a
one-to-one and onto fashion from the prior state of the stone.
There are no 'multiple choices' of the stone, thus no room at all
for "choice". Thankfully we know that classical determinism is a
delusion that some, for their own reasons, cling to.


Yes, we know that classical determinism is wrong, but it is not 
logically inconsistent with consciousness.


I must disagree. It is baked into the topology of classical 
mechanics that a system cannot semantically act upon itself. There is no 
way to define intentionality in classical physics. This is what Bruno 
proves with his argument.



It is also not logically inconsistent with choice and free 
will,  unless you define these terms as inconsistent with determinism, 
in which case in a deterministic world we would have to create new 
words meaning pseudo-choice and pseudo-free will to avoid 
misunderstanding, and then go about our business as usual with this 
minor change to the language.


So you say...




--
Onward!

Stephen





Re: Dartmouth neuroscientist finds free will has neural basis

2013-03-07 Thread Stathis Papaioannou
On Friday, March 8, 2013, Stephen P. King wrote:

>  On 3/7/2013 8:44 AM, Craig Weinberg wrote:
>
> On Thursday, March 7, 2013 12:59:50 AM UTC-5, stathisp wrote: By the
>> definition I gave above a stone does not choose to roll down the hill
>> because it does not consider each option in order to decide which one to do.
>>
>
> Why doesn't it choose when and which direction to roll? A deterministic
> universe means that there is no such thing as 'considering each option' -
> there are no options, only things happening because they must happen. They
> have no choice, there is no choice, the lack of choice is the defining
> feature of a deterministic world. You are saying that this is the world
> that we live in and that we are the stone, except that for some reason we
> have this delusional interactive narrative in which we could not stand
> being still any longer and decided to push ourselves down the hill.
>
>
> Hi,
>
> From my studies of the math of classical determinism, the subsequent
> 'behavior' of the stone follows strictly in a one-to-one and onto fashion
> from the prior state of the stone. There are no 'multiple choices' of the
> stone, thus no room at all for "choice". Thankfully we know that classical
> determinism is a delusion that some, for their own reasons, cling to.
>

Yes, we know that classical determinism is wrong, but it is not logically
inconsistent with consciousness. It is also not logically inconsistent with
choice and free will,  unless you define these terms as inconsistent with
determinism, in which case in a deterministic world we would have to create
new words meaning pseudo-choice and pseudo-free will to avoid
misunderstanding, and then go about our business as usual with this minor
change to the language.

--
Stathis Papaioannou







Re: Dartmouth neuroscientist finds free will has neural basis

2013-03-07 Thread Stephen P. King

On 3/7/2013 8:44 AM, Craig Weinberg wrote:


On Thursday, March 7, 2013 12:59:50 AM UTC-5, stathisp wrote: By
the definition I gave above a stone does not choose to roll down
the hill because it does not consider each option in order to
decide which one to do.


Why doesn't it choose when and which direction to roll? A 
deterministic universe means that there is no such thing as 
'considering each option' - there are no options, only things 
happening because they must happen. They have no choice, there is no 
choice, the lack of choice is the defining feature of a deterministic 
world. You are saying that this is the world that we live in and that 
we are the stone, except that for some reason we have this delusional 
interactive narrative in which we could not stand being still any 
longer and decided to push ourselves down the hill.

Hi,

From my studies of the math of classical determinism, the 
subsequent 'behavior' of the stone follows strictly in a one-to-one and 
onto fashion from the prior state of the stone. There are no 'multiple 
choices' of the stone, thus no room at all for "choice". Thankfully we 
know that classical determinism is a delusion that some, for their own 
reasons, cling to.


--
Onward!

Stephen





Re: MGA is back (on the FOAR list)

2013-03-07 Thread Bruno Marchal


On 06 Mar 2013, at 18:49, Craig Weinberg wrote:

I understand where you are coming from in MGA now, Bruno, and again  
there is nothing wrong with your reasoning, but the issue is that your initial  
assumptions are not the universe that we live in.


?

(The assumption of the whole reasoning is just comp. Then in MGA I  
make some local assumptions to make a point, but they are discharged  
before getting to the conclusion.)






Let me give you a thought experiment that might give you a sense of  
where I see the assumptions jump to the wrong conclusion.


Suppose Alice didn't have an energetic particle to save her logic  
misfire and she ended up confusing her own name with Alison. Nobody  
tried to correct her use of her own name, so people assumed that she  
had begun using a new name, or that one of the two names was just a  
nickname. As she went about her business over the next several  
years, opening new accounts and receiving mail as Alison, she had  
essentially lost her old name, except for the very closest family  
members and government records which retained unambiguous reference  
to Alice.


Now suppose a more catastrophic event happens with many of her logic  
gates. Every name that she has ever heard is now switched in her  
memory. Instead of Romeo and Juliet, her star-crossed lovers are  
Pizza-Foot and Sycorax. Instead of Charlie Brown and Snoopy, she  
remembers those characters as Baron Von Slouchcousin and Pimento.  
The stories are otherwise intact of course. The function of the  
characters is identical.


As the brain parts keep failing and then coming back online, all of  
the content of history and fiction have become hopelessly scrambled,  
but the stories and information are undamaged. Star Wars takes place  
in Egypt. Queen Elizabeth was named Treewort and lives in the trunk  
of a 2003 Mazda but otherwise the succession of the British throne  
is clearly understood.


As luck would have it, the problem with her name interpreter was  
mirrored by a problem in her output modules, which translates all of  
her twisted names into the expected ones, effectively undoing her  
malfunction as far as anyone else is concerned. There is no problem  
for her socially, and no problem for her psychologically, as she  
does not suspect any malfunction, and neither does anyone else.


Who is the British monarch? Elizabeth or Treewort? Is there a  
difference between the two?


It comes down to exploring the reality of proprietary vs generic, or  
qualitative vs quantitative identity. In math - all identities are  
generic and interchangeable. A name is not a name of what is being  
named (which is a real and unique natural presence), but a label  
which refers to another label or variable (which is not a presence  
but a figure persisting by axiom-fiat). Using this quantitative  
framework, all entities are assumed to be built up from these  
starchy mechanical axioms, so that a name is simply a character  
string used for naming - it has no proprietary content. When a  
computer does do proprietary content, it doesn't look like Harry or  
Jane, it looks like ct168612 - now that means something to a  
computer. If it can be assumed that the label matches some serial  
number or address, then it is a good name. In no case is the  
computer able to value a name in any other way. It has no way of  
knowing if Buckingham Palace is a better place to live than in the  
trunk of an old car; as long as the digits fulfill the same  
functional role, they are the same.
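A minimal sketch of why the mirrored malfunction is externally invisible (the names come from the thought experiment above; the code is only an illustration): a scrambling applied at the input stage, undone by its inverse at the output stage, composes to the identity.

    # Input module: scrambles public names into Alice's internal vocabulary.
    scramble = {"Elizabeth": "Treewort", "Romeo": "Pizza-Foot", "Juliet": "Sycorax"}
    # Output module: the inverse map, undoing the scrambling on the way out.
    unscramble = {internal: public for public, internal in scramble.items()}

    def hears(name):
        return scramble.get(name, name)       # unknown names pass through untouched

    def says(internal_name):
        return unscramble.get(internal_name, internal_name)

    for name in ["Elizabeth", "Romeo", "Juliet", "Snoopy"]:
        print(name, "->", says(hears(name)))  # every name comes back unchanged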


In reality however, maybe nothing is 'the same'? Maybe there aren't  
any shortcuts or simulations which can make something which is not  
us into us?


Comp does not exclude such a possibility. There are (in the  
arithmetical truth) infinitely many processes which can be simulated  
only by themselves, having no shortcut, and that might indeed play  
some role in cosmology, and even consciousness or in the stability of  
conscious experience. Open problems.


Bruno





Craig



On Wednesday, March 6, 2013 11:37:28 AM UTC-5, Bruno Marchal wrote:
Hi,


I have promised to let you know when I explain the MGA, actually a new
version, in the FOAR list of Russell Standish.

Well we have begun two days ago. Sorry for this delay.

Note that MGA has already been explained in this list.
See for example:
http://old.nabble.com/MGA-1-td20566948.html

Feel free to participate on the FOAR list, if you have still problem
with it.

You might want to, as it is a subtle point, and I am still progressing  
on it, notably through such discussions.

Best,

Bruno

http://iridia.ulb.ac.be/~marchal/





Re: Messages Aren't Made of Information

2013-03-07 Thread Bruno Marchal


On 06 Mar 2013, at 00:03, Stephen P. King wrote:


On 3/5/2013 3:03 PM, William R. Buckley wrote:

Craig,



You build an automaton, place it and turn it on, and from that  
point in time forward


the automaton reacts to acceptable information all on its own.



You contradict yourself – "I don't think it has to be human – machines only help  
non-machines to interpret" – and if the human point is important, then surely  
you will accept your definition to be that it must be biological life, for a machine  
cannot be alive.



A machine is either a machine or it is not a machine – a machine cannot be both  
a machine and not a machine at the same time.



wrb



Do we have an exact definition of what is a "machine"?


This exists only for digital machines, today, assuming Church's thesis.

You can define a digital machine or a digital process by anything  
Turing emulable, or emulable by a diophantine equation, or a  
combinator, etc.
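A minimal sketch of the "emulable by a combinator" sense, assuming nothing beyond the two combinators S and K with leftmost rewriting (an illustration, not Bruno's formalism): S and K alone are Turing complete, so this tiny rewriting system is already one way to pin down "digital process".

    # Terms are 'S', 'K', or a pair (f, x) meaning "apply f to x".
    def step(t):
        # Perform one leftmost reduction step; return (new_term, changed).
        if isinstance(t, str):
            return t, False
        f, x = t
        if isinstance(f, tuple) and f[0] == 'K':          # K a b  ->  a
            return f[1], True
        if isinstance(f, tuple) and isinstance(f[0], tuple) and f[0][0] == 'S':
            a, b, c = f[0][1], f[1], x                    # S a b c  ->  a c (b c)
            return ((a, c), (b, c)), True
        nf, changed = step(f)
        if changed:
            return (nf, x), True
        nx, changed = step(x)
        return (f, nx), changed

    def normalize(t, limit=1000):
        # Reduce until no rule applies (termination is not guaranteed in general).
        for _ in range(limit):
            t, changed = step(t)
            if not changed:
                break
        return t

    # S K K behaves as the identity combinator: ((S K K) x) reduces to x.
    I = (('S', 'K'), 'K')
    print(normalize((I, 'x')))  # prints: x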


Bruno



--
Onward!

Stephen





http://iridia.ulb.ac.be/~marchal/







Re: Ectopic Eyes Experiment: Supports my view of sense, Invalidates mechanistic assumptions about eyes.

2013-03-07 Thread Telmo Menezes
On Thu, Mar 7, 2013 at 12:59 PM, Bruno Marchal  wrote:
>
> On 05 Mar 2013, at 19:39, Craig Weinberg wrote:
>
>
>
> On Tuesday, March 5, 2013 12:45:11 PM UTC-5, Bruno Marchal wrote:
>>
>>
>> On 05 Mar 2013, at 08:43, Jesse Mazer wrote:
>>
>>
>>
>> On Mon, Mar 4, 2013 at 11:27 PM, Pierz  wrote:
>>>
>>> Really Craig? It invalidates mechanistic assumptions about eyes? I'm sure
>>> the researchers would be astonished at such a wild conclusion. All the
>>> research shows is brain plasticity in interpreting signals from unusual
>>> neural pathways. How does that invalidate mechanism?
>>
>>
>> Yes, I was confused at first by the statement in the first paragraph that
>> the eyes "can confer vision without a direct neural connection to the brain"
>> (maybe Craig was confused by this too?), but it seems that by "direct neural
>> connection" they just mean an optic nerve wired directly to the brain,
>> bypassing the spinal cord like the optic nerve normally does, since later in
>> the article they do mention the eyes were connected (indirectly) to the
>> brain via the spinal cord: "No one would have guessed that eyes on the flank
>> of a tadpole could see, especially when wired only to the spinal cord and
>> not the brain."
>>
>>
>> Even that would not be conceptually astonishing. My computer is not wired
>> to anything, and I can still send you a mail. It would have meant only that
>> optic cells have some wifi systems. Cute, without doubt, but still not a
>> threat for computationalism. Improbable also, but who knows.
>>
>> Bruno
>>
>>
>
> If they were wireless from the start though, why use an optic nerve?
>
>
> OK. That shows only that biological evolution did not invest in radio
> waves. Why? Interesting question. Probably not profitable enough locally,
> as opposed to direct exchange of biochemical material.

Interesting question indeed. Radio waves seem more energy efficient
than sound, which requires a lot of muscle activity to produce. It
could be a problem of irreductible complexity. You need to evolve both
a radio transmitter and a receiver, each one useless without the
other. The same is not true for sound. Hearing is an evolutionary
advantage on its own, and the ability to vocalise comes almost for
free from the breathing and digestive systems, that we need anyway.

Telmo.

>
> Bruno
>
>
>
>
>
> Craig
>
>>
>>
>>
>>
>>
>> Jesse
>>
>>
>>>
>>>
>>>
>>>
>>
>>
>>
>>
>>
>>
>> http://iridia.ulb.ac.be/~marchal/
>>
>>
>>
>
>
>
>
>
> http://iridia.ulb.ac.be/~marchal/
>
>
>
>
>





Re: Dartmouth neuroscientist finds free will has neural basis

2013-03-07 Thread Bruno Marchal


On 05 Mar 2013, at 22:28, meekerdb wrote:


On 3/5/2013 6:53 AM, Bruno Marchal wrote:


Why would anyone want to make decisions that were not determined  
by their learning and memories and values?


Indeed. But even more when they feel such values as being universal  
or close to universal.





But based on your experience with salvia, Bruno, you seem to think  
there is a "you" which is independent of those things.


Not just salvia. The 8 hypostases already describe a "you" (with 8  
views), which are more (semantically) and less (bodily or  
syntactically) than memory. The values are not necessarily part of  
the memory (as opposed to their instantiations).
Salvia can help to illustrate this in a vivid way, by a  
hallucination of remembering having been that kind of thing for  
all time.


It is comparable to the realization that you don't die when you  
stop doing something which was part of what you take as an  
important personality trait, like when people succeed in stopping  
tobacco. They can recall how they felt and were before taking up  
tobacco, for example.


But you do die a little when you stop doing something significant to  
you.  I raced motorcycles for many years but now at age 73 I have  
been retired for a couple of years.  My knees don't work so well.  
I'm not competitive at tennis either.  And I do feel diminished  
having stopped doing these things.


This is not dying in the usual sense. It might be like feeling  
diminished or sick, but that is not dead. Comatose patients are not  
dead, even if they are quite handicapped.


I have heard about a kid who lost both legs and both arms and became  
blind after the explosion of the bomb he was building following  
instructions that he found on the net. Maybe in this case we can  
think that being dead would have been better than surviving, but he is  
still considered alive by all concerned people.









Isn't it more likely that the drug simply makes your narrative  
thoughts less able than usual to trace their sources? So it is  
like the Poincaré effect writ large?


I am not sure. Perhaps. If you make that idea more precise, I might  
concur. Is it consistent with what I just said here?


I think it is.  Just as Poincaré had a proof spring into his mind,  
we commonly have value judgements spring into mind.  In some cases we  
can trace them back to an experience or what our parents told us;  
but generally we can't.  I can see that drugs might inhibit that  
tracing back and make it seem that we are who we are independent of  
any history.


OK.

Bruno



http://iridia.ulb.ac.be/~marchal/







Re: Ectopic Eyes Experiment: Supports my view of sense, Invalidates mechanistic assumptions about eyes.

2013-03-07 Thread Bruno Marchal


On 05 Mar 2013, at 19:39, Craig Weinberg wrote:




On Tuesday, March 5, 2013 12:45:11 PM UTC-5, Bruno Marchal wrote:

On 05 Mar 2013, at 08:43, Jesse Mazer wrote:




On Mon, Mar 4, 2013 at 11:27 PM, Pierz  wrote:
Really Craig? It invalidates mechanistic assumptions about eyes?  
I'm sure the researchers would be astonished at such a wild  
conclusion. All the research shows is brain plasticity in  
interpreting signals from unusual neural pathways. How does that  
invalidate mechanism?


Yes, I was confused at first by the statement in the first  
paragraph that the eyes "can confer vision without a direct neural  
connection to the brain" (maybe Craig was confused by this too?),  
but it seems that by "direct neural connection" they just mean an  
optic nerve wired directly to the brain, bypassing the spinal cord  
like the optic nerve normally does, since later in the article they  
do mention the eyes were connected (indirectly) to the brain via  
the spinal cord: "No one would have guessed that eyes on the flank  
of a tadpole could see, especially when wired only to the spinal  
cord and not the brain."


Even that would not be conceptually astonishing. My computer is not  
wired to anything, and I can still send you a mail. It would have  
meant only that optic cells have some wifi systems. Cute, without  
doubt, but still not a threat for computationalism. Improbable also,  
but who knows.


Bruno



If they were wireless from the start though, why use an optic nerve?


OK. That shows only that biological evolution did not invest in radio  
waves. Why? Interesting question. Probably not profitable enough  
locally, as opposed to direct exchange of biochemical material.


Bruno






Craig







Jesse











http://iridia.ulb.ac.be/~marchal/








http://iridia.ulb.ac.be/~marchal/







Re: Messages Aren't Made of Information

2013-03-07 Thread Bruno Marchal


On 05 Mar 2013, at 19:14, Craig Weinberg wrote:




On Tuesday, March 5, 2013 12:03:28 PM UTC-5, William R. Buckley wrote:
Craig:


Your statement of need for a human to observe the

pattern is the smoking gun to indicate a misunderstanding

of semiotic theory on your part.


I don't think that it has to be humans doing the observing at all.


Specifically, you don’t need a human; a machine will do.


A machine can only help another non-machine interpret something. I  
don't think that they can interpret anything for 'themselves'.


You should study machine self-reference. It is easy to program a  
machine interpreting data, by itself and for herself. This is not like  
consciousness: this is testable and already done.

You confuse the notion of machine before and after Post, Church and Turing.



Bruno






Not all machines are man-made.


True, but what we see as natural machines may not be just machines.  
Man-made machines may be just machines.


Craig


wrb


From: everyth...@googlegroups.com  
[mailto:everyth...@googlegroups.com] On Behalf Of Craig Weinberg

Sent: Tuesday, March 05, 2013 5:24 AM
To: everyth...@googlegroups.com
Subject: Re: Messages Aren't Made of Information




On Tuesday, March 5, 2013 2:06:20 AM UTC-5, William R. Buckley wrote:

There is information (I take information to be a
manifestation of entropy) and it is always represented
in the form of a pattern (a distribution) of the units
of mass/energy of which the Universe is composed.


I can agree that information could be considered a manifestation of  
entropy, to the extent that entropy is necessary to provide a  
contrast space for a distribution. To string an ellipsis together,  
you need one dot, repetition, space, and a quality of measurement  
which yokes together the three dots aesthetically. Beyond that, you  
also need a human observer with human visual sense to turn the  
distribution into a 'pattern'. Without that, of course, even  
distribution cannot cohere into "a" distribution, as there is no  
scale, range, quality, etc to anchor the expectation. If we are a  
microbe, we may not ever find our way from one dot to the next.



I
think that semiotic signs are simply specific bits
of information; I will use the terms synonymously.

Information has meaning only within context.  For many
people, context is taken to mean one piece of information
as compared to another piece of information.  I do not
take this meaning of context when I discuss semiotics.
Instead, I take semiotic context to be the acceptor of
the information.  Hence, all meaning resides a priori
within information acceptors.


Agree. Well, transmitters form the signs from their own sense of  
meaning as well. That's how we are having this discussion.




What you know you have always known; the sign merely
serves to bring that knowledge to your conscious mind.


Right. I mean it might be a bit more complicated as far as novelty  
goes. I don't know if the state of unconscious information is really  
what I "have always known" but that this particular constellation of  
meanings reflects the Totality in a way that it is only trivially  
novel. Like if you hit a jackpot on a slot machine - that may not  
have happened before, but the slot machine is designed to payout  
whenever it does. The jackpot already exists as a potential and  
sooner or later it will be realized.




That you may have intention and so comport your delivery
of information to another acceptor has no bearing upon
the subsequent acceptance or rejection of that information
by the target acceptor.  Acceptance or rejection of
information is determined solely by the accepting or
rejecting context (acceptor).


Agree. But the converse also holds - the acceptor can only accept information  
which has been included for delivery by intention (or accidentally, I  
suppose).




Your mere presence sends information regardless of some
conscious intent.  Indeed, your absence does equally
deliver information, for the target acceptor will see
a definite difference in available information sources
whether you are present or not.

Consider a line worker in a bean processing plant where
the task is to cull *bad* dried beans from *good* dried
beans as they go by on a conveyor belt; the *bad* beans
are removed by hand, so the line worker is constantly
looking for *bad* beans while constantly being aware
of the fact that not many of the beans are *bad*.  The
consciousness is aware of both that which is present
and that which is not present.


Yes, the expectation is key. I call that the perceptual inertial  
frame. There is an accumulated inertia of expectations which  
filters, amplifies, distorts, etc.



Further, what any information that you emit means to
you is irrelevant to the meaning that another may take
for that information.


Then how does art work? Music? Certainly it is pretty clear that  
what emitting Iron Man meant to Black Sabbath is different from what  
emitting the Four Seasons meant to Vivaldi. I 

Re: Dartmouth neuroscientist finds free will has neural basis

2013-03-07 Thread Bruno Marchal


On 05 Mar 2013, at 18:21, Craig Weinberg wrote:




On Monday, March 4, 2013 7:23:32 AM UTC-5, Bruno Marchal wrote:

On 03 Mar 2013, at 20:35, meekerdb wrote:

> On 3/2/2013 11:56 PM, Stathis Papaioannou wrote:
>>> So you admit that what you say contradicts the fact that you are
>>> intentionally saying it?
>> "Intentional", as far as I can understand its use in philosophy, is
>> more or less equivalent to "mental" or "conscious". You seem to take
>> it as an a priori fact that something that is either deterministic or
>> random cannot have intentionality. This seems to me obviously wrong.

>
> Me too.  Intentionality just consists in having a hierarchy of goals
> which drive actions.  To say something is done intentionally just
> means it is done pursuant to some goal.  When the Mars rover steers
> around rock it does so intentionally in order to reach some place
> beyond which is a higher level goal.

I agree too, but of course some non-computationalist will argue that
"intention" needs consciousness (which i think is wrong),

Individually, one might carry out an intention without being  
personally conscious of it, but ontologically, a world without  
consciousness can have no intention - why would it? What would it  
mean for something to be intentional or unintentional in a universe  
which contains no possibility of conscious participation?



It can be a matter of definition. But when you instruct a machine with  
some high level goal (like surviving), she can build derived goals by  
herself and develop intentions, yet not necessarily in a conscious  
way.







and that
a goal-driven algorithm can be non-conscious (which I think is  
possible).


An algorithm can be non conscious (it always is IMO), but an  
algorithm has no intention to pursue a goal.
What drives an algorithm is not a goal but the mechanics of whatever  
it is executed on.



As I said, some actual algorithms work by building their own goals. A  
goal can be a subgoal in a tree of goals.  The basic goal can be very  
general, like fighting a fire. The algorithm will build  
subgoals, like finding water, etc.
Besides, you might say that what drives a human is not his goal, but  
the mechanics of life. You don't give any criteria to distinguish a  
human from a complex machine.
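A minimal sketch of such a goal tree (the goal names are invented for illustration; this is not any particular planner): a general goal is expanded into derived subgoals until primitive actions remain.

    # Hand-written decomposition table: each goal lists the subgoals derived
    # for it; goals absent from the table count as primitive actions.
    SUBGOALS = {
        "fight_fire": ["find_water", "carry_water_to_fire", "pour_water"],
        "find_water": ["locate_source", "go_to_source"],
    }

    def expand(goal, depth=0):
        # Print the derived goal tree, indenting subgoals under their parent goal.
        print("  " * depth + goal)
        for sub in SUBGOALS.get(goal, []):
            expand(sub, depth + 1)

    expand("fight_fire")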





Whether it is the force of water dripping on a scale, or current  
winding through a circuit, pendulum swinging, etc - that sensory- 
motor expectation is the only intention. Everything that we place in  
the line of that intention - water wheels, dominoes, etc, is  
unintentional to the process completing.


Which explains why you need to introduce consciousness in matter, but  
once you do that, why does it not operate in silicon, why only in  
carbon?





I can make a Rube Goldberg machine which drops a mallet on a bunny's  
head at the end, but that doesn't mean that the machine  
intentionally hurts animals. This is what it seems like you don't  
see or are denying. Just because an algorithm is designed  
purposefully doesn't mean that purpose is carried into the algorithm.


Of course. But this does not show that purpose is not itself  
mechanical. You systematically beg the question.








I am a bit astonished that some people still believe that
indeterminacy can help for free will. On the contrary, deterministic
free will makes sense, because free will comes from a lack of self-
determinacy,

Why do you conceive of free will as emerging from an absence?


Without some absence of satisfaction, you get no goal at all, and  
without a goal, no purpose, and without purpose, no free will.

Adding randomness can give a look of free will, but it is not.

Bruno



That's like saying that white comes from not-black. Why would  
something develop free will just because it has a lack of self- 
determinacy? Jellyfish drift.


Craig

implying hesitation in front of different paths, and self-
indeterminacy follows logically from determinism and self-reference.

First person indeterminacy can be used easily to convince oneself that
indeterminacy cannot help for free will. Iterating a self-duplication
can't provide free-will.

Bruno


http://iridia.ulb.ac.be/~marchal/








http://iridia.ulb.ac.be/~marchal/


