Yes. From the user's perspective they ran identically. Those workstations
didn't even have the same instruction sets.
---
Frank C. Wimberly
140 Calle Ojo Feliz,
Santa Fe, NM 87505
505 670-9918
Santa Fe, NM
On Fri, Oct 21, 2022, 8:24 AM glen <[email protected]> wrote:
By "ran identically", you actually mean "produced identical outputs". They
didn't run identically. Simple ways to see this are system and process monitors, top, strace, etc.
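A minimal sketch in Python of the distinction glen is drawing; the command, the helper, and the use of wall-clock time as a crude stand-in for a full strace are illustrative assumptions, not anything from the thread:

    import subprocess, time

    def run_once(cmd):
        # Run a shell pipeline; record wall-clock time as one crude trace
        # of *how* it ran, as opposed to *what* it produced.
        t0 = time.perf_counter()
        out = subprocess.run(cmd, shell=True, capture_output=True,
                             text=True).stdout
        return out, time.perf_counter() - t0

    out1, t1 = run_once("echo hello | tr a-z A-Z")
    out2, t2 = run_once("echo hello | tr a-z A-Z")
    print(out1 == out2)  # True: identical outputs
    print(t1 == t2)      # almost certainly False: the runs themselves differed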
On 10/20/22 17:19, Frank Wimberly wrote:
> Back in the 80s I wrote many Unix shell scripts. For my purposes they ran identically on various workstations, whether Sun, SGI, or, eventually, VAX (running Unix). The software existed in my mind/brain, in files in the various filesystems, or on paper listings. What's wrong with my thinking?
>
> Frank
>
> ---
> Frank C. Wimberly
> 140 Calle Ojo Feliz,
> Santa Fe, NM 87505
>
> 505 670-9918
> Santa Fe, NM
>
> On Thu, Oct 20, 2022, 3:52 PM glen <[email protected]> wrote:
>
> I can't speak for anyone else. I'm a simulationist. Everything I do is in terms of analogy [⛧]. But there is no such thing as a fully transparent or opaque box. And there is no such thing as "software". All processes are executed by some material mechanism. So if by "computational metaphor", you mean the tossing out of the differences between any 2 machines executing the same code, then I'm right there with you in rejecting it. No 2 machines can execute the same (identical) code. But if you define an analogy well, then you can replace one machine with another machine, up to some similarity criterion. Equivalence is defined by that similarity criterion. By your use of the qualifier "merely" in "merely the equivalent", I infer you think there's something *other* than equivalence, something other than simulation. I reject that. It's all equivalence, just some tighter and some looser.
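A minimal sketch of glen's point that equivalence is defined by a similarity criterion, some criteria tighter and some looser; the two criteria and the helper are illustrative assumptions:

    from typing import Callable, TypeVar

    T = TypeVar("T")

    def equivalent(x: T, y: T, criterion: Callable[[T, T], bool]) -> bool:
        # Two things are "the same" only up to the chosen criterion.
        return criterion(x, y)

    exact = lambda a, b: a == b              # a tighter criterion
    close = lambda a, b: abs(a - b) < 1e-3   # a looser criterion

    print(equivalent(0.1 + 0.2, 0.3, exact))  # False: bitwise, they differ
    print(equivalent(0.1 + 0.2, 0.3, close))  # True: close enough to swap one for the other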
>
> [⛧] Everyone's welcome to replace "analogy" with "metaphor" if they so choose. But, to me, "metaphor" tends to wipe away or purposefully ignore the pragmatics involved in distinguishing any 2 members of an equivalence class. The literary concept of "metaphor" has it right. It's a rhetorical, manipulative trick to help us ignore actual difference, whereas "analogy" helps us remember and account for differences and similarities. "Metaphor" is an evil word, a crucial tool in the toolkit for manipulators and gaslighters.
>
>
> On 10/20/22 13:27, Prof David West wrote:
> >
> > Marcus and glen (and others on occasion) have posted frequently on the algorithmic "equivalent" of [some feature] of consciousness, human emotion, etc.
> >
> > I am always confronted with the question of "how equivalent?" I am almost certain that they are not saying anything close to absolute equivalence - i.e., that the brain/mind is executing the same algorithm, albeit in, perhaps, a different programming language. But are their assertions meant to be "analogous to," "a metaphor for," or some other semi/pseudo equivalence?
> >
> > Perhaps all that is being said is that we have two black boxes into which we put the same inputs and arrive at the same outputs. Voila! We expose the contents of one black box: an algorithm executing on silicon. From that we conclude it does not matter what is happening inside the other black box—whatever it is, our now-white box is an 'equivalent'.
> >
> > Put another way: I have two objects, A and B, each with an (ir)regular edge. In this case the irregular edge of A is an inverse match to that of B—when put together there are no gaps between the two edges. They "fit."
> >
> > Assume that A and B have some means to detect if they "fit" together. I can think of algorithms that could determine fit, from a simplistic iteration across all points to see if there is a gap between each point and its neighbor, to some kind of collision detection.
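A minimal sketch in Python of the "simplistic iteration" davew describes, assuming each edge is sampled as a sequence of heights at matching positions and that "fit" means no gap exceeds a tolerance; the sampling scheme, the tolerance, and the function name are illustrative assumptions, not anything from the thread:

    from typing import Sequence

    def edges_fit(edge_a: Sequence[float], edge_b: Sequence[float],
                  tolerance: float = 1e-6) -> bool:
        # Point-by-point fit check: pressed together, the two profiles
        # should leave no gap larger than `tolerance` at any sampled point.
        if len(edge_a) != len(edge_b):
            raise ValueError("edges must be sampled at the same points")
        # B fits A when B's profile mirrors A's everywhere, so the sum
        # of the two height profiles should be (near) constant.
        sums = [a + b for a, b in zip(edge_a, edge_b)]
        baseline = sums[0]
        return all(abs(s - baseline) <= tolerance for s in sums)

    # B's edge is the exact inverse of A's, so they fit.
    a = [0.0, 1.5, 0.3, 2.0]
    b = [-h for h in a]
    print(edges_fit(a, b))  # True

Collision detection would swap the pointwise loop for geometry tests, but davew's question below applies to either.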
> >
> > Is it the case that whatever means A and B use to detect fit, it is _*/merely/*_ the equivalent of such an algorithm?
> >
> > The roots of this question go back to my first two published papers, in _AI Magazine_ (then the 'journal of record' for AI research): one critical of the computational metaphor, the second a set of alternative metaphors of mind. An excerpt relevant to the above example of fit:
> >
> > /Tactilizing Processor/
> >
> > /Conrad draws his inspiration from the ability of an enzyme to combine with a substrate on the basis of the physical congruency of their respective shapes (topography). This is a generalized version of the lock-and-key mechanism, such as the hormone-receptor matching discussed by Bergland. When the topographic shape of an enzyme (hormone) matches that of a substrate (receptor), a simple recognize-by-touch mechanism (like two pieces of a puzzle fitting together) allows a simple decision, binary state change, or process to take place, hence the label “tactilizing processor.”/
> >
> > Hormones and enzymes, probably/possibly, lack the ability to compute (execute algorithms), so, at most, the black box equivalence might be used here.
> >
> > [BTW, tactilizing processors were built. They were extremely slow (the speed of chemical reactions) but had some advantages derived from parallelism. Similar 'shape matching' computation was explored in DNA computing as well.]
> >
> > My interest in the issue is the (naive) question of whether our understanding of mind/consciousness is fatally impeded by putting all our research eggs into the simplistic 'algorithm box'.
> >
> > It seems to me that we have the CS/AI/ML equivalent of the quantum physics world, where everyone is told to "shut up and compute" instead of actually trying to understand the domain and the theory.
> >
> > davew