Marcus and glen (and others on occasion) have posted frequently on the 
algorithmic "equivalent" of [some feature] of consciousness, human emotion, 
etc.

I am always confronted with the question of "how equivalent?" I am almost 
certain that they are not saying anything close to absolute equivalence - i.e., 
that the brain/mind is executing the same algorithm albeit in, perhaps, a 
different programming language. But, are their assertions meant to be 
"analogous to," "a metaphor for," or some other semi/pseudo equivalence? 

Perhaps all that is being said is that we have two black boxes into which we 
put the same inputs and arrive at the same outputs. Voila! We expose the 
contents of one black box, an algorithm executing on silicon. From that we 
conclude it does not matter what is happening inside the other black box: 
whatever it is, our now-white box is an 'equivalent'.
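
To make that reading concrete, here is a deliberately trivial sketch (the 
function names and inputs are mine, not anything from the discussion) of what 
black-box equivalence amounts to: two boxes, same inputs, same outputs, 
different internals.

```python
# Two "boxes" that agree on every input we try while doing different
# things internally. Illustrative only.

def box_one(n):
    # Sum 1..n by explicit iteration.
    total = 0
    for i in range(1, n + 1):
        total += i
    return total

def box_two(n):
    # Sum 1..n by a closed-form formula.
    return n * (n + 1) // 2

# From the outside the two boxes are indistinguishable on these inputs...
assert all(box_one(n) == box_two(n) for n in range(100))
# ...which is all the black-box notion of 'equivalent' asks for. It says
# nothing about whether the same process is happening inside each box.
```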

Put another way: suppose I have two objects, A and B, each with an (ir)regular 
edge. In this case the irregular edge of A is an inverse match to that of B: 
when put together there are no gaps between the two edges. They "fit."

Assume that A and B have some means to detect whether they "fit" together. I 
can think of algorithms that could determine fit, ranging from a simplistic 
iteration across all points to see whether there is a gap between each point 
and its neighbor, to some kind of collision detection.
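
A minimal sketch of that simplistic iteration, under assumptions of my own: 
both edges are sampled at the same points, each edge is given as a height 
profile, and "fit" means the gap between the mating profiles is everywhere 
within a tolerance.

```python
def edges_fit(edge_a, edge_b, tolerance=1e-6):
    """Return True if two sampled edge profiles mate with no gaps.

    edge_a, edge_b: sequences of heights sampled at the same points.
    A perfect inverse match means edge_a[i] + edge_b[i] is the same
    constant everywhere; any deviation is a gap (or an overlap).
    """
    if len(edge_a) != len(edge_b) or not edge_a:
        return False
    combined = [a + b for a, b in zip(edge_a, edge_b)]
    reference = combined[0]
    return all(abs(c - reference) <= tolerance for c in combined)

# B's edge is the exact inverse of A's, so they fit.
a = [0.0, 1.0, 0.5, 2.0, 1.5]
b = [3.0 - h for h in a]           # complementary profile
print(edges_fit(a, b))             # True
print(edges_fit(a, [0.0] * 5))     # False: gaps remain
```

Collision detection would be the heavier-weight alternative; the point is only 
that "fit" can be reduced to an algorithmic check of this kind.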

Is it the case that whatever means A and B use to detect fit, it is 
_**merely**_ the equivalent of such an algorithm?

The roots of this question go back to my first two published papers, in _AI 
Magazine_ (then the 'journal of record' for AI research): one critical of the 
computational metaphor, the second offering a set of alternative metaphors of 
mind. Here is an excerpt relevant to the above example of fit:

*Tactilizing Processor*

*Conrad draws his inspiration from the ability of an enzyme to combine with a 
substrate on the basis of the physical congruency of their respective shapes 
(topography). This is a generalized version of the lock-and-key mechanism, such 
as the hormone-receptor matching discussed by Bergland. When the topographic 
shape of an enzyme (hormone) matches that of a substrate (receptor), a simple 
recognize-by-touch mechanism (like two pieces of a puzzle fitting together) 
allows a simple decision, binary state change, or process to take place, hence 
the label "tactilizing processor."*

Hormones and enzymes probably (possibly?) lack the ability to compute (execute 
algorithms), so, at most, the black-box sense of equivalence might apply here.

[BTW, tactilizing processors were built; they were extremely slow (limited by 
the speed of chemical reactions) but had some advantages derived from 
parallelism. Similar 'shape matching' computation was explored in DNA computing 
as well.]
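
For what it's worth, here is a sketch of how that 'shape matching' reads in the 
DNA-computing setting: two strands "fit" when their bases are Watson-Crick 
complements (A-T, G-C). The function and example strands are illustrative, not 
taken from any particular system.

```python
# Watson-Crick complementarity as a 'fit' test between two strands.
COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def strands_bind(strand, probe):
    """Return True if probe is the reverse complement of strand,
    i.e. the two strands would hybridize with no mismatches."""
    if len(strand) != len(probe):
        return False
    return all(COMPLEMENT[base] == p
               for base, p in zip(strand, reversed(probe)))

print(strands_bind("ATGC", "GCAT"))  # True: GCAT is the reverse complement
print(strands_bind("ATGC", "ATGC"))  # False: mismatches, no binding
```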

My interest in the issue is the (naive) question: is our understanding of 
mind/consciousness fatally impeded by putting all our research eggs into the 
simplistic 'algorithm box'?

It seems to me that we have the CS/AI/ML equivalent of the quantum physics 
world where everyone is told to "shut up and compute" instead of actually 
trying to understand the domain and the theory.

davew
