Hm. First, I'd propose the homunculus is tiny in scope and impact with respect to every
other process. I'd even argue it's so tiny it doesn't (can't) transform states.
Maybe it doesn't have memory at all. It might simply be a random bit flip. And the only
time it would matter at all is when the rest of the system, in which it's mostly enslaved,
sits on some fragile cusp where a single flipped bit tips the outcome. Maybe whatever
free will we have is vanishingly small. E.g. out of 1 million people, maybe only 1 of
them ever did anything of their own free will ... and it was only that one decision,
when they were 2 years old.
Everything else is determined. Second, despite being determined, the process is *lossy*,
irreversible. And when we use the phrase "free will" in our everyday
conversation, we're really talking about that loss: the information lost when we truncate
others or others truncate us. The existence of the lossy, truncating collective doesn't
preclude the existence of that tiny-impact randomness.
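The cusp idea above can be sketched in a few lines. This is only an illustrative toy, not a model of anything biological: `decide` stands in for a deterministic downstream process, `with_bit_flip` for the homunculus's vanishingly small random perturbation, and the `0.5` threshold for the fragile cusp. All names and numbers here are my own invention for the sketch.

```python
import random

def decide(x, threshold=0.5):
    """Deterministic decision: everything downstream follows from this one bit."""
    return x >= threshold

def with_bit_flip(x, eps=1e-9):
    """The 'homunculus': a single vanishingly small random nudge."""
    return x + random.choice([-eps, eps])

random.seed(0)
trials = 10_000
# Far from the cusp, the flip never changes the outcome ...
far = sum(decide(with_bit_flip(0.9)) != decide(0.9) for _ in range(trials))
# ... sitting exactly on the cusp, it decides the outcome about half the time.
near = sum(decide(with_bit_flip(0.5)) != decide(0.5) for _ in range(trials))
print(far, near)
```

Away from the threshold the perturbation is invisible (`far` comes out 0); on the threshold it is decisive roughly half the time, which is the sense in which a tiny randomness only "matters" on a fragile cusp.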

On 6/15/20 9:21 AM, Marcus Daniels wrote:
How does the free will homunculus transform states? By state I mean all of the
function definitions, memory, hyperparameters, etc.
In a biological system how do the biochemistry and electrodynamics evolve?
Does the homunculus get to choose which physics it likes? How does it do that?
It doesn't matter if the system or homunculus has to face uncertainty. That
just means the homunculus has to manage risk.
For people to say they "believe in free will" is to say they couldn't, in
principle, simulate human social systems with fidelity.

- .... . -..-. . -. -.. -..-. .. ... -..-. .... . .-. .
FRIAM Applied Complexity Group listserv
Zoom Fridays 9:30a-12p Mtn GMT-6  bit.ly/virtualfriam
un/subscribe http://redfish.com/mailman/listinfo/friam_redfish.com
archives: http://friam.471366.n2.nabble.com/
FRIAM-COMIC http://friam-comic.blogspot.com/
