In what acceptable scenario is the behavior not describable in principle? The only one that comes to mind is a non-scientific, magical-thinking scenario. I doubt that Tesla navigation systems are written in a purely functional language, but surely there is more to this condition than whether I have access to that source code and can send you the million lines in purely functional form? If something is inscrutable, might it thereby exhibit free will?
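The "provide me the function" criterion can be made concrete: a system whose behavior is a pure function of its inputs is, in principle, fully describable, however large the function is. A minimal sketch, with an entirely hypothetical toy decision function (nothing here comes from any real navigation code):

```python
# Hypothetical purely functional model of a navigation decision:
# the same (position, goal, obstacle) inputs always yield the same
# action -- no hidden state, no side effects.

def nav_decision(position: float, goal: float, obstacle: bool) -> str:
    """Pure function: the system's entire behavior IS this mapping."""
    if obstacle:
        return "stop"
    if position < goal:
        return "forward"
    if position > goal:
        return "reverse"
    return "hold"

# Determinism is what makes the behavior describable in principle:
# identical inputs always produce identical outputs.
assert nav_decision(0.0, 10.0, False) == nav_decision(0.0, 10.0, False)
```

The open question in the thread is whether scale changes anything: a million lines of this kind of code is still "the function", just one nobody can read.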
-----Original Message-----
From: Friam <[email protected]> On Behalf Of jon zingale
Sent: Friday, April 2, 2021 12:26 PM
To: [email protected]
Subject: Re: [FRIAM] Free Will in the Atlantic

I would say no if you can provide me the function.

--
Sent from: http://friam.471366.n2.nabble.com/

- .... . -..-. . -. -.. -..-. .. ... -..-. .... . .-. .
FRIAM Applied Complexity Group listserv
Zoom Fridays 9:30a-12p Mtn GMT-6 bit.ly/virtualfriam
un/subscribe http://redfish.com/mailman/listinfo/friam_redfish.com
FRIAM-COMIC http://friam-comic.blogspot.com/
archives: http://friam.471366.n2.nabble.com/
