> The comparator model of agency
> As described above, a sense of agency is generated when voluntary actions 
> match outcomes. In computational models of motor control (FIG. 3), motor 
> commands are used to predict the sensory consequences of action [56]. This 
> prediction is thought to involve passing an efference copy of the motor 
> command to a ‘forward model’ (also known as an ‘internal predictive model’) 
> of the moving body part [57]. Sensory information about the body and the 
> environment is then compared with the sensory feedback that would be 
> predicted given the motor command. The result of this comparison is known as 
> a prediction error. For example, when the brain sends the motor command to 
> reach for the light switch, one might predict the resulting movement of the 
> arm and also that the lights will come on. If the arm does not move in the 
> appropriate way, the motor control system must update or alter the motor 
> command to achieve the goal of switching the lights on.
> Comparator models were originally developed to explain how the brain monitors 
> and corrects goal-directed movements. However, the same models have also been 
> used to explain the sense of agency. If an event is caused by one’s own 
> action (and if the internal predictive model is correct), the actual feedback 
> corresponds exactly to the prediction, and the result of the comparison is 
> zero; otherwise, the result is a non-zero prediction error.
> According to this view, people have a sense of agency over events that can be 
> predicted given their motor commands.
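The comparator loop described in that excerpt can be sketched in a few lines of Python. This is a minimal toy, not anything from the paper: the forward model here is just a made-up gain, and the tolerance threshold is an assumption, but it shows the shape of the computation — efference copy in, predicted feedback out, compare against actual feedback, attribute agency when the prediction error is (near) zero.

```python
def forward_model(motor_command: float) -> float:
    """Hypothetical forward model: predict sensory feedback from the
    efference copy of the motor command (here, an arbitrary gain of 2)."""
    return 2.0 * motor_command

def prediction_error(motor_command: float, actual_feedback: float) -> float:
    """Comparator: actual feedback minus the forward model's prediction."""
    return actual_feedback - forward_model(motor_command)

def sense_of_agency(motor_command: float, actual_feedback: float,
                    tolerance: float = 0.1) -> bool:
    """Attribute the outcome to one's own action when the prediction
    error is near zero, per the comparator account quoted above."""
    return abs(prediction_error(motor_command, actual_feedback)) < tolerance

# Self-caused: the feedback matches the forward model's prediction.
print(sense_of_agency(1.0, 2.0))   # True

# Externally caused (or a bad internal model): feedback deviates,
# yielding a non-zero prediction error and no sense of agency.
print(sense_of_agency(1.0, 3.5))   # False
```

Note that the same non-zero error serves double duty in the quoted account: it both withholds the sense of agency and drives the correction of the motor command.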


I'm sure it's confirmation bias. But this sounds a lot like my LOMFW (lost 
opportunity mechanism for free will). FWIW, I'd claim any viable conception of 
free will must rely on interoceptive feedback and the *rates* at which those 
feedback loops run. And while there is no such thing as "traditional free 
will", where our self is some holistic agent, there are collections of 
reinforcing and inhibiting feedback loops that trade off for dominance amongst 
each other. So, 
iteration/training/exercise over a skill like slowing one's heart rate not only 
improves one's predictive *model* of the process-outcome, it raises the 
likelihood that it'll happen again in the future.
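Here's a toy numerical sketch of that last claim — my reading of it, not a validated model. The loop names, the reinforcement increment, and the proportional-selection rule are all assumptions; the point is only that repeatedly reinforcing one loop raises its probability of winning dominance on later trials.

```python
import random

def train(weights: dict, loop: str, reward: float = 0.2) -> None:
    """Reinforce one feedback loop; its relative weight (and hence its
    chance of dominating later trials) increases."""
    weights[loop] += reward

def dominant_loop(weights: dict, rng=random) -> str:
    """Pick the dominant loop with probability proportional to weight
    (simple roulette-wheel selection)."""
    r = rng.uniform(0, sum(weights.values()))
    for loop, w in weights.items():
        r -= w
        if r <= 0:
            return loop
    return loop  # fall through on floating-point edge cases

# Two hypothetical competing loops, initially matched.
weights = {"slow_heart_rate": 1.0, "startle_response": 1.0}
p_before = weights["slow_heart_rate"] / sum(weights.values())

# Five bouts of practice at slowing one's heart rate.
for _ in range(5):
    train(weights, "slow_heart_rate")

p_after = weights["slow_heart_rate"] / sum(weights.values())
print(p_before, p_after)  # the dominance probability rises with practice
```

In this sketch the "choice" is just whichever loop wins the weighted draw, which is the sense in which exercise over a skill raises the likelihood it happens again.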

And, legally, emotionally, etc. there's no denying that the *majority* of this 
mechanistic work happens *inside* our skin. So, when one's collection of 
interacting feedbacks chooses to, say, murder someone, it's patently true that 
that particular bag of meat decided to do that of their own "free will". This 
doesn't depend on persnickety academic sophistry teasing out traditional vs. 
scientific free will.

On 4/2/21 8:46 AM, Marcus Daniels wrote:
> Here’s a review paper on the topic.   
> https://www.nature.com/articles/nrn.2017.14

-- 
↙↙↙ uǝlƃ

- .... . -..-. . -. -.. -..-. .. ... -..-. .... . .-. .
FRIAM Applied Complexity Group listserv
Zoom Fridays 9:30a-12p Mtn GMT-6  bit.ly/virtualfriam
un/subscribe http://redfish.com/mailman/listinfo/friam_redfish.com
FRIAM-COMIC http://friam-comic.blogspot.com/
archives: http://friam.471366.n2.nabble.com/