There are many truths in the following article, for example, the concept that a perfectly rational set of premises in research may fail to produce reliable results because someone has overlooked possible "confounders," something that confounds one's investigation. The trouble is that the article itself is filled with confounders that the author was unable to see because he is operating under political bias.


One of the biggest is the elephant in the room, the part played by race in America. We have two parallel populations in America, black vs. everyone else. Or, more accurately, the black underclass vs. everyone else, since at least half of the African-American population is operationally "white." That is, Asians are "white," most Hispanics are "white," even, by a narrow majority, most blacks are "white," but there still is a large segment of the black population, joined functionally by a minority of Hispanics, who are "black."


Yet because of Leftist dogmatism, joined by culturally ignorant conservatives who, being ignorant of the behavioral sciences, go along with the Left on such matters because they have no alternatives to offer, we habitually get conflated social statistics in studies of such things as crime. But if we were to toss out the crime rates for black people (that is, the black part of the black population, not the "white" part of the black population), we would get very different results. And then we would get usable truths that make actual sense.


But we can't do that because of an ideology that says everyone is equal in all things, which is total nonsense.


You can take it from here...



BR


----------------------------------------------------





Neurologica blog


The Law of Unintended Consequences<https://theness.com/neurologicablog/index.php/the-law-of-unintended-consequences/>
Published by Steven Novella<https://theness.com/neurologicablog/index.php/author/snovella/> under Neuroscience<https://theness.com/neurologicablog/index.php/category/neuroscience/>

Psychologists have come to recognize that, because of the complexity of human emotion and behavior, we are often motivated to engage in activity which produces the exact opposite effect that we intend. If you are fearful of losing someone, you may become clingy and possessive, driving them away.


The same is true on a societal level – interventions designed to have one 
effect may have the opposite effect if we are not careful. A classic example is 
the “scared straight” approach to public service announcements – it doesn’t 
work. In fact, it may have the opposite of the intended effect. Warning kids 
about the dangers of alcohol, for example, may just romanticize alcohol use and 
suggest that it is more popular or common than it is, creating social pressure 
to use. This is the main idea behind the social norming 
approach<http://sphweb.bumc.bu.edu/otlt/MPH-Modules/SB/BehavioralChangeTheories/BehavioralChangeTheories7.html>
 – tell kids, instead, statistics about how few of their peers are getting 
drunk regularly, reducing the social pressure to use.


This overall pattern is fairly consistent in the literature (although, of 
course, researching such questions is complex and the details matter to the 
outcome). Another recent example is a study which 
finds<https://www.everydayhealth.com/news/fat-shaming-does-not-motivate-obese-people-to-lose-weight/>
 that fat shaming obese people does not motivate them to lose weight, which is 
sometimes the motivation (or at least the justification) of the person doing 
the fat shaming. Rather, fat shaming leads to more weight gain.

Another study, from November 2018<https://ajph.aphapublications.org/doi/abs/10.2105/AJPH.2018.304896>, is making the social media rounds and also reflects this basic principle of unintended consequences –

Federal abstinence-only funding had no effect on adolescent birthrates overall 
but displayed a perverse effect, increasing adolescent birthrates in 
conservative states. Adolescent pregnancy–prevention and sexuality education 
funding eclipsed this effect, reducing adolescent birthrates in those states.

So in conservative states, abstinence-only education actually increased adolescent birthrates, while sex education reduced adolescent birthrates. The conventional interpretation of these and similar results is that by teaching children about sex you demystify it, while encouraging abstinence just increases the allure of “forbidden fruit.” This interpretation is also probably simplistic, but it is consistent with the data. At the very least, the notion that teaching kids about sex will encourage them to engage in it (rather than empower them to make better decisions) is naive, surpassed only by the notion that encouraging children to be abstinent will actually work.
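
The reported pattern is, in statistical terms, an interaction effect: the impact of funding depends on state ideology. As a purely illustrative sketch (simulated data and invented coefficients in Python, not the authors' model or numbers), an ordinary least-squares fit with an interaction term would surface it like this:

    import numpy as np

    rng = np.random.default_rng(1)
    n = 1000                                   # hypothetical state-year observations

    conservative = rng.binomial(1, 0.5, n)     # 1 = conservative state (made up)
    funding = rng.binomial(1, 0.5, n)          # 1 = abstinence-only funding (made up)

    # Assumed data-generating process mirroring the quoted finding:
    # funding alone does nothing, but in conservative states it raises birthrates.
    birthrate = 25 + 2 * conservative + 3 * funding * conservative + rng.normal(0, 2, n)

    # Ordinary least squares with an interaction term
    X = np.column_stack([np.ones(n), funding, conservative, funding * conservative])
    beta, *_ = np.linalg.lstsq(X, birthrate, rcond=None)

    print("funding main effect:     %.2f" % beta[1])  # about 0
    print("funding x conservative:  %.2f" % beta[3])  # about +3 (the "perverse effect")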


While it is tempting to also interpret these results as having something to do with the ideology of the states studied (and this was the hypothesis being tested), there are many possible confounders. The authors did try to control for “state-level confounders,” but it is difficult to anticipate them all (as the history of psychological research shows over and over). For example, the teen birthrate is significantly higher<https://www.pbs.org/newshour/health/teen-birth-rate-higher-rural-areas> in rural areas vs urban areas, and rural areas are also more conservative.
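
That rural/urban point is a textbook case of omitted-variable confounding: if rurality drives both conservatism and teen birthrates, a model that leaves it out will credit rurality's effect to ideology. A minimal simulated sketch in Python (hypothetical counties and made-up effect sizes, not the study's data):

    import numpy as np

    rng = np.random.default_rng(0)
    n = 500                                     # hypothetical counties

    rural = rng.binomial(1, 0.5, n)             # the confounder
    # rural counties are (by assumption) more likely to be conservative
    conservative = rng.binomial(1, 0.3 + 0.5 * rural)
    # birthrate here is driven by rurality only, not by ideology
    birthrate = 20 + 8 * rural + rng.normal(0, 3, n)

    def ols(y, *cols):
        X = np.column_stack([np.ones(len(y))] + list(cols))
        return np.linalg.lstsq(X, y, rcond=None)[0]

    naive = ols(birthrate, conservative)            # confounder omitted
    adjusted = ols(birthrate, conservative, rural)  # confounder controlled for

    print("naive 'ideology' effect:    %.2f" % naive[1])     # spuriously large
    print("adjusted 'ideology' effect: %.2f" % adjusted[1])  # near zero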


There is another article making the 
rounds<https://worldofweirdthings.com/2019/01/28/movies-more-violent-but-crime-rates-falling/>
 that relates to a similar misconception about how psychology works – the 
relationship between violence in movies and video games and violent behavior. 
The primary point in this article is that over the last few decades violence in 
movies has been increasing, while overall crime rates have been decreasing.


The evidence is fairly objective that movies are getting more violent<http://pediatrics.aappublications.org/content/134/5/1024> – a PG-13 movie today is roughly equivalent, in terms of violence, to an R-rated movie from the 1980s (though not necessarily in nudity, another factor that earns the higher rating). The authors of the new study<https://link.springer.com/article/10.1007%2Fs11126-018-9615-2> conclude:

Raw correlations suggest that PG-13 rated movie violence is inversely related 
to actual violence in society. However, controlling for autocorrelations 
suggests that the best interpretation is that PG-13 rated movie violence is 
unrelated to violence in society. Caution is advised for scholars to avoid 
implying that PG-13 rated movie violence may have a causal effect on crime in 
society.

So the best current conclusion is that movie violence has no effect on societal 
violence, but the authors are correct to caution about drawing any firm 
conclusions because the question of what causes societal violence is 
horrifically complex. We still can’t fully explain why crime has been generally 
decreasing since the 1990s.
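
The statistical point behind the quoted caution is that two strongly trending, autocorrelated series can show a large raw correlation even when they are causally unrelated. The sketch below uses simulated series and simple year-over-year differencing as one common way of removing the shared trend; the study's actual autocorrelation controls may well differ:

    import numpy as np

    rng = np.random.default_rng(2)
    years = 40

    # Two independent, trending, autocorrelated series (random walks with drift),
    # loosely standing in for "movie violence" (rising) and "crime" (falling).
    movie_violence = np.cumsum(rng.normal(0.5, 1.0, years))
    crime_rate = np.cumsum(rng.normal(-0.5, 1.0, years))

    raw = np.corrcoef(movie_violence, crime_rate)[0, 1]
    changes = np.corrcoef(np.diff(movie_violence), np.diff(crime_rate))[0, 1]

    print("correlation of raw levels:            %.2f" % raw)      # large (spurious)
    print("correlation of year-to-year changes:  %.2f" % changes)  # near zero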

The lesson in all of this is that we need to be exceedingly cautious when
drawing simplistic conclusions about human psychology and behavior. Naive 
notions of cause and effect are likely not to be true, and in fact may be the 
opposite of what our gut tells us. This not only applies to our personal 
decisions, but to public policy. It’s easy to create perverse incentives, for 
example. We also cannot assume that people are generally rational actors – 
people are emotional and social creatures with a complex web of motivating 
factors. Predicting how that will all play out is like predicting the weather.


What this means practically is that care should be taken when crafting policy. 
Up front it should be as evidence-based as possible. But even then, predicting 
effects will be difficult. Therefore, new policies intended to have a specific 
effect should build in a review period in which objective evidence is gathered 
to determine the net effect.
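
One simple and widely used way to structure such a review, offered here only as an illustration with invented numbers, is to compare the change in regions that adopted the policy against the change in comparable regions that did not (a difference-in-differences estimate):

    # Hypothetical outcome means, before and after a new policy, in regions
    # that adopted it and in comparable regions that did not (invented numbers).
    treated_before, treated_after = 12.0, 10.5
    control_before, control_after = 12.2, 11.9

    # Difference-in-differences: the policy's estimated net effect is the change
    # in adopting regions minus the change that happened anyway elsewhere.
    effect = (treated_after - treated_before) - (control_after - control_before)
    print("estimated net effect: %.2f" % effect)   # -1.20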


This is basically what we do in medicine – we may think we can predict the net 
effect of an intervention, but we know from experience that we have to study 
net effects and then make decisions based on this evidence. Otherwise you are 
not practicing medicine – you are practicing witchcraft. Imagine if the same 
were generally true for government policy – evidence-based government. This 
doesn’t mean there is no role for ideology, which determines priorities and 
value judgments. But it does mean that we do our best to ensure that a new 
policy will have the effect we intend, whatever that is.

-- 
Centroids: The Center of the Radical Centrist Community 
<[email protected]>
Google Group: http://groups.google.com/group/RadicalCentrism
Radical Centrism website and blog: http://RadicalCentrism.org
