John:
Thank you for the compliment. I work hard to make my papers
readable.
Chris Green and John Kulig are right that there are certain
necessary and useful conventions in writing an empirical report,
such as being precise about your operational definition. Michael
Britt is correct that often the purpose of doing the study is
still obscure even after reading the article.
My conclusion has been that the obscurity in many cases is due to
a combination of poor logic in the argument and poor presentation
of the argument.
My general goal has been to make reading as simple and easy for
the reader as possible.
One activity that has shaped my writing style is teaching how to
read and write an empirical report in a research methods class.
Another is the supervision of theses.
We all have dealt with papers that wander about the landscape,
with bits of the discussion appearing in the introduction,
results appearing in the method section, and operational
definitions being introduced in the results section. Pompous
words are "utilized"; adjectives and adverbs indicate "really
significant" effects.
I would spend an hour on a paper and still be less than halfway
through the muck. I was very frustrated. Then I realized that
students were just filling space because they were unsure about
their goals. I started restricting the amount of space that
students were allowed to use. I restricted the introduction of
an empirical study to 2 pages. They still had to do a literature
review, propose and explain their hypothesis, and introduce their
methodology. This technique made it much easier to point out
that their literature review had something to do with their
general topic but little to do with their specific investigation.
The general emphasis was on simplicity and logical flow. If the
material was not necessary, then it should be eliminated.*
My writing style is now focused on the same goals of simplicity
and logical flow. My complaint about many recent articles I
have read is that the writers are filling up introductions and
discussions with topic-related but unnecessary digressions. My
inner cynic thinks that this kind of defensive writing is made
easy by cut-and-paste from earlier, related manuscripts or PsycINFO.
Ken
---------------------------------------------------------------
Kenneth M. Steele, Ph.D. [email protected]
Professor
Department of Psychology http://www.psych.appstate.edu
Appalachian State University
Boone, NC 28608
USA
---------------------------------------------------------------
*Recently my son switched his major from computer engineering to
philosophy (with emphasis on logic and linguistics). Philosophy
majors at his school are required to contract to write a
2500-word independent paper with a current faculty member. He
chose to write on a topic from Kant because he was taking a
course from a Kant scholar. I talked to him at the end of Fall
semester about the deadline for the paper. He had a 3-day
extension to turn in the paper. I asked what the issue was. The
issue was that the paper was over 5500 words. I told him that he
would need to rank order the sections in importance and cut from
least to most important. He groaned. I told him "I feel your
pain."
On 1/7/2012 9:27 AM, John Kulig wrote:
In addition to what Chris said below, I offer a few other
possibilities. We have to clarify with the utmost detail our
operational definitions to permit replication, and being a
"young" science we have not always settled on standard ways to
measure things, so instead of saying how social support affects
us, we must specify what sub-scales of what measures of social
support correlated with etc etc etc. And because many of the
"things" we study are constructs, articles must prove that
such-and-such a construct actually exists, so you have to plow
through factor loadings and eigenvalues etc etc.
Also, many of us are studying small effect-size
phenomena (we must avoid Type I false claims), so we must fill the
journals with statistics (hopefully effect sizes and CIs!) to get
it published. We DO have to separate ourselves from junk science.
Is it also possible that our methodological expertise outpaces
the content of our discoveries? So what pops into view in our
journals is the methodology. On the other hand, when you have a
clear finding, I'd say go for the great writing. Many Psych
Science articles are well written, perhaps because they select
clear findings that appeal to a wide audience. When this thread
appeared I thought of articles that were great to read, and one
that came to mind was Ken Steele's (fellow TIPSTER) Psych Science
article on the Mozart effect (what year was that?) - readable,
informative, etc. Ken, care to share publishing secrets?
==========================
John W. Kulig, Ph.D.
Professor of Psychology
Coordinator, University Honors
Plymouth State University
Plymouth NH 03264
==========================
-----------------------------------------------------------------
*From: *"Christopher D. Green" <[email protected]>
*To: *"Teaching in the Psychological Sciences (TIPS)"
<[email protected]>
*Sent: *Friday, January 6, 2012 5:24:23 PM
*Subject: *Re: [tips] Why does published research have to be so
cryptic
Because science is (correctly) written explicitly to appeal to
the intellect rather than to the emotions (unlike almost every
other form of writing), so scientists make something of a fetish
(okay, "a show" if you find "fetish" too pejorative) of writing
it as un-excitingly as possible. Slightly less cynically,
scientists typically find that everyday categories do not "carve
nature at its joints" (to borrow a phrase), so they have to
invent exotic new terms (or repurpose relatively obscure old
ones) to capture the various portions of everyday language that
go together "in nature" (energy, mass, element, phylogeny,
personality, intelligence) and that makes it hard (and boring)
for "laypeople" to read.
Chris
--
Christopher D. Green
Department of Psychology
York University
Toronto, ON M3J 1P3
Canada
416-736-2100 ex. 66164
[email protected]
http://www.yorku.ca/christo/
On 1/6/12 2:50 PM, Michael Britt wrote:
I just finished reading another research article for possible use in an
upcoming podcast, and while I think the study itself was well done, I am once
again left wondering why it all has to be so boring. I mean, we tell students
(at least I did) that we do research because we're curious about human
behavior. We usually do research because we've observed something about
ourselves and we want to understand it better.
After this initial curiosity we usually talk about our research idea with friends and
colleagues over lunch. We even get excited about it. Now, of course, the research
process itself is a serious matter and I am not saying that we need to dumb down the
process (blah, blah, blah). I'm just saying that what comes out the other end - the
published article - is typically so mind-numbingly boring to read. And it's not just
that. The other thing that discourages me is that all the curiosity, all the excitement
the researchers probably had at the start of the process is nowhere to be found in the
publication. In fact, I'm not even clear as to what the researchers saw as important
(even potentially interesting) about this research I just read. Isn't there a way to
capture ANY of the initial excitement? Can't we have a section in which researchers are
allowed to tell us what the applications of the research are to "real life"? I
know they sometimes do this in the Discussion,
but you'd often be hard pressed to find it. We criticize lawyers for their
cryptic legal documents - what about us?
No wonder students hate research methods. We've sucked the "wonder" out of
it.
Michael
Michael A. Britt, Ph.D.
[email protected]
http://www.ThePsychFiles.com
Twitter: mbritt