On 30 Mar 2002 09:46:45 -0800, [EMAIL PROTECTED] (wuzzy) wrote:

> Maybe someone will point me to other newsgroups or mailing lists on
> biological or clinical statistics, as I know that sci.stat.edu is about
> the teaching of statistics, not really about stats itself.
> 
> My question (frustration, rather) is: how do you deal with the fact
> that signs on coefficients of multivariable models change direction
> and size when you remove a predictor of the dependent variable(s)?

How do you deal ... ?
You decide on the order -- what comes first rationally.  
You argue for your choice.  You try to show that other choices
are irrational for external reasons, or are contrary to other data.

Some of this has been discussed as 
"path analysis"  and "structural equations"  but the basic
information is often available as the zero-order correlations
and the various partial correlations (or regression coefficients).
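The sign change the original poster describes is easy to reproduce by
simulation.  Here is a minimal sketch (numpy assumed; the variable names
"g", "x", "y" are purely illustrative): a confounder drives both the
predictor and the outcome, so the zero-order slope of y on x is positive
even though the partial slope, controlling for the confounder, is negative.

```python
# Simulated sign flip: the zero-order slope of y on x is positive,
# but the partial (multiple-regression) slope on x is negative once
# the confounder g is included.  Names are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
g = rng.normal(size=n)            # confounder (think "genetics")
x = g + rng.normal(size=n)        # predictor, correlated with g
y = -1.0 * x + 3.0 * g + rng.normal(size=n)

# zero-order slope of y on x (no control for g): comes out positive
b_simple = np.polyfit(x, y, 1)[0]

# multiple regression y ~ x + g by least squares: slope on x is negative
X = np.column_stack([x, g, np.ones(n)])
b_x, b_g, _ = np.linalg.lstsq(X, y, rcond=None)[0]

print(f"simple slope: {b_simple:+.2f}, partial slope: {b_x:+.2f}")
```

Which of the two slopes is the "right" one depends entirely on the causal
ordering you argue for, which is the point above.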

> 
> is there a test for this?

There is a test for whether one coefficient changes notably
when another variable is added or dropped.  You can find a
detailed discussion by searching groups.google.com -- specify
the subject line or the message-id.
This post has a lot of the detail --
========== header
From: Gary McClelland <[EMAIL PROTECTED]>
Newsgroups: sci.stat.consult
Subject: Re: sig. of diff. between r and partial-r
Date: Thu, 04 Jan 2001 16:04:43 -0700
Message-ID: <[EMAIL PROTECTED]>
========== end of header.
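The cited post covers the significance test itself; the quantities being
compared are just the zero-order correlation r_xy and the first-order
partial correlation r_xy.z, which can be computed directly from the three
pairwise correlations.  A sketch (numpy assumed; the input values are
made up for illustration):

```python
# First-order partial correlation from the pairwise correlations.
# This computes the quantity compared with the zero-order r; it is
# NOT the significance test discussed in the cited post.
import numpy as np

def partial_r(r_xy, r_xz, r_yz):
    """Partial correlation of x and y, controlling for z."""
    return (r_xy - r_xz * r_yz) / np.sqrt((1 - r_xz**2) * (1 - r_yz**2))

# illustrative values: the partial r can even flip sign relative to r_xy
print(partial_r(0.30, 0.60, 0.70))
```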


> 
> It seems to me that if genetics play an important part in determining
> cholesterol levels (say), and you study diet as it relates to
> cholesterol, but forget to insert genetics (say in a twin study), then
> you might find that eating eggs raises your cholesterol whereas if you
> don't include eggs might lower cholesterol. (hypothetical)
> 
 - that doesn't parse, but we get the idea.
> 
> How do observational and especially exploratory studies overcome this
> problem?

The good ones will admit the problem and explain it.

Secondary variables are helpful for trying to nail down the
primary ones.  This is one reason why observational studies
and surveys, when turned to questions they were not originally
designed for, become particularly untrustworthy.  For any survey,
you worry about potential biases that can't be thoroughly nailed
down, even when you thought of them beforehand.  For a new
question, there might be no relevant attempt, not even a simple
one, to control for some obvious bias.


> 
> I think you can only overcome it (given lack of theoretical grounds to
> overcome it) by looking at the size of the relationship and "guess" at
> how true your result is, even though the opposite of your result might
> be what happens in real life.

Size of relationship helps.  A BIG effect (like a 5-fold 
mortality risk) is more convincing than a 50% increase. 
A tiny relationship can't account for a huge outcome.

But 'guess'  is not the right word, unless you mean the
educated guess of the professional.  In epidemiological research, 
the biological underpinning has to be believable.

-- 
Rich Ulrich, [EMAIL PROTECTED]
http://www.pitt.edu/~wpilib/index.html
=================================================================
Instructions for joining and leaving this list, remarks about the
problem of INAPPROPRIATE MESSAGES, and archives are available at:
.                  http://jse.stat.ncsu.edu/                    .
=================================================================
