On Sun, 30 Dec 2001, Stan Brown wrote in part:
A. G. McDowell [EMAIL PROTECTED] wrote:
The significance value associated with the one-tailed test will always
be half the significance value associated with the two-tailed test,
For means, yes. Not for proportions, I think.
Oh? Why not?
In trying to clear out my e-mail inbox, I came across this post, for
which there seemed not to have been any responses.
On Fri, 2 Feb 2001, Caroline Brown wrote:
I have an analysis problem, which I am researching solutions to, and
David Howell of UVM suggested I mail the query to you.
My
On Wed, 26 Dec 2001 [EMAIL PROTECTED] wrote (edited):
I came across a table of costume jewelry at a department store with
a sign that said 150% off. I asked them how much they would
pay me to take it all off of their hands. I had to explain to them
what 150% meant, and they
On Tue, 11 Dec 2001, Vadim and Oxana Marmer wrote:
besides, who needs those tables? we have computers now, don't we?
I was told that there were tables for logarithms once. I have never seen
one in my life. Isn't it the same kind of thing?
If you _want_ to see one, you have no farther to go
On 24 Dec 2001, Carol Burris wrote:
I am a doctoral student who wants to use student performance on a
criterion test, a state Regents exam, as a dependent variable in a
quasi-experimental study. The effects of previous achievement can be
controlled for using a standardized test, the Iowa
On Sat, 22 Dec 2001, Ralph Noble asked:
How would you have done this?
A local newspaper asked its readers to rank the year's Top 10 news stories
by completing a ballot form. There were 10 choices on all but one ballot
(i.e. local news, sports news, business news, etc.), and you had to
On Thu, 20 Dec 2001, Johannes Hartig wrote:
Does anyone know the original applications
or the meaning of the S-function in SPSS?
I know the function itself:
Y = e**(b0 + (b1/t)) or
ln(Y) = b0 + (b1/t)
and I know what the curve looks like, but I am wondering in which
fields of research
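Since ln(Y) = b0 + b1/t is linear in 1/t, the S model quoted above can be fitted by ordinary least squares after transformation. A minimal stdlib-only sketch (the coefficients 2.0 and -3.0 are made-up illustration values, not anything from SPSS):

```python
import math
from statistics import mean

def fit_s_curve(t, y):
    """Fit Y = exp(b0 + b1/t) by OLS on the linearized form ln(y) = b0 + b1*(1/t)."""
    x = [1.0 / ti for ti in t]
    ly = [math.log(yi) for yi in y]
    mx, my = mean(x), mean(ly)
    b1 = (sum((xi - mx) * (li - my) for xi, li in zip(x, ly))
          / sum((xi - mx) ** 2 for xi in x))
    b0 = my - b1 * mx
    return b0, b1

# Noise-free data generated from known coefficients; the fit should recover them.
b0_true, b1_true = 2.0, -3.0
t = [1.0 + 9.0 * i / 49 for i in range(50)]
y = [math.exp(b0_true + b1_true / ti) for ti in t]
b0_hat, b1_hat = fit_s_curve(t, y)
```

With real (noisy) data the same two lines of algebra give the least-squares estimates on the log scale, which is what SPSS's CURVEFIT-style S model amounts to.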
On Fri, 14 Dec 2001, Wuensch, Karl L wrote:
I came across a table of costume jewelry at a department store with a
sign that said 150% off. I asked them how much they would pay me to
take it all off of their hands. I had to explain to them what 150%
meant, and they then explained to me
On Sun, 9 Dec 2001, Ronny Richardson wrote in part:
Bluman has a figure (2, page 333) that is supposed to show the student
"When to Use the z or t Distribution." I have seen a similar figure in
several different textbooks.
So have I, sometimes as a diagram or flow chart, sometimes in
On 1 Dec 2001, jenny wrote:
What should I do with the missing values in my data? I need to
perform a t test of two samples to test the mean difference between
them.
How should I handle them in S-Plus or SAS?
1. What do S-Plus and/or SAS do with missing values by default?
(All
On Tue, 27 Nov 2001, Thom Baguley wrote in part:
Donald Burrill wrote:
On Fri, 23 Nov 2001, L.C. wrote:
The question got me thinking about this problem as a
multiple comparison problem. Exam scores are typically
sums of problem scores. The problem scores may be
thought
On Sat, 24 Nov 2001, L.C. wrote:
Thanks for the reply!
As for the iid, it's reasonable to believe the questions could be
drawn from some population. Why not the answers?
If the questions are selected in accordance with some table of
specifications, they are not from _a_ population, but
On 20 Nov 2001, J. Peter Leeds wrote:
I'm working on a formula for measuring decision making skill and am
trying to estimate the probability that a person of known skill can
distinguish among different response option contrasts and avoid a type
II error.
One effective way
On 17 Nov 2001, Myles Gartland wrote:
In an F distribution, the critical value for the lower tail is the
reciprocal of the critical value of the upper tail (with the
degrees of freedom switched).
Why? I understand how to calculate it, but do not get why the math
works.
Essentially
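The "why" comes from the fact that if X ~ F(d1, d2) then 1/X ~ F(d2, d1) (the two scaled chi-squares simply trade places), and taking reciprocals maps the lower tail onto the upper tail. A stdlib-only simulation sketch (degrees of freedom 5 and 12 are arbitrary illustration values):

```python
import random

random.seed(0)
d1, d2, n = 5, 12, 100_000

def f_variate():
    """One draw from F(d1, d2): a ratio of independent chi-squares over their df."""
    num = random.gammavariate(d1 / 2, 2) / d1   # chi2(d1)/d1
    den = random.gammavariate(d2 / 2, 2) / d2   # chi2(d2)/d2
    return num / den

sample = sorted(f_variate() for _ in range(n))
recip = sorted(1 / v for v in sample)   # term-by-term reciprocals: an F(d2, d1) sample

lower = sample[int(0.05 * n)]   # empirical lower-tail critical value of F(d1, d2)
upper = recip[int(0.95 * n)]    # empirical upper-tail critical value of F(d2, d1)
# lower and 1/upper land on (essentially) the same order statistic.
```

Because `recip` is literally the same sample inverted, the 5th percentile of one and the 95th percentile of the other must coincide up to a reciprocal, which is exactly the textbook identity.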
On Tue, 13 Nov 2001, Wendy (alias Eric Duton?) wrote:
When applying multiple regression on timeseries data, should I check
(similarly to ARIMA-models) for unit roots in the dependent variable
and the predictor variables and perform the necessary differencing
OR
could
On 12 Nov 2001, Niko Tiliopoulos wrote:
I am acting as the stats advisor for my unit in the psychology
department of the University of Edinburgh, UK. Last week a colleague
of mine presented me with the following issue, and I am not quite sure
how to respond:
She is running a
On Tue, 13 Nov 2001, Wendy (alias Eric Duton?) wrote:
When applying multiple regression on timeseries data, should I check
(similarly to ARIMA-models) for unit roots in the dependent variable
and the predictor variables and perform the necessary differencing
OR
could I simply start
On Wed, 14 Nov 2001, Alan McLean wrote in part:
Herman Rubin wrote:
A good exam would be one which someone who has merely
memorized the book would fail, and on which someone who
understands the concepts but has forgotten all the formulas
would do extremely well.
Since to understand the
You persist in repeating your original request in your original phrasing,
with no elaboration(s) that might resolve the ambiguities therein.
On Sat, 10 Nov 2001, Mark T wrote:
On Fri, 09 Nov 2001 Rich Ulrich [EMAIL PROTECTED] wrote:
On Thu, 8 Nov 2001 Mark T [EMAIL PROTECTED] wrote:
On Thu, 1 Nov 2001, Chia C Chong wrote:
I am a beginner in statistical analysis and hypothesis testing. I have 2
variables (A and B) from an experiment that was observed for a certain
period of time. I need to form a statistical model that will model these
two variables.
Seems to me you're
In reviewing some not-yet-deleted email, I came across this one, and have
no record of its error(s) having been corrected.
On Sat, 29 Sep 2001, John Jackson wrote:
How do you describe the data that do not reside in the area
described by the confidence interval?
For example, you have a two
On Sun, 28 Oct 2001, Melady Preece wrote:
Hi. I want to compare the percentage of correct identifications (taste
test) to the percentage that would be correct by chance (50%, only two
items being tasted). Can I use a t-test to compare the percentages?
What would I use for the s.d. for
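For what it may be worth, the usual alternative to a t-test here is a one-sample z-test for a proportion against chance, whose standard error sqrt(p0(1-p0)/n) answers the s.d. question. A sketch with made-up counts (32 correct out of 50 tastings is purely illustrative):

```python
import math

def prop_z_test(successes, n, p0=0.5):
    """One-sample z-test for a proportion: z = (phat - p0)/sqrt(p0(1-p0)/n)."""
    phat = successes / n
    se = math.sqrt(p0 * (1 - p0) / n)   # the s.d. asked about
    z = (phat - p0) / se
    # two-sided p-value from the standard normal CDF
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p

z, p = prop_z_test(32, 50)   # 32 correct out of 50 tastings (invented numbers)
```

For small n an exact binomial test would be preferable, but the z form makes the role of sqrt(p0(1-p0)/n) explicit.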
On Wed, 24 Oct 2001, Rich Ulrich wrote in part:
It has been my impression (from google) that CA is more popular
in European journals than in the US, so there might be better
sites out there in a language I don't read.
(CA = correspondence analysis,
or, in French, analyse des
The story is about six students who ... The instructor ... tells them
to report the next day for an exam with only one question. If they all
get it right they all pass. They were seated at corners of the room and
could not communicate.
Must have been an interesting room, with six corners
On Fri, 12 Oct 2001, Desmond Cheung (of Simon Fraser University,
Vancouver, BC) wrote:
Is there any mathematical analysis to find how much the two peaks stand
out from the other data?
Hard to answer, not knowing where you're coming from with the question.
Any answer depends on the
William B. Ware [EMAIL PROTECTED] wrote:
Anyway, more to the point... the "add one" is an old argument based on
the notion of real limits. Suppose the range of scores is 50 to
89. It was argued that 50 really goes down to 49.5 and 89 really
goes up to 89.5. Thus the range was defined as
Turns out the method I originally suggested is unnecessarily cumbersome.
A more elegant method is described below.
On Sat, 29 Sep 2001, Donald Burrill wrote in part:
COPY c1-c35 to c41-c75; # Always retain the original data
OMIT c1 = '*';
OMIT c2
On Sun, 30 Sep 2001, John Jackson wrote:
Here is my solution using figures which are self-explanatory:
Sample Size Determination
pi = 50%              central area = 0.99
confid level = 99%    2-tail area = 0.5
sampling
I second Dennis' question. While indeed MINITAB recognizes the missing
values, what it does with them depends on the procedure being used:
e.g., for CORRelation it uses all cases for which each pair of variables
is complete (pairwise deletion of missing data), and therefore, for a
data set like
On Fri, 28 Sep 2001, John Jackson wrote in part:
My formula is a rearrangement of the confidence interval formula shown
below for ascertaining the maximum error.
E = Z(a/2) x SD/SQRT N
The issue is you want to solve for N, but you have no standard
deviation value.
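Rearranged for N, the formula gives N = (Z(a/2) * SD / E)^2, rounded up; the catch, as noted above, is that SD must come from a planning value (a pilot study, or the worst case sqrt(p(1-p)) <= 0.5 for a proportion). A sketch with illustrative numbers:

```python
import math

def sample_size(E, sd, z=1.96):
    """Smallest N giving margin of error at most E: N = ceil((z*sd/E)^2)."""
    return math.ceil((z * sd / E) ** 2)

n_mean = sample_size(E=5, sd=15)       # planning value sd = 15 is an assumption
n_prop = sample_size(E=0.03, sd=0.5)   # worst-case sd for a proportion
```

The second call reproduces the familiar n = 1068 for a 3-percentage-point margin at 95% confidence, precisely because sd = 0.5 is the most conservative planning value a proportion allows.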
Hi, Carol. I'm taking the liberty of posting this to the Edstat
(statistical education) list as well as the Minitab list.
On Fri, 21 Sep 2001, Carol DiGiorgio wrote:
My question is: I would like to run 2-way ANOVA on my data.
Unfortunately it doesn't meet the assumptions of normality or
On Thu, 13 Sep 2001, Paul R. Swank wrote in part:
Dennis said
other than being able to say that the experimental group ... ON AVERAGE ...
had a mean that was about 1.11 times (control group sd units) larger than
the control group mean, which is purely DESCRIPTIVE ... what can you say
On Sat, 8 Sep 2001, Magenta wrote in part:
(responding to Rich Ulrich's remark:)
Michelle, I hope that you now know that you got tangled up in
hypothetical illustrations which you now regret.
Sure do, I think that if you redid it so that the scale was now:
don't agree
On Tue, 28 Aug 2001, Dennis Roberts wrote in part:
however ... the flagging of outliers is totally arbitrary ... i
see no rationale for saying that if a data point is 1.5 IQRs away from
some point ... that there is something significant about that
If the data are normally distributed (or
On Sun, 26 Aug 2001 [EMAIL PROTECTED] wrote:
I am having trouble solving this probability problem; I hope to get help
here. There are N balls; pick up M1 balls with replacement from them.
What is the expected value of different balls we pick up?
Expected value of what characteristic of the balls?
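If (as one common reading of the question goes) what is wanted is the expected *number of distinct* balls seen in M1 draws with replacement, a standard result is N * (1 - (1 - 1/N)^M1), by linearity of expectation over the indicator that each ball is seen at least once. A quick sketch with a simulation check (N = 10, M1 = 7 are arbitrary):

```python
import random

def expected_distinct(N, M):
    """E[# distinct balls] = N * (1 - (1 - 1/N)**M) for M draws with replacement."""
    return N * (1 - (1 - 1 / N) ** M)

random.seed(1)
N, M, reps = 10, 7, 20_000
# Monte Carlo check: average number of distinct labels over many repetitions.
sim = sum(len({random.randrange(N) for _ in range(M)}) for _ in range(reps)) / reps
exact = expected_distinct(N, M)
```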
On 21 Aug 2001, Atul wrote:
How do we calculate the adjusted r-square when the error degrees of
freedom are zero ? (Or in other words, number of samples is equal to
the number of regression terms including the constant.)
Such a situation leads to a zero in the denominator in the
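With the usual definition adj R^2 = 1 - (1 - R^2)(n - 1)/(n - k - 1), where k counts predictors excluding the constant, having n = k + 1 (samples equal to regression terms including the constant) makes that denominator exactly zero, so the adjusted value is simply undefined. A sketch:

```python
def adjusted_r2(r2, n, k):
    """Adjusted R^2; k = number of predictors excluding the constant."""
    return 1 - (1 - r2) * (n - 1) / (n - k - 1)

ok = adjusted_r2(0.9, 20, 3)       # well-defined: n - k - 1 = 16
# adjusted_r2(0.9, 4, 3) raises ZeroDivisionError: n - k - 1 == 0,
# which is the situation described in the post (and the fit is exact anyway).
```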
One approach: (I assume that by residual you mean (O-E)/sqrt(E) for
each cell of a two-way frequency table, where O=observed frequency and
E=expected frequency under the null hypothesis). For the several (or
the single) largest residual(s), report O and E as proportions (of total
N).
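The residuals described above can be computed directly from the table; a small self-contained sketch (the 2x2 counts are invented for illustration):

```python
def pearson_residuals(table):
    """(O - E)/sqrt(E) per cell, with E = row total * column total / N
    under the null hypothesis of independence."""
    n = sum(sum(row) for row in table)
    row_tot = [sum(row) for row in table]
    col_tot = [sum(col) for col in zip(*table)]
    return [[(o - r * c / n) / (r * c / n) ** 0.5
             for o, c in zip(row, col_tot)]
            for row, r in zip(table, row_tot)]

res = pearson_residuals([[30, 10], [20, 40]])
# Expected counts here are 20, 20 / 30, 30, so the largest residuals
# sit in the first row.
```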
On 14 Aug 2001, Nolan Madson wrote:
I have a data set of answers to questions on employee performance.
The answers available are:
Exceeded Expectations
Met Expectations
Did Not Meet Expectations
The answers can be assigned weights [that is, scores -- DFB]
of 3,2,1 (Exceeded, Met,
Some clarification would help. See below.
On Wed, 1 Aug 2001, Teen Assessment Project wrote:
I have an overall sample of 5000+ from 40+ different towns and 6
different grades.
In approximately equal numbers per town/grade, or not?
Are all 6 grades (which grades?) represented in each
On 31 Jul 2001, ToM wrote:
what is the opposite of a log?[logarithm]
An antilog [properly, antilogarithm]. Equivalently, 10 to that power
(if, as in your example, you are taking logarithms to the base 10); or
e to that power (if you are taking natural logarithms), which is also
called
Use the table twice -- for P(0 <= Z <= z1) and P(0 <= Z <= z2) -- and then subtract
or add, depending on whether the desired signs of z1 and z2 are the same
or different. -- DFB.
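With A(z) = P(0 <= Z <= z) as the table entry, the subtract-or-add rule looks like this in code (math.erf stands in for the printed table):

```python
import math

def A(z):
    """The table entry P(0 <= Z <= z) for a standard normal, z >= 0."""
    return 0.5 * math.erf(z / math.sqrt(2))

same_sign = A(2.0) - A(1.0)   # P(1 <= Z <= 2): same signs, so subtract
opposite  = A(1.0) + A(2.0)   # P(-1 <= Z <= 2): opposite signs, so add
```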
On Sat, 28 Jul 2001, Cantor wrote:
I did not try to examine your work thoroughly but at the very beginning
I
If you don't happen to have a convenient r -- Z conversion table
handy, it may be helpful to know, for step 1. below, that
Z = 0.5 log((1+r)/(1-r)) or, equivalently,
Z = tanh^(-1)r = the hyperbolic arctangent of r.
(log is the natural logarithm.)
It follows that, given a
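A quick check that the two forms above agree, and that tanh inverts the transformation (r = 0.6 is an arbitrary example):

```python
import math

def fisher_z(r):
    """Fisher's r-to-Z: Z = 0.5 * ln((1+r)/(1-r))."""
    return 0.5 * math.log((1 + r) / (1 - r))

r = 0.6
z1 = fisher_z(r)
z2 = math.atanh(r)    # identical: Z is the hyperbolic arctangent of r
back = math.tanh(z1)  # the inverse transform recovers r
```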
On Fri, 27 Jul 2001, Nadine Wells wrote in part:
Does anyone know what the power link function does in SAS? [...] when
I plot the equation based on the parameter estimates, the model doesn't
seem to look like I want it to. [...] I am trying to get SAS to run a
model that resembles
The answers to your questions depend heavily on structural information
that you almost certainly don't have, else one would not bother to have
arranged a voting process. But consider two very different cases:
A. Voters are absolutely indifferent to candidates: that is, all the
candidates
Hi, Dennis!
Yes, as you point out, most elementary textbooks treat only SRS
types of samples. But while (as you also point out) some more realistic
sampling methods entail larger sampling variance than SRS, some of them
have _smaller_ variance -- notably, stratified designs when the
Hi, Ivan.
I think your problem may not be so simple as you've described it.
But to begin with the simplest: In terms of area in mm^2, simply
multiplying length x width, all of the ultrasound (US) samples except one
have smaller areas than any of the high-speed drill (AR) samples; 6
On Tue, 17 Jul 2001, Cantor wrote:
Does anybody know where I can find a program on the web which [can]
compare two texts/articles and settle whether or not they are similar
at any given significance level.
Sorry, Cantor: this is not possible, in general.
One can discover whether two
On Sun, 15 Jul 2001, Melady Preece wrote:
I have done a paired t-test on a measure of self-esteem before and
after a six-week group intervention.
There is a significant difference (in the right direction!) between
the means using a paired t-test, p=.009. The effect size is .29 if I
On Tue, 10 Jul 2001, Alex Yu wrote:
I am trying to understand Triangular coordinates -- a kind of graph
which combines four dimensions into 2D
You meant, condenses four dimensions into 3D, didn't you? Your
subsequent description indicates three dimensions all together, two
of them used
On Sat, 7 Jul 2001, David Schaefer wrote:
My Stats professor is having us run some correlations and what not
through SPSS. She has asked us to transform some raw scores to
z-scores for a reading achievement test. The commands she has asked
us to type in the syntax editor is:
On Sun, 24 Jun 2001, Melady Preece wrote in part:
I am teaching educational statistics for the first time, and although I
can go on at length about complex statistical techniques, I find myself
at a loss with this multiple choice question in my test bank. I
understand why the range of
On Fri, 22 Jun 2001, Marc Esser wrote:
After a closer look at the trials which I want to summarize, I noticed
that not the means are reported, but the medians.
Do you have an idea how to calculate an effect size with this
information, e.g. median change of hospitalization time.
The
On 17 Jun 2001, Marc wrote (edited):
I have to summarize the results of some clinical trials.
The information given in the trials contain:
Mean effects (days of hospitalization) in treatment control groups;
numbers of patients in the groups; p-values of a t-test (of the
difference
In response to Doug Sawyer's post:
I am trying to locate a journal article or textbook that addresses
whether or not exam questions can be normalized, when the questions
are grouped differently. For example, could a question bank be
developed where any subset of questions could be
On 11 Jun 2001, srinivas wrote:
I have a problem in identifying the right multivariate tools to
handle a dataset of dimension 100,000 x 500. The problem is further
complicated by a lot of missing data.
So far, you have not described the problem you want to address, nor the
models you think
On 3 Jun 2001, Bekir wrote, in part:
My aim was to compare groups 2, 3, 4, 5 with control (group 1). ...
The reviewer had written to me: Accordingly, a statistical penalty
needs to be paid in order to account for the increased risk of a Type
1 error due to multiple comparisons. The
On 2 Jun 2001, Bekir wrote in part:
I performed a study on different enteral nutrients and bacterial
translocation in experimental obstructive jaundice.
There were 5 groups of rats; each group consisted of 20 rats. The
translocation incidences that occurred in mesenteric lymph nodes were
shown in
On Thu, 31 May 2001, W. D. Allen Sr. wrote:
Only from the education field do we hear the statement that over ninety
percent of students ranked above the median! The statement was made on
TV.
(1) I take it that it was the keyword students that led you to suppose
that the statement had
On Sat, 12 May 2001, RD wrote, inter alia:
The only approach to deal with z test for means that I have seen so
far was using s^2 = s1^2/n1 + s2^2/n2 formula.
t test is always using pooled variance.
I think not _always_. _Usually_, because (i) there is seldom
a strong need to
On Fri, 18 May 2001, auda wrote (slightly edited):
In my experiment, [when] two dependent variables DV1 and DV2 [were]
analyzed separately with ANOVA, the independent variable [IV (with ]
two levels IV_1 and IV_2) modulated DV1 and DV2 differentially:
mean DV1 in IV_1 > mean DV1 in IV_2
If the mean of the predictor X is zero, the intercept is equal to the
mean of the dependent variable Y, however steep or shallow the slope
may be. And as Jim pointed out, the standard error of a predicted value
depends on its distance from the mean of X (being larger the farther
away it is
On Thu, 10 May 2001, Magill, Brett wrote, inter alia:
How should these data be analyzed? The difficulty is that the data
are cross level. Not the traditional multi-level model however.
Hi, Brett. I don't understand this statement. Looks to me like an
obvious place to apply multilevel
On Fri, 4 May 2001, Alan McLean wrote:
Can anyone tell me what is the distribution of the ratio of sample
variances when the ratio of population variances is not 1, but some
specified other number?
Depends. If the two samples on which the variances are based are
_independent_,
I rather think the problem is not adequately defined; but that may
merely reflect the fact that it's a homework problem, and homework
problems often require highly simplifying assumptions in order to be
addressed at all. See comments below.
On Fri, 4 May 2001, Adil Abubakar wrote:
My name
Short answers below; which may or may not adequately address the lurking
questions you had in mind.
On Fri, 4 May 2001, Jeff wrote:
Would like to ask [for] help with the following questions:
1. why designs for experiments should be orthogonal ?
So that results for each factor, and each
Thanks, Rich. My semi-automatic crap detector hits DELETE when it sees
things like this anyway; but... did you notice that although SamFaz
(or whoever, really) claims to cite a bill passed by the U.S. Congress
he she or it is actually writing from Canada?
I'm not quite sure what to
On Tue, 1 May 2001, Dale Glaser wrote in part:
a colleague just approached me with the following problem at
work: he wants to know the number of possible combinations of boxes,
with repeats being viable...so, e.g,. if there are 3 boxes then
what he wants to get at is the following
On Sat, 28 Apr 2001 [EMAIL PROTECTED] wrote:
I just joined the listserv. Our professor is giving us extra credit if
we join an email list re: stats. I was able to pull up one of his
messages from last year. Pretty cool. Have a great day!
You might ask him whether additional extra
On Sat, 28 Apr 2001, Abdul Rahman wrote:
Please help me with my statistics.
If you order a burger from McDonald's you have a choice of the
following condiments: ketchup, mustard , lettuce. pickles, and
mayonnaise. A customer can ask for all these condiments or any subset
of them when
On Wed, 18 Apr 2001, Giuseppe Andrea Paleologo wrote:
I am dealing with a simple conjecture. Given two generic positive
random variables, is it always true that the sum of the quantiles (for
a given value p) is greater or equal than the quantile of the sum?
snip, technical
On Fri, 27 Apr 2001, Lise DeShea wrote in part:
I teach statistics and experimental design at the University of
Kentucky, and I give journal articles to my students occasionally with
instructions to identify what kind of research was conducted, what the
independent and dependent
On Mon, 23 Apr 2001, jim clark wrote:
On 22 Apr 2001, Donald Burrill wrote:
If I were doing it, I'd begin with a full model (or augmented model,
in Judd McClelland's terms) containing three predictors:
y = b0 + b1*X + b2*A + b3*(AX) + error
where A had been recoded to (0,1
On Tue, 10 Apr 2001, Gary Carson wrote:
It's the proportion of successes (x/n) which has approximately a normal
distribution for large n, not the number of successes (x).
Both are approximately normal.
(If the r.v. W = (x/n) is (approximately) normally distributed, then
the r.v. V = x = n*W
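The point that V = n*W inherits (approximate) normality is just that a linear transform of a normal variable is normal, with mean and s.d. both scaled by n. A small binomial sketch (n = 400, p = 0.3 are illustrative values):

```python
import random

random.seed(2)
n, p, reps = 400, 0.3, 2_000

# Simulated binomial counts x and the corresponding proportions x/n.
counts = [sum(random.random() < p for _ in range(n)) for _ in range(reps)]
props = [x / n for x in counts]

mean_count = sum(counts) / reps
mean_prop = sum(props) / reps
# mean_count equals n * mean_prop exactly, and the s.d. scales by n the
# same way, so either both x and x/n are approximately normal or neither is.
```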
Everything you need is in what you wrote.
You do understand that "z" is the usual shorthand for "a standard score",
and that a standard score is the representation of a given raw score as
its deviation from the population mean in standard-deviation units?
The rest is merely a lookup in a
On Fri, 30 Mar 2001 [EMAIL PROTECTED] wrote:
Donald Burrill writes:
On Thu, 29 Mar 2001, H.Goudriaan wrote in part:
- my questionnaire items are measured on 5- and 7-point Likert scales,
and consequently not (bivariate) normally distributed;
Real data hardly ever is. Do
On Thu, 29 Mar 2001, H.Goudriaan wrote in part:
- my questionnaire items are measured on 5- and 7-point Likert scales,
so they're not measured on an interval level
Non sequitur.
and consequently not (bivariate) normally distributed;
Real data hardly ever is. Do you need it to
On Thu, 22 Mar 2001, Paul R Swank wrote:
I prefer the ocular test myself.
Were you referring to the intraocular traumatic test?
(It strikes you between the eyes.)
-- Don.
Donald
On Thu, 15 Mar 2001, dennis roberts wrote in part:
ps ... a conclusion that lots of people don't agree with one another
will not be too helpful
Maybe not, but it sure would be realistic -- which might be reassuring
to some of our students who have their own doubts on that score about our
On Tue, 13 Mar 2001, Will Hopkins wrote in part:
Example: you observe an effect of +5.3 units, one-tailed p = 0.04.
Therefore there is a probability of 0.04 that the true value is less
than zero.
Sorry, that's incorrect. The probability is 0.04 that you would find an
effect as large as
Hi, Rich. The only answer I recall having seen on the listserve was one
suggesting multilevel (aka "hierarchical") modelling. If one wanted to
address the problem without ML modelling, I'd be inclined to proceed as
follows:
(1) I assume, in the absence of commentary to the contrary, that
In response to dennis roberts, who wrote in part:
i see "inventing" some algorithm as snip not quite in the same
genre of developing a process for extracting some enzyme from a
substance ... using a particular piece of equipment specially
developed for that purpose
i hope we
Dennis also included [EMAIL PROTECTED] among his addressees,
but I am not on that list and therefore cannot reply to them...
On Tue, 6 Mar 2001, dennis roberts wrote:
many eons ago ... 1974 to be precise ... i had this idea of making a
small plastic normal and skewed curve template ... that
In response to Dennis's earlier statement,
"that is ... power in many cases is a highly overrated CORRECT decision"
I wrote:
Well, no. Overrated it may be (that lies, I think, in the eye of the
beholder); but a _decision_ it is definitely not. Power is the
_probability_ of making a
On Sun, 4 Mar 2001, dennis roberts wrote in part:
i know that sometimes power is "defined" as 1 - beta ... but, beta
could therefore (algebraically and logically) be defined as 1 - power
Only for the conditional definition of power; I would wish to add the
conditional clause "when the
On Sat, 3 Mar 2001, Arenson, Ethan wrote:
Would someone please remind me the formula for Fisher's
z-transformation of correlation coefficients?
Z = 0.5 log[(1 + r)/(1 - r)] (using the natural logarithm).
Its standard error is 1/sqrt(n - 3) ("sqrt" = "square root of").
To
On Sun, 4 Mar 2001, Philip Cozzolino wrote in part:
However, after the cubic non-significant finding, the 4th and 5th
order trends are significant.
Intuitively, it seems that if there is no cubic trend of significance,
there will not be any higher order trend, but this is relatively new
On Sat, 3 Mar 2001, dennis roberts wrote:
when we discuss things like power, beta, type I error, etc. ... we
often show a 2 by 2 table ... similar to
            null true          null false
retain      correct            type II (beta)
reject      type I (alpha)     power
Hi, Esa!
You've had a couple of responses; here's another.
You state "pairwise comparisons"; but it strikes me as at least
possible that you might want (or might _also_ want) to consider more
complex comparisons if any such comparisons seemed to offer a more
parsimonious
On Wed, 28 Feb 2001, Mike Granaas wrote in part (and 2 paragraphs of
descriptive prose quoted at the end):
... is there some method that will allow him to get the prediction
equation he wants?
Probably the best approach is the multilevel (aka hierarchical) modelling
advocated by previous
Perhaps this is too superficial -- no time to think more deeply just
now. But I suspect the difference between your two scenarios below is
that with exactly 5 computers to deal with (i.e., population size = 5)
you are sampling without replacement (which is only sensible, for the
background
On Sat, 24 Feb 2001, Mike Granaas wrote:
Interesting point. Yes, if the Ss do something other than a random guess
the binomial model would be violated. The question then becomes what
would they do if they are uncertain? I suspect that they would fall back
on visual inspection...which
A quick reply. Looks somewhat like the second course ("Intermediate
Statistics and Research Design") I taught for some years at OISE,
Toronto, which was (and is) the Graduate Department of Education for
the University of Toronto. Ask for more later if you want...
On Tue, 20 Feb 2001, Lise
I note that in the literature cited, the word "nauseam" (in the Latin
phrase "ad nauseam") is misspelled both times it appears.
-- DFB.
On Sat, 17 Feb 2001, Jeff Rasmussen wrote:
a spoof on the glut of journals:
On Thu, 8 Feb 2001, jim clark wrote in part:
We all agree that it is confusing, but I do believe that the use
of one-tailed and two-tailed to refer to directional vs.
non-directional hypotheses (rather than uniquely to one or two
tails of a distribution) is very wide-spread and quite common.
If for each Subject you have 4 Measures in each of the 3 Conditions, then
both Conditions and Measures are repeated-measures factors: your design
may be symbolized as S x C x M -- that is, Subjects (5 levels) are
crossed with both Conditions and Measures. This design is equivalent to
On Tue, 6 Feb 2001, jim clark wrote in part:
The problem is that one-tailed test is taken as synonymous with
directional hypothesis (e.g., Ha: Mu1 > Mu2). This causes no
confusion with distributions such as the t-test, because
directional implies one-tailed. This correspondence does not
hold
On Tue, 30 Jan 2001, Kathleen Bloom wrote:
If you have unequal n's, and want to determine linear parameters, you can
develop new coefficients by taking the normal unweighted coefficients
(e.g., -1, 0, +1, for a three-group design) and the formula:
(n1(X1) + n2(X2) + n3(X3)) / (n1 + n2 + n3)
On Mon, 29 Jan 2001, Chris wrote in part:
My current job requires me to analyze margins from the sales of various
products and provide an average for each during the quarter. I am using a
very large sample of all product sales by month. (Margin, i.e. not markup.
For those not familiar,
On Fri, 26 Jan 2001, Rich Ulrich quoted me:
DB: What most people who use "ordinal" and "disordinal" seem to mean
is a plot of the cell means (or of regression lines), with no
adjustment for main effects: so, a display that includes the
interaction AND the main effects. I take it
On Sun, 28 Jan 2001, Veeral Patel wrote in part:
Out of curiosity I decided to write a small program to perform the A-D
test in Matlab for the Gumbel distribution. Obtaining the Gumbel
parameters is easy. However the difficulty is in the actual A-D
computation formula as stated by
1 - 100 of 188 matches