ger way?
>
> --
> Jonathan Baron, Professor of Psychology, University of Pennsylvania
> Home page: http://www.sas.upenn.edu/~baron
> R page: http://finzi.psych.upenn.edu/
>
> ______
> [EMAIL PROTECTED] mailing list
tures of
bivariate normal distributions with equal variances and differences in means in only
one coordinate. :-)
---
Frank E Harrell Jr Prof. of Biostatistics & Statistics
Div. of Biostatistics & Epidem. Dept. of Health Evaluation Sciences
lp
> >
> > best wishes
> >
> > andi
> >
> > __
> > [EMAIL PROTECTED] mailing list
> > https://www.stat.math.ethz.ch/mailman/listinfo/r-help
>
s, which is intended for binary outcomes and incorporates bootstrapping for
estimating predictive accuracy of the network.
You may obtain Nevprop at http://brain.cs.unr.edu
---
Frank E Harrell Jr
ts documentation from http://www.medsch.wisc.edu/landemets
ldBands makes ld98 easier to use. Examples show how to do power calculations.
---
Frank E Harrell Jr
this warning?
Thanks,
Frank
---
Frank E Harrell Jr Prof. of Biostatistics & Statistics
Div. of Biostatistics & Epidem. Dept. of Health Evaluation Sciences
U. Virginia School of Medicine http://hesweb1.med.virginia.edu/biostat
, just
> couldn't find any discussion of such a situation.
>
> Thanks much in advance,
>
> --
> Shravan Vasishth Phone: +49 (681) 302 4504
See W. Cleveland "The Elements of Graphing Data" (Hobart Press) for reasons not to do
this.
I still think that some sort of global option for this is needed. I remain
unconvinced that the current default is the most useful one. In my data analysis work
I have always wanted to have a subset that was formed on a categorica
covariance matrices of regression
coefficient estimates with repeated measures. Details of changes along with
installation instructions may be found at http://hesweb1.med.virginia.edu/s/Hmisc.html
and .../Design.html.
---
Frank E Harrell Jr
> > that weights are not supplied as I intend, instead each subset of Sub, when
> > passed to weighted.mean(), receives the whole x$Length as weights, which is
> > not correct.
> >
> > Is there an elegant way to do this, or do I have to have a loop here?
> >
the source code for latex.default by adding the line
extracolheads <- c('', extracolheads)
after the line
col.just <- c(rowlabel.just, col.just)
the problem should be fixed. The next release of Hmisc will have this fix.
Frank
---
Frank E Harrell Jr
y,x1,x2)
f <- transcan(~y+x1+x2, ..., data=d)
Frank
>
> I am not sure what I am missing.
>
> Vumani
>
---
Frank E Harrell Jr
On Sat, 14 Jun 2003 09:33:47 +0700
Philippe Glaziou <[EMAIL PROTECTED]> wrote:
> Frank E Harrell Jr <[EMAIL PROTECTED]> wrote:
> > I tried this on the latest version of Hmisc (1.6-0):
> >
> > library(Hmisc)
> > set.seed(1)
> > y <- factor(sample(
em with summary(method="reverse") on
> other datasets, and various combinations of options passed
> to the latex command.
>
> Thanks
>
> --
> Philippe
>
I tried this on the latest version of Hmisc (1.6-0):
library(Hmisc)
set.seed(1)
y <- factor(sample(c(
;ll have simulation studies comparing aregImpute with NORM.
---
Frank E Harrell Jr
_
logistic regression to time dependent {Cox}
regression analysis: {The} {Framingham} {Heart} {Study}},
journal = {Statistics in Medicine},
volume = 9,
pages = {1501--1515},
annote = {time-dependent covariable; repeated measures logistic
model; person-years logistic model}
}
---
Frank E Harrell Jr
On Thu, 05 Jun 2003 21:50:13 -0400
"Liaw, Andy" <[EMAIL PROTECTED]> wrote:
> Hi Frank,
>
> > From: Frank E Harrell Jr [mailto:[EMAIL PROTECTED]
>
> [snip]
>
> > The anova method for ols fits 'works' when you penalize the
> > mo
ion(chi))
# lty only for demonstration - omit that for this example.
# Thick gray-scale lines are excellent for step functions.
Or use same line types but put symbols every so often (point.inc= to override default
spacing; this works well for overlapping step functions also):
labcurve(w, pl=T
n't
compete with each other, or collapse them into summary scores (e.g., principal
components) before putting them in the model.
---
Frank E Harrell Jr
---
>
> I was wondering whether someone can help me understand the
> following behavior of the ace-function:
>
> When ace is called with the mon parameter set to zero, R gives the
> message "response spec can only be lin or ordered
> (default)" and returns immediately. However, accord
w methods provide much more safety. I have found though that I
don't need this kind of protection from myself. I have plenty of other problems to
worry about.
That's my $.02 worth.
--
Frank E Harrell Jr
nd LAST.variable. As seen in the examples I mentioned above, you
handle this in a completely different way in S (using lags, aggregation functions, or
for loops).
--
Frank E Harrell Jr
s/feh/clinreport/dmcreport.pdf
For statistical reports you have chosen well in considering integrating R and LaTeX.
The Alzola-Harrell text also covers a bit about using make and Perl to run scripts
(to get data from SAS to R, run R, etc.).
--
Frank E Harrell Jr
tatus line, suggesting that something is at least partially blocking
> the link activation in the browser.
>
> Regards,
>
> Marc Schwartz
>
y displaying complex regression models. A full course
description may be found at http://www.insightful.com/services/course.asp?CID=27
To Register:- Web: http://www.insightful.com/services/register.asp
- Email: [EMAIL PROTECTED]
- Call Kim Kelly at: 800-569-0
validation), lrm (logistic regression model)
Thanks,
Frank
--
Frank E Harrell Jr
quency case weights. The lrm function in the Design package
(http://hesweb1.med.virginia.edu/biostat/s/Design.html) does.
--
Frank E Harrell Jr
a 2000-point
grid. That allows simple numerical integration to be used to factor in the
complications, to get the cumulative hazard function and then do simulations off that.
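The grid idea can be sketched as follows. Everything here is hypothetical (the hazard function is a stand-in, and the object names are mine); it only illustrates the numerical-integration and inversion steps:

```r
# Hypothetical sketch: evaluate a complicated hazard on a fine grid,
# integrate numerically to get the cumulative hazard Lambda(t), then
# simulate event times by inverting S(t) = exp(-Lambda(t)).
tt  <- seq(0, 10, length=2000)            # 2000-point time grid
haz <- 0.1 + 0.05 * sin(tt)               # stand-in for a complicated hazard
Lam <- cumsum(haz) * (tt[2] - tt[1])      # simple numerical integration
S   <- exp(-Lam)                          # survival function on the grid
u   <- runif(500)                         # uniform draws
times <- approx(S, tt, xout=u, rule=2)$y  # invert S(t) by interpolation
```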
--
Frank E Harrell Jr
> R page: http://finzi.psych.upenn.edu/
>
Hi Jonathan,
The "outer" method is elegant but uses too much memory for large datasets.
Venables' welcome post I'll add :)
There is one scientific basis for choosing type III contrasts. If one desires a
low-precision contrast (or a low power test) in the presence of major imbalances, type
III is for you.
---
Frank E Harrell Jr
:
function(x, myarg)
What is the proper way to handle this?
Thanks
--
Frank E Harrell Jr
the same.
> For example, would it be possible to use apply() with cor.test instead
> of using the for loops?
> I'm trying to improve my low R skills.
> Thanks
>
> Juli
>
One approach is to install the Hmisc package and run rcorr(X) which will give you a
matrix of P values
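A minimal sketch, assuming the Hmisc package is installed and X is a numeric matrix with variables in columns (the data here are made up):

```r
library(Hmisc)                  # provides rcorr
set.seed(1)
X <- cbind(a = rnorm(30), b = rnorm(30), c = rnorm(30))
r <- rcorr(X)                   # Pearson by default; type='spearman' also works
r$r                             # matrix of correlation coefficients
r$P                             # matrix of P values (diagonal is NA)
```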
is the unique subject
identifier):
f <- lrm(y ~ x1 + x2*x3 + ..., x=T, y=T) # working independence model
g <- robcov(f, id) # cluster sandwich variance adjustment
h <- bootcov(f, id, B=100) # cluster bootstrap adjustment
summary(g) # etc.
--
Frank E Harrell Jr
at you want is cumulative probabilities. Just compute
the empirical cumulative distribution function of the original x:
library(stepfun)
ecdf(x)
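For example (a small sketch; x here is just illustrative data, and in current R versions ecdf is in the standard stats package so library(stepfun) is no longer needed):

```r
# ecdf came from the stepfun package in R 1.x; it is in stats today.
set.seed(1)
x  <- rnorm(100)
Fn <- ecdf(x)     # Fn is a step function: Fn(q) estimates P(X <= q)
Fn(max(x))        # cumulative probability at the largest observation -> 1
Fn(min(x))        # at the smallest observation -> 1/100
plot(Fn)          # plot of the empirical CDF
```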
--
Frank E Harrell Jr
This was discussed in the group just a few days ago. Please check the r-help archive.
--
Frank E Harrell Jr
tp://www.uni-kiel.de/agrarpol/ahenningsen.html
>
This function in the Hmisc library may help:
all.is.numeric <- function(x, what=c('test'
model fit, the types of LaTeX options that apply are drastically
different from those when I latex() a data frame for making a table with minor and major
groupings. A very small number of arguments are in common. I tried converting latex()
to use the new methods some time ago, and had to abandon it.
the logit framework* (say for propensity score calculation)?
>
>
>
> Thanks!
>
> Stan
The "An Introduction to R" manual that comes with the system covers the glm function.
--
Frank E Harrell Jr
284 150 319 135
attach(titanic3)
summarize(survived,llist(sex,pclass,m),
function(y)c(died=sum(y==0),lived=sum(y==1)))
     sex pclass        m survived lived
1 female    1st      bad        0    14
2 female    1st bad,good        1    28
3 female    1st    bad,u
code(Function(fit)): translate formula to SAS notation
What I think would be very useful would be a function like Function that instead
symbolically creates the design matrix, and translates that function to SQL etc.
This would allow computation of confidence limits.
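As an illustration of the idea only (this is entirely hypothetical, not an existing Design feature; the coefficient names and the helper are made up), one could emit a fitted linear predictor as an SQL expression:

```r
# Hypothetical sketch: turn a named coefficient vector into an SQL
# expression for the linear predictor X * beta.
coefs <- c("(Intercept)" = -1.2, age = 0.03, sbp = 0.01)
lpToSQL <- function(coefs) {
  terms <- c(format(coefs[["(Intercept)"]]),
             paste(format(coefs[-1]), "*", names(coefs)[-1]))
  paste(terms, collapse = " + ")
}
lpToSQL(coefs)   # "-1.2 + 0.03 * age + 0.01 * sbp"
```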
--
Frank E Harrell Jr
hesweb1.med.virginia.edu/biostat/s/doc/summary.pdf where examples of the
mChoice (multiple choice) function in Hmisc are given.
--
Frank E Harrell Jr
>
I guess this means that my reply to you a few weeks ago (which you did not
acknowledge) when you first asked the question was not helpful.
--
Frank E Harrell Jr
hese use when fetching
labels from XML.
Another possibility is to make a table defining variable-specific metadata. Then you
could just read in the table and write a short function to pull out labels after
matching on variable names, assigning the labels to an attribute of your choosing.
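A sketch of that approach (the table layout, the attribute name, and the helper function are all hypothetical):

```r
# Hypothetical sketch: match variable names against a metadata table
# and attach each label as a 'label' attribute.
meta <- data.frame(name  = c("age", "sbp"),
                   label = c("Age in years", "Systolic blood pressure"),
                   stringsAsFactors = FALSE)
assignLabels <- function(d, meta) {
  for (v in intersect(names(d), meta$name))
    attr(d[[v]], "label") <- meta$label[match(v, meta$name)]
  d
}
d <- data.frame(age = c(30, 41), sbp = c(120, 135))
d <- assignLabels(d, meta)
attr(d$age, "label")   # "Age in years"
```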
--
Frank E Harrell Jr
er.
>
This was tricky. On RedHat 8.0 I had to put
(setq ps-postscript-code-directory "/usr/share/emacs/21.2/etc")
in .xemacs/i
stic, accelerated failure time, and Cox models you can do cluster
adjustments using either the robcov (Huber-White-Efron methods) or bootcov (cluster
bootstrap) functions in the Design library. See
http://hesweb1.med.virginia.edu/biostat/s/Design.html
Doing this at the same time
at/s/Design.html). By the way nomograms were
created in the French schools of civil engineering.
To draw something fairly simple such as a distribution you could program this in R
easily, and my nomogram function would not apply.
--
Frank E Harrell Jr
##
> I recommend www.boag.de
>
--
Frank E Harrell Jr
Frank Harrell
On Sat, 01 Feb 2003 00:25:54 -0500
[EMAIL PROTECTED] wrote:
--
Frank E Harrell Jr
;t found where to download the fptex software.
> And worst: I don't speak German!!!
>
> Thank you very much!
>
--
Frank E Harrell Jr
who use or would be
interested in using LaTeX, the greatest productivity tool for document processing in
my opinion.
--
Frank E Harrell Jr
iles created by that
code chunk. The 'validatemodel' chunk creates by default 'validatemodel.lst' to
contain the printed output, and files such as 'validatemodel.ps' for graphics.
It would be nice if Sweave could implement this type of model. In LaTeX I find it
inva
book "Computer Programming for Dummies" 2nd edition by Wallace Wang
(New York: Hungry Minds, Inc, 2001) which looks pretty good. It mainly teaches using
a free version of Basic but introduces many other languages including Java and has a
lot of good background information about comput
rofit from the
University of Wisconsin Dept. of Biostatistics model and will use only S and LaTeX for
statistical reporting.
Frank
--
Frank E Harrell Jr
04 1.338717e+09 4.045300e+04
. . . .
122 3.687825e-40 3.687825e-40 3.687825e-40 5.868918e-40 3.687825e-40
123 5.904941e-40 2.942346e+63 9.068390e+43 NA -5.524256e-48
124 3.835229e-93 6.434447e-86 NA 3.687825e-40 3.687825e-40
test.xpt and test2.xpt may be retrieved from
ht
--
Frank E Harrell Jr
sweep(z, 1, sums, FUN='/')
> z # each row represents multinomial probabilities summing to 1
> [1,] 0.4125705 0.5874295 0.000
> [2,] 0.000 0.5874295 0.4125705
> [3,] 0.000 0.4011696 0.5988304
> [4,] 0.000 0.2023697 0.7976303
>
>
> Th
03
The code is moderately fast. Does anyone know of a significantly faster method or
have any comments on the choice of weighting function for such sampling? This will
be used in the context of predictive mean matching for multiple imputation. Thanks -
Frank
--
Frank E Harrell Jr
tandard logrank test, see the logrank function in the Hmisc package (it does not
handle stratification though). E.g.
library(Hmisc)
logrank(Surv(d.time,death), treatment) # assumes treatment coded 1,2
See http://hesweb1.med.virginia.edu/biostat/s/Hmisc.html
--
Frank E Harrell Jr
in advance,
> Sophie
matchCases is in the Hmisc package. See
http://hesweb1.med.virginia.edu/biostat/s/Hmisc.html
Hmisc is not on CRAN yet.
--
Frank E Harrell Jr
s of the distribution I know with
> confidence?
>
> Damon Wischik.
>
>
My advice would be to plot superposed ECDFs with the one you want to deemphasize shown
in light gray scale.
--
Frank E Harrell Jr
ams
(see e.g. histbackback in the Hmisc library). But better still would be superposed
ECDFs (e.g., ecdf() in Hmisc or in Martin Maechler's package). ECDFs are much better
for showing distribution differences in my view.
--
Frank E Harrell Jr
> Professor of Applied Statistics, http://www.stats.ox.ac.uk/~ripley/
> University of Oxford, Tel: +44 1865 272861 (self)
> 1 South Parks Road, +44 1865 272866 (PA)
> Oxford OX1 3TG, UK Fax: +44 1865 272595
>
--
Frank E Harrell Jr
powerpc
> os darwin6.2
> system powerpc, darwin6.2
> status
> major 1
> minor 6.1
> year 2002
> month 11
> day
cter string to POSIXct
what is? On a more minor note why the EST if no time is printed?
Thanks,
Frank
--
Frank E Harrell Jr
x.
Extended documentation for the libraries, and an introduction to the S language have
been updated also (http://hesweb1.med.virginia.edu/biostat/s/doc/splus.pdf) and now
include more R-specific information.
Thanks to those who have reported bugs and fixes, and Happy New Year to all.
Frank E Harrell
als are
almost disallowing P-values in favor of CLs)
- P-values are dangerous, especially large, small, and in-between ones.
See http://hesweb1.med.virginia.edu/biostat/teaching/bayes.short.course.pdf for a
full sermon.
--
Frank E Harrell Jr