Jim,
I respectfully disagree, and there are five decades of literature to back
me up. Berkson and Gage (1950) was written in response to medical papers that
summarized surgical outcomes using only the observed deaths, and it shows
important failings of the method. Ignoring the censored cases usually
gives
The Wald, score, and LR tests are discussed in full in my book. They
are not the same.
The LR test is based on the difference between the log-likelihood at beta=0 and
at beta=final. The score test is a Taylor series approximation to this, using
an expansion around beta=0. The Wald test is a similar Taylor series
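A minimal sketch of the three tests, using the lung data that ships with the survival package (an assumed example, not the original poster's data); summary() of a coxph fit reports all three for the hypothesis beta = 0:

```r
library(survival)

# Fit a Cox model; summary() reports all three tests of beta = 0
fit <- coxph(Surv(time, status) ~ age, data = lung)
s <- summary(fit)

s$logtest    # LR test: twice the change in log partial likelihood
s$sctest     # score test: expansion around beta = 0
s$waldtest   # Wald test: expansion around the final beta
```

The three agree asymptotically but can differ noticeably in small samples or when the information matrix is nearly singular.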
The 'date' library was written 20 or so years ago. It was a very good
first effort, but the newer 'Date' library has superior functionality in
nearly every way. The 'date' library is still available for legacy
projects such as yours, but I do not advise it for new work. To answer
your specific
In the example below (or for censored data) using survfit.coxph, can
anyone point me to a link or a PDF explaining how the probabilities
appearing in
bold under summary(pred$surv) are calculated?
These are predicted probabilities that a subject who is age 60 will
still be alive. How this is
Broström wrote:
The survreg function cannot fit left-censored data (correct me if I am wrong).
in response to my suggestion to use that routine.
You are wrong. Try reading the help file for survreg, or the references given
there.
The survreg function does not fit left-truncated data, however.
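A small sketch of a left-censored fit on made-up, detection-limit-style numbers (the data frame and variable names here are invented for illustration):

```r
library(survival)

# obs = 1 for an observed value, 0 for "known only to be below this limit"
tdat <- data.frame(y   = c(2, 3, 3, 5, 6, 8, 9, 12),
                   obs = c(0, 1, 1, 1, 0, 1, 1, 1))

# type = "left" marks the 0's as left censored; survreg accepts this directly
fit <- survreg(Surv(y, obs, type = "left") ~ 1, data = tdat,
               dist = "lognormal")
summary(fit)
```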
Ramon and Michael
There were two simple issues: first, a set of fixes I made in May that hadn't
been pushed out to CRAN yet; second, I needed to replace 'terms' with
'delete.response(terms)' in one place so that predict didn't try to find a
response that it doesn't need.
You can grab the up to
Larry,
You found a data set that kills coxph. I'll have to think about what
to do since on the one hand it's your own fault for trying to fit a very
bad model, and on the other I'd like the routine to give a nice error
message before it dies.
In the data set you sent me the predictor
You ask some good questions.
I would like to clarify the statistics that a ridge coxph returns. Here is
my understanding; please correct me where I am wrong.
1) In his paper, Gray [JASA 1992] suggests a Wald-type statistic with
a formula for the degrees of freedom. The summary function for
On 02/25/2014 05:00 AM, r-help-requ...@r-project.org wrote:
Hi,
I have some measurements and their uncertainties. I'm using an
uncensored subset of the data for a weighted fit (for now---I'll do a
fit to the full, censored, dataset when I understand the results).
survreg() reports a much
The robust variances are a completely different estimate of standard error. For linear
models the robust variance has been rediscovered many times and so has many names: the
White estimate in economics, the Horvitz-Thompson in surveys, the working-independence
estimate in GEE models,
The help page for the survfit function says it expects a formula as its
first argument so try:
sleepfit <- survfit(Surv(timeb, death) ~ 1, data = sleep)
David
Sent from my iPhone ... so unable to test.
This was a recent (well, 2007) change in behaviour. Previously the function
did some tricks
With respect to question 2, I use the wild bootstrap for tau.
Wu, C.F.J. (1986). Jackknife, bootstrap and other
resampling methods in regression analysis (with discussions).
Annals of Statistics, 14, 1261-1350.
-- begin included message --
I want to bootstrap Kendall's tau
TIBCO Software
wdunlap tibco.com
-Original Message-
From: r-help-boun...@r-project.org [mailto:r-help-boun...@r-project.org] On
Behalf
Of Hadley Wickham
Sent: Friday, March 07, 2014 7:46 AM
To: Therneau, Terry M., Ph.D.
Cc: R-help; Thomas Lumley
Subject: Re: [R] Survfit error
On Fri, Mar 7
Try help(quantile.survfit)
Terry Therneau
--- begin included message ---
Hello,
I am using the function survfit in the 'survival' package. Calling the function produces
the median survival
time automatically, as below.
sleepfit <- survfit(Surv(timeb, death) ~ 1)
sleepfit
Call:
-- begin included message ---
Hi,
I am fitting a Weibull model as follows.
My model is:
s <- Surv(DFBR$Time, DFBR$Censor)
wei <- survreg(s ~ Group + UsefulLife, data = DFBR, dist = "weibull")
How can I predict the probability of failure in the next 10 days, for new data with Group = 10
and UsefulLife = 100?
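The DFBR data aren't available, but here is one hedged sketch of how to turn a survreg Weibull fit into a failure probability, illustrated on the lung data: survreg parameterizes log(T) = linear predictor + scale * error, so T is Weibull with shape 1/scale and scale parameter exp(lp).

```r
library(survival)

# Stand-in fit (the questioner's DFBR data are not available)
fit <- survreg(Surv(time, status) ~ age, data = lung, dist = "weibull")

newdat <- data.frame(age = 60)                  # stand-in covariates
lp <- predict(fit, newdata = newdat, type = "lp")

# P(failure by t = 10) under the fitted Weibull
p10 <- pweibull(10, shape = 1 / fit$scale, scale = exp(lp))
p10
```

A cross-check on the parameterization: predict(fit, newdata = newdat, type = "quantile", p = p10) should return 10.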
I don't know much about the frailtyHL package, but from the description it appears to be
fitting the same model as coxme. The latter is designed to work with large data sets.
Terry Therneau
On 03/20/2014 06:00 AM, r-help-requ...@r-project.org wrote:
My question is related to a cox model with time-dependent variable.
When I think about it more, I get a little confused about
non-increasing assumption for survival probability for an individual.
For example, for a time-dependent ,say
The very first step is to understand the possible nature of proportional hazards. This
is parallel to the usual advice in linear models to graph the data before you start
fitting complicated non-linear models.
zp <- cox.zph(CSHR.shore.fly, transform = identity)
plot(zp)   # or plot(zp[3]) for
You can do statistical tests within a single model, for whether portions of it fit or do
not fit. But one cannot take three separate fits and compare them. The program needs
context to know how the three relate to one another. Say that group is your strata
variable, trt the variable of
For log-normal you can use the coxme package instead.
On 05/15/2014 05:00 AM, r-help-requ...@r-project.org wrote:
Hi everyone
I am attempting to estimate a model with a frailty effect distributed as
a lognormal variable. I am using the following code:
frailtyPenal(formula, data, ..., RandDist =
On 05/30/2014 05:00 AM, r-help-requ...@r-project.org wrote:
I have a dataset with 2 treatments and want to assess the effect of a
continuous covariate on the hazard ratio between treatment A and B. I want a
smoothed interaction term, which I have modelled below with the following
code:
--- begin included message ---
But If I do
fit <- coxph(Surv(futime, fustat) ~ resid.ds * rx + ecog.ps, data = ovarian,
subset = ovarian$age50)
anova(fit)
fit2 <- coxph(Surv(futime, fustat) ~ resid.ds + rx + ecog.ps, data = ovarian,
subset = ovarian$age50)
anova(fit2, fit)
The first p-value
Actually, it's worse than you think.
Ideal: if the survival curve has a horizontal segment at exactly 50%, report the midpoint
of that segment.
For uncensored data, this makes the routine agree with the ordinary definition
of a median.
Reality: The survfit routine tries for this. However,
I usually scan the digest for "surv", so missed your question on the first
round.
You caught predict with a case that I never thought of; I'll look into making
it smarter.
As Peter D said, the clogit function simply sets up a special data set and then calls
coxph, and is based on an identity
On 06/23/2014 05:00 AM, r-help-requ...@r-project.org wrote:
My problem was how to build a Cox model for the matched data (1:n) with
replacement. Usually, we can use stratified Cox regression model when the
data were matched without replacement. However, if the data were matched
with
1. The computations behind the scenes produce the variance of the cumulative hazard.
This is true for both an ordinary Kaplan-Meier and a Cox model. Transformations to other
scales are done using simple Taylor series.
H = cumulative hazard = -log(S); S = survival
var(H) = var(log(S)) = the
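This scale convention can be seen directly in a survfit object. A sketch, assuming the current survival package, where the stored fit$std.err is on the cumulative-hazard scale while summary() reports the survival-scale error via the Taylor (delta-method) relation se(S) = S * se(H):

```r
library(survival)

fit <- survfit(Surv(time, status) ~ 1, data = lung)

# fit$std.err is se(H), on the cumulative-hazard (log S) scale;
# summary() converts to the survival scale with S * se(H)
se_H <- fit$std.err
se_S <- summary(fit, times = fit$time)$std.err
all.equal(se_S, fit$surv * se_H)
```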
I've been off on vacation for a few days and so am arriving late to this
discussion.
Try ?print.survfit, and look at the print.rmean option and the discussion thereof in the
Details section of the page. It will answer your question, in more detail than you
asked. The option applies to
You are asking for a one sample test. Using your own data:
connection <- textConnection(
GD2 1 8 12 GD2 3 -12 10 GD2 6 -52 7
GD2 7 28 10 GD2 8 44 6 GD2 10 14 8
GD2 12 3 8 GD2 14 -52 9 GD2 15 35 11
GD2 18 6 13 GD2 20 12 7 GD2 23 -7 13
GD2 24 -52 9 GD2 26 -52 12
On 08/13/2014 05:00 AM, John Purda wrote:
I am curious about this problem as well. How do you go about creating the
weights for each pair, and are you suggesting that we can just incorporate a
weight statement in the model as opposed to the strata statement? And Dr.
Therneau, let's say I
Ok, I will try to do a short tutorial answer.
1. The score statistic for a Cox model is a sum of (x - xbar), where x is the covariate
vector of the subject who had an event, and xbar is the mean covariate vector for the
population, at that event time.
- the usual Cox model uses the mean of
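A hand computation of that sum on a tiny made-up data set (no tied event times), checked against the score test that coxph reports:

```r
library(survival)

tt <- c(1, 2, 3, 4, 5, 6)        # made-up times
ss <- c(1, 1, 0, 1, 0, 1)        # event indicators
x  <- c(0, 1, 1, 0, 1, 0)        # one covariate

U <- 0; I <- 0
for (i in which(ss == 1)) {
  r <- tt >= tt[i]                       # risk set at this event time
  U <- U + (x[i] - mean(x[r]))           # score: x - xbar, at beta = 0
  I <- I + mean(x[r]^2) - mean(x[r])^2   # information at beta = 0
}
U^2 / I                                  # the score test statistic

fit <- coxph(Surv(tt, ss) ~ x)
summary(fit)$sctest["test"]              # should match U^2 / I
```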
On 08/13/2014 08:38 AM, John Pura wrote:
Thank you for the reply. However, I think I may not have clarified what my
cases are. I'm studying the effect of radiation treatment (vs. none) on
survival. My cases are patients who received radiation and controls are those
who did not. I used a
I missed this question.
1. For survreg.
help(predict.survreg) shows an example of drawing a survival curve
Adding a survfit method has been on my list for a long time, since it would make this
information easier to find.
2. intcox. I had not been familiar with this function. Even though
On 07/30/2014 05:00 AM, r-help-requ...@r-project.org wrote:
A while ago, I inquired about fitting excess relative risk models in R. This is
a follow-up about what I ended up doing in case the question pops up again.
While I was not successful in using standard tools, switching to Bayesian
I would have caught this tomorrow (I read the digest).
Some thoughts:
1. Skip the entire step of subsetting the death.kmat object. The coxme function knows how
to do this on its own, and is more likely to get it correct. My version of your code would be
deathdat.kmat <- 2 * with(deathdat,
I've attached two functions used locally. (The attachments will be stripped off of the
r-help response, but the questioner should get them). The functions neardate and
tmerge were written to deal with a query that comes up very often in our medical
statistics work, some variety of get the
I'm a bit puzzled by a certain behavior with dates. (R version 3.1.1)
temp1 <- as.Date(1:2, origin = "2000/5/3")
temp1
[1] "2000-05-04" "2000-05-05"
temp2 <- as.POSIXct(temp1)
temp2
[1] "2000-05-03 19:00:00 CDT" "2000-05-04 19:00:00 CDT"
So far so good. On 5/4, midnight in Greenwich it was 19:00 on
Well duh -- type c.Date at the command prompt to see what is going on. I suspected I
was being dense.
Now that the behavior is clear, can I follow up on David W's comment that redefining the
c.Date function as
structure(c(unlist(lapply(list(...), as.Date))), class = "Date")
allows for a
This is fixed in version 2.37-8 of the survival package, which has been in my "send to
CRAN real soon now" queue for 6 months. Your note is a prod to get it done. I've been
updating and adding vignettes.
Terry Therneau
On 11/05/2014 05:00 AM, r-help-requ...@r-project.org wrote:
I am
I have a new package (local use only). R CMD check fails with a message I
haven't seen before, and I haven't been able to guess the cause.
There are two vignettes, both of which have %\VignetteIndexEntry lines.
Same failure both under R-3.1.1 and R-devel, so it's me and not R. Linux OS.
Hints
Terry T.
On 11/18/2014 08:47 AM, Hadley Wickham wrote:
Do you have a .Rbuildignore? If so, what's in it?
Hadley
On Tue, Nov 18, 2014 at 7:07 AM, Therneau, Terry M., Ph.D.
thern...@mayo.edu wrote:
I have a new package (local use only). R CMD check fails with a message I
haven't seen before
Use the coxme function (package coxme), which has the same syntax as lme4.
The frailty() function in coxph only handles the simple case of a random
intercept.
Terry Therneau
On 12/12/2014 05:00 AM, r-help-requ...@r-project.org wrote:
Hi,
I have a very simple Cox regression model in which I
Three responses to your question
1. Missing values in R are denoted by NA. When reading in your data you want to use
the na.strings option so that the internal form of the data has missing values properly
denoted.
2. If this is done, then coxme will notice the missings and remove them,
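For example, with "." assumed as the missing-value code (substitute whatever the raw file actually uses); a stand-in two-line file via the text= argument:

```r
# a small stand-in for reading the raw file
txt <- "id,age\n1,35\n2,.\n3,41"
dat <- read.csv(text = txt, na.strings = c("NA", "."))
is.na(dat$age)    # FALSE TRUE FALSE: the "." is now a proper NA
```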
On 12/23/2014 05:00 AM, r-help-requ...@r-project.org wrote:
Dear all,
I'm using the package survival for adjusting the Cox model with multiple
events (Prentice, Williams and Peterson Model). I have several covariates,
some of them are time-dependent.
I'm using the function cox.zph to check
On 12/26/2014 05:00 AM, r-help-requ...@r-project.org wrote:
I want to analyse survival data using the type I half-logistic
distribution. How can I go about it? The distributions installed in the
survival package don't include it... or I need code to
use maximum likelihood to estimate the
Your workaround does not look as easy to me.
Survival times come in multiple flavors: left censored, right censored, interval censored,
left-truncated and right censored, and multi-state. Can you give me guidance on how each
of these should sort? If a sort method is added to the package it
First:
summary(ss.rpart1)
or summary(ss.rpart1, file = "whatever")
The printout will be quite long since your tree is so large, so the second form may be
best, followed by a perusal of the file with your favorite text editor. The file name
"whatever" above should be something you choose, of
The pyears() and survexp() routines in the survival package are designed for these
calculations.
See the technical report #63 of the Mayo Biostat group for examples
I have no idea. A data set that generates the error would be very helpful to me. What is
the role of the last line BTW, the one with 1% on it?
Looking at the code I would guess that the vector tied has an NA in it, but how that
would happen I can't see. There is a reasonable chance that it
On 04/21/2015 05:00 AM, r-help-requ...@r-project.org wrote:
Dear All,
I am in some difficulty with predicting 'expected time of survival' for each
observation for a glmnet cox family with LASSO.
I have two dataset 5 * 450 (obs * Var) and 8000 * 450 (obs * var), I
considered first one as
Your problem is that PatientID, FatherID, MotherID are factors. The authors of kinship2
(myself and Jason) simply never thought of someone doing this. Yes, that is an oversight.
We will correct it by adding some more checks and balances. For now, turn your id
variables into character or
The perils of backwards compatibility.
During computation the important quantity is loglik + penalty. That is what is contained
in the third element of the loglik vector.
Originally that is also what was printed, but I later realized that for statistical
inference one wants the loglik
so this saved me substantial time.
Terry T.
On 06/04/2015 03:00 PM, Marc Schwartz wrote:
On Jun 4, 2015, at 12:56 PM, Therneau, Terry M., Ph.D. thern...@mayo.edu
wrote:
I'm checking the survival package and get the following error. How do I find
the offending line? (There are a LOT
I'm checking the survival package and get the following error. How do I find the offending
line? (There are a LOT of files in the man directory.)
Terry T.
--
* checking PDF version of manual ... WARNING
LaTeX errors when creating PDF version.
This typically indicates Rd
The help page for prmatrix states that it only exists for backwards compatibility and
strongly hints at using print.matrix instead.
However, there does not seem to be a print.matrix() function.
The help page for print mentions a zero.print option, but that does not appear to affect
matrices.
Frank,
I'm not sure what is going on. The following test function works for me in both 3.1.1
and 3.2, i.e., the second model matrix has fewer columns. As I indicated to you earlier,
the coxph code removes the strata() columns after creating X because I found it easier to
correctly create
Frank,
I don't think there is any way to fix your problem except the way that I
did it.
library(survival)
tdata <- data.frame(y = c(1,3,3,5, 5,7, 7,9, 9,13),
                    x1 = factor(letters[c(1,1,1,1,1,2,2,2,2,2)]),
                    x2 = c(1,2,1,2,1,2,1,2,1,2))
fit1 <- lm( y ~ x1 *
You were not completely clear, but it appears that you have data where each subject has
results from 8 trials, as a pair of variables is changed. If that is correct, then you
want to have a variance that corrects for the repeated measures. In R the glm command
handles the simple case but not
The difference is that survreg is using a maximum likelihood estimate (MLE) of the
variance and that lm is using the unbiased (MVUE) estimate of variance. For simple linear
regression, the former divides by n and the latter by n-p. The difference in your
variances is exactly n/(n-p) = 10/8.
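A quick numeric check of that ratio, on simulated data with n = 10 and p = 2 (intercept plus one slope), matching the 10/8 mentioned above:

```r
library(survival)

set.seed(1)
n <- 10; p <- 2
dat <- data.frame(x = rnorm(n))
dat$y <- 1 + 2 * dat$x + rnorm(n)

lfit <- lm(y ~ x, data = dat)
# Surv(y) with no censoring column treats every value as observed
sfit <- survreg(Surv(y) ~ x, data = dat, dist = "gaussian")

# lm: unbiased RSS/(n-p); survreg: MLE RSS/n, so the ratio is n/(n-p)
summary(lfit)$sigma^2 / sfit$scale^2     # n/(n-p) = 10/8 = 1.25
```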
I read this list a day late as a digest so my answers are rarely the first. (Which is
nice as David W answers most of the survival questions for me!)
What you are asking is reasonable, and in fact is common practice in the realm of
industrial reliability, e.g., Meeker and Escobar, Statistical
Type III is a peculiarity of SAS which has taken root in the world. There are three main
questions with respect to it:
1. How to compute it (outside of SAS). There is a trick using contr.treatment coding that
works if the design has no missing factor combinations; your post has a link to such a
Turner wrote:
On 23/07/15 01:15, Therneau, Terry M., Ph.D. wrote:
SNIP
3. Should you ever use it [i.e. Type III SS]? No. There is a very strong
inverse
correlation between understanding what it really is and recommending its
use. Stephen Senn has written very intelligently on the issues.
Terry
This is as much a mathematics as an R question, in the "this should be easy but I don't
see it" category.
Assume I have a full rank p by p matrix V (aside: V = (X'X)^{-1} for a particular setup),
a p by k matrix B, and I want to complete an orthogonal basis for the space with distance
function
) %*% A = 0?
Peter
On Thu, Jul 16, 2015 at 10:28 AM, Therneau, Terry M., Ph.D.
thern...@mayo.edu wrote:
This is as much a mathematics as an R question, in the "this should be easy
but I don't see it" category.
Assume I have a full rank p by p matrix V (aside: V = (X'X)^{-1} for a
particular setup
On 10/28/2015 06:00 AM, r-help-requ...@r-project.org wrote:
Hello all!
I'm fitting a mixed effects Cox model with the coxme function of the coxme package.
I want to know what is the best way to check the model adequacy, since the
function cox.zph does not work for coxme objects.
Thanks in
Look at the rpart vignette "User written split functions". The code allows you to add
your own splitting method to the code (in R, no C required). This has proven to be very
useful for trying out new ideas.
The second piece would be to do your own cross-validation. That is, turn off the
The error message states that there is an invalid value for the density. A long stretch
of code is not very helpful in understanding this. What we need are the definition of
your density -- as it would be written in a textbook. This formula needs to give a valid
response for the range
Hi, I want to perform a survival analysis using the survreg procedure from
the survival library in R with a Pareto distribution for a time variable, so I
set up the new distribution using the following syntax:
library(foreign)
library(survival)
library(VGAM)
mypareto <-
The cutpoint is on the predictor, so the interpretation is the same as it is for any other
rpart model. The subjects with predictor < cutpoint form one group and those > cutpoint
the other. The cutpoint is chosen to give the greatest difference in "average y" between
the groups. For poisson
On 10/14/2015 05:00 AM, r-help-requ...@r-project.org wrote:
I am trying to fit this data to a weibull distribution:
My y variable is:1 1 1 4 7 20 7 14 19 15 18 3 4 1 3 1 1 1
1 1 1 1 1 1
and x variable is:1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18
19 20 21 22 23 24
On 08/30/2015 05:00 AM, r-help-requ...@r-project.org wrote:
I'm unable to fit a parametric survival regression using survreg() in the survival package with
data in "counting-process" ("long") form.
To illustrate using a scaled-down problem with 10 subjects (with data placed on
the web):
I'd like to flatten a list from 2 levels to 1 level. This has to be easy, but
is currently opaque to me.
temp <- list(1:3, list(letters[1:3], duh= 5:8), zed=15:17)
Desired result would be a 4 element list.
[[1]] 1:3
[[2]] "a", "b", "c"
[[duh]] 5:8
[[zed]] 15:17
(Preservation of the names is
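One base-R sketch (the helper name flatten1 is mine, not an existing function): wrap each non-list element in a one-element list, then let c() splice a single level of nesting together.

```r
# wrap non-list elements, then let c() splice one level of lists together
flatten1 <- function(x) {
  do.call(c, lapply(seq_along(x), function(i)
    if (is.list(x[[i]])) x[[i]] else x[i]))   # x[i] keeps the outer name
}

temp <- list(1:3, list(letters[1:3], duh = 5:8), zed = 15:17)
flat <- flatten1(temp)
length(flat)   # 4
names(flat)    # ""  ""  "duh"  "zed"
```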
This was an FDA/SAS bargain a long while ago. SAS made the XPT format publicly available
and unchanging in return for it becoming a standard for submission. Many packages can
reliably read or write these files. (The same is not true for other SAS file formats, nor
is xport the SAS default.)
text/csv" field coming from an http POST request. This
is an internal service on an internal Mayo server and coded by our own IT department; this
will not be the first case where I have found that their definition of "csv" is not quite
standard.
Terry T.
On 23/09/15 10:00, T
I've been away for a couple weeks and am now catching up on email.
The issue is that the coxme code does not have conversions built-in for all of the
possible types of sparse matrix. Since it assumes that the variance matrix must be
symmetric, the not-necessarily-symmetric dgCMatrix class is
I have a csv file from an automatic process (so this will happen thousands of times), for
which the first row is a vector of variable names and the second row often starts
something like this:
5724550,"000202075214",2005.02.17,2005.02.17,"F", .
Notice the second variable which is
a
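If the issue is that the quoted "000202075214" must stay character (its leading zeros vanish when it is read as a number), a hedged sketch with colClasses on a stand-in one-row file:

```r
txt <- '5724550,"000202075214",2005.02.17,2005.02.17,"F"\n'
dat <- read.csv(text = txt, header = FALSE,
                colClasses = c("numeric", "character", "character",
                               "character", "character"))
dat$V2    # "000202075214" -- leading zeros preserved
```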
I expect that reading the result of print(fit.weib) will answer your question. If there
were any missing values in the data set, then the fit.weib$linear.predictors will be
shorter than the original data set,
and the printout will have a note about "...deleted due to missing".
The simplest
How should the weights be treated? If they are multiple observation weights (a weight of
"3" is shorthand for 3 subjects) that leads to a different likelihood than sampling
weights ("3" means to give this one subject more influence). The clogit command can't
read your mind and so has chosen
As a digest reader I am late to the discussion, but let me toss in 2 further
notes.
1. Three advantages of knitr over Sweave
a. The book "Dynamic documents with R and knitr". It is well written; sitting down for
an evening with the first half (70 pages) is a pretty good way to learn the
I'm traveling so chasing this down more fully will wait until I get home.
Four points.
1. This is an edge case. You will notice that if you add "subset=1:100" to the
coxph call that the function works perfectly. You have to get up to 1000 or so
before it fails.
2. The exact partial
I read the digest form which puts me behind, plus the last 2 days have been solid meetings
with an external advisory group so I missed the initial query. Three responses.
1. The clogit routine sets the data up properly and then calls a stratified Cox model. If
you want the survConcordance
For an interval censored poisson or lognormal, use survreg() in the survival package. (Or
if you are a SAS fan use proc lifereg). If you have a data set where R and SAS give
different answers I'd like to know about it, but my general experience is that this is
more often a user error. I am
On 03/02/2016 05:00 AM, r-help-requ...@r-project.org wrote:
I'd very much appreciate your help in resolving a problem that I'm having with
plotting a spline term.
I have a Cox PH model including a smoothing spline and a frailty term as
follows:
fit<-coxph(Surv(start,end,exit) ~ x +
On 04/02/2016 05:00 AM, r-help-requ...@r-project.org wrote:
Hello,
I'm looking for a way in which R can make my life easier.
Currently I'm using R to convert data from a dataframe to JSON and then sending
the JSON to a REST API using a curl command in the terminal (I'm on a Mac).
I've
Failure to converge in a coxph model is very rare. If the program does not make it in 20
iterations it likely will never converge, so your control argument will do little.
Without the data set I have no way to guess what is happening. My first question,
however, is to ask how many events you
Thanks to David for pointing this out. The "time dependent covariates" vignette in the
survival package has a section on time dependent coefficients that talks directly about
this issue. In short, the following model is simply wrong:
coxph(Surv(time, status) ~ trt + prior + karno +
2016-04-15 13:58 GMT+02:00 Therneau, Terry M., Ph.D. <thern...@mayo.edu
<mailto:thern...@mayo.edu>>:
I'd like to get interaction terms in a model to be in another form. Namely,
suppose I
had variables age and group, the latter a factor with levels A, B, C, with
age
I'd like to get interaction terms in a model to be in another form. Namely, suppose I had
variables age and group, the latter a factor with levels A, B, C, with age * group in the
model. What I would like are the variables "age:group=A", "age:group=B" and
"age:group=C" (and group itself of
A new version of the survival package has been released. The biggest change is stronger
support for multi-state models, which is an outgrowth of their increasing use in my own
practice. Interested users are directed to the "time dependent covariates" vignette for
discussion of the tmerge and
Look at the finegray command within the survival package; the competing risks vignette has
coverage of it. The command creates an expanded data set with case weights, such that
coxph() on the new data set = the Fine Gray model for the original data. Anything that
works with coxph is valid on
This simple form of a hyperbola is not well known. I find it useful for change point
models: since the derivative is continuous it often behaves better in a maximizer.
h1 <- function(x, b, k=3) .5 * b * (x + sqrt(x^2 + k^2))
Function h1() has asymptotes of y=0 to the left of 0 and y=x to the
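A quick numeric look at those asymptotes (with b = 2 here, so the right-hand asymptote is the line y = b*x):

```r
# Smooth hinge via a hyperbola: asymptotes y = 0 (left) and y = b*x (right)
h1 <- function(x, b, k = 3) 0.5 * b * (x + sqrt(x^2 + k^2))

h1(-1e6, b = 2)             # essentially 0: left asymptote y = 0
h1( 1e6, b = 2) - 2 * 1e6   # essentially 0: right asymptote y = b*x
h1(0, b = 2)                # b*k/2 = 3: the smooth "corner" at x = 0
```

Larger k gives a more gradual bend; as k shrinks toward 0 the function approaches the kinked hinge b * pmax(x, 0).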
I have a process that I need to parallelize, and have a question about two
different ways to proceed. It is essentially an MCMC exploration where
the likelihood is a sum over subjects (6000 of them), and the per-subject
computation is the slow part.
Here is a rough schematic of the code using
On 08/20/2016 05:00 AM, Vinzenz wrote:
For some days I have been struggling with a problem concerning the
survSplit function of the package 'survival'. Searching the internet I
have found a pretty good (German) description by Daniel Wollschläger
describing how to use survSplit:
The survSplit
You will need to give more detail of exactly what you mean by "prune using a validation
set". The prune.rpart function will prune at any value you want; what I suspect you are
looking for is to compute the error of each possible tree using a validation data set,
then find the best one, and
my question.
Best,
Alfredo
-Messaggio originale-
Da: Therneau, Terry M., Ph.D. [mailto:thern...@mayo.edu]
You will need to give more detail of exactly what you mean by "prune using a
validation
set". The prune.rpart function will prune at any value you want; what I suspect
you are
You can ignore the message below. The maximizing routine buried within the frailty()
command buried with coxph() has a maximizer that is not the brightest. It sometimes gets
lost but then finds its way again. The message is from one of those. It likely took a
not-so-good update step, and
On 09/07/2016 05:00 AM, r-help-requ...@r-project.org wrote:
Dear R-Team,
I have been trying to use the finegray routine that creates a special data
so that Fine and Gray model can be fit. However, it does not seem to work.
Could you please help me with this issue?
Thanks,
Ahalya.
You
I'm off on vacation and checking email only intermittently.
Wrt the offset issue, I expect that you are correct. This is not a case that I
had ever envisioned, and so was not on my "list" when writing the code and
certainly has no test case. That does not mean that it shouldn't work, just
Survival version 2.40 has been released to CRAN. This is a warning that some users may see
changes in results, however.
The heart of the issue can be shown with a simple example. Calculate the following simple
set of intervals:
birth <- as.Date("1973/03/10")
start <- as.Date("1998/09/13")
On 11/29/2016 05:00 AM, r-help-requ...@r-project.org wrote:
Independent censoring is one of the fundamental assumptions in the survival
analysis. However, I cannot find any test for it or any paper which discusses
how real that assumption is.
I would be grateful if anybody could point me
I'm looking for advice on which of the parallel systems to use.
Context: maximize a likelihood, each evaluation is a sum over a large number of
subjects (>5000) and each of those per subject terms is slow and complex.
If I were using optim the context would be
fit <- optim(initial.values,