Cox[[k]] <- update(if (k==1) Cox0 else Cox[[k-1]], form)
}
From: Frank S.
Sent: Friday, August 30, 2019 6:36:39 PM
To: Andrews, Chris
Cc: r-help@r-project.org
Subject: RE: [R] Efficient way to update a survival model
Chris, thank you
The updated formula needs to have a different term rather than cos(k * v) every
time. Here is one way to explicitly change the formula.
library("survival")
set.seed(1)
v <- runif(nrow(pbc), min = 0, max = 2)
Cox0 <- coxph(Surv(time, status == 2) ~ v, data = pbc)
Cox <- vector("list",
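The snippet above is cut off; a hypothetical completion of the idea (the number of refits `K` and the exact formula-building step are my assumptions, built around the `update(if (k==1) Cox0 else Cox[[k-1]], form)` line from the thread):

```r
library(survival)
set.seed(1)
v <- runif(nrow(pbc), min = 0, max = 2)
Cox0 <- coxph(Surv(time, status == 2) ~ v, data = pbc)

K <- 3
Cox <- vector("list", K)
for (k in 1:K) {
  # build a formula with a different term each iteration
  form <- as.formula(sprintf(". ~ . + cos(%d * v)", k))
  Cox[[k]] <- update(if (k == 1) Cox0 else Cox[[k - 1]], form)
}
coef(Cox[[K]])  # v plus one cos(k * v) term per iteration
```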
Message-
From: Andrews, Chris
Sent: Friday, August 23, 2019 9:18 AM
To: 'Göran Broström'; r-help@r-project.org; tamas.fere...@medstat.hu
Subject: RE: [R] results of a survival analysis change when converting the data
to counting process format
# For what it is worth, even the second fit (cuts at observation times) does
not give coefficient estimates identical to those from the original data structure.
answer <- coef( coxph(Surv(time, status) ~ trt + prior + karno, data = veteran)
)
veteran2 <- survSplit( Surv(time, status) ~ trt + prior
Denise,
Interacting a fixed effect with a strata variable is not uncommon. No main
effect of the strata variable is possible, but the interaction term allows the
fixed-effect variable to have a different coefficient in each stratum. I've more
often seen it coded as a direct interaction:
coxph(
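The call above is cut off; a hedged sketch of that coding, using the `veteran` data as a stand-in (the variable choices are mine, not from the original thread):

```r
library(survival)
# 'karno' gets a separate coefficient within each 'celltype' stratum;
# no main effect of the stratification variable itself is estimated.
fit <- coxph(Surv(time, status) ~ karno:strata(celltype), data = veteran)
coef(fit)  # one karno coefficient per celltype level
```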
While looking at the code of print.aov for a different reason, I noticed that
'coefficient' was spelled with 3 'f's in one location. Perhaps this is on
purpose but in another location it has just 2 'f's. This has not caused me any
problem (that I know of) but I found it curious.
Chris
After making scatterplot matrix, I determined I only needed the first 2 columns
of the matrix so I added verInd=1:2 to my pairs() call. However, it did not
turn out as I expected.
Perhaps the attached pdf of the example code will make it through. If not, my
description is "the wrong
I only get the digest, sorry if this has already been answered.
When I run your code (after creating some data) I get a warning that "weights
are ignored in clogit". This is a result of miscalling the clogit function.
The first 2 commas should be +s.
library(survival)
nn <- 1000
dat <-
The package maintainer may be able to give help. However, I don't get the same
output as you (3.1.2). Perhaps you can update and solve your problem.
require(survMisc)
Loading required package: survMisc
Loading required package: survival
Loading required package: splines
Loading required
Your definition of p-value is not correct. See, for example,
http://en.wikipedia.org/wiki/P-value#Misunderstandings
-Original Message-
From: Monnand [mailto:monn...@gmail.com]
Sent: Wednesday, January 14, 2015 2:17 AM
To: Andrews, Chris
Cc: r-help@r-project.org
Subject: Re: [R] two
From: Monnand [monn...@gmail.com]
Sent: Monday, January 12, 2015 10:14 PM
To: Andrews, Chris
Cc: r-help@r-project.org
Subject: Re: [R] two-sample KS test: data becomes significantly different after
normalization
Thank you, Chris!
I think it is exactly the problem you mentioned. I did
The main issue is that the original distributions are the same, you shift the
two samples *by different amounts* (about 0.01 SD), and you have a large
(n=1000) sample size. Thus the new distributions are not the same.
This is a problem with testing for equality of distributions. With large
If you are testing H0: p = 0.6 vs H1: p != 0.6 with a sample of size 10 and you
observe X=2, then Pr(X = 2) + Pr(X = 8) is not what you want. You can argue
that you want Pr(X = 2) + Pr(X = 10). Both 2 and 10 are 4 away from the null.
binom.test(2, 10, 0.6, alternative="two.sided") # 0.01834
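A sketch of the two candidate sums (same n = 10, X = 2, p0 = 0.6 as above):

```r
d <- dbinom(0:10, 10, 0.6)
# binom.test sums over outcomes no more likely than the observed X = 2
sum(d[d <= dbinom(2, 10, 0.6)])            # 0.01834
# "same distance from the null mean of 6" version: X = 2 and X = 10
dbinom(2, 10, 0.6) + dbinom(10, 10, 0.6)   # about 0.0167
```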
Hi all,
The default display of random dates with breaks=months is less than optimal.
set.seed(20141024)
random.dates <- as.Date("2001/1/1") + round(365*stats::runif(100))
hist(random.dates, "months")
The 13 edges of the 12 bars are labelled Dec, Jan, Feb, ..., Nov, Dec.
The first bar represents the
Sent: Thursday, July 10, 2014 8:59 AM
To: Andrews, Chris
Cc: r-help@r-project.org
Subject: RE: [R] Survival Analysis with an Historical Control
Hi Chris,
Thanks for pointing out the use of View page source. Very helpful to know.
Do you happen to know anything about how to perform the analysis itself
Hi Lars,
Graph it:
plot(pred_leuk)
will show that some of the survival curves do not reach 0.5 before you run out
of data. Thus the median is not estimated and you get NA.
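A minimal sketch of the phenomenon (toy data of my own, not from the original question):

```r
library(survival)
# One event among five subjects: the curve drops to 0.8 and never
# reaches 0.5, so the median survival time is reported as NA.
fit <- survfit(Surv(c(1, 2, 3, 4, 5), c(1, 0, 0, 0, 0)) ~ 1)
summary(fit)$table["median"]  # NA
```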
Chris
-Original Message-
From: Lars Bishop [mailto:lars...@gmail.com]
Sent: Thursday, July 10, 2014 6:23 AM
The code is actually available at the websites you provide. Try "View page
source" in your browser. The most cryptic code isn't needed because the math
functions (e.g., the incomplete gamma function) are available in R.
-Original Message-
From: Paul Miller [mailto:pjmiller...@yahoo.com]
has). If a
person is found guilty of two crimes (e.g. two robberies in a single spree) and
the jail terms are to be served consecutively, is that one event or two?
Chris
-Original Message-
From: John Kane [mailto:jrkrid...@inbox.com]
Sent: Wednesday, June 11, 2014 9:13 AM
To: Andrews
I wouldn't consider the person at risk for reincarceration if he is currently
imprisoned, so I wouldn't use those intervals as part of the response
variable. Perhaps time in custody would be a covariate used to model the time
until reincarceration. One variable that is commonly needed for
Please include example data in the future. Perhaps the following is useful.
(1) Your model is redundant. The * produces both main effects and the
interaction. So I removed the main effects from your call
(2) For my simulated data, the df=0 option chose a model that resulted in a
singular
Try adding plots, e.g.
set.seed(20140514)
x <- rnorm(100)
hist(x, prob=TRUE, ylim=c(0,10))
dd <- density(x, n=10001, bw=0.001)
lines(dd, col=2, type="s")
dd <- density(x, n=101, bw=0.001)
lines(dd, col=3, type="s")
The density function you produce with bw=0.001 is very irregular (many sharp,
narrow
I don't think cov.wt uses frequency weights. However, I don't think this is
mentioned in its help page.
Here is some information about the difference:
http://stats.stackexchange.com/questions/61225/correct-equation-for-weighted-unbiased-sample-covariance
The frequency version isn't hard to
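The last line is cut off; a sketch of the frequency-weighted version by the expand-the-rows route (toy data of my own), contrasted with cov.wt, which as noted treats weights as probability weights:

```r
x <- cbind(a = c(1, 2, 4), b = c(2, 1, 3))
f <- c(2, 3, 1)                      # frequency of each row
xf <- x[rep(seq_len(nrow(x)), f), ]  # expand rows by frequency
cov(xf)                              # unbiased frequency-weighted covariance
cov.wt(x, wt = f / sum(f))$cov       # generally differs (probability weights)
```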
The survival function, S(t), gives you the probability of surviving beyond time
t starting from time 0. If you want to know the probability of surviving
beyond time t *given* that you survived to get a heart surgery at time u0,
that is a different function. It might be S(t)/S(u) depending on
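A sketch of that ratio with the pbc data (times in days; u = 1000 and t = 3000 are my arbitrary picks, not from the thread):

```r
library(survival)
fit <- survfit(Surv(time, status == 2) ~ 1, data = pbc)
sf <- summary(fit, times = c(1000, 3000))
cond <- sf$surv[2] / sf$surv[1]  # approx Pr(T > 3000 | T > 1000) = S(3000)/S(1000)
cond
```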
I get what I would expect. The t statistic and the F statistic are both
undefined (0/0), as are the p-values:
n=10;k=1;summary(lm(rep(k,n)~rnorm(n)))
Call:
lm(formula = rep(k, n) ~ rnorm(n))
Residuals:
    Min      1Q  Median      3Q     Max
      0       0       0       0       0
Coefficients:
To: Andrews, Chris; r-help@r-project.org
Subject: Re: [R] summary.lm() for zero variance response
Hi Chris,
Here my output (I have not yet installed R 3.0.3)
n=10;k=1;summary(lm(rep(k,n)~rnorm(n)))
Call:
lm(formula = rep(k, n) ~ rnorm(n))
Residuals:
Min 1Q Median 3Q
Try cph in rms (where rcs is defined).
library(rms)
fit <- cph(Surv(time,status==2)~rcs(age,4)+sex, data=pbc, y=TRUE, x=TRUE)
id1 <- pbc[1,]
surv.id1 <- survfit(fit, newdata=id1)
plot(surv.id1)
summary(surv.id1)
-Original Message-
From: Zhiyuan Sun [mailto:sam.d@gmail.com]
Sent: Wednesday,
That subset will give you right truncation, not right censoring. See code
below. Use Thomas's solution.
Chris
library(survival)
set.seed(20131121)
ngroup <- 100
xxx <- rep(0:1, each=ngroup)
ttt <- rexp(length(xxx), rate=xxx+.5)
plot(survfit(Surv(ttt) ~ xxx))
survdiff(Surv(ttt) ~ xxx)
# impose
I don't get an error message. However, I had to make up fstatus and all the
rest because your example is not reproducible. Please provide the list with
more information and you'll likely get a more helpful answer from someone.
-Original Message-
From: Elan InP
Consider the function f(x) = x on the open interval (0,1). It does not have a
maximum.
That is what your likelihood function will look like. The MLE does not exist.
Chris
(Although if everything is continuous and you are okay with limits there is an
extension that gets you to Terry's original
I think you need to divide by sqrt(sum(th^2)) rather than sum(th^2)
ch <- as.numeric(t(yh) %*% th)/sqrt(sum(th^2)) # modified: / sqrt(SS)
#ch <- as.numeric(t(yh) %*% th)/sum(th^2)      # modified: / SS
# ch <-
Based on the plot of Schoenfeld residuals and Terry's explanation is it safe
to say that proportional hazards assumption holds despite the significant
global p-values?
No. I don't want to put words in Terry's mouth, but he seems to be saying that
proportional hazards does NOT hold but it may
Terry,
I recently noticed the censor argument of survfit. For some analyses it
greatly reduces the size of the resulting object, which is a nice feature.
However, when combined with the id argument, only 1 prediction is made.
Predictions can be made individually but I'd prefer to do them
It seems a line through the origin doesn't fit the data very well. That may be
throwing off the fitting routine.
data = c(144.53, 1687.68, 5397.91)
err = c(8.32, 471.22, 796.67)
model = c(71.60, 859.23, 1699.19)
id = c(1, 2, 3)
# display
plot(data ~ model)
library(Hmisc)
errbar(model,
survreg allows interval censored data, if that is how you want to represent
measurement uncertainty. See
?Surv
-Original Message-
From: Kyle Penner [mailto:kpen...@as.arizona.edu]
Sent: Tuesday, June 11, 2013 8:02 PM
To: r-help@r-project.org
Subject: [R] survreg with measurement
It isn't that complex:
myDataLong <- data.frame(Time=c(A, C), Censored=c(B, D),
                         group=rep(0:1, times=c(length(A), length(C))))
Fit <- survfit(Surv(Time, Censored==0) ~ group, data=myDataLong)
plot(Fit, col=1:2)
survdiff(Surv(Time, Censored==0) ~ group, data=myDataLong)
However, your approach (a
1. A censored observation
2. It does not relate to either
3. See ?print.survfit . Recall also that the mean of a positive random
variable is the integral from 0 to infinity of the survival function. The
truncated mean is the integral from 0 to tau (819 in your case) of the survival
function.
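A sketch of point 3 (using the pbc data as a stand-in; tau = 819 is the value from the question):

```r
library(survival)
# The restricted (truncated) mean is the area under S(t) from 0 to tau.
fit <- survfit(Surv(time, status == 2) ~ 1, data = pbc)
print(fit, rmean = 819)  # reports *rmean, the mean restricted to [0, 819]
```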
Couldn't reproduce your error (but I did get a warning message):
ads1S <- with(ads1, Surv(INTERVAL_START, EVENT, STATUS, type=c('counting')))
Warning message:
In Surv(INTERVAL_START, EVENT, STATUS, type = c("counting")) :
  Stop time must be > start time, NA created
cox_out <-
In one particular situation predict.coxph gives an error message. Namely:
stratified data, predict='expected', new data, se=TRUE. I think I found the
error but I'll leave that to you to decide.
Thanks,
Chris
CODE
library(survival)
set.seed(20121221)
nn <- 10 # sample size in each
survreg does work. Hard to tell what went wrong without any code from you.
As for smoothing a Cox survival function, see example below. However, just
because you can, doesn't mean you should.
Chris
library(survival)
nn <- 10
zz <- rep(0:1, nn)
xx <- rexp(2*nn)
cc <- rexp(2*nn)
tt <- pmin(xx, cc)
Does Surv in psm handle interval2 data? The argument list seems to indicate it
does but I get an error.
Thanks,
Chris
# code
library('survival')
left - c(1, 3, 5, NA)
right <- c(2, 3, NA, 4)
Surv(left, right, type='interval2')
survreg(Surv(left, right, type='interval2') ~ 1)
library('rms')
Your example seems strange because a line fits on the x-y scale; not on the
log(x)-log(y) scale. Anyway, here is my example. You can build on it for more
general data.
x - exp(1:10)
y - exp(10:1 + rnorm(10))
logmod <- lm(log(y)~log(x))
logypred <- predict(logmod)
plot(y~x)
Your model is additive so the effect of rx is the same at every age. There is
not one survival curve for all ages (unless the beta for age is 0). The curves
will shift up and down as you vary age, but they will retain the same relation.
A common approach is to use the sample mean of age.
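A sketch of that approach (the `ovarian` data as a stand-in; variable choices are mine):

```r
library(survival)
# Predicted curves at the sample mean age, one per treatment arm,
# rather than a single curve "for all ages".
fit <- coxph(Surv(futime, fustat) ~ age + rx, data = ovarian)
nd <- data.frame(age = mean(ovarian$age), rx = c(1, 2))
sf <- survfit(fit, newdata = nd)
plot(sf, col = 1:2)
```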
It doesn't look like 'survplot' allows you to control the y-axis formatting in
that way. You can edit the function survplot.survfit directly if you really
need to: fix(survplot.survfit). The relevant line to change is (I believe)
mgp.axis(2, at = pretty(ylim)) to mgp.axis(2, at = pretty(ylim),
sapply(seq(4,ncol(dat)), function(i)
survdiff(Surv(time,completion==2)~dat[,i], data=dat,
subset=group==3)$chisq)
[1] 0.0944 4.4854 3.4990
Chris
-Original Message-
From: Charles Determan Jr [mailto:deter...@umn.edu]
Sent: Thursday, October 18, 2012 3:04 PM
To: r-help@r-project.org
Mike,
My guess is that you have censored observations in the middle.
When using the minimum time, the events are happening prior to censorings.
Then the riskset is large and the curve decreases slightly.
When using the maximum time, the events are happening after the censorings.
Then the
Hi Roland (and Terry),
Is this the model you want to fit?
--a separate 'baseline' hazard for each stratum defined by cov1
--a coefficient for cov2 that is different for each stratum defined by cov1
Then you can have a separate call to coxph for each stratum.
sCox0 <- coxph(Surv(time, status) ~