Thank you all.
I know it is not statistically sound to check the distribution of the
response before fitting a glm.
I will check the distribution of xmodel$residuals later on.
About the programming problem: after amending my code as suggested by
Bill Venables, it can print summary(xmodel) but not confint(xmodel).
Hello,
I have a question. I create a PDF file with four rows and
two cols.
Is it possible to:
- create plot regions with different heights (example: row 1 vs. rows 2-4)
- to center an image in the whole width (example: row 4)
Thanks a lot.
Felix
example:
Title
Felix Wave wrote:
Hello,
I have a question. I create a PDF file with four rows and
two cols.
Is it possible to:
- create plot regions with different heights (example: row 1 vs. rows 2-4)
- to center an image in the whole width (example: row 4)
Check the help page for layout(), and read it carefully.
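A minimal sketch of what layout() can do for the 4x2 PDF question: the layout matrix lets a panel span both columns, and the heights argument makes row 1 taller. Panel contents below are placeholders.

```r
# Sketch: a PDF page with 4 rows and 2 columns, where row 1 is taller
# and the last panel spans (is centered across) the whole width.
pdf("layout-demo.pdf")
par(mar = c(2, 2, 2, 1))       # shrink margins so small panels fit
m <- rbind(c(1, 1),            # panel 1 spans both columns of row 1
           c(2, 3),
           c(4, 5),
           c(6, 6))            # panel 6 occupies the full width of row 4
layout(m, heights = c(2, 1, 1, 1))  # row 1 twice as tall as the others
for (i in 1:6) plot(rnorm(10), main = paste("panel", i))
dev.off()
```

layout.show(6) (before plotting) is a quick way to preview the panel arrangement.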
There is now an xspline() function in R-devel, with an example showing how
to add arrows.
I thought a bit more about a 'circular arc' function, but there really is
a problem with that: few R plot regions have a 1:1 aspect ratio, including
some that are intended to (see the rw-FAQ).
On 5/6/07, nathaniel Grey [EMAIL PROTECTED] wrote:
Hello R-Users,
I have been using nnet (by Ripley) to train a neural net on a test dataset,
and I have obtained predictions for a validation dataset using:
PP <- predict(nnetobject, validationdata)
Using PP I can find the -2 log likelihood for the
Hello,
I would like to draw the contour of a posterior distribution. However, I only
have simulated points from this posterior distribution,
so I estimate the density of my posterior distribution:
density(points)
I get the mean, the max, the median
I know I can plot this density estimate.
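Note that density() is one-dimensional; for the contour of a bivariate posterior, one common route is kde2d() from MASS. A sketch, with rnorm() draws standing in for the simulated posterior points:

```r
library(MASS)  # for kde2d

set.seed(1)
# Stand-in for simulated posterior draws over two parameters
draws <- cbind(rnorm(2000), rnorm(2000, sd = 0.5))

# 2-D kernel density estimate on a 100 x 100 grid, then its contours
dens <- kde2d(draws[, 1], draws[, 2], n = 100)
contour(dens, xlab = "theta1", ylab = "theta2")
```

The list returned by kde2d (components x, y, z) can be passed directly to contour(), image(), or persp().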
Dear R useRs,
A new package 'Reliability' is now available on CRAN. It is mainly a set
of functions for estimating parameters in software reliability
models. Only infinite-failure models are implemented so far.
This is the first version of the package.
The canonical reference is:
well, how do you know which ones are the best out of several hundred?
I will average all results over the several hundred.
On 5/7/07, hadley wickham [EMAIL PROTECTED] wrote:
On 5/6/07, nathaniel Grey [EMAIL PROTECTED] wrote:
Hello R-Users,
I have been using (nnet) by Ripley to train a
Pick the one with the lowest error rate on your training data?
Hadley
On 5/7/07, Wensui Liu [EMAIL PROTECTED] wrote:
well, how do you know which ones are the best out of several hundred?
I will average all results over the several hundred.
On 5/7/07, hadley wickham [EMAIL PROTECTED] wrote:
options(digits = 2)
?options
Best regards,
Ana Patricia Martins
---
Statistical Methods Service
Department of Statistical Methodology
INE - Portugal
Tel: 218 426 100 - Ext: 3210
E-mail: [EMAIL PROTECTED]
-Original Message-
From: elyakhlifi
The combination of survfit, coxph, and factors is getting confused. It is
not smart enough to match a new data frame that contains a numeric for sitenew
to a fit that contained that variable as a factor. (Perhaps it should be smart
enough to at least die gracefully -- but it's not).
The
At 15:48 06/05/2007, Ron E. VanNimwegen wrote:
Hi,
Is there a function for raising a matrix to a power? For example, if
you would like to compute A %*% A %*% A, is there an abbreviation similar to A^3?
Atte Tenkanen
I may be revealing my ignorance here, but is MatrixExp in the msm
package (available
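Base R has no built-in matrix-power operator, and note that MatrixExp in msm computes the matrix exponential e^A, which is a different operation from A^n. For non-negative integer powers, a small helper by repeated squaring is a common approach; this is a sketch, not a function from any package:

```r
# Integer matrix power by binary (repeated-squaring) multiplication.
matpow <- function(A, n) {
  stopifnot(is.matrix(A), nrow(A) == ncol(A), n >= 0, n == round(n))
  result <- diag(nrow(A))        # identity matrix, i.e. A^0
  while (n > 0) {
    if (n %% 2 == 1) result <- result %*% A  # fold in an odd bit
    A <- A %*% A                  # square for the next bit
    n <- n %/% 2
  }
  result
}

A <- matrix(c(2, 0, 0, 3), 2, 2)
matpow(A, 3)    # same result as A %*% A %*% A
```

Repeated squaring needs only about log2(n) multiplications instead of n - 1.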
hello R-users,
I'd like to know if there exists a tool in R to find different
trajectories of a variable in a dataset with three or four points of
measurement.
I'm looking for something like PROC TRAJ for SAS, but I don't have SAS, nor do I
know how to use it. Other alternatives would be M-plus or latent
On Mon, 7 May 2007, Terry Therneau wrote:
The combination of survfit, coxph, and factors is getting confused. It is
not smart enough to match a new data frame that contains a numeric for sitenew
to a fit that contained that variable as a factor. (Perhaps it should be
smart
enough to at
Howdy!
I guess what you want to do is compare Q1/T1 among the sections? If you
want to compute the sum of Q1/T1 by Section, you can do something like:
sum.by.section <- with(mydata, tapply(Q1/T1, section, sum))
Substitute sum with anything you want to compute.
Cheers,
Andy
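A runnable toy version of Andy's suggestion, with made-up values for the columns Q1, T1, and section assumed from the thread:

```r
# Toy data standing in for 'mydata' (column names assumed from the thread)
mydata <- data.frame(Q1      = c(2, 4, 6, 8),
                     T1      = c(1, 2, 3, 4),
                     section = c("A", "A", "B", "B"))

# Sum of the ratio Q1/T1 within each section
sum.by.section <- with(mydata, tapply(Q1 / T1, section, sum))
sum.by.section
# Swap sum for mean, median, etc. to compute other per-section summaries
```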
From: Salvatore
Hello.
I have a 2d plot which looks like this:
http://www.nabble.com/file/8242/1.JPG
This plot is derived from a file that holds statistics about each point on
the plot and looks like this:
  a  b      c      d  e
a    0      0.498  0.473
hi, I would like to create a 6th column, actual surv time, from the following
data,
the condition being:
if censoringTime > survivalTime then actual survtime = survival time
else actual survtime = censoring time
the code I used to create the data is
s <- 2
while (s != 0) { n <- 20
check ?rainbow to generate the colours (which also shows you other
related functions, like 'heat.colors', that you may also find useful).
To plot the coloured points, check ?points.
hope this helps a bit
Jose
Quoting mister_bluesman [EMAIL PROTECTED]:
Hello.
I have a 2d plot which looks
Douglas Bates:
It may seem obvious how to define the multiple correlation
coefficient R^2 for a non-linear regression model but it's not.
A couple of articles explaining why, that may be of interest:
Anderson-Sprecher R. (1994). ‘Model comparisons and R²’. The American
Statistician, volume
To create a 6th column in the matrix m, you should use the cbind function.
To calculate the vector of pairwise min or max values, you should use the
pmin and pmax functions:
act.surv.time <- pmin(m[, "censoringTime"], m[, "survivalTime"])
m <- cbind(m, act.surv.time)
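A quick toy check of the pmin() approach, with invented numbers in place of the poster's data:

```r
# Toy matrix with the two time columns from the thread
m <- cbind(censoringTime = c(42.89, 10.00, 77.10),
           survivalTime  = c(1847.34, 5.25, 90.00))

# Observed time is the element-wise minimum of the two columns
act.surv.time <- pmin(m[, "censoringTime"], m[, "survivalTime"])
m <- cbind(m, act.surv.time)
m
```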
raymond chiruka wrote:
hi, I would like
Dear all,
I have several files with Matlab code, which I am translating to R.
For the zero-level approach, I took the very old shell script from the R-help
archives, which did some of the obvious leg-work, such as replacing '='
with '<-'.
Now I am translating indexing, matrix operations and function
>>>>> "Vladimir" == Vladimir Eremeev [EMAIL PROTECTED]
>>>>>     on Mon, 7 May 2007 07:58:48 -0700 (PDT) writes:
  Vladimir> Dear all,
  Vladimir> I have several files with Matlab code, which I am translating to R.
  Vladimir> For the zero-level approach, I took the very old
  Vladimir> shell script
The log likelihood is
const + (n/2) * [ log(det(Sigma^-1)) - trace(Sigma^-1 * S) ]
where Sigma is the population covariance matrix.
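The formula above, written out as a small R function; here S is taken to be the sample covariance matrix, n the sample size, and the additive constant is omitted, as in the post:

```r
# Multivariate-normal log-likelihood as a function of Sigma, up to the
# constant term: (n/2) * [ log det(Sigma^-1) - trace(Sigma^-1 S) ]
loglik <- function(Sigma, S, n) {
  SigmaInv <- solve(Sigma)
  (n / 2) * (log(det(SigmaInv)) - sum(diag(SigmaInv %*% S)))
}

S <- matrix(c(1, 0.3, 0.3, 2), 2, 2)   # made-up sample covariance
loglik(diag(2), S, n = 100)            # evaluate at Sigma = identity
```

With Sigma the identity, the expression reduces to (n/2) * (0 - trace(S)), which is easy to verify by hand.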
-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Spencer Graves
Sent: Friday, May 04, 2007 9:20 PM
To: R-devel mailing list
Folks:
If I understand correctly, the following may be pertinent.
Note that the procedure:
min.nnet = nnet[k] such that error rate of nnet[k] = min[i] {error
rate(nnet(training data) from ith random start) }
does not guarantee a classifier with a lower error rate on **new** data than
any
Emmanuel Paradis, the maintainer of the ape package, was very helpful
in solving this problem. It seems that heatmap does not reorder
the rows, so you must reorder them or change the heatmap code to do
so. The heatmap maintainers will document this, but not change the
behavior. The following
Using layout I am plotting 5 boxplots on top of each other,
all of them using colored boxes in different order.
In a 6th box below I want to give the legend explaining the box colors,
and I want this box to be centered in an otherwise empty plot.
I am creating this plot by
plot(0:1, 0:1, type = "n")
Is
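One way to finish that idea: give the legend its own shorter panel in the layout and draw it centered in an otherwise empty plot. The colors, labels, and panel count below are placeholders for the poster's actual boxplots:

```r
pdf("legend-demo.pdf", height = 10)
layout(matrix(1:6, ncol = 1), heights = c(rep(1, 5), 0.6))
par(mar = c(2, 2, 1, 1))  # small margins so all six panels fit

# 5 stacked boxplot panels with colored boxes (placeholder data)
for (i in 1:5)
  boxplot(split(rnorm(60), gl(3, 20)), col = c("red", "green", "blue"))

# 6th panel: an empty plot holding only the centered legend
par(mar = c(0, 0, 0, 0))
plot(0:1, 0:1, type = "n", axes = FALSE, xlab = "", ylab = "")
legend("center", legend = c("group A", "group B", "group C"),
       fill = c("red", "green", "blue"), horiz = TRUE, bty = "n")
dev.off()
```

legend("center", ...) centers automatically, which avoids computing coordinates by hand.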
I am graphing data using barchart (barchart(DV ~ IV | subject)). I
have 2 groups of 9 subjects each. How can I easily identify which
group each subject belongs to? I have been trying to color code them,
but can't seem to get that to work. Any suggestions would be
appreciated.
Thanks,
Matt
On Mon, 2007-05-07 at 19:44 +0200, Erich Neuwirth wrote:
Using layout I am plotting 5 boxplots on top of each other,
all of them using colored boxes in different order.
In a 6th box below I want to give the legend explaining the box colors,
and I want this box to be centered in an otherwise
On 5/7/07, Matthew Bridgman [EMAIL PROTECTED] wrote:
I am graphing data using barchart (barchart(DV ~ IV | subject). I
have 2 groups of 9 subjects each. How can I easily identify which
group each subject belongs to? I have been trying to color code them,
but can't seem to get that to work. Any
Hi,
I need to multiply all columns in a matrix, so something like
apply(x, 2, sum), but using multiplication, should do.
I have tried apply(x, 2, "*")
I know this must be trivial, but I get:
Error in FUN(newX[, i], ...) : invalid unary operator
The help for apply states that unary operators must be
Try:
apply(x, 2, prod)
--
Henrique Dallazuanna
Curitiba-Paraná-Brasil
25° 25' 40" S 49° 16' 22" O
http://maps.google.com/maps?f=q&hl=en&q=Curitiba,+Brazil&layer=&ie=UTF8&z=18&ll=-25.448315,-49.276916&spn=0.002054,0.005407&t=k&om=1
On 07/05/07, Jose Quesada [EMAIL PROTECTED] wrote:
Hi,
I need to
see
?prod
b
On May 7, 2007, at 2:25 PM, Jose Quesada wrote:
Hi,
I need to multiply all columns in a matrix, so something like
apply(x, 2, sum), but using multiplication, should do.
I have tried apply(x, 2, "*")
I know this must be trivial, but I get:
Error in FUN(newX[, i], ...) : invalid unary
Jose Quesada said the following on 5/7/2007 11:25 AM:
Hi,
I need to multiply all columns in a matrix, so something like
apply(x, 2, sum), but using multiplication, should do.
I have tried apply(x, 2, "*")
I know this must be trivial, but I get:
Error in FUN(newX[, i], ...) : invalid unary
Howdo folks,
So I have my data (attached). There are two columns I'm interested in;
algname and dur. I'd like to know how dur changes with algname.
algname is nominal and there are 7 possibilities. There are two more
nominal independents, task and id, so my model is:
dur ~ algname+task+id
Hello:
I would like to make an h-scatter plot.
My data looks as follows:
x, y, z,
12.0, 11.2, 12,
10.21, 5.42, 8,
5.12, -8.25, 7,
I want to make h-scatter plot for the z values by difference distance h from
1 to 20. There are 1023 observations.
Do I need to change data class from
Dear Andy and all others who have replied:
On 07/05/07, Liaw, Andy [EMAIL PROTECTED] wrote:
I guess what you want to do is compare Q1/T1 among the sections? If you
want to compute the sum of Q1/T1 by Section, you can do something like:
sum.by.section <- with(mydata, tapply(Q1/T1, section,
Sorry.
I have attached my data frame: DV = dv; IV = bins; subject = id,
Group = group.
barchart(dv ~ bins | id + group, groups = group, data = matt.df)
The two suggestions you offered give me error messages regarding
invalid line type and do not plot all of the data. If I drop the
We don't have the data (nothing useful was attached - see the posting
guide for what you can attach), but it looks like you should be using the
'which' argument to TukeyHSD.
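A sketch of the 'which' argument on toy data, with algname standing in for the poster's factor (the real model also had task and id terms):

```r
# Toy one-factor example; 'algname' stands in for the poster's factor.
set.seed(1)
d <- data.frame(dur     = rnorm(60),
                algname = gl(3, 20, labels = c("a", "b", "c")))

fit <- aov(dur ~ algname, data = d)
TukeyHSD(fit, which = "algname")  # restrict the intervals to one term
```

Restricting 'which' to the factors of interest avoids TukeyHSD trying (and possibly failing) to build intervals for every term in the model.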
On Mon, 7 May 2007, Gav Wood wrote:
Howdo folks,
So I have my data (attached). There are two columns I'm interested
Michael Dewey wrote:
I may be revealing my ignorance here, but is MatrixExp in the msm
package (available from CRAN) not relevant here?
Never heard about it. Searching with help.search("matrix") didn't
show any function that might be used as Matrix^n.
Alberto Monteiro
hi, how do you compute the logrank test using R?
what commands do you use?
thanks
__
R-help@stat.math.ethz.ch mailing list
hi, how do you compute the logrank test using R?
what commands do you use? my data looks something like (just an example):
     treatmentgrp strata censoringTime survivalTime censoring act.surv.time
[1,]            2      2      42.89005    1847.3358         1      42.89005
[2,]
Prof Brian Ripley ripley at stats.ox.ac.uk writes:
We don't have the data (nothing useful was attached - see the posting
guide for what you can attach), but it looks like you should be using the
'which' argument to TukeyHSD.
But I get this back:
Error in rep.int(n, length(means)) :
On Mon, 7 May 2007, Alberto Vieira Ferreira Monteiro wrote:
Michael Dewey wrote:
I may be revealing my ignorance here, but is MatrixExp in the msm
package (available from CRAN) not relevant here?
Never heard about it. Searching with help.search("matrix") didn't
show any function that might
On Mon, 2007-05-07 at 14:02 -0700, raymond chiruka wrote:
hi, how do you compute the logrank test using R
what commands do you use my data looks something like just an example
     treatmentgrp strata censoringTime survivalTime censoring act.surv.time
[1,]            2      2
2007/5/7, raymond chiruka [EMAIL PROTECTED]:
hi, how do you compute the logrank test using R
what commands do you use my data looks something like just an example
     treatmentgrp strata censoringTime survivalTime censoring act.surv.time
[1,]            2      2      42.89005
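The usual tool for the log-rank test is survdiff() in the survival package. A sketch assuming columns shaped like the posted data (the numbers here are invented):

```r
library(survival)

# Toy data shaped like the poster's: time, event indicator, treatment group
d <- data.frame(act.surv.time = c(42.9, 60.1, 12.3, 88.0, 30.5, 71.2),
                censoring     = c(1, 0, 1, 1, 0, 1),  # 1 = event observed
                treatmentgrp  = c(1, 1, 1, 2, 2, 2))

# Log-rank test comparing survival between the treatment groups
survdiff(Surv(act.surv.time, censoring) ~ treatmentgrp, data = d)
```

A strata() term can be added on the right-hand side for a stratified test.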
You might try this, from 9/22/2006 with subject line Exponentiate a matrix:
I am getting a bit rusty on some of these things, but I seem to recall
that there is a numerical advantage (speed and/or accuracy?) to
diagonalizing:
expM <- function(X, e) { v <- La.svd(X); v$u %*% diag(v$d^e) %*% v$vt }
Certain errors seem to generate messages that are less informative than
most -- they just tell you which function an error happened in, but
don't indicate which line or expression the error occurred in.
Here's a toy example:
f <- function(x) { a <- 1; y <- x[list(1:3)]; b <- 2; return(y) }
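One way to locate the failing expression in cases like this is to inspect the condition object: conditionCall() on the caught error reports the call in which the error was signalled, even when the error message itself does not.

```r
# Same toy function as in the post: the subscripting line fails
f <- function(x) {
  a <- 1
  y <- x[list(1:3)]   # this is the expression that errors
  b <- 2
  return(y)
}

# Capture the call at which the error was signalled
cc <- tryCatch(f(1:5), error = function(e) conditionCall(e))
cc

# Alternatively, debug(f) steps through the body line by line,
# stopping at the failing expression.
```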
Dear All
I am trying to perform the below optimization problem, but getting
(0.5,0.5) as optimal solution, which is wrong; the correct solution
should be (1,0) or (0,1).
Am I doing something wrong? I am using R 2.5.0 on Fedora Core 6 (Linux).
Thanks in advance,
Paul
Paul,
I think the problem is the starting point. I do not remember the details
of the BFGS method, but I am almost sure the (.5, .5) starting point is
suspect, since the abs function is not differentiable at 0. If you perturb
the starting point even slightly you will have no problem.
Andy
Paul,
Your solution based on SVD does not work even for the matrix in your example
(the reason it worked for e=3 was that 3 is an odd power and P is a
permutation matrix; it will also work for all odd powers, but not for even
powers).
However, a spectral decomposition will work for
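A sketch of the spectral-decomposition idea for symmetric matrices: with A = V D V' (V orthogonal, D the diagonal of eigenvalues), A^e = V D^e V'. Unlike the SVD version, the signed eigenvalues make even powers come out right:

```r
# Matrix power via spectral decomposition; intended for symmetric A
# (for non-symmetric matrices the eigen decomposition may be complex).
matpow.eigen <- function(A, e) {
  ev <- eigen(A, symmetric = TRUE)
  ev$vectors %*% diag(ev$values^e) %*% t(ev$vectors)
}

A <- matrix(c(2, 1, 1, 2), 2, 2)
matpow.eigen(A, 2)   # same result as A %*% A
```

When all eigenvalues are positive this also gives negative and non-integer powers (e.g. e = -1 is the inverse, e = 0.5 a matrix square root).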
On 5/7/07, Matthew Bridgman [EMAIL PROTECTED] wrote:
Sorry.
I have attached my data frame: DV = dv; IV = bins; subject = id,
Group = group.
barchart(dv ~ bins | id + group, groups = group, data = matt.df)
The two suggestions you offered give me error messages regarding
invalid line type and
On 5/7/07, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:
I think the problem is the starting point. I do not remember the details
of the BFGS method, but I am almost sure the (.5, .5) starting point is
suspect, since the abs function is not differentiable at 0. If you perturb
the starting point
On 5/7/07, Paul Smith [EMAIL PROTECTED] wrote:
I think the problem is the starting point. I do not remember the details
of the BFGS method, but I am almost sure the (.5, .5) starting point is
suspect, since the abs function is not differentiable at 0. If you perturb
the starting point
Paul Smith said the following on 5/7/2007 3:25 PM:
On 5/7/07, Paul Smith [EMAIL PROTECTED] wrote:
I think the problem is the starting point. I do not remember the details
of the BFGS method, but I am almost sure the (.5, .5) starting point is
suspect, since the abs function is not
Your function, (x1-x2)^2, has zero gradient at all the starting values such
that x1 = x2, which means that the gradient-based search methods will
terminate there because they have found a critical point, i.e. a point at
which the gradient is zero (which can be a maximum or a minimum or a saddle
I have a question about pooling or stacking several time series
“samples” (sorry in advance for the long, possibly confusing, message).
I'm sure I'm revealing far more ignorance than I'm aware of, but
that's why I'm sending this...
[Example at bottom]
I have regional migration flows
Paul Gilbert wrote:
I am getting a bit rusty on some of these things, but I seem to recall
that there is a numerical advantage (speed and/or accuracy?) to
diagonalizing: (...)
I think this also works for non-integer, negative, large, and complex
This is diverging into mathematics, maybe
I see the function getTree, which is very interesting. As I'm trying to
teach myself more and more about R, and dealing with lists, it occurred to
me that it might be fun to remove (as in delete) a single tree from a
forest...say to go from 500 to 499.
I know, I know... why?
Why, to play, of
I'd like to estimate an ordinal logistic regression with a random
effect for a grouping variable. I do not find a pre-packaged
algorithm for this. I've found methods glmmML (package: glmmML) and
lmer (package: lme4) both work fine with dichotomous dependent
variables. I'd like a model similar
Paul--
I think the options are pretty limited for mixed-effects ordinal regression; it
might be worth taking a look at Laura Thompson's R/Splus companion to Alan
Agresti's text on categorical data analysis:
https://home.comcast.net/~lthompson221/Splusdiscrete2.pdf
She discusses some options
G'day Paul,
On Mon, 7 May 2007 22:30:32 +0100
Paul Smith [EMAIL PROTECTED] wrote:
I am trying to perform the below optimization problem, but getting
(0.5,0.5) as optimal solution, which is wrong;
Why?
As far as I can tell you are trying to minimize |x1-x2| where x1 and x2
are between 0 and
G'day Paul,
On Mon, 7 May 2007 23:25:52 +0100
Paul Smith [EMAIL PROTECTED] wrote:
[...]
Furthermore, X^2 is everywhere differentiable and, notwithstanding, the
reported problem occurs with
myfunc <- function(x) {
  x1 <- x[1]
  x2 <- x[2]
  (x1 - x2)^2
}
Same argument as with abs(x1-x2)
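The flat-gradient point can be checked directly with the squared objective: starting from x1 == x2, where the gradient is exactly zero, the optimizer stops immediately, while a slightly perturbed start moves. A sketch using L-BFGS-B so the box constraints from the thread can be imposed:

```r
# Maximising (x1 - x2)^2 over [0,1]^2; fnscale = -1 turns optim's
# minimiser into a maximiser.
f <- function(x) (x[1] - x[2])^2

stuck <- optim(c(0.5, 0.5), f, method = "L-BFGS-B",
               lower = 0, upper = 1, control = list(fnscale = -1))
moved <- optim(c(0.5, 0.49), f, method = "L-BFGS-B",
               lower = 0, upper = 1, control = list(fnscale = -1))

stuck$par   # stays at (0.5, 0.5): zero gradient, so no step is taken
moved$par   # typically reaches a corner such as (1, 0) or (0, 1)
```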
G'day all,
On Tue, 8 May 2007 12:10:25 +0800
Berwin A Turlach [EMAIL PROTECTED] wrote:
I am trying to perform the below optimization problem, but getting
(0.5,0.5) as optimal solution, which is wrong;
Why?
As far as I can tell you are trying to minimize |x1-x2|
It was pointed out to
On the definitional question, some texts do indeed consider multi-category
logistic regression as a glm, but the original definition by Nelder does
not. I've never seen polr considered to be a glm (but it may well have
been done).
Adding random effects is a whole different ball game: you