Is there a complete list of these very handy and powerful functions in base
R?
--
View this message in context:
http://r.789695.n4.nabble.com/testing-for-emptyenv-tp2432922p2525031.html
Sent from the R help mailing list archive at Nabble.com.
R doesn't simply treat a row vector as a matrix.
--
View this message in context:
http://r.789695.n4.nabble.com/how-to-get-row-name-of-matrix-when-result-is-a-vector-tp2525631p2525666.html
Thanks
--
View this message in context:
http://r.789695.n4.nabble.com/testing-for-emptyenv-tp2432922p2525757.html
R-help@r-project.org mailing list
?persp
--
View this message in context:
http://r.789695.n4.nabble.com/3d-graph-surface-tp2525859p2525958.html
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
You need to install and load the package {reldist} before you call the function gini().
HTH.
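For completeness, a minimal sketch of the install-and-load sequence (the vector x here is made up for illustration):

```r
# one-time installation, then load the package in each session
if (!requireNamespace("reldist", quietly = TRUE)) install.packages("reldist")
library(reldist)

x <- c(100, 200, 300, 1000)   # hypothetical income values
gini(x)                       # Gini coefficient of x
```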
--
View this message in context:
http://r.789695.n4.nabble.com/Function-Gini-or-Ineq-tp2525852p2525966.html
One can write an R function to produce a kernel density curve with a
confidence band. See, for example, the steps for doing this in a technical
report at
http://fmwww.bc.edu/repec/usug2003/bsciker.pdf
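As a rough illustration of the idea, here is a sketch that draws a kernel density estimate with a pointwise band. Note the band here is a simple bootstrap percentile band, which is an assumption on my part and not necessarily the method used in the report above:

```r
set.seed(1)
x <- rnorm(200)
d <- density(x)
# resample, re-estimate the density on a fixed grid, and take pointwise quantiles
boot <- replicate(200,
  density(sample(x, replace = TRUE), from = min(d$x), to = max(d$x))$y)
band <- apply(boot, 1, quantile, probs = c(0.025, 0.975))
plot(d, main = "Kernel density with 95% bootstrap band")
lines(d$x, band[1, ], lty = 2)
lines(d$x, band[2, ], lty = 2)
```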
--
In the bar plot, the vertical axis is a numerical axis representing the
frequency (the height of the vertical bar = frequency). If you really want
the vertical bars corresponding to negative values to go downward, you
need to write your own function to achieve that.
--
If A is a square matrix, solve(A) gives the inverse of A; if you have a
system of linear equations AX = B, solve(A, B) gives the solution to this
system of equations. For example:
x - 2y = 1
-2x + 3y = -3
A = matrix(c(1, -2, -2, 3), ncol = 2, byrow = TRUE)
B = c(1, -3)
# to get the inverse of A
solve(A)
# to solve the system AX = B
solve(A, B)
To change the default maximum number of iterations (maxit = 100 for the
derivative-based algorithms), add control = list(maxit = whatever number you
want).
In most cases, you need a very good initial value! This is a real challenge
in using optim(). Quite often, if the initial values are not well selected,
optim() can fail to converge, or can converge to a poor local optimum.
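To make this concrete, here is a toy sketch (the objective and data are made up; the point is the control = list(maxit = ...) argument and a sensible starting value):

```r
# toy objective: negative log-likelihood of N(mu, 1) data
set.seed(42)
y <- rnorm(50, mean = 3)
nll <- function(mu) -sum(dnorm(y, mean = mu, sd = 1, log = TRUE))
# raise the iteration cap and start from a reasonable value
fit <- optim(par = mean(y), fn = nll, method = "BFGS",
             control = list(maxit = 500))
fit$par          # close to 3
fit$convergence  # 0 means successful convergence
```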
Calum-4 wrote:
Hi, I know asking which test to use is frowned upon on this list... so
please do read on for at least a couple of sentences...
I have some multivariate data split as follows:
Tumour Site (one of 5 categories) #
Chemo Schedule (one of 3 cats) ##
Cycle (one of 3 cats*) ##
Since the default initial value is not good enough, you should choose one
based on your experience or luck. I chose start = rep(1, 5) to match the
number of parameters in the model.
polr(Species ~ Sepal.Length + Sepal.Width + Petal.Length + Petal.Width, iris,
     start = rep(1, 6), method = "logistic")
Call:
Sorry: start = rep(1, 6), since there are 6 parameters in the model.
--
View this message in context:
http://r.789695.n4.nabble.com/likelyhood-maximization-problem-with-polr-tp2528818p2529176.html
Scott, there seems to be a bug in your code:
A = matrix(c(1, -2, 3, 9, -1, 3, 0, -4, 2, -5, 5, 17), ncol = 4, byrow = TRUE)
ref(A)
     [,1] [,2] [,3] [,4]
[1,]    1 -2.5 2.17 7.83
[2,]    0  1.0 3.00 5.00
[3,]    0  0.0 1.00 2.00
The row echelon form is apparently incorrect.
--
Is this what you want?
A = matrix(1:16, ncol = 4)
A
     [,1] [,2] [,3] [,4]
[1,]    1    5    9   13
[2,]    2    6   10   14
[3,]    3    7   11   15
[4,]    4    8   12   16
diag(A[1:4, 4:1])
[1] 13 10  7  4
--
Is this what you want to have?
x <- c(1, 5, 100, 300, 250, 200, 550, 900, 1000)
# assume you want the position of the 25th percentile
which(x == quantile(x, 0.25))
[1] 3
Note that the position is meaningful only when the percentile is one of the
observed data values. If you want to know the position of the 70th
diag has 4 letters
cbind has 5 letters :)
--
View this message in context:
http://r.789695.n4.nabble.com/minor-diagonal-in-R-tp2529676p2529746.html
What is the ESP package? Thanks.
--
View this message in context:
http://r.789695.n4.nabble.com/R-package-to-identify-model-tp2529525p2529766.html
Modified from Josh's code:
Is this what you want to see?
barplot(-50:50)
# add points to the existing plot at the coordinates given by x and y,
# and use a line to connect them
points(x = 1:101, y = seq(from = 30, to = -20, length.out = 101), type = "l")
# add right-hand-side axis
In addition to Cedric's comments, these are large-sample procedures. Your
sample sizes are too small; I don't think procedures using normal
approximations are appropriate for your case. I would suggest making a
reasonable distributional assumption on the populations to avoid relying on
asymptotic results.
--
It seems to produce some strange values (note that quantile() is called on
the x from the earlier example, not on xx):
xx = 1:10
which(xx == quantile(x, 0.2, type = 3))
[1] 5
which(xx == quantile(x, 0.5, type = 3))
integer(0)
--
View this message in context:
http://r.789695.n4.nabble.com/Percentile-rank-for-each-element-in-list-tp2529523p2530060.html
You may be interested in the tutorial on repeated measures ANOVA on the UCLA
computing page at:
http://www.ats.ucla.edu/stat/R/seminars/Repeated_Measures/repeated_measures.htm
--
If you can prove that the Fisher information matrix is positive definite, the
resulting estimate is the MLE. Otherwise you can only claim it is a local MLE
(the Hessian matrix at the estimate is negative definite).
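A quick numerical check of that local condition can be sketched as follows (the objective here is a toy stand-in; in practice you would use your own log-likelihood):

```r
# maximize a toy log-likelihood by minimizing its negative; ask optim() for the Hessian
loglik <- function(p) -sum((p - c(1, 2))^2)   # toy concave log-likelihood
fit <- optim(c(0, 0), function(p) -loglik(p), hessian = TRUE)
# fit$hessian is the Hessian of the *negative* log-likelihood, so at a local
# maximum of the likelihood its eigenvalues should all be positive
eigen(fit$hessian)$values
```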
--
You can find the layout of data for repeated measures ANOVA in the examples
on the UCLA computing page:
http://www.ats.ucla.edu/stat/R/seminars/Repeated_Measures/repeated_measures.htm
--
View this message in context:
http://r.789695.n4.nabble.com/repeated-measurements-ANOVA-tp2530368p2530600.html
The ANOVA table adds the factors only in the order given in the model formula,
from left to right. You may try
drop1(out, test = "Chisq")
--
View this message in context:
http://r.789695.n4.nabble.com/anova-of-glm-output-tp2528336p2530620.html
try:
?title
--
View this message in context:
http://r.789695.n4.nabble.com/How-to-change-font-size-in-plot-function-tp2531127p2531161.html
Predict x for a given y (response)? If this is the case, you will have
multiple x for a single y for this exponential model. In terms of logistic
regression: if y = 1, logit[P(Y = 1)] = a + b*x has infinitely many x. The
question is not quite clear to me.
--
Can you be a little more specific? For example, give the basis vectors of the
subspace and the vector you want to project. Specific artificial
vectors/matrices would be helpful.
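As one concrete (artificial) example of what such specifics allow: the orthogonal projection of a vector v onto the column space of a matrix A is A (A'A)^{-1} A' v:

```r
# columns of A span the subspace; v is the vector to project
A <- cbind(c(1, 0, 0), c(0, 1, 0))     # the xy-plane in R^3
v <- c(2, 3, 5)
P <- A %*% solve(t(A) %*% A) %*% t(A)  # projection matrix onto col(A)
as.vector(P %*% v)                     # [1] 2 3 0
```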
--
View this message in context:
http://r.789695.n4.nabble.com/How-to-project-a-vector-on-a-subspace-tp2530886p2531245.html
try packages:
{yaImpute}, {impute}, etc.
--
View this message in context:
http://r.789695.n4.nabble.com/Interpolation-missing-data-tp2530871p2531288.html
?runif
--
View this message in context:
http://r.789695.n4.nabble.com/Uniform-Distribution-tp2531282p2531292.html
The definition of the sequence of probabilities was wrong!
p_11 = seq(0, 1, 0.1)
p_12 = seq(0, 1, 0.1)
Your multinomial distribution requires P_11, P_12, and P_13 = 1 - P_11 - P_12
to be greater than or equal to zero, with P_11 + P_12 + P_13 = 1. In your above
definition, P_11[6] = P_12[6] = 0.6, P_11[7] = P_12[7] =
I look at this question from a different angle. My understanding is:
1. If you treat tumor_grade as a numerical variable, you assume the hazard
ratio is invariant between any two adjacent levels of the tumor grade
(assuming invariant covariate patterns of the other risks);
2. If you treat the tumor_grade
glm() is another choice. Using glm(), your response variable can be a discrete
random variable; however, you need to specify the distribution via the
family argument.
Using Ted's simulated data and glm(), you get the same result as that produced
by lm():
summary(glm(Y ~ X + F,
Sorry, the result is not the same, since our datasets are different. I also
ran lm() on the dataset that was used in glm(). The results are exactly the
same:
summary(lm(Y ~ X + F))
Call:
lm(formula = Y ~ X + F)
Residuals:
     Min       1Q   Median       3Q      Max
-0.53796 -0.16201 -0.08087
FYI: if you use LaTeX, you can combine R and LaTeX (e.g., via Sweave).
--
View this message in context:
http://r.789695.n4.nabble.com/Reproducible-research-tp2532353p2532361.html
q <- 0.15 + c(-.1, 0, .1)
h <- 10 + c(-.1, 0, .1)
5*q/h[3:1]
[1] 0.02475248 0.07500000 0.12626263
--
View this message in context:
http://r.789695.n4.nabble.com/Re-Calculating-with-tolerances-error-propagation-tp2532640p2532991.html
Can you set the multinomial probability to zero when p1 + p2 + p3 != 1, if you
have to use the multinomial distribution in guete()? Otherwise, I would say
the problem/guete() itself is problematic.
--
View this message in context:
http://r.789695.n4.nabble.com/problem-with-outer-tp2532074p2533050.html
Yanwei,
Have you tried writing the likelihood function using the log-normal directly?
If you haven't done so, you may want to check ?rlnorm.
--
View this message in context:
http://r.789695.n4.nabble.com/Help-on-simple-problem-with-optim-tp2533420p2533487.html
There are two z-scores reported in the summary: the naive z and the robust z.
pvalue = 2*min(pnorm(z), 1 - pnorm(z))  # two-sided test, where z is the reported z-score
--
View this message in context:
http://r.789695.n4.nabble.com/gee-p-values-tp2533835p2534302.html
x = 1:16
S = summary(x)
S
   Min. 1st Qu.  Median    Mean 3rd Qu.    Max.
   1.00    4.75    8.50    8.50   12.25   16.00
S[-4]
   Min. 1st Qu.  Median 3rd Qu.    Max.
   1.00    4.75    8.50   12.25   16.00
par(mfrow = c(1, 2))
boxplot(S[-4]) # based on the summarized stats
boxplot(x)
try:
?ftable
--
View this message in context:
http://r.789695.n4.nabble.com/Counting-occurances-of-a-letter-by-a-factor-tp2534993p2535002.html
Just store the broken sequences in a matrix:
M = matrix(1:12, ncol = 3, byrow = FALSE)
M
     [,1] [,2] [,3]
[1,]    1    5    9
[2,]    2    6   10
[3,]    3    7   11
[4,]    4    8   12
--
These are character values. Is there any way to get 001, 010, ..., as actual
numeric values?
--
View this message in context:
http://r.789695.n4.nabble.com/convert-1-10-and-100-to-0001-0010-0100-etc-tp2535023p2535296.html
I mean to display 001, 010, ..., as they are. In other words, is there
a function, say func(), such that func(001, 010) displays 001, 010?
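As far as I know, leading zeros exist only in a printed (character) representation; a numeric value cannot carry them. If zero-padded display is enough, formatC() or sprintf() will produce it (both return character strings, not numbers):

```r
formatC(c(1, 10, 100), width = 4, flag = "0")  # "0001" "0010" "0100"
sprintf("%04d", c(1, 10, 100))                 # same, via C-style formatting
```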
--
View this message in context:
http://r.789695.n4.nabble.com/convert-1-10-and-100-to-0001-0010-0100-etc-tp2535023p2535318.html
Thanks, David.
func() simply prints out 0010 as a text value; it is still not numeric.
I am just curious about it.
is.numeric(func4(0100))
00100
[1] FALSE
--
View this message in context:
http://r.789695.n4.nabble.com/convert-1-10-and-100-to-0001-0010-0100-etc-tp2535023p2535345.html
Or:
k = 0
for (i in 1:k) if (k > 0) print(i)
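A cleaner guard (assuming the intent is to skip the loop entirely when k is 0) is seq_len(), which yields an empty sequence for k = 0 instead of the surprising 1:0 = c(1, 0):

```r
k <- 0
for (i in seq_len(k)) print(i)   # loop body never runs when k is 0
k <- 3
for (i in seq_len(k)) print(i)   # prints 1, 2, 3
```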
--
View this message in context:
http://r.789695.n4.nabble.com/for-loop-tp2535626p2535640.html
package: {mnormt}
--
View this message in context:
http://r.789695.n4.nabble.com/Generating-multinomial-distribution-and-plotting-tp2535895p2535934.html
Is this something you want to have (based on a simulated dataset)?
counts <- c(18, 17, 15, 20, 10, 20, 25, 13, 12)
# risk <- round(rexp(9, 0.5), 3)
risk <- c(2.242, 0.113, 1.480, 0.913, 5.795, 0.170, 0.846, 5.240, 0.648)
gm <- glm(counts ~ risk, family = quasipoisson)
summary(gm)
new.risk = seq(min(risk),
Have you tried scan()?
y=scan()
1: 2
2:
Read 1 item
y
[1] 2
--
View this message in context:
http://r.789695.n4.nabble.com/Supplying-function-inputs-interactively-tp2536003p2536004.html
Is this what you would expect to have? You can definitely make this function
more elegant:
fn1 <- function(x = 10) {
  cat("Please type the option number to get your Y value:\n\n")
  cat(" 1. Y = 1.\n",
      "2. Y = 2.\n",
      "3. Use the default y.\n",
      "4. Choose my own value for y.\n\n")
  opt <- scan()
Oh, you actually want a mixture of two different normal random variables.
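A sketch of sampling from such a mixture (the weight and the component parameters here are made up for illustration):

```r
set.seed(1)
n <- 10000
p <- 0.3                                  # mixing weight for component 1
comp <- rbinom(n, 1, p)                   # 1 = component 1, 0 = component 2
x <- ifelse(comp == 1,
            rnorm(n, mean = 0, sd = 1),   # component 1
            rnorm(n, mean = 5, sd = 2))   # component 2
hist(x, breaks = 50)                      # bimodal when the means are far apart
```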
--
View this message in context:
http://r.789695.n4.nabble.com/Generating-multinomial-distribution-and-plotting-tp2535895p2536026.html
Code:
fn1 <- function(n = 5) {
  mat <- matrix(rnorm(5*5), 5, 5)
  cat(print(mat))
}
fn1()
           [,1]        [,2]       [,3]       [,4]       [,5]
[1,] -0.7101952  0.78992424 -0.8310871 2.49560703 -0.9543827
[2,] -0.1425682 -2.69186367 -0.5937949 0.03188572 -0.5512154
[3,]
It is still visible even if the function returns invisible(NULL), because
print() itself prints the matrix before cat() ever sees its return value:
fn1 <- function(n = 5) {
  mat <- matrix(rnorm(5*5), 5, 5)
  cat(print(mat))
  invisible(NULL)
}
fn1()
            [,1]        [,2]       [,3]       [,4]      [,5]
[1,] -1.22767085 -1.41468587 -2.0156231 0.29732942 0.5755600
[2,]
The question is whether cat() can print out a matrix as it is. For example,
let's assume that we have matrices A, B, and D (= A + B); is it possible that
cat("\n", A, "+", B, "=", D, <some control arguments>, "\n")
prints out
matrix A + matrix B = matrix D
where the matrices A, B, and D are shown in their usual matrix layout?
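As far as I can tell, cat() flattens a matrix to a plain vector, so it cannot reproduce the layout; printing the pieces separately keeps it:

```r
A <- matrix(1:4, 2, 2)
B <- matrix(5:8, 2, 2)
cat(A, "+", B, "\n")   # flattened: 1 2 3 4 + 5 6 7 8
# print() preserves the matrix layout, so show the pieces one at a time:
print(A); cat("+\n"); print(B); cat("=\n"); print(A + B)
```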
do the same thing for female and then take the weighted average of the two
means.
--
View this message in context:
http://r.789695.n4.nabble.com/Survival-curve-mean-adjusted-for-covariate-tp2548387p2550178.html
unique(unlist(list.array))
--
View this message in context:
http://r.789695.n4.nabble.com/apply-union-function-vectorially-tp2550162p2550193.html
In general, Shapiro's normality test is more powerful than the KS test. For
this specific case, I don't see significantly different results from the two
tests. The normality assumption in this example seems questionable.
--
svd() does not return eigenvectors!
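eigen() is the function for eigenvectors; svd() returns singular vectors, which coincide with eigenvectors (up to sign) only in special cases such as symmetric positive semi-definite matrices:

```r
A <- matrix(c(2, 1, 1, 2), 2, 2)   # symmetric PSD, so the decompositions agree up to sign
eigen(A)$values                    # [1] 3 1
svd(A)$d                           # [1] 3 1  (singular values equal the eigenvalues here)
eigen(A)$vectors                   # eigenvectors
svd(A)$u                           # left singular vectors: same columns up to sign
```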
--
View this message in context:
http://r.789695.n4.nabble.com/eigen-and-svd-tp2550210p2550257.html
I think you need to set a cut-off for Cp and then select the good values of
Cp from adjr$Cp.
--
View this message in context:
http://r.789695.n4.nabble.com/best-model-cp-mallow-tp2550015p2550283.html
v = data.frame(c1 = c("e", "r", "t"), v = c(1, 4, 2))
m = matrix(c("r", "t", "r", "s", "e", 5, 6, 7, 8, 9), nr = 5)
colnames(m) = c("c1", "c2")
m = as.data.frame(m)
merge(v, m, by = "c1")
  c1 v c2
1  e 1  9
2  r 4  5
3  r 4  7
4  t 2  6
--
You have to load the package {lmtest} first. That is:
library(lmtest)
bptest(modelCH, data = KP)
--
View this message in context:
http://r.789695.n4.nabble.com/bptest-tp2553506p2579815.html
Is as.integer() redundant for this vector of integers?
any(c(1, 3) == 3.0 )
--
View this message in context:
http://r.789695.n4.nabble.com/Problem-with-any-tp2553226p2580659.html
You may want to use combinations() in the package {gtools} and write a
function of a few lines to perform your leave-k-out procedure.
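A sketch of the delete-d jackknife along those lines (assumes {gtools} is installed; the data and the statistic, here simply the mean, are made up for illustration):

```r
library(gtools)                    # provides combinations()
x <- c(4, 8, 15, 16, 23, 42)
d <- 2                             # delete-d: leave out d observations at a time
idx <- combinations(length(x), d)  # each row = one set of indices to drop
theta <- apply(idx, 1, function(i) mean(x[-i]))
length(theta)                      # choose(6, 2) = 15 leave-2-out estimates
```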
--
View this message in context:
http://r.789695.n4.nabble.com/delete-d-jackknife-with-R-tp2553192p2585783.html
The output of the logistic procedure only gives you the log(odds-ratio) and
the associated standard error of the log(odds-ratio). You need to exponentiate
the log(odds-ratio) to get your odds ratio. The code shows how to obtain the
odds ratio from the log(odds-ratio).
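For example, with a model fit by glm(..., family = binomial) (the dataset and variables here are just for illustration):

```r
fit <- glm(am ~ wt, data = mtcars, family = binomial)
coef(fit)                  # log odds-ratios
exp(coef(fit))             # odds ratios
exp(confint.default(fit))  # Wald confidence intervals on the odds-ratio scale
```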
--
?constrOptim
--
View this message in context:
http://r.789695.n4.nabble.com/constrained-optimization-which-package-tp2717677p2717719.html
constrOptim() can do linear and quadratic programming problems! See the
following example from the help document.
## Solves linear and quadratic programming problems
## but needs a feasible starting value
#
# from example(solve.QP) in 'quadprog'
# no derivative
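A minimal, self-contained sketch (my own toy problem, not the one from the help page): minimize a quadratic subject to linear inequality constraints, which constrOptim() encodes as ui %*% p >= ci, with a strictly feasible starting value.

```r
f <- function(p) (p[1] - 1)^2 + (p[2] - 2)^2   # objective to minimize
ui <- rbind(c(1, 0),    # x >= 0
            c(0, 1),    # y >= 0
            c(-1, -1))  # x + y <= 1
ci <- c(0, 0, -1)
# the starting value must satisfy ui %*% p > ci strictly
res <- constrOptim(c(0.25, 0.25), f, grad = NULL, ui = ui, ci = ci)
res$par   # close to the constrained optimum (0, 1)
```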
sum(foo == o)/length(foo)  # equivalently, mean(foo == o)
--
View this message in context:
http://r.789695.n4.nabble.com/How-to-get-a-proportion-of-a-Vector-Member-tp2720060p2720067.html