[R] penalized quantile regression (rq.fit.lasso)
Dear all: I have a question about how to obtain the optimal coefficient estimates from penalized quantile regression (quantile regression with the LASSO penalty, as defined in Koenker 2005). In R, I found that both

rq(y ~ x, method = "lasso", lambda = 30)

and

rq.fit.lasso(x, y, tau = 0.5, lambda = 1, beta = .9995, eps = 1e-06)

return estimates, but I could not find a way, with either command, to get the optimal ones. Is there a way to choose the optimal lambda (the value of the penalty parameter) and then obtain the corresponding estimates? Thanks a lot. Any comment will be appreciated.

sophie

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.
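One possible approach (my suggestion, not from the original thread): quantreg does not pick lambda for you, but you can choose it by cross-validation on the check (pinball) loss over a grid of candidate values. The grid, fold count, and simulated data below are arbitrary illustrations.

```r
# Sketch: pick lambda for lasso-penalized quantile regression by K-fold
# cross-validation of the check loss. Grid and data are illustrative.
library(quantreg)

set.seed(1)
n <- 200; p <- 8
x <- matrix(rnorm(n * p), n, p)
y <- x %*% c(2, -1, rep(0, p - 2)) + rnorm(n)

tau <- 0.5
check_loss <- function(u) sum(u * (tau - (u < 0)))  # pinball loss

lambdas <- c(1, 5, 10, 30, 60, 100)  # candidate penalty values
K <- 5
folds <- sample(rep(1:K, length.out = n))

cv_err <- sapply(lambdas, function(lam) {
  sum(sapply(1:K, function(k) {
    train <- folds != k
    fit <- rq(y[train] ~ x[train, ], tau = tau,
              method = "lasso", lambda = lam)
    pred <- cbind(1, x[!train, ]) %*% coef(fit)
    check_loss(y[!train] - pred)
  }))
})

best_lambda <- lambdas[which.min(cv_err)]
best_fit <- rq(y ~ x, tau = tau, method = "lasso", lambda = best_lambda)
```

The "optimal" estimates are then the coefficients of `best_fit`; any other model-selection criterion could be substituted for the cross-validated check loss in the same loop.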
[R] rwmetrop
Hi, all: Can anybody check what is wrong with my code? I have tried many times but cannot find an error. The parameter estimates are not accurate. It is a simple multiple regression model with five covariates, and rwmetrop should give a much more accurate estimate. Thanks a lot.

rm(list=ls())
n=100; p=5
xTrue=matrix(rnorm(n*p), nrow=n, ncol=p)
betaTrue=c(1,2,0,3,1)
yTrue=xTrue%*%betaTrue+rnorm(n)
d=list(y=yTrue, x=xTrue)

datapost=function(theta, data){
  x=data$x
  y=data$y
  mu=rep(0, times=100)
  for(j in 1:5){
    mu=mu+x[,j]*theta[j]
  }
  logdensity=-(y-mu)^2/2-log(sqrt(2*pi))
  sum(logdensity)
}

covariance=array(0, dim=c(p,p))
covariance[row(covariance)==col(covariance)]=1
proposal=list(var=covariance, scale=2)
start=c(1,1,1,1,1)
fit=rwmetrop(datapost, proposal, start, 10^5, d)
colMeans(fit$par[50001:10^5,])
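One possible explanation (my suggestion, not from the original post): with an identity proposal covariance and scale 2, the random-walk chain can mix poorly, so the posterior means stay far from the truth even after many iterations. In the LearnBayes workflow, the proposal variance for rwmetrop() is usually taken from a Laplace approximation of the posterior. A sketch, reusing the objects datapost, start, p, and d defined in the post above:

```r
# Sketch (assumes datapost, start, p, d from the post above exist).
library(LearnBayes)

# Fit a normal approximation at the posterior mode; its variance matrix
# is a natural choice for the random-walk proposal covariance.
lap <- laplace(datapost, start, d)

# scale 2.4/sqrt(p) is a common rule of thumb for random-walk Metropolis
proposal2 <- list(var = lap$var, scale = 2.4/sqrt(p))

fit2 <- rwmetrop(datapost, proposal2, lap$mode, 10000, d)
fit2$accept                       # acceptance rate; ~0.2-0.4 is a common target
colMeans(fit2$par[5001:10000, ])  # posterior means after burn-in
```

Starting the chain from the Laplace mode rather than from a vector of ones also shortens the burn-in needed.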
[R] cv.glmnet
Hi, all: Does anybody know how to avoid the intercept term in a cv.glmnet fit? By "avoid" I do not mean using coef()[-1] to omit the intercept from the printout; I mean having no intercept at all in the analysis. Thanks.
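One way to do this (assuming a no-intercept fit is what is wanted): glmnet, and therefore cv.glmnet, accepts an `intercept` argument; setting `intercept = FALSE` forces the intercept to zero during fitting rather than merely hiding it afterwards.

```r
# Minimal sketch: fit a cross-validated lasso with the intercept
# constrained to zero. Data are simulated for illustration.
library(glmnet)

set.seed(1)
x <- matrix(rnorm(100 * 10), 100, 10)
y <- x %*% c(3, -2, rep(0, 8)) + rnorm(100)

cvfit <- cv.glmnet(x, y, intercept = FALSE)
coef(cvfit, s = "lambda.min")   # the "(Intercept)" row is constrained to 0
```

The argument is passed through `...` from cv.glmnet to the underlying glmnet calls, so cross-validation and the final fit both respect the constraint.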
[R] Jags problem
Hi, all: I get a "Non-conforming parameters for function %*%" error when I run the following JAGS model from R:

model{
  for(i in 1:n){
    for(j in 1:t[i]){
      et[i,j] <- yt[i,j] - beta0 + betax*xt[i,j] + betat*t[i,j]
    }
    for(a in 1:t[i]){
      for(b in 1:t[i]){
        sigma[i,a,b] <- pow(rho0, abs(t[a]-t[b]))
      }
    }
    phi[i] <- -log(exp(-(et[i,1:t[i]]) %*% inverse(sigma[i,1:t[i],1:t[i]]) %*% t(et[i,1:t[i]]))) + 1
    zeros[i] ~ dpois(phi[i])
  }
  beta0 ~ dnorm(0,1)
  betat ~ dnorm(0,1)
  betax ~ dnorm(4,1)
  rho0 ~ dunif(0,1)
}

Does anybody know what the problem is? Thank you.

Sophie
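One likely cause (my reading, not confirmed in the thread): in e %*% M %*% t(e) the one-dimensional slice et[i,1:t[i]] ends up with dimensions that do not conform across the two products. A quadratic form in JAGS is commonly written with inprod(), which also sidesteps assigning a 1x1 matrix product to the scalar phi[i]. Note too that -log(exp(-q)) simplifies to q. A hedged sketch of the phi line only:

```
# Sketch: quadratic form via inprod(); uses -log(exp(-q)) = q.
phi[i] <- inprod(et[i,1:t[i]],
                 inverse(sigma[i,1:t[i],1:t[i]]) %*% et[i,1:t[i]]) + 1
```

If the simplification is not intended (e.g. the zeros trick needs a different functional form), only the inprod() rewrite is the relevant part.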
[R] Cannot evaluate subset expression for sigmainverse
Hi, can anybody help me with this? Can JAGS invert a matrix slice of a 3-way array? Thank you!

for(i in 1:n){
  for(a in 1:t[i]){
    for(b in 1:t[i]){
      sigma[i,a,b] <- pow(rho, t[a]-t[b])
    }
  }
  sigmainverse[i,,] <- inverse(sigma[i,,])  # this is where JAGS reports the error
}
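A likely explanation (my reading): the empty-index slice sigma[i,,] spans the full second and third dimensions, but only the cells sigma[i,1:t[i],1:t[i]] are ever defined in the loops, so JAGS cannot evaluate the subset expression. Restricting both sides to the defined range should let inverse() work on the slice:

```
# Sketch: give explicit ranges so every referenced cell is defined.
sigmainverse[i,1:t[i],1:t[i]] <- inverse(sigma[i,1:t[i],1:t[i]])
```

So yes, JAGS can invert a matrix slice of a 3-way array, provided every element of the slice is a defined node.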
[R] bug in rpart?
Greetings, I checked the Indian diabetes data again and got one tree for the data with reordered columns and another tree for the original data. I compared the two trees: the split points are exactly the same, but the fitted classes differ for some cases, and the misclassification errors differ too. I know how CART deals with ties --- even if we use the same data, the subjects sent left and right need not be the same if we merely rearrange the order of the covariates. But the problem is that the fitted trees have exactly the same split points. Shouldn't we get the same fitted values if the decisions are the same at each step? Why do identically structured trees place different observations in the nodes? The source code for running the diabetes data example and the output trees are attached. Your professional opinion is very much appreciated.

library(mlbench)
data(PimaIndiansDiabetes2)
mydata <- PimaIndiansDiabetes2
library(rpart)
fit2 <- rpart(diabetes ~ ., data = mydata, method = "class")
plot(fit2, uniform = TRUE, main = "CART for original data")
text(fit2, use.n = TRUE, cex = 0.6)
printcp(fit2)
table(predict(fit2, type = "class"), mydata$diabetes)
## misclassification table: rows are fitted class
      neg pos
  neg 437  68
  pos  63 200

pmydata <- data.frame(mydata[, c(1,6,3,4,5,2,7,8,9)])
fit3 <- rpart(diabetes ~ ., data = pmydata, method = "class")
plot(fit3, uniform = TRUE, main = "CART after exchanging mass and glucose")
text(fit3, use.n = TRUE, cex = 0.6)
printcp(fit3)
table(predict(fit3, type = "class"), pmydata$diabetes)
## after exchanging the order of body mass and plasma glucose
      neg pos
  neg 436  64
  pos  64 204

Best,
--
Yuanyuan Huang
Email: sunnyua...@gmail.com

Attachments: ReorderedTree.pdf, OriginalTree.pdf
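A diagnostic sketch (my own, not from the post): PimaIndiansDiabetes2 contains missing values, and rpart routes such observations through surrogate splits. Ties among equally good surrogates are broken by column order, so reordering the columns can send rows with NAs down different branches even when every primary split point is identical. If that is the cause, the two fits should agree on all complete cases and disagree only on incomplete ones:

```r
# Sketch: compare the two fits' predictions on complete vs. incomplete rows.
library(mlbench)
library(rpart)
data(PimaIndiansDiabetes2)
mydata  <- PimaIndiansDiabetes2
pmydata <- mydata[, c(1, 6, 3, 4, 5, 2, 7, 8, 9)]  # swap mass and glucose

fit2 <- rpart(diabetes ~ ., data = mydata,  method = "class")
fit3 <- rpart(diabetes ~ ., data = pmydata, method = "class")

pred2 <- predict(fit2, type = "class")
pred3 <- predict(fit3, type = "class")

complete <- complete.cases(mydata)
# If surrogate tie-breaking is the cause, disagreements should be
# concentrated in the incomplete rows.
table(disagree = pred2 != pred3, complete)
```

So identical split points do not guarantee identical node memberships once surrogate routing of missing values is involved; that behavior is by design rather than a bug.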
[R] questions on rpart (tree changes when rearrange the order of covariates?!)
Greetings, I am using rpart for classification with the "class" method. The test data are the Indian diabetes data from the package mlbench. I first fitted a classification tree on the original data, and then exchanged the order of body mass and plasma glucose, which are the strongest/most important variables in the growing phase. The second tree is slightly different from the first, and the misclassification tables differ too. I did not change the data, so why are the results so different? Does anyone know how rpart deals with ties? Here is the code for the two trees.

library(mlbench)
data(PimaIndiansDiabetes2)
mydata <- PimaIndiansDiabetes2
library(rpart)
fit2 <- rpart(diabetes ~ ., data = mydata, method = "class")
plot(fit2, uniform = TRUE, main = "CART for original data")
text(fit2, use.n = TRUE, cex = 0.6)
printcp(fit2)
table(predict(fit2, type = "class"), mydata$diabetes)
## misclassification table: rows are fitted class
      neg pos
  neg 437  68
  pos  63 200
#Klimt(fit2, mydata)

pmydata <- data.frame(mydata[, c(1,6,3,4,5,2,7,8,9)])
fit3 <- rpart(diabetes ~ ., data = pmydata, method = "class")
plot(fit3, uniform = TRUE, main = "CART after exchanging mass and glucose")
text(fit3, use.n = TRUE, cex = 0.6)
printcp(fit3)
table(predict(fit3, type = "class"), pmydata$diabetes)
## after exchanging the order of body mass and plasma glucose
      neg pos
  neg 436  64
  pos  64 204
#Klimt(fit3, pmydata)

Thanks,
--
Yuanyuan Huang
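On the tie-handling question, a small constructed illustration (my own, not from the thread): when two predictors yield exactly the same split improvement, rpart keeps whichever comes first in the data frame, so column order can change which variable a tied split (or a tied surrogate) uses.

```r
# Sketch: two identical predictors force an exact tie at the root split.
library(rpart)
set.seed(1)
x <- rnorm(200)
d1 <- data.frame(a = x, b = x, y = factor(x > 0))  # a and b are identical
d2 <- d1[, c(2, 1, 3)]                             # same data, columns swapped

f1 <- rpart(y ~ ., data = d1, minsplit = 10)
f2 <- rpart(y ~ ., data = d2, minsplit = 10)

f1$frame$var[1]  # root split variable in the original column order
f2$frame$var[1]  # may differ after the swap if the tie breaks by column order
```

With complete data and no exact ties this has no effect on predictions, but with missing values (as in PimaIndiansDiabetes2) tied surrogates routed by column order can change which branch incomplete observations take.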