Re: [R] Optimization in R similar to MS Excel Solver
On 11-03-2013, at 23:31, Pavel_K kuk...@vsb.cz wrote: Dear all, I am trying to solve an optimization problem focused on finding the minimum cost. I used the solution proposed by Excel Solver, but it has a restriction on the number of variables. My data consist of 300 rows representing cities and 6 columns representing the centres. They constitute a cost matrix, where the costs are the distances between each city and each of the six centres, plus 1 column containing variables representing the number of firms. I want to calculate the minimum cost between cities and centres. Each city can belong to only one of the centres.

A model example (costs: distance between municipalities and centres, plus number of firms in each municipality):

Municipality Centre1 Centre2 Centre3 Centre4 Centre5 Centre6 Firms
Muni1             30      20      60      40      66      90    15
Muni2             20      30      60      40      66      90    10
Muni3             25      31      60      40      66      90     5
Muni4             27      26      60      40      66      90    30

The outcome of the Excel function Solver is (cost assigned):

Municipality Centre1 Centre2 Centre3 Centre4 Centre5 Centre6 Solution
Muni1              0      20       0       0       0       0      300
Muni2             20       0       0       0       0       0      200
Muni3             25       0       0       0       0       0      125
Muni4              0      26       0       0       0       0      780

objective: 1405

I used package lpSolve, but there is a problem with the firms variables:

s <- as.matrix(read.table("C:/R/OPTIMALIZATION/DATA.TXT", dec = ",", sep = ";", header = TRUE))
#      [,1] [,2] [,3] [,4] [,5] [,6]
# [1,]   30   20   60   40   66   90
# [2,]   20   30   60   40   66   90
# [3,]   25   31   60   40   66   90
# [4,]   27   26   60   40   66   90
row.signs <- rep("=", 4)
row.rhs <- c(15, 10, 5, 30)
col.signs <- rep("=", 6)
col.rhs <- c(1, 1, 1, 1, 1, 1)
lp.transport(costs, "min", row.signs, row.rhs, col.signs, col.rhs, presolve = 0, compute.sens = 0)
lp.transport(costs, "min", row.signs, row.rhs, col.signs, col.rhs, presolve = 0, compute.sens = 0)$solution

Outcome:
Error in lp.transport(costs, "min", row.signs, row.rhs, col.signs, col.rhs, :
  Error: We have 6 signs, but 7 columns

Does anyone know where the problem could be? Is there any other way to perform this analysis in R?
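[Editorial aside, not part of the original thread: the error message "We have 6 signs, but 7 columns" suggests the matrix read from DATA.TXT still contains the Firms column, so lp.transport sees 7 cost columns against 6 column signs. A minimal sketch of separating the two, using a stand-in matrix with the same layout as the table above (the file path and column order are assumptions):]

```r
# Stand-in for the matrix read from DATA.TXT: 6 distance columns plus Firms
s <- matrix(c(30, 20, 60, 40, 66, 90, 15,
              20, 30, 60, 40, 66, 90, 10,
              25, 31, 60, 40, 66, 90,  5,
              27, 26, 60, 40, 66, 90, 30), 4, 7, byrow = TRUE)
costs   <- s[, 1:6]   # cost matrix: one column per centre
row.rhs <- s[, 7]     # firm counts belong in the row right-hand sides
dim(costs)            # 4 6 -- now matches the 6 column signs
```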
I am a bit confused here about how I can treat the firms variables. Please provide a reproducible example including the necessary library() statements. In the call of lp.transport you are using a variable costs, but where is it defined? You read a file with read.table into a variable s. Use dput. Berend __ R-help@r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
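[Editorial aside: Berend's suggestion to "use dput" refers to the base-R function that prints an object as R code reproducing it exactly, which is the preferred way to share data on the list. A minimal illustration (the matrix here is invented):]

```r
# dput() prints an object as R code that rebuilds it exactly
m <- matrix(c(30, 20, 20, 30), 2, 2)
txt <- capture.output(dput(m))   # the text you would paste into an email
m2 <- eval(parse(text = txt))    # a reader can reconstruct the object...
identical(m, m2)                 # ...bit for bit: TRUE
```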
Re: [R] Test of Parallel Regression Assumption in R
Dear Heather, You can perform this test using the ordinal package. Here the function clm fits cumulative link models, of which the ordinal logistic regression model is a special case (using the logit link). Let me illustrate how to test the parallel regression assumption for a particular variable using clm in the ordinal package. Using the wine dataset from the same package, I fit a model with two explanatory variables, temp and contact, and I test the parallel regression assumption for the contact variable with a likelihood ratio test:

library(ordinal)
Loading required package: MASS
Loading required package: ucminf
Loading required package: Matrix
Loading required package: lattice
head(wine)
  response rating temp contact bottle judge
1       36      2 cold      no      1     1
2       48      3 cold      no      2     1
3       47      3 cold     yes      3     1
4       67      4 cold     yes      4     1
5       77      4 warm      no      5     1
6       60      4 warm      no      6     1
fm1 <- clm(rating ~ temp + contact, data=wine)
fm2 <- clm(rating ~ temp, nominal=~ contact, data=wine)
anova(fm1, fm2)
Likelihood ratio tests of cumulative link models:

    formula:                nominal:  link:  threshold:
fm1 rating ~ temp + contact ~1        logit  flexible
fm2 rating ~ temp           ~contact  logit  flexible

    no.par    AIC  logLik LR.stat df Pr(>Chisq)
fm1      6 184.98 -86.492
fm2      9 190.42 -86.209  0.5667  3      0.904

The idea is to fit the model under the null hypothesis (parallel effects - fm1) and under the alternative hypothesis (non-parallel effects for contact - fm2) and compare these models with anova(), which performs the LR test. From the high p-value we see that the null cannot be rejected, so there is no evidence of non-parallel slopes in this case. For additional information, I suggest you take a look at the following package vignette (http://cran.r-project.org/web/packages/ordinal/vignettes/clm_tutorial.pdf), where these kinds of tests are described more thoroughly starting on page 6. I think you can also make similar tests with the VGAM package, but I am not as well versed in that package.
Hope this helps, Rune

Rune Haubo Bojesen Christensen Postdoc DTU Compute - Section for Statistics --- Technical University of Denmark Department of Applied Mathematics and Computer Science Richard Petersens Plads Building 324, Room 220 2800 Lyngby Direct +45 45253363 Mobile +45 30264554 http://www.imm.dtu.dk

On 11 March 2013 22:52, Nicole Ford nicole.f...@me.com wrote: here's some code as an example, hope it helps!

mod <- polr(vote ~ age+demsat+eusup+lrself+male+retnat+union+urban, data=dat)
summary(mod)
levs <- levels(dat$vote)
tmpdat <- list()
for(i in 1:(nlevels(dat$vote)-1)){
  tmpdat[[i]] <- dat
  tmpdat[[i]]$z <- as.numeric(as.numeric(tmpdat[[i]]$vote) <= levs[i])
}
form <- as.formula(z ~ age+demsat+eusup+lrself+male+retnat+union+urban)
mods <- lapply(tmpdat, function(x) glm(form, data=x, family=binomial))
probs <- sapply(mods, predict, type="response")
p.logits <- cbind(probs[,2], t(apply(probs, 1, diff)), 1-probs[,ncol(probs)])
p.ologit <- predict(mod, type='probs')
n <- nrow(p.logits)
bin.ll <- p.logits[cbind(1:n, dat$vote)]
ologit.ll <- p.ologit[cbind(1:n, dat$vote)]
binom.test(sum(bin.ll > ologit.ll), n)
dat$vote.fac <- factor(dat$vote, levels=1:6)
mod <- polr(vote.fac ~ age+demsat+eusup+lrself+male+retnat+union+urban, data=dat)
source("http://www.quantoid.net/cat_pre.R")
catpre(mod)
install.packages("rms")
library(rms)
olprobs <- predict(mod, type='probs')
pred.cat <- apply(olprobs, 1, which.max)
table(pred.cat, dat$vote)
round(prop.table(table(pred.cat, dat$vote), 2), 3)

On Mar 11, 2013, at 5:02 PM, Heather Kettrey wrote: Hi, I am running an analysis with an ordinal outcome and I need to run a test of the parallel regression assumption to determine whether ordinal logistic regression is appropriate. I cannot find a function to conduct such a test.
From searching various message boards I have seen a few useRs ask this same question without a definitive answer, and I came across a thread indicating that there is no such function available in any R package. I hope this is incorrect. Does anyone know how to test the parallel regression assumption in R? Thanks for your help! -- Heather Hensman Kettrey PhD Candidate Department of Sociology Vanderbilt University
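[Editorial aside: the LR statistic and p-value in Rune's anova() table above can be reproduced by hand from the printed log-likelihoods and parameter counts, using base-R arithmetic only; the numbers are copied from the output above.]

```r
logLik_fm1 <- -86.492   # parallel (proportional-odds) model, 6 parameters
logLik_fm2 <- -86.209   # non-parallel effect for contact, 9 parameters
LR <- 2 * (logLik_fm2 - logLik_fm1)       # 0.566, matching LR.stat up to rounding
df <- 9 - 6                               # difference in number of parameters
p  <- pchisq(LR, df, lower.tail = FALSE)  # upper-tail chi-square probability
round(p, 3)                               # 0.904, as in the anova() table
```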
[R] Generating QFs from same sample
Dear All, I am kind of stuck with a piece of code, part of which seems to be causing a problem, or at least I think so. Maybe the community can help me. It's simple, but I suppose I am missing something. I generate a data matrix X, say of order n*p, where n represents independent row vectors and p correlated column vectors. Let the row representation be X = (X'_1, . . ., X'_n)'. I generate the differences of these vectors as D_{ab} = X_a - X_b, with distinct indices a and b, and similarly D_{cd} = X_c - X_d, and form a quadratic form D'_{ab}D_{cd} = A_{abcd}, say. Please note that here a is unequal to b, c is unequal to d, a is unequal to c, and b is unequal to d. By slightly shuffling the same set of vectors, I generate two other similar quadratic forms, say A_{acbd} and A_{adcb}, where the indices are again unequal as stated above. In the part of the R code below, I compute these three forms using pata, patb, patc, respectively, using Kronecker products. But from the final result of this code I get the feeling there is a difference between what I want the code to do and what it actually does.

en <- matrix(1, n, 1)
jn <- matrix(1, n, n)
dev <- jn - diag(n)
pata <- dev %x% dev
patb <- jn %x% jn - diag(n) %x% diag(n)
patc <- jn %x% diag(n) - diag(n) %x% jn

Any help will be sincerely appreciated! Best regards RA
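[Editorial aside: one way to check RA's suspicion directly is to enumerate, for a small n, which index quadruples (a, b, c, d) a pattern matrix actually weights. The sketch below is my own check, not code from the post (n = 3 is arbitrary): pata = dev %x% dev puts weight on every combination with a != b and c != d, but nothing in it enforces a != c or b != d, so it covers more terms than the fully-unequal index set the post describes.]

```r
n <- 3
jn  <- matrix(1, n, n)
dev <- jn - diag(n)     # 1 where the two indices differ, 0 on the diagonal
pata <- dev %x% dev
# The entry of pata in row (a-1)*n + c, column (b-1)*n + d equals
# dev[a, b] * dev[c, d], i.e. 1 exactly when a != b AND c != d.
cnt_pata <- sum(pata)   # quadruples weighted by pata: (n^2 - n)^2 = 36

# Quadruples satisfying ALL the stated constraints: a!=b, c!=d, a!=c, b!=d
cnt_full <- 0
for (a in 1:n) for (b in 1:n) for (c in 1:n) for (d in 1:n)
  if (a != b && c != d && a != c && b != d) cnt_full <- cnt_full + 1
c(cnt_pata, cnt_full)   # 36 vs 18: pata includes extra terms with a == c or b == d
```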
Re: [R] cumulative sum by group and under some criteria
fun1 <- function(maxN, p1L, p1H, p0L, p0H, c11, c12, c1, c2, beta, alpha){
 d <- do.call(rbind, lapply(2:(maxN-6), function(m1)
  do.call(rbind, lapply(2:(maxN-m1-4), function(n1)
   do.call(rbind, lapply(0:m1, function(x1)
    do.call(rbind, lapply(0:n1, function(y1)
     data.frame(m1, n1, x1, y1)
    ))))))))
 colnames(d) <- c("m1", "n1", "x1", "y1")
 d1 <- within(d, {
  term1_p1 <- dbinom(x1, m1, p1L, log=FALSE) * dbinom(y1, n1, p1H, log=FALSE)
  term1_p0 <- dbinom(x1, m1, p0L, log=FALSE) * dbinom(y1, n1, p0H, log=FALSE)
 })
 set.seed(8)
 d2 <- do.call(rbind, lapply(seq_len(nrow(d)), function(i){
  Pm <- rbeta(1000, 0.2 + d[i,"x1"], 0.8 + d[i,"m1"] - d[i,"x1"])
  Pn <- rbeta(1000, 0.2 + d[i,"y1"], 0.8 + d[i,"n1"] - d[i,"y1"])
  Fm <- ecdf(Pm); Fn <- ecdf(Pn)
  # Fmm <- Fm(d[i,"p11"]); Fnn <- Fn(d[i,"p12"])
  Fmm <- Fm(p1L); Fnn <- Fn(p1H)
  R <- (Fmm + Fnn)/2
  Fmm_f <- max(R, Fmm); Fnn_f <- min(R, Fnn)
  Qm <- 1 - Fmm_f; Qn <- 1 - Fnn_f
  data.frame(Qm, Qn)}))
 d2 <- cbind(d1, d2)
 library(zoo)
 lst1 <- split(d2, list(d$m1, d$n1))
 d2New <- do.call(rbind, lapply(lst1[lapply(lst1, nrow) != 0], function(x){
  x[,9:14] <- NA
  x[,9:10][x$Qm <= c11,] <- cumsum(x[,5:6][x$Qm <= c11,])
  x[,11:12][x$Qn <= c12,] <- cumsum(x[,5:6][x$Qn <= c12,])
  x[,13:14] <- cumsum(x[,5:6])
  colnames(x)[9:14] <- c("cterm1_P0L","cterm1_P1L","cterm1_P0H","cterm1_P1H","sumTerm1_p0","sumTerm1_p1")
  x1 <- na.locf(x)
  x1[,9:14][is.na(x1[,9:14])] <- 0
  x1}))
 row.names(d2New) <- 1:nrow(d2New)
 res1 <- aggregate(. ~ m1 + n1, data=d2New[,c(1:2,9:12)], max)
 d3 <- res1[res1[,4] <= beta & res1[,6] < beta,]
 f1 <- function(dat){
  stopifnot(nrow(dat) != 0)
  library(plyr)
  join(dat, d2New, type="inner")
 }
 d4 <- f1(d3)
 res2 <- do.call(rbind, lapply(unique(d4$m1), function(m1)
  do.call(rbind, lapply(unique(d4$n1), function(n1)
   do.call(rbind, lapply(unique(d4$x1), function(x1)
    do.call(rbind, lapply(unique(d4$y1), function(y1)
     # do.call(rbind, lapply(0:m1, function(x1)
     # do.call(rbind, lapply(0:n1, function(y1)
     do.call(rbind, lapply((m1+2):(maxN-2-n1), function(m)
      do.call(rbind, lapply((n1+2):(maxN-m), function(n)
       do.call(rbind, lapply(x1:(x1+m-m1), function(x)
        do.call(rbind, lapply(y1:(y1+n-n1), function(y)
         data.frame(m1, n1, x1, y1, m, n, x, y)
        ))))))))))))))))
 colnames(res2) <- c("m1","n1","x1","y1","m","n","x","y")
 res3 <-
join(res2, d4, by=c("m1","n1","x1","y1"), type="inner")
 res3 <- res3[, c(1:16)]
 res3New <- within(res3, {
  term2_p0 <- dbinom(x1, m1, p0L, log=FALSE) * dbinom(y1, n1, p0H, log=FALSE) *
   dbinom(x-x1, m-m1, p0L, log=FALSE) * dbinom(y-y1, n-n1, p0H, log=FALSE)
  term2_p1 <- dbinom(x1, m1, p1L, log=FALSE) * dbinom(y1, n1, p1H, log=FALSE) *
   dbinom(x-x1, m-m1, p1L, log=FALSE) * dbinom(y-y1, n-n1, p1H, log=FALSE)})
 res4 <- do.call(rbind, lapply(seq_len(nrow(res3New)), function(i){
  Pm2 <- rbeta(1000, 0.2 + res3[i,"x"], 0.8 + res3[i,"m"] - res3[i,"x"])
  Pn2 <- rbeta(1000, 0.2 + res3[i,"y"], 0.8 + res3[i,"n"] - res3[i,"y"])
  Fm2 <- ecdf(Pm2); Fn2 <- ecdf(Pn2)
  # Fmm2 <- Fm2(res3[i,"p1"]); Fnn2 <- Fn2(res3[i,"p2"])
  Fmm2 <- Fm2(p1L); Fnn2 <- Fn2(p1H)
  R2 <- (Fmm2 + Fnn2)/2
  Fmm_f2 <- max(R2, Fmm2); Fnn_f2 <- min(R2, Fnn2)
  Qm2 <- 1 - Fmm_f2; Qn2 <- 1 - Fnn_f2
  data.frame(Qm2, Qn2)}))
 res5 <- cbind(res3New, res4)
 lst2 <- split(res5, list(res5$m1, res5$n1, res5$m, res5$n))
 res5New <- do.call(rbind, lapply(lst2[lapply(lst2, nrow) != 0], function(x){
  x[,21:26] <- NA
  x[,21:22][x$Qm2 <= c1 & x$Qm > c11,] <- cumsum(x[,17:18][x$Qm2 <= c1 & x$Qm > c11,])
  x[,23:24][x$Qn2 <= c2 & x$Qn > c12,] <- cumsum(x[,17:18][x$Qn2 <= c2 & x$Qn > c12,])
  x[,25:26] <- cumsum(x[,17:18])
  colnames(x)[21:26] <- c("cterm2_P1L","cterm2_P0L","cterm2_P1H","cterm2_P0H","sumT2p0","sumT2p1")
  x2 <- na.locf(x)
  x2[,21:26][is.na(x2[,21:26])] <- 0
  x2}))
 row.names(res5New) <- 1:nrow(res5New)
 res6 <- aggregate(. ~ m1 + n1 + m + n, data=res5New[,c(1:6,9:12,21:24)], max)
 res6New <- within(res6, {
  AL <- 1 - (cterm1_P0L + cterm2_P0L)
  AH <- 1 - (cterm1_P0H + cterm2_P0H)
  BL <- (cterm1_P1L + cterm2_P1L)
  BH <- (cterm1_P1H + cterm2_P1H)
  EN <- m1 + (m-m1)*(1-cterm1_P0L) + n1 + (n-n1)*(1-cterm1_P0H)
  N <- m + n
 })
 res7 <- res6New[,c(1:4,7,9,15:20)]
 res7New <- res7[res7[,9] <= beta & res7[,10] <= beta & res7[,11] <= alpha & res7[,12] <= alpha,]
 return(res7New)
}

### Different scenarios
# 1
fun1(10,0.25,0.25,0.05,0.05,0.07,0.07,0.15,0.20,0.6,0.6)
#Joining by: m1, n1, cterm1_P0L, cterm1_P1L, cterm1_P0H, cterm1_P1H
#   m1 n1 m n cterm1_P0L cterm1_P0H  N       EN        BH        BL        AH
#8   2  3 4 5  0.7737809  0.7737809  9 5.904876 0.2373047 0.4202271 0.2262191
#11  2  3 5 5  0.7737809  0.7737809 10 6.131095
# 0.2748470 0.3744965 0.1631941
#12  3  3 5 5  0.8573750  0.7350919 10 6.815066 0.2342920 0.4218750 0.1703707
#14  2  3 4 6  0.7737809  0.7737809 10 6.131095 0.2373047 0.4202271 0.2262191
#15  2  4 4 6  0.7350919  0.7350919 10 7.059632 0.1779785 0.3942719 0.2649081
#           AL
#8   0.1100501
#11  0.1158586
#12  0.1426250
#14  0.1100501
#15  0.1138223

# 2
fun1(10,0.25,0.25,0.05,0.05,0.07,0.07,0.15,0.20,0.1,0.1)
#Error: nrow(dat) != 0 is not TRUE

# 3: running the previous case again
fun1(10,0.25,0.25,0.05,0.05,0.07,0.07,0.15,0.20,0.6,0.6)
#Joining by: m1, n1, cterm1_P0L, cterm1_P1L, cterm1_P0H, cterm1_P1H
#   m1 n1 m n cterm1_P0L cterm1_P0H  N       EN        BH        BL        AH
#8   2  3 4 5  0.7737809  0.7737809  9 5.904876 0.2373047 0.4202271 0.2262191
#11  2  3 5 5  0.7737809  0.7737809 10 6.131095 0.2748470 0.3744965 0.1631941
#12  3  3 5 5  0.8573750  0.7350919 10 6.815066 0.2342920 0.4218750 0.1703707
#14  2  3
[R] Fwd: some questions about ARIMA and FARIMA
Sara, On 11 March 2013 18:26, cyl123 505186...@qq.com wrote: I have some questions about ARIMA and FARIMA: Looks like they're all answered in the PDF for fArma[1]. -- Sent from my mobile device 1. http://cran.r-project.org/web/packages/fArma/fArma.pdf
Re: [R] Optimization in R similar to MS Excel Solver
Dear Mr Hasselman, for a better understanding I have attached an example solved in Excel using the Solver tool. I want to assign to each municipality one of the centres and use that assignment to calculate the minimum cost, as you can see in the example. I used package lpSolve, but it does not work. I am not sure how to treat this part of the statement; I think I made a mistake in it: row.rhs <- c(15,10,5,30) and col.rhs <- c(1,1,1,1,1,1). The example in R:

library(lpSolve)
costs <- as.matrix(read.table("C:/R/OPTIMIZATION/DATA.TXT", dec = ",", sep = ";", header = TRUE))
row.signs <- rep("=", 4)
row.rhs <- c(15, 10, 5, 30)
col.signs <- rep("=", 6)
col.rhs <- c(1, 1, 1, 1, 1, 1)
lp.transport(costs, "min", row.signs, row.rhs, col.signs, col.rhs, presolve = 0, compute.sens = 0)
lp.transport(costs, "min", row.signs, row.rhs, col.signs, col.rhs, presolve = 0, compute.sens = 0)$solution

Outcome:
Error in lp.transport(costs, "min", row.signs, row.rhs, col.signs, col.rhs, :
  Error: We have 6 signs, but 7 columns

Hope the example solved in Excel will help you understand my problem. Thank you Pavel example.xls http://r.789695.n4.nabble.com/file/n4661019/example.xls -- View this message in context: http://r.789695.n4.nabble.com/Optimization-in-R-similar-to-MS-Excel-Solver-tp4660997p4661019.html Sent from the R help mailing list archive at Nabble.com.
[R] split data into columns from dataframe
Dear r-users, Originally my data is in Notepad. I use data.frame(...) to convert the data into columns, but it has no header. The data below is what I got in R. I would like to extract these data:

19710629 39.3 19701126
19720629 33.8 19720517
...

When I use the command data2_30min[1, cbind(1,3,5)] the result is NULL. Then I tried data2_30min[1]; it gives me the whole set of data, so I suspect each row is stored in a single cell. My question is: how do I split the data from the data frame into columns, separating the values by spaces? Many examples are given on how to split data that has a header. The code and data are given below. Thank you so much for your help.

head(data_30min,10); tail(data_30min,10)
data2_30min <- data.frame(data_30min)
head(data2_30min,10); tail(data2_30min,10)
data2_30min[1]
                                                        data_30min
1  19710629 08(PARTIAL)  39.3 at interval beginning 19701126 010326
2  19720629 08(PARTIAL)  33.8 at interval beginning 19720517 144507
3  19730629 08(PARTIAL)  32.2 at interval beginning 19720910 135747
4  19740629 08(PARTIAL)  38.9 at interval beginning 19731003 124849
5  19750629 08           43.3 at interval beginning 19750105 173500
6  19760629 08(PARTIAL) 101.5 at interval beginning 19750809 111800
7  19770629 08(PARTIAL)  39.4 at interval beginning 19760917 141200
8  19780629 08           38.5 at interval beginning 19780508 195400
9  19790629 08(PARTIAL)  39.0 at interval beginning 19790602 14
10 19800629 08(PARTIAL)  94.6 at interval beginning 19800329 063532
Re: [R] Optimization in R similar to MS Excel Solver
Pavel_K kuk064 at vsb.cz writes: Dear all, I am trying to find the solution for an optimization problem focused on finding the minimum cost. I used the solution proposed by Excel Solver, but there is a restriction on the number of variables. My data consist of 300 rows representing cities and 6 columns representing the centres. They constitute a cost matrix, where the costs are the distances between each city and each of the six centres, plus 1 column containing variables representing the number of firms. I want to calculate the minimum cost between cities and centres. Each city can belong only to one of the centres.

(1) The solution you say the Excel Solver returns does not appear to be correct: the column sums in columns 3 to 5 are not (greater than or) equal to 1 as you request. (2) lpSolve does not return an error, but says no feasible solution found, which seems to be correct: the equality constraints are too strict. (3) If you relax these constraints to inequalities, lpSolve does find a solution:

costs <- matrix(c(
  30, 20, 60, 40, 66, 90,
  20, 30, 60, 40, 66, 90,
  25, 31, 60, 40, 66, 90,
  27, 26, 60, 40, 66, 90), 4, 6, byrow = TRUE)
firms <- c(15, 10, 5, 30)
row.signs <- rep("=", 4)
row.rhs <- firms
col.signs <- rep(">=", 6)
col.rhs <- c(1, 1, 1, 1, 1, 1)
require(lpSolve)
T <- lp.transport(costs, "min", row.signs, row.rhs, col.signs, col.rhs,
                  presolve = 0, compute.sens = 0)
T$solution
sum(T$solution * costs)  # 1557

Of course, I don't know which constraints you really want to impose.
Hans Werner
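[Editorial aside, my own sketch rather than part of either post: the two objective values quoted in this thread can be cross-checked with plain arithmetic, assuming the relaxed reading "ship exactly the firm counts, give every centre at least one unit". Without column constraints, each municipality sends all its firms to its nearest centre, reproducing the Excel objective 1405; because every municipality has spare units, the constrained optimum just moves one unit to each unserved centre at minimal extra cost, giving 1557. This greedy argument happens to match the LP optimum for this data; it is a sanity check, not a general solver.]

```r
costs <- matrix(c(30, 20, 60, 40, 66, 90,
                  20, 30, 60, 40, 66, 90,
                  25, 31, 60, 40, 66, 90,
                  27, 26, 60, 40, 66, 90), 4, 6, byrow = TRUE)
firms <- c(15, 10, 5, 30)

rowmin    <- apply(costs, 1, min)                # each municipality's cheapest cost
base_cost <- sum(firms * rowmin)                 # 1405: the Excel Solver objective
served    <- unique(apply(costs, 1, which.min))  # centres 1 and 2
unserved  <- setdiff(1:6, served)                # centres 3 to 6 get nothing
extra     <- sum(apply(costs[, unserved] - rowmin, 2, min))  # one cheap unit each
base_cost + extra                                # 1557: matches sum(T$solution * costs)
```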
Re: [R] split data into columns from dataframe
What about read.table()? -- Ivan CALANDRA Université de Bourgogne UMR CNRS/uB 6282 Biogéosciences 6 Boulevard Gabriel 21000 Dijon, FRANCE +33(0)3.80.39.63.06 ivan.calan...@u-bourgogne.fr http://biogeosciences.u-bourgogne.fr/calandra On 12/03/13 10:03, Roslina Zakaria wrote:
Re: [R] Optimization in R similar to MS Excel Solver
On 12-03-2013, at 08:45, Pavel_K kuk...@vsb.cz wrote:

Your post is not available on Nabble, and the Excel file is inaccessible because it doesn't exist. Apart from that: show the contents of costs. Use dput(costs) and put the result in the message to R-help. That is the only way anyone can find out why lp.transport gives an error. And please read the posting guide (the link is at the bottom of each posting to this list). Berend
Re: [R] split data into columns from dataframe
Hello, Like Ivan said, use read.table to read in the data; then, to select only some of the columns, you can go several ways.

data_30min <- read.table(text = "
1 19710629 08(PARTIAL) 39.3 at interval beginning 19701126 010326
2 19720629 08(PARTIAL) 33.8 at interval beginning 19720517 144507
3 19730629 08(PARTIAL) 32.2 at interval beginning 19720910 135747
4 19740629 08(PARTIAL) 38.9 at interval beginning 19731003 124849
5 19750629 08 43.3 at interval beginning 19750105 173500
6 19760629 08(PARTIAL) 101.5 at interval beginning 19750809 111800
7 19770629 08(PARTIAL) 39.4 at interval beginning 19760917 141200
8 19780629 08 38.5 at interval beginning 19780508 195400
9 19790629 08(PARTIAL) 39.0 at interval beginning 19790602 14
10 19800629 08(PARTIAL) 94.6 at interval beginning 19800329 063532
")

# Simple
data_30min[, c(2, 4, 8)]
# More elaborate, but equivalent
subset(data_30min, select = c(V2, V4, V8))

Hope this helps, Rui Barradas

On 12-03-2013 09:03, Roslina Zakaria wrote:
[R] Assumptions of oneway_test in coin package
Hi everybody, I want to test differences in means among independent groups (more than two). The assumptions of normality and homoscedasticity are violated, but I don't want to use rank tests like Kruskal-Wallis, because I lose too much information under the rank transformation. Can I use oneway_test(VAR~GROUPS, method=approximate(B=)) in the coin package in this situation? I think, as this test uses permutation, I don't have to meet the assumptions of normality and homoscedasticity. Am I right? Thank you very much. A hug, Diego PJ
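[Editorial aside: a permutation test recomputes the statistic under random relabellings of the data, so its reference distribution comes from the data rather than from a normality assumption. Note also that in coin the relevant argument is distribution = approximate(...) rather than method =. The base-R sketch below illustrates the resampling idea that approximate() implements; the toy data and the test statistic are my own invention, purely for illustration.]

```r
set.seed(1)
# Toy data: three groups with unequal spread (invented numbers)
VAR    <- c(rnorm(10, 0, 1), rnorm(10, 0.5, 3), rnorm(10, 1, 5))
GROUPS <- factor(rep(c("g1", "g2", "g3"), each = 10))

# Observed statistic: a simple function of the group means
stat <- function(y, g) sum(tapply(y, g, mean)^2)
obs  <- stat(VAR, GROUPS)

# Permutation reference distribution: shuffle the group labels B times
B     <- 2000
perms <- replicate(B, stat(VAR, sample(GROUPS)))
pval  <- (sum(perms >= obs) + 1) / (B + 1)
pval   # Monte-Carlo p-value; no distributional assumption on VAR
```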
[R] Testing for significance of overlap in three sets - mantelhaen test?
Hi, My apologies for the naive question! I have three overlapping sets and I want to find the probability of observing a larger intersection for 'A intersect B intersect C' (in the example below, I want to find the probability of finding more than the 135 elements that are common to sets A, B and C). For a two-set problem, I guess I would do a Fisher or chi-square test. Here is what I have attempted so far:

### Prepare a 3-way contingency table:
mytable <- array(c(135, 116, 385, 6256, 48, 97, 274, 9555),
                 dim = c(2, 2, 2),
                 dimnames = list(
                   Is_C = c('Yes', 'No'),
                   Is_B = c('Yes', 'No'),
                   Is_A = c('Yes', 'No')))
## test
mantelhaen.test(mytable, exact = TRUE, alternative = "greater")
## end code

Is this the right test (along with the current parameters) to determine what I want, or is there a more appropriate test for this? Many thanks!!
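[Editorial aside: mantelhaen.test() ships with base R's stats package, so the attempt can be run as a self-contained unit. The call below passes the array the post builds; whether this test answers the three-way-overlap question is a statistics matter, not an R one.]

```r
mytable <- array(c(135, 116, 385, 6256, 48, 97, 274, 9555),
                 dim = c(2, 2, 2),
                 dimnames = list(Is_C = c("Yes", "No"),
                                 Is_B = c("Yes", "No"),
                                 Is_A = c("Yes", "No")))
# Exact one-sided Cochran-Mantel-Haenszel test: conditional independence
# of Is_C and Is_B given Is_A, against a common odds ratio greater than 1
res <- mantelhaen.test(mytable, exact = TRUE, alternative = "greater")
res$p.value
```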
Re: [R] aggregate(), tapply(): Why is the order of the grouping variables not kept?
I'm no expeRt, but suppose that we change the setup slightly: xx <- x[sample(nrow(x)), ]. Now what would you like aggregate(value ~ group + year, data=xx, FUN=function(z) z[1]) to return? Personally, I prefer to have R return the same thing regardless of how the input data frame is sorted,

Personally, I prefer to have R change my input as little as possible... but I totally agree with you that there are other instances where it's preferable that the output does not depend on the input.

i.e. the result should depend only on the formula.

You just have to know that the order is to have the first factor vary most rapidly,

... which I still find very confusing/unnatural, but okay.

then the next, etc. I think that's documented somewhere, but I don't know where.

It's also the default behavior of expand.grid(), for example.

Cheers, Marius

Peter Ehlers
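[Editorial aside: the "first grouping variable varies fastest" ordering, and the input-order invariance discussed above, are easy to verify on a toy data frame; the data here are invented.]

```r
x <- data.frame(group = rep(c("a", "b"), each = 2),
                year  = rep(2001:2002, 2),
                value = 1:4)
agg <- aggregate(value ~ group + year, data = x, FUN = sum)
agg
#   group year value
# 1     a 2001     1
# 2     b 2001     3
# 3     a 2002     2
# 4     b 2002     4
# group (the first grouping variable) cycles fastest, as in expand.grid()

xx   <- x[sample(nrow(x)), ]   # shuffling the input rows...
agg2 <- aggregate(value ~ group + year, data = xx, FUN = sum)
identical(agg, agg2)           # ...does not change the result: TRUE
```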
Re: [R] Testing for significance of overlap in three sets - mantelhaen test?
As this seems to be a statistics question, not an R question, it is off topic here. Post on a statistics list like stats.stackexchange.com instead. -- Bert

On Tue, Mar 12, 2013 at 6:22 AM, Brian Smith bsmith030...@gmail.com wrote:

-- Bert Gunter Genentech Nonclinical Biostatistics Internal Contact Info: Phone: 467-7374 Website: http://pharmadevelopment.roche.com/index/pdb/pdb-functional-groups/pdb-biostatistics/pdb-ncb-home.htm
[R] ICPSR data archive to release datasets in R format
Dear r-help list members, I thought that many of you might be interested in the following announcement from the data archive of the Inter-University Consortium for Political and Social Research (a world-wide organization of nearly 800 universities and other institutions headquartered at the University of Michigan): -- snip --- ICPSR releases new datasets in R format ICPSR is pleased to announce that most new datasets we release will now be available in R format. Along with data files readable by the software packages SAS, SPSS and Stata, data can now be downloaded as R datasets with the .rda extension. ICPSR has been releasing files in R format since the beginning of the year, and currently has nearly 150 datasets available in the format. ICPSR Council member John Fox, a contributor to the R Project for Statistical Computing and professor of sociology at McMaster University in Hamilton, Ontario, said: "Since its advent in the 1990s, R has become standard software for statisticians, and its use has subsequently spread dramatically to other fields, including the social and behavioral sciences. The ICPSR data archive is arguably the most important resource of its kind. It is therefore a very welcome development that data in the ICPSR archive will soon become much more conveniently accessible to R users." R files are available from each dataset's download page under the Dataset(s) section. ICPSR will be adding R files for studies that undergo updates, but is not planning to retrofit the full collection. 2013-03-12 -- snip --- You'll find further information about the ICPSR data archive and other Consortium activities at www.icpsr.umich.edu/. 
Best, John --- John Fox Senator McMaster Professor of Social Statistics Department of Sociology McMaster University Hamilton, Ontario, Canada __ R-help@r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
[R] loading data frames and rbind them
Hello everybody, I have the following problem. I have to load a number of xls files from different folders (each xls file has the same number of columns and a different number of rows). Each xls file is named with a number, e.g. 12345.xls, and is contained in a folder with the same name, say 12345. Once loaded, I want to rbind all of them to obtain a single database. I think I successfully did the first part, using assign:
for (i in lista_rea_c) {
  name <- paste("reaNum", i, sep="", collapse=NULL)
  assign(name, read.xlsx(file=paste(path, i, "/", i, ".xls", sep="", collapse=NULL), 1, header=TRUE, as.data.frame=TRUE))
}
where lista_rea_c contains the numbers and is obtained as lista_rea_c = list.files(path = "/Users/mario/Dropbox/...", and path is defined elsewhere. At this point I have a number of variables, named reaNum12345 etc. I would like to rbind all of them, but what I get is that I rbind "reaNum12345", i.e. the variable name and not the data it contains. Can anyone help? thanks! Mario -- PLEASE NOTICE NEW EMAIL ADDRESS AND HOME PAGE URL Andrea Mario Lavezzi Dipartimento di Studi su Politica, Diritto e Società Università di Palermo Piazza Bologni 8 90134 Palermo, Italy tel. ++39 091 23892208 fax ++39 091 6111268 skype: lavezzimario email: mario.lavezzi (at) unipa.it web: http://www.unipa.it/~mario.lavezzi __ R-help@r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
Re: [R] how to convert a data.frame to tree structure object such as dendrogram
You can use the lapply or rapply functions on the resulting list to break each piece into a list itself, then apply the lapply or rapply function to those resulting lists, ... On Mon, Mar 11, 2013 at 3:41 PM, Not To Miss not.to.m...@gmail.com wrote: Thanks. That's just a simple example - what if there are more columns and more rows? Is there any easy way to create a nested list? Best, Zech On Mon, Mar 11, 2013 at 2:12 PM, MacQueen, Don macque...@llnl.gov wrote: You will have to decide what R data structure is a tree structure. But maybe this will get you started:
foo <- data.frame(x=c('A','A','B','B'), y=c('Ab','Ac','Ba','Bd'))
split(foo$y, foo$x)
$A
[1] "Ab" "Ac"
$B
[1] "Ba" "Bd"
I suppose it is at least a little bit tree-like. -- Don MacQueen Lawrence Livermore National Laboratory 7000 East Ave., L-627 Livermore, CA 94550 925-423-1062 On 3/10/13 9:19 PM, Not To Miss not.to.m...@gmail.com wrote: I have a data.frame object like:
data.frame(x=c('A','A','B','B'), y=c('Ab','Ac','Ba','Bd'))
  x  y
1 A Ab
2 A Ac
3 B Ba
4 B Bd
how could I create a tree structure object like this:
      |--- Ab
A ----|
      |--- Ac

      |--- Ba
B ----|
      |--- Bd
Thanks, Zech -- Gregory (Greg) L. Snow Ph.D. 538...@gmail.com __ R-help@r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
[R] funtion equivalent of jitter to move figures on device
hello all, I'm overlaying numerous scatter plots onto a map (done in PBSmodelling). In this case I'm placing each plot by setting par(fig) to the centroid of the map polygons. The location/mapping part is not so important. There are cases of small overlaps for some plots (i.e. figures), so I'm keen to write or find a function that moves my small scatter plots so they don't overlap. A little like jitter, but not random in behaviour: it needs to move away from the plots it's overlapping. thanks to all, Michael ___ Michael Folkes Salmon Stock Assessment Canadian Dept. of Fisheries & Oceans Pacific Biological Station __ R-help@r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
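As far as I know there is no ready-made function for this, but the deterministic "move away from what you overlap" behaviour described above can be sketched in base R. spread_points() below is a hypothetical helper, not part of PBSmodelling: centres that sit closer together than min.dist are pushed directly apart until no pair overlaps; mapping the adjusted centres back to par(fig) boxes is left to the poster's existing setup.

```r
## Deterministic repulsion sketch: push each too-close pair of plot
## centres directly apart along the line joining them.
spread_points <- function(xy, min.dist, max.iter = 100) {
  for (it in seq_len(max.iter)) {
    d <- as.matrix(dist(xy))
    diag(d) <- Inf
    if (all(d >= min.dist)) break          # nothing overlaps any more
    for (i in seq_len(nrow(xy) - 1)) {
      for (j in seq(i + 1, nrow(xy))) {
        if (d[i, j] < min.dist) {
          v <- xy[j, ] - xy[i, ]
          if (all(v == 0)) v <- c(1, 0)    # coincident points: pick a direction
          shift <- v / sqrt(sum(v^2)) * (min.dist - d[i, j]) / 2
          xy[i, ] <- xy[i, ] - shift       # move each point half the deficit
          xy[j, ] <- xy[j, ] + shift
        }
      }
    }
  }
  xy
}

## Two nearly coincident centres and one far away
xy  <- rbind(c(0, 0), c(0.1, 0), c(5, 5))
out <- spread_points(xy, min.dist = 1)
min(dist(out))   # every pairwise distance is now at least ~1
```

Unlike jitter(), the displacement is reproducible and directed: each overlapping pair moves apart along its joining line, so well-separated plots stay where they are.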
Re: [R] loading data frames and rbind them
Hi Mario! I'm not really familiar with this kind of manipulation, but I think you can do it more or less like this (some people on this list might give a more detailed answer):
#Create an empty named list
list_df <- vector(mode="list", length=length(lista_rea_c))
names(list_df) <- lista_rea_c  ## or some part of it, using gsub or something similar
#Import
for (i in lista_rea_c) {
  list_df[[i]] <- read.xlsx(...)
}
#rbind
do.call(rbind, list_df)
This probably won't work exactly like this, but you should be able to make the modifications. HTH, Ivan -- Ivan CALANDRA Université de Bourgogne UMR CNRS/uB 6282 Biogéosciences 6 Boulevard Gabriel 21000 Dijon, FRANCE +33(0)3.80.39.63.06 ivan.calan...@u-bourgogne.fr http://biogeosciences.u-bourgogne.fr/calandra On 12/03/13 15:52, A M Lavezzi wrote: Hello everybody I have the following problem. ...
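To make the sketch above concrete, here is a self-contained version in which a made-up data frame stands in for each read.xlsx() call; lista_rea_c and the data contents are stand-ins for the poster's real file numbers and spreadsheets.

```r
## Stand-in for the poster's vector of folder/file numbers
lista_rea_c <- c("12345", "67890")

## One list element per file; in the real code the body would be
## read.xlsx(file = paste(path, i, "/", i, ".xls", sep = ""), 1,
##           header = TRUE, as.data.frame = TRUE)
list_df <- lapply(lista_rea_c, function(i) {
  data.frame(id = i, value = seq_len(3))   # fake 3-row "spreadsheet"
})
names(list_df) <- lista_rea_c

## Stack every data frame into one database
all_df <- do.call(rbind, list_df)
nrow(all_df)   # 6 rows: 3 from each of the 2 "files"
```

Keeping the data frames in a named list (instead of assign()-ing one variable per file) is what makes the final do.call(rbind, ...) step work on the data rather than on the variable names.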
__ R-help@r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
[R] Bootstrap BCa confidence limits with your own resamples
I like to bootstrap regression models, saving the entire set of bootstrapped regression coefficients for later use so that I can get confidence limits for a whole set of contrasts derived from the coefficients. I'm finding that ordinary bootstrap percentile confidence limits can provide poor coverage for odds ratios for binary logistic models with small N. So I'm exploring BCa confidence intervals. Because the matrix of bootstrapped regression coefficients is generated outside of the boot package's boot() function, I need to see if it is possible to compute BCa intervals using boot's boot.ci() function after constructing a suitable boot object. The function below almost works, but the BCa limits it returns appear to be just the ordinary bootstrap percentile limits. The failure is probably due to my being new to BCa limits and not understanding the inner workings. Has anyone successfully done something similar to this? Thanks very much, Frank
bootBCa <- function(estimate, estimates, n, conf.int=0.95) {
  require(boot)
  if(!exists('.Random.seed')) runif(1)
  w <- list(sim = 'ordinary', stype = 'i', t0 = estimate,
            t = as.matrix(estimates), R = length(estimates),
            data = 1:n, strata = rep(1, n), weights = rep(1/n, n),
            seed = .Random.seed,
            statistic = function(...) 1e10,
            call = list('rigged', weights='junk'))
  np <- boot.ci(w, type='perc', conf=conf.int)$percent
  m <- length(np)
  np <- np[c(m-1, m)]
  bca <- tryCatch(boot.ci(w, type='bca', conf=conf.int),
                  error=function(...) list(fail=TRUE))
  if(length(bca$fail) && bca$fail) {
    bca <- NULL
    warning('could not obtain BCa bootstrap confidence interval')
  } else {
    bca <- bca$bca
    m <- length(bca)
    bca <- bca[c(m-1, m)]
  }
  list(np=np, bca=bca)
}
- Frank Harrell Department of Biostatistics, Vanderbilt University -- View this message in context: http://r.789695.n4.nabble.com/Bootstrap-BCa-confidence-limits-with-your-own-resamples-tp4661045.html Sent from the R help mailing list archive at Nabble.com. 
__ R-help@r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
Re: [R] Bootstrap BCa confidence limits with your own resamples
Dear Frank, I'm not sure that it will help, but you might look at the bootSem() function in the sem package, which creates objects that inherit from "boot". Here's an artificial example: -- snip --
library(sem)
for (x in names(CNES)) CNES[[x]] <- as.numeric(CNES[[x]])
model.cnes <- specifyModel()
F -> MBSA2, lam1
F -> MBSA7, lam2
F -> MBSA8, lam3
F -> MBSA9, lam4
F <-> F, NA, 1
MBSA2 <-> MBSA2, the1
MBSA7 <-> MBSA7, the2
MBSA8 <-> MBSA8, the3
MBSA9 <-> MBSA9, the4
sem.cnes <- sem(model.cnes, data=CNES)
summary(sem.cnes)
set.seed(12345) # for reproducibility
system.time(boot.cnes <- bootSem(sem.cnes, R=5000))
class(boot.cnes)
boot.ci(boot.cnes)
-- snip -- I hope this helps, John -Original Message- From: r-help-boun...@r-project.org [mailto:r-help-bounces@r-project.org] On Behalf Of Frank Harrell Sent: Tuesday, March 12, 2013 11:27 AM To: r-help@r-project.org Subject: [R] Bootstrap BCa confidence limits with your own resamples I like to bootstrap regression models, saving the entire set of bootstrapped regression coefficients for later use so that I can get confidence limits for a whole set of contrasts derived from the coefficients. I'm finding that ordinary bootstrap percentile confidence limits can provide poor coverage for odds ratios for binary logistic models with small N. So I'm exploring BCa confidence intervals. Because the matrix of bootstrapped regression coefficients is generated outside of the boot package's boot() function, I need to see if it is possible to compute BCa intervals using boot's boot.ci() function after constructing a suitable boot object. The function below almost works, but the BCa limits it returns appear to be just the ordinary bootstrap percentile limits. The failure is probably due to my being new to BCa limits and not understanding the inner workings. Has anyone successfully done something similar to this? 
Thanks very much, Frank - Frank Harrell Department of Biostatistics, Vanderbilt University -- View this message in context: http://r.789695.n4.nabble.com/Bootstrap-BCa-confidence-limits-with-your-own-resamples-tp4661045.html Sent from the R help mailing list archive at Nabble.com. __ R-help@r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
[R] Constrain slope in segmented package
Hello, I'm currently using the segmented package of M.R. Muggeo to fit a two-slope segmented regression. I would like to constrain the left slope to zero, but I cannot make it work. I followed the explanations of the package (http://dssm.unipa.it/vmuggeo/segmentedRnews.pdf) to write the following code:
fit.glm <- glm(y~x)
fit.seg <- segmented(fit.glm, seg.Z=~x, psi=0.3)
fit.glm <- update(fit.glm, .~.-x)
fit.seg1 <- update(fit.seg)
If I understood correctly, the two last lines are supposed to constrain the slope, but they did not change anything. Could you help me with this? By the way, I don't understand the .~.-x code in the update function. Thanks a lot in advance! Pierre -- Pierre Hainaut Université Catholique de Louvain Earth and Life Institute Croix du Sud, 2 - bât. De Serres, B337 - bte L7.05.14 1348 Louvain-la-Neuve - Belgique +3210479326 __ R-help@r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
[R] soap film smoother
Dear all, I'm trying to fit a soap film smoother with mgcv, accounting for a polygonal boundary. Everything works well until I plot the results, when I get the right shape but the plot coordinates are exchanged. The function vis.gam() doesn't run into the same problem. Here I enclose the script and attach an essential R workspace. Can anybody provide some help or hints? Regards, SA
## points, polygon, knots
plot(aa[[1]]$lon, aa[[1]]$lat, type="l")
points(knots, pch=20, col=2)
points(data$lon, data$lat, pch=20, col="blue")
## data
den <- data$den
lon <- data$lon
lat <- data$lat
## model
require(mgcv)
nmax <- 100
b <- gam(I(den^0.5) ~ s(lon, lat, k=30, bs="so", xt=list(bnd=aa, nmax=nmax)), knots=knots)
## default plot
x11()
plot(b)
# The plot shape is OK, but the coordinates are exchanged:
# lon is in the range of lat and vice versa
__ R-help@r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
[R] rugarch: GARCH with Johnson Su innovations
Hey, I'm trying to implement a GARCH model with Johnson-Su innovations in order to simulate returns of a financial asset. The model should look like this: r_t = alpha + lambda*sqrt(h_t) + sqrt(h_t)*epsilon_t, with h_t = alpha0 + alpha1*epsilon_(t-1)^2 + beta1*h_(t-1). Alpha refers to a risk-free return, lambda to the risk premium. I've implemented it like this:
#specification of the model
spec = ugarchspec(variance.model = list(model = "sGARCH", garchOrder = c(1,1), submodel = NULL, external.regressors = NULL, variance.targeting = FALSE),
                  mean.model = list(armaOrder = c(0,0), include.mean = TRUE, archm = TRUE, archpow = 1, arfima = FALSE, external.regressors = NULL, archex = FALSE),
                  distribution.model = "jsu", start.pars = list(), fixed.pars = list())
#fit the model to historical closing prices
fit = ugarchfit(data = prices, spec = spec)
#save coefficients of the fitted model into 'par'
par <- coef(fit)
m = coef(fit)["mu"]
lambda = coef(fit)["archm"]
gamma = coef(fit)["skew"]
delta = coef(fit)["shape"]
#GARCH parameters
a0 = coef(fit)["omega"]
a1 = coef(fit)["alpha1"]
b1 = coef(fit)["beta1"]
My problem is that I often get negative values for lambda, i.e. for the intended risk premium. So I'm wondering if I've made a mistake in the implementation, as one would usually expect a positive lambda. And a second question about the Johnson-Su distribution: am I right in extracting the Johnson-Su parameters gamma (delta) via the keywords "skew" ("shape")? Many thanks in advance, Patrick __ R-help@r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
Re: [R] Empty cluster / segfault using vanilla kmeans with version 2.15.2
This example dataset breaks kmeans in version 2.15.2, installed from the Belgian CRAN, on an Ubuntu 12.04 LTS 64bit:
my.sample.2
      Day1 Day2 Day3 Day4 Day5 Day6
 [1,]    4    5    5    3    5    5
 [2,]    7    7    6    5    6    6
 [3,]    6    6    5    5    5    5
 [4,]    5    3    4    3    2    4
 [5,]    4    3    2    5    3    2
 [6,]    6    6    6    5    6    6
 [7,]    6    7    6    6    7    6
 [8,]    4    3    5    4    5    5
 [9,]    3    5    5    5    5    6
[10,]    4    5    3    2    4    4
[11,]    7    7    7    5    7    7
[12,]    3    4    2    2    2    2
[13,]    4    6    6    4    6    6
[14,]    5    6    5    6    6    6
[15,]    4    5    5    5    4    3
[16,]    5    6    6    6    6    6
[17,]    7    7    7    6    7    6
[18,]    3    2    3    3    4    2
[19,]    6    5    5    4    5    4
[20,]    5    4    1    5    1    3
[21,]    4    5    5    4    6    5
[22,]    3    4    6    5    6    3
[23,]    2    3    2    3    3    3
[24,]    5    6    5    3    4    5
[25,]    6    6    6    6    6    6
[26,]    5    4    5    5    5    5
[27,]    5    6    6    1    3    6
[28,]    4    4    4    3    3    5
[29,]    6    7    5    5    4    6
[30,]    3    2    2    2    3    2
[31,]    2    4    1    6    4    3
[32,]    4    6    4    5    4    5
[33,]    3    2    2    3    3    3
[34,]    2    3    6    5    4    4
[35,]    2    2    1    1    1    2
[36,]    2    3    2    3    2    3
[37,]    3    6    5    5    3    5
[38,]    7    3    3    7    3    5
[39,]    2    2    4    4    2    4
[40,]    2    4    3    2    3    2
## Define a variable
hm.clusters <- 5
## Performing kmeans with 100 random starts, several times; 7 times in a row I
## get the 'empty cluster' error
k.liking.ts <- kmeans(my.sample.2, hm.clusters, nstart=100, iter.max=50)
Error: empty cluster: try a better set of initial centers
(the same call and the same error, six more times)
k.liking.ts <- kmeans(my.sample.2, hm.clusters, nstart=100, iter.max=50)
## The next attempt provokes the segmentation fault. 
## Please note that there is nothing special about the 7 times reported above;
## next time it can happen on the very first attempt
k.liking.ts <- kmeans(my.sample.2, hm.clusters, nstart=100, iter.max=50)
*** caught segfault *** address 0x10, cause 'memory not mapped' Segmentation fault (core dumped)
that's about it ... the attached file has been written with write.table(x, file=...) I clustered the same dataset with R 2.14.1, same computer, same OS, using nstart=1000. And I did it 1000 times. Never had the slightest problem. Moreover, at the cost of repeating myself, the 'empty cluster' error is plausibly the symptom of a bug, because it _should_ never happen with the Hartigan-Wong algorithm (the default for kmeans). Kind regards, and thanks again for your time. Luca Nanetti On Sat, Feb 9, 2013 at 8:52 PM, Uwe Ligges lig...@statistik.tu-dortmund.de wrote: We need a reproducible example. Uwe Ligges On 03.02.2013 15:03, Luca Nanetti wrote: Dear experts, I am encountering a version-dependent issue. My laptop runs Ubuntu 12.04 LTS 64-bit, R 2.14.1; the issue explained below never occurred with this version of R. My desktop runs Ubuntu 11.10 64-bit, R 2.13.2; what follows applies to this setup. The data I'm clustering is constituted by the rows of a 320 x 6 matrix containing integers ranging from 1 to 7, no missing data. I applied kmeans() to this matrix, literally, 256 x 10^6 times using R version 2.13.2 or 2.14.1, without ever experiencing the slightest problem. My usual setup is k=5, nstart=256, iter.max=50. Upgrading to R 2.15.2, I experienced either a warning message ('Empty cluster. Choose a better set of initial centers') or a catastrophic segfault. The only way I can get a solution at all is putting nstart at its default value, i.e. 1. However, on just repeating the clustering, the same issue still happens. Moreover, this is vastly suboptimal, because of the risk of local minima. 
Something similar was reported many years ago, see https://stat.ethz.ch/pipermail/r-help/2003-November/041784.html. It was then suggested that R's behaviour was
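Until the underlying problem is fixed, one hedged workaround is to run the random starts one at a time and simply skip any start that aborts with the 'empty cluster' error, keeping the best surviving fit. safe_kmeans() below is a hypothetical helper along those lines, built only on base R; it does nothing about the segfault itself.

```r
## Run each random start as a separate kmeans() call inside tryCatch(),
## discard starts that error out, and keep the fit with the smallest
## total within-cluster sum of squares.
safe_kmeans <- function(x, centers, nstart = 100, iter.max = 50) {
  best <- NULL
  for (s in seq_len(nstart)) {
    fit <- tryCatch(kmeans(x, centers, iter.max = iter.max),
                    error = function(e) NULL)   # e.g. "empty cluster" error
    if (!is.null(fit) &&
        (is.null(best) || fit$tot.withinss < best$tot.withinss))
      best <- fit
  }
  best
}

## Toy data shaped like the poster's: small integers, 6 columns
set.seed(42)
toy <- matrix(sample(1:7, 240, replace = TRUE), ncol = 6)
fit <- safe_kmeans(toy, 5, nstart = 20)
length(fit$size)   # 5 clusters
```

This trades the vectorised nstart argument for an explicit loop, so it is slower, but a few failed starts no longer abort the whole run.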
Re: [R] Remove row.names column in dataframe
Thank you. This does what I want (apparently by coercing data.frame to list with no other structure). -- View this message in context: http://r.789695.n4.nabble.com/Remove-row-names-column-in-dataframe-tp856647p4661049.html Sent from the R help mailing list archive at Nabble.com. __ R-help@r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
[R] cancel
take me off the list please -- Pat Jackson Graduate Student Utah State University Wildland Resources Department 5230 Old Main Hill Logan, UT 84322-5230 pat.jack...@aggiemail.usu.edu 816-716-6924 [[alternative HTML version deleted]] __ R-help@r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
Re: [R] how to convert a data.frame to tree structure object such as dendrogram
Thanks. Is there any more elegant solution? What if I don't know how many levels of nesting ahead of time? On Tue, Mar 12, 2013 at 8:51 AM, Greg Snow 538...@gmail.com wrote: You can use the lapply or rapply functions on the resulting list to break each piece into a list itself, then apply the lapply or rapply function to those resulting lists, ... -- Gregory (Greg) L. Snow Ph.D. 538...@gmail.com __ R-help@r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
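For the "unknown number of levels" case, the split() idea can be applied recursively: keep splitting on the first remaining column until only one column is left, so the nesting depth follows the number of columns automatically. This is only a sketch of a nested-list representation; turning it into an actual dendrogram object would take extra work.

```r
## Recursively split a data frame on its first column; the last column
## becomes the leaves of the nested list.
make_tree <- function(df) {
  if (ncol(df) == 1) return(as.character(df[[1]]))
  lapply(split(df[-1], df[[1]], drop = TRUE), make_tree)
}

foo <- data.frame(x = c("A", "A", "B", "B"),
                  y = c("Ab", "Ac", "Ba", "Bd"))
tree <- make_tree(foo)
str(tree)
## tree$A is c("Ab", "Ac"); tree$B is c("Ba", "Bd")
```

Because the recursion bottoms out on ncol(df) == 1, the same function handles any number of grouping columns without knowing the depth ahead of time.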
Re: [R] cancel
Please take yourself off the list at the URL given in the footer: https://stat.ethz.ch/mailman/listinfo/r-help On Tue, Mar 12, 2013 at 12:17 PM, Patrick Jackson pat.jack...@aggiemail.usu.edu wrote: take me off the list please -- -- Sarah Goslee http://www.functionaldiversity.org __ R-help@r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
[R] ls() with different defaults: Solution;
Dear useRs, Some time ago I queried the list as to an efficient way of building a function which acts as ls() but with a different default for all.names: http://tolstoy.newcastle.edu.au/R/e6/help/09/03/7588.html I have struck upon a solution which so far has performed admirably. In particular, it uses ls() and not its explicit source code, so it only has a dependency on its name and the name of its all.names argument. Here is my solution:
lsall <- function(...) {
  thecall <- as.call(c(as.name('ls'), list(...)))
  newcall <- match.call(definition=ls, call=thecall)
  if( !('all.names' %in% names(newcall)) ) newcall[['all.names']] <- TRUE
  eval(newcall, envir=parent.frame())
}
## end lsall
In my hands this function has always acted exactly as I expected, identically to ls() but with the default of all.names=TRUE (which can be overridden as usual). In particular, it (i) gets the proper search path position right; (ii) works as expected within a browser() session. I am sharing this in case someone finds this construction helpful for other purposes, and in case someone finds a failing case they then have the option of communicating it back here. John --- John Szumiloski, Ph.D. Associate Principal Scientist, Biostatistics Biometrics Research WP53B-120 Merck Research Laboratories P.O. Box 0004 West Point, PA 19486-0004 USA (215) 652-7346 (PH) (215) 993-1835 (FAX) johndotszumiloskiatmerckdotcom ___ These opinions are my own and do not necessarily reflect those of Merck & Co., Inc. __ R-help@r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
[R] Knn by using R
Dear useRs, I have a matrix which contains population data at 12 points of every station. In total there are 179 stations, so it's a matrix of 179 columns and 12 rows. Using my own distance calculation method, a distance matrix was prepared for this data. Now I want to do k-nearest-neighbour classification by considering all the nearest neighbours of each station. I tried the following commands to find the neighbours of station 1, but first, they didn't work, and second, I would like to use my own distance matrix for the classification.
knnx.dist(X[-1,], X[1, , drop=FALSE], k=178)
knnx.index(X[-1,], X[1, , drop=FALSE], k=178)
Thank you very much in advance. Elisa __ R-help@r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
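If the goal is just the k nearest neighbours of each station, a precomputed distance matrix makes knnx.dist()/knnx.index() unnecessary: ordering each row of the matrix gives the neighbour indices and distances directly. The sketch below uses a small synthetic distance matrix; the poster's own 179 x 179 station matrix would drop in the same way.

```r
## Small stand-in for the poster's distance matrix: 5 "stations"
set.seed(1)
pts <- matrix(rnorm(10), ncol = 2)
D <- as.matrix(dist(pts))   # any symmetric distance matrix works here

k <- 3
## For station i, order(d)[1] is the station itself (distance 0), so the
## k nearest *other* stations are positions 2..(k+1) of the ordering.
nn_index <- t(apply(D, 1, function(d) order(d)[2:(k + 1)]))
nn_dist  <- t(apply(D, 1, function(d) sort(d)[2:(k + 1)]))

nn_index[1, ]   # indices of the 3 stations closest to station 1
```

Row i of nn_index then gives station i's neighbours from nearest to farthest, and nn_dist the corresponding distances, regardless of how the distance matrix was computed.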
Re: [R] ls() with different defaults: Solution;
On Tue, Mar 12, 2013 at 12:59 PM, Szumiloski, John john_szumilo...@merck.com wrote: Dear useRs, Some time ago I queried the list as to an efficient way of building a function which acts as ls() but with a different default for all.names: http://tolstoy.newcastle.edu.au/R/e6/help/09/03/7588.html ... Also have a look at the Defaults package, e.g.
library(Defaults)
ls2 <- ls
setDefaults(ls2, all.names = TRUE)
# test
.a <- 1
ls()
ls2()
-- Statistics Software Consulting GKX Group, GKX Associates Inc. tel: 1-877-GKX-GROUP email: ggrothendieck at gmail.com __ R-help@r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
Re: [R] ls() with different defaults: Solution;
On Tue, Mar 12, 2013 at 12:59 PM, Szumiloski, John john_szumilo...@merck.com wrote: Dear useRs, Some time ago I queried the list as to an efficient way of building a function which acts as ls() but with a different default for all.names: http://tolstoy.newcastle.edu.au/R/e6/help/09/03/7588.html ... Why not just do:
lsall <- function(..., all.names = TRUE) {
  ls(..., all.names = all.names)
}
? Then the function practically documents itself. Hadley -- Chief Scientist, RStudio http://had.co.nz/ __ R-help@r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
Re: [R] ls() with different defaults: Solution;
-Original Message-
From: Hadley Wickham [mailto:h.wick...@gmail.com]
Sent: Tuesday, 12 March, 2013 1:34 PM
To: Szumiloski, John
Cc: r-help@r-project.org
Subject: Re: [R] ls() with different defaults: Solution;

On Tue, Mar 12, 2013 at 12:59 PM, Szumiloski, John john_szumilo...@merck.com wrote: Dear useRs, Some time ago I queried the list as to an efficient way of building a function which acts as ls() but with a different default for all.names: http://tolstoy.newcastle.edu.au/R/e6/help/09/03/7588.html I have struck upon a solution which so far has performed admirably. In particular, it uses ls() and not its explicit source code, so it only depends on its name and the name of its all.names argument. Here is my solution:

lsall <- function(...) {
    thecall <- as.call(c(as.name('ls'), list(...)))
    newcall <- match.call(definition=ls, call=thecall)
    if( !('all.names' %in% names(newcall)) ) newcall[['all.names']] <- TRUE
    eval(newcall, envir=parent.frame())
}  ## end lsall

Why not just do:

lsall <- function(..., all.names = TRUE) {
    ls(..., all.names = all.names)
}

? Then the function practically documents itself.

The search path of the internal ls() is not the same as that of the called lsall(). You then get (e.g.)

lsall2()
[1] "..."       "all.names"

John

Hadley -- Chief Scientist, RStudio http://had.co.nz/

Notice: This e-mail message, together with any attachme...{{dropped:11}}

__ R-help@r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
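John's search-path point can be seen directly; a minimal sketch (the wrapper name `lsnaive` is made up for illustration):

```r
## A naive wrapper calls ls() with no envir argument, so ls() lists the
## wrapper's own evaluation frame -- i.e. its formal arguments -- instead
## of the caller's environment.
lsnaive <- function(..., all.names = TRUE) {
  ls(..., all.names = all.names)
}

x <- 1
lsnaive()  # lists "all.names" (and "..."), not the caller's "x"
```

This is exactly why the variants below pass envir = parent.frame() through explicitly.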
Re: [R] Troubleshooting code
On 2013-03-12 05:29, Sahana Srinivasan wrote: Hi everyone, I am having trouble understanding where I went wrong with my code. It seems logically to be all there, but the output says otherwise. I know this is a bit long, but I can't seem to find the errors, so I would appreciate your help :) This is my program:

files <- Sys.glob("*.rescount.txt"); length <- length(files)  # selecting all files of a particular extension, saving in a list

Your use of the name 'length' is a bad idea (but not your problem); see fortune("dog"). If you think that your object 'files' is a 'list', you're mistaken; try str(files) to see that it's a character vector.

a <- 1
while (a <= length)  # going through every element of the list
{
  df1 <- read.table(files[a])

I would _always_ look at the result of any file import with str() to ensure that the reading went as expected. Do (some of) your files have variable-name headers?

  c.leng <- length(files[,1])

I don't see why this instruction would not throw an error re incorrect number of dimensions. Okay, at this point I gave up. Your code is not reproducible and you're posting in HTML. Please give at least a cursory look at the Posting Guide.

[... rest of code sample snipped ...]

Peter Ehlers

__ R-help@r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
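For reference, a cleaner sketch of the loop Peter is commenting on (the file pattern is hypothetical, and each file is assumed to be a plain whitespace-separated table):

```r
## Sys.glob() returns a character vector of matching file names, not a list.
files <- Sys.glob("*.rescount.txt")

## Iterate over the files directly -- no manual counter or while() needed.
for (f in files) {
  df1 <- read.table(f, header = TRUE)
  str(df1)            # always check that the import went as expected
  n.obs <- nrow(df1)  # number of rows of the data frame,
                      # not length(files[, 1])
  ## ... per-file processing goes here ...
}
```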
[R] extract values
Hello all! I have a problem to extract values greater that for example 1820. I try this code: x[x[,1]1820,]-x1 Please help me! Thank you! The data structure is: structure(c(2.576, 1.728, 3.434, 2.187, 1.928, 1.886, 1.2425, 1.23, 1.075, 1.1785, 1.186, 1.165, 1.732, 1.517, 1.4095, 1.074, 1.618, 1.677, 1.845, 1.594, 1.6655, 1.1605, 1.425, 1.099, 1.007, 1.1795, 1.3855, 1.4065, 1.138, 1.514, 1.4605, 1.756, 1.4165, 1.22, 1.825, 1.8365, 1.81, 1.818, 2.1085, 2.233, 2.5605, 2.285, 2.821, 2.16, 1.914, 1.747, 2.031, 1.847, 1.7715, 1.7925, 1.651, 1.4345, 1.291, 1.9895, 1.99, 1.73, 1.912, 1.776, 1.596, 1.6915, 1.8245, 1.773, 2.173, 2.2345, 2.105, 1.922, 1.802, 1.6385, 1.6545, 2.1785, 1.868, 2.1855, 2.5175, 2.025, 2.435, 1.809, 1.628, 1.327, 1.3485, 1.4335, 2.052, 2.2465, 2.151, 1.7945, 1.79, 1.6055, 1.616, 1.633, 1.665, 2.002, 2.152, 1.736, 1.7985, 1.9155, 1.7135, 1.548, 1.568, 1.713, 2.079, 1.875, 2.12, 2.072, 1.906, 1.4645, 1.3025, 1.407, 1.5445, 1.437, 1.463, 1.5235, 1.609, 1.738, 1.478, 1.573, 1.0465, 1.429, 1.632, 1.814, 1.933, 1.63, 1.482, 1.466, 1.4025, 1.6055, 1.279, 1.827, 1.201, 1.425, 1.678, 1.5535, 1.599, 1.826, 1.964, 1.68, 1.492, 1.509, 1.666, 1.5665, 1.666, 1.4885, 1.8205, 1.5965, 1.84, 1.551, 1.4835, 1.805, 1.7145, 1.902, 1.2085, 0.9095, 0.9325, 1.34, 1.6135, 1.5825, 1.757, 1.7105, 1.3115, 1.288, 1.567, 1.7795, 1.642, 1.4375, 1.4495, 1.4225, 1.4885, 1.251, 1.179, 1.188, 1.3605, 1.373, 1.2185, 1.405, 1.016, 0.979, 1.018, 1.0335, 1.39, 1.3005, 1.3955, 1.301, 1.6475, 1.1945, 1.3215, 1.0535, 1.1645, 1.0895, 1.041, 1.155, 1.322, 1.1615, 0.933, 1.1215, 1.022, 0.922, 0.8465, 1.103, 1.1375, 1.23, 1.289, 1.222, 1.4865, 1.4025, 1.4295, 1.156, 0.9085, 0.8755, 0.9135, 0.982, 1.145, 1.1295, 1.3475, 1.2415, 1.2505), .Names = c(1799, 1800, 1801, 1802, 1803, 1804, 1805, 1806, 1807, 1808, 1809, 1810, 1811, 1812, 1813, 1814, 1815, 1816, 1817, 1818, 1819, 1820, 1821, 1822, 1823, 1824, 1825, 1826, 1827, 1828, 1829, 1830, 1831, 1832, 1833, 1834, 1835, 1836, 1837, 1838, 1839, 1840, 
1841, 1842, 1843, 1844, 1845, 1846, 1847, 1848, 1849, 1850, 1851, 1852, 1853, 1854, 1855, 1856, 1857, 1858, 1859, 1860, 1861, 1862, 1863, 1864, 1865, 1866, 1867, 1868, 1869, 1870, 1871, 1872, 1873, 1874, 1875, 1876, 1877, 1878, 1879, 1880, 1881, 1882, 1883, 1884, 1885, 1886, 1887, 1888, 1889, 1890, 1891, 1892, 1893, 1894, 1895, 1896, 1897, 1898, 1899, 1900, 1901, 1902, 1903, 1904, 1905, 1906, 1907, 1908, 1909, 1910, 1911, 1912, 1913, 1914, 1915, 1916, 1917, 1918, 1919, 1920, 1921, 1922, 1923, 1924, 1925, 1926, 1927, 1928, 1929, 1930, 1931, 1932, 1933, 1934, 1935, 1936, 1937, 1938, 1939, 1940, 1941, 1942, 1943, 1944, 1945, 1946, 1947, 1948, 1949, 1950, 1951, 1952, 1953, 1954, 1955, 1956, 1957, 1958, 1959, 1960, 1961, 1962, 1963, 1964, 1965, 1966, 1967, 1968, 1969, 1970, 1971, 1972, 1973, 1974, 1975, 1976, 1977, 1978, 1979, 1980, 1981, 1982, 1983, 1984, 1985, 1986, 1987, 1988, 1989, 1990, 1991, 1992, 1993, 1994, 1995, 1996, 1997, 1998, 1999, 2000, 2001, 2002, 2003, 2004, 2005, 2006, 2007, 2008, 2009, 2010, 2011)) -- --- Catalin-Constantin ROIBU Forestry engineer, PhD Forestry Faculty of Suceava Str. Universitatii no. 13, Suceava, 720229, Romania office phone +4 0230 52 29 78, ext. 531 mobile phone +4 0745 53 18 01 +4 0766 71 76 58 FAX:+4 0230 52 16 64 silvic.usv.ro [[alternative HTML version deleted]] __ R-help@r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
Re: [R] extract values
I guess you are referring to names():

vec1  # dataset
vec1[names(vec1) > 1820]
head(vec1[names(vec1) > 1820])
#   1821   1822   1823   1824   1825   1826
# 1.4250 1.0990 1.0070 1.1795 1.3855 1.4065

A.K.

- Original Message -
From: catalin roibu catalinro...@gmail.com
To: r-help@r-project.org
Cc:
Sent: Tuesday, March 12, 2013 1:50 PM
Subject: [R] extract values

Hello all! I have a problem extracting values greater than, for example, 1820. I tried this code: x[x[,1] > 1820, ] -> x1 Please help me! Thank you! The data structure is: [... dput output and signature snipped; see the original post above ...]

__ R-help@r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
Re: [R] extract values
First take a look at your data and understand its structure. What you have is a named vector and not a matrix:

str(x)
 Named num [1:213] 2.58 1.73 3.43 2.19 1.93 ...
 - attr(*, "names")= chr [1:213] "1799" "1800" "1801" "1802" ...

x1 <- x[names(x) > '1820']
x1
  1821   1822   1823   1824   1825   1826   1827   1828   1829   1830   1831   1832   1833   1834
1.4250 1.0990 1.0070 1.1795 1.3855 1.4065 1.1380 1.5140 1.4605 1.7560 1.4165 1.2200 1.8250 1.8365
  1835   1836   1837   1838   1839   1840   1841   1842   1843   1844   1845   1846   1847   1848
1.8100 1.8180 2.1085 2.2330 2.5605 2.2850 2.8210 2.1600 1.9140 1.7470 2.0310 1.8470 1.7715 1.7925
  1849   1850   1851   1852   1853   1854   1855   1856   1857   1858   1859   1860   1861   1862
1.6510 1.4345 1.2910 1.9895 1.9900 1.7300 1.9120 1.7760 1.5960 1.6915 1.8245 1.7730 2.1730 2.2345
  1863   1864   1865   1866   1867   1868   1869   1870   1871   1872   1873   1874   1875   1876

On Tue, Mar 12, 2013 at 1:50 PM, catalin roibu catalinro...@gmail.com wrote: Hello all! I have a problem extracting values greater than, for example, 1820. I tried this code: x[x[,1] > 1820, ] -> x1 Please help me! Thank you! The data structure is: [... dput output and signature snipped; see the original post above ...]

-- Jim Holtman Data Munger Guru What is the problem that you are trying to solve? Tell me what you want to do, not how you want to do it.

__ R-help@r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
Re: [R] extract values
Look at your data. You do not want values greater than 1820 -- there are none. You want values whose NAMES are greater than 1820.

x1 <- x[as.numeric(names(x)) > 1820]
x1

-- David L Carlson Associate Professor of Anthropology Texas A&M University College Station, TX 77843-4352

-Original Message-
From: r-help-boun...@r-project.org [mailto:r-help-bounces@r-project.org] On Behalf Of catalin roibu
Sent: Tuesday, March 12, 2013 12:50 PM
To: r-help@r-project.org
Subject: [R] extract values

Hello all! I have a problem extracting values greater than, for example, 1820. I tried this code: x[x[,1] > 1820, ] -> x1 Please help me! Thank you! The data structure is: [... dput output and signature snipped; see the original post above ...]

__ R-help@r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
Re: [R] extract values
Data in x is never > 1820:

summary(x)
   Min. 1st Qu.  Median    Mean 3rd Qu.    Max.
 0.8465  1.2890  1.5660  1.5710  1.8050  3.4340

And your object is a vector: trying to extract the first column with x[,1] is meaningless, because x has no dimensions.

dim(x)
NULL

It looks to me as if you want to extract values of x where the *names* are > 1820, but the names are character, so maybe:

x1 <- x[as.numeric(names(x)) > 1820]

is what you're looking for.

Sarah

On Tue, Mar 12, 2013 at 1:50 PM, catalin roibu catalinro...@gmail.com wrote: Hello all! I have a problem extracting values greater than, for example, 1820. I tried this code: x[x[,1] > 1820, ] -> x1 Please help me! Thank you! The data structure is: [... dput output snipped; see the original post above ...]

-- Sarah Goslee http://www.functionaldiversity.org

__ R-help@r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
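The same fix on a small made-up vector, for anyone skimming the thread (the four elements below just stand in for the full 1799-2011 series):

```r
## A short named numeric vector: values indexed by year names.
x <- c("1819" = 1.6655, "1820" = 1.1605, "1821" = 1.4250, "1822" = 1.0990)

## Subset on the names (converted to numeric), not on the values.
x1 <- x[as.numeric(names(x)) > 1820]
names(x1)  # "1821" "1822"
```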
Re: [R] Constrain slope in segmented package
On 2013-03-12 03:55, Pierre Hainaut wrote: Hello, I'm currently using the segmented package of M.R. Muggeo to fit a two-slope segmented regression. I would like to constrain the left slope to be null, but I cannot manage it. I followed the explanations of the package (http://dssm.unipa.it/vmuggeo/segmentedRnews.pdf) to write the following code:

fit.glm <- glm(y ~ x)
fit.seg <- segmented(fit.glm, seg.Z = ~x, psi = 0.3)
fit.glm <- update(fit.glm, . ~ . - x)
fit.seg1 <- update(fit.seg)

If I understood correctly, the last two lines are supposed to constrain the slope, but it did not change anything. Could you help me with this? By the way, I don't understand the . ~ . - x code in the update function. Thanks a lot in advance! Pierre

I can't help you with the segmented constraints, but the 'dot' notation is very well and concisely documented: use ?update, which will take you to a link for ?update.formula where you can read all about it.

Peter Ehlers

__ R-help@r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
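On the `. ~ . - x` question: in update(), each dot stands for "whatever was there before", so `. ~ . - x` means "same response, same predictors, minus x". A quick illustration with a built-in dataset:

```r
## Fit a model, then refit it with one term dropped via update().
fit  <- lm(mpg ~ wt + hp, data = mtcars)
fit0 <- update(fit, . ~ . - hp)  # keep the old formula, remove hp
formula(fit0)                    # mpg ~ wt
```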
Re: [R] ls() with different defaults: Solution;
-Original Message-
From: r-help-boun...@r-project.org [mailto:r-help-bounces@r-project.org] On Behalf Of Szumiloski, John
Sent: Tuesday, March 12, 2013 10:39 AM
To: Hadley Wickham
Cc: r-help@r-project.org
Subject: Re: [R] ls() with different defaults: Solution;

-Original Message-
From: Hadley Wickham [mailto:h.wick...@gmail.com]
Sent: Tuesday, 12 March, 2013 1:34 PM
To: Szumiloski, John
Cc: r-help@r-project.org
Subject: Re: [R] ls() with different defaults: Solution;

On Tue, Mar 12, 2013 at 12:59 PM, Szumiloski, John john_szumilo...@merck.com wrote: Dear useRs, Some time ago I queried the list as to an efficient way of building a function which acts as ls() but with a different default for all.names: http://tolstoy.newcastle.edu.au/R/e6/help/09/03/7588.html I have struck upon a solution which so far has performed admirably. In particular, it uses ls() and not its explicit source code, so it only depends on its name and the name of its all.names argument. Here is my solution:

lsall <- function(...) {
    thecall <- as.call(c(as.name('ls'), list(...)))
    newcall <- match.call(definition=ls, call=thecall)
    if( !('all.names' %in% names(newcall)) ) newcall[['all.names']] <- TRUE
    eval(newcall, envir=parent.frame())
}  ## end lsall

Why not just do:

lsall <- function(..., all.names = TRUE) {
    ls(..., all.names = all.names)
}

? Then the function practically documents itself.

The search path of the internal ls() is not the same as that of the called lsall(). You then get (e.g.)

lsall2()
[1] "..."       "all.names"

John

Then how about

lsall <- function(..., all.names = TRUE) {
    ls(..., all.names = all.names, envir = parent.frame())
}

Hope this is helpful,

Dan

Daniel J. Nordlund Washington State Department of Social and Health Services Planning, Performance, and Accountability Research and Data Analysis Division Olympia, WA 98504-5204

__ R-help@r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
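A quick check of Dan's variant, run at top level (the `.hidden` object is created only for the test):

```r
## Evaluate ls() in the caller's frame so the wrapper lists the caller's
## objects, not its own formals.
lsall <- function(..., all.names = TRUE) {
  ls(..., all.names = all.names, envir = parent.frame())
}

.hidden <- 1
".hidden" %in% lsall()  # TRUE: dot-names are listed by default
".hidden" %in% ls()     # FALSE: plain ls() hides them
```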
Re: [R] extract values
Thank you very much for your help!

On 12 March 2013 20:04, Sarah Goslee sarah.gos...@gmail.com wrote: Data in x is never > 1820:

summary(x)
   Min. 1st Qu.  Median    Mean 3rd Qu.    Max.
 0.8465  1.2890  1.5660  1.5710  1.8050  3.4340

And your object is a vector: trying to extract the first column with x[,1] is meaningless, because x has no dimensions.

dim(x)
NULL

It looks to me as if you want to extract values of x where the *names* are > 1820, but the names are character, so maybe:

x1 <- x[as.numeric(names(x)) > 1820]

is what you're looking for. Sarah

On Tue, Mar 12, 2013 at 1:50 PM, catalin roibu catalinro...@gmail.com wrote: Hello all! I have a problem extracting values greater than, for example, 1820. I tried this code: x[x[,1] > 1820, ] -> x1 Please help me! Thank you! The data structure is: [... dput output snipped; see the original post above ...]

-- Sarah Goslee http://www.functionaldiversity.org

-- --- Catalin-Constantin ROIBU Forestry engineer, PhD Forestry Faculty of Suceava Str. Universitatii no. 13, Suceava, 720229, Romania office phone +4 0230 52 29 78, ext. 531 mobile phone +4 0745 53 18 01 +4 0766 71 76 58 FAX: +4 0230 52 16 64 silvic.usv.ro

__ R-help@r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
Re: [R] Bootstrap BCa confidence limits with your own resamples
That's very helpful, John. Did you happen to run a test to make sure that boot.ci(..., type='bca') in fact gives the BCa intervals, or at least that they disagree with percentile intervals?

Frank

John Fox wrote: Dear Frank, I'm not sure that it will help, but you might look at the bootSem() function in the sem package, which creates objects that inherit from boot. Here's an artificial example:

-- snip --

library(sem)
for (x in names(CNES)) CNES[[x]] <- as.numeric(CNES[[x]])

model.cnes <- specifyModel()
F -> MBSA2, lam1
F -> MBSA7, lam2
F -> MBSA8, lam3
F -> MBSA9, lam4
F <-> F, NA, 1
MBSA2 <-> MBSA2, the1
MBSA7 <-> MBSA7, the2
MBSA8 <-> MBSA8, the3
MBSA9 <-> MBSA9, the4

sem.cnes <- sem(model.cnes, data=CNES)
summary(sem.cnes)

set.seed(12345)  # for reproducibility
system.time(boot.cnes <- bootSem(sem.cnes, R=5000))
class(boot.cnes)
boot.ci(boot.cnes)

-- snip --

I hope this helps, John

-Original Message-
From: r-help-bounces@ [mailto:r-help-bounces@r-project.org] On Behalf Of Frank Harrell
Sent: Tuesday, March 12, 2013 11:27 AM
To: r-help@
Subject: [R] Bootstrap BCa confidence limits with your own resamples

I like to bootstrap regression models, saving the entire set of bootstrapped regression coefficients for later use so that I can get confidence limits for a whole set of contrasts derived from the coefficients. I'm finding that ordinary bootstrap percentile confidence limits can provide poor coverage for odds ratios for binary logistic models with small N. So I'm exploring BCa confidence intervals. Because the matrix of bootstrapped regression coefficients is generated outside of the boot package's boot() function, I need to see if it is possible to compute BCa intervals using boot's boot.ci() function after constructing a suitable boot object. The function below almost works, but it seems to return bootstrap percentile confidence limits for BCa limits. The failure is probably due to my being new to BCa limits and not understanding the inner workings.
Has anyone successfully done something similar to this? Thanks very much, Frank

bootBCa <- function(estimate, estimates, n, conf.int=0.95) {
  require(boot)
  if(!exists('.Random.seed')) runif(1)
  w <- list(sim = 'ordinary', stype = 'i', t0 = estimate,
            t = as.matrix(estimates), R = length(estimates),
            data = 1:n, strata = rep(1, n), weights = rep(1/n, n),
            seed = .Random.seed,
            statistic = function(...) 1e10,
            call = list('rigged', weights='junk'))
  np <- boot.ci(w, type='perc', conf=conf.int)$percent
  m <- length(np)
  np <- np[c(m-1, m)]
  bca <- tryCatch(boot.ci(w, type='bca', conf=conf.int),
                  error=function(...) list(fail=TRUE))
  if(length(bca$fail) && bca$fail) {
    bca <- NULL
    warning('could not obtain BCa bootstrap confidence interval')
  } else {
    bca <- bca$bca
    m <- length(bca)
    bca <- bca[c(m-1, m)]
  }
  list(np=np, bca=bca)
}

- Frank Harrell Department of Biostatistics, Vanderbilt University -- View this message in context: http://r.789695.n4.nabble.com/Bootstrap-BCa-confidence-limits-with-your-own-resamples-tp4661045p4661077.html Sent from the R help mailing list archive at Nabble.com.

__ R-help@r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
Re: [R] Bootstrap BCa confidence limits with your own resamples
Dear Frank,

-Original Message- From: r-help-boun...@r-project.org On Behalf Of Frank Harrell Sent: Tuesday, March 12, 2013 2:24 PM To: r-help@r-project.org Subject: Re: [R] Bootstrap BCa confidence limits with your own resamples

That's very helpful John. Did you happen to run a test to make sure that boot.ci(..., type='bca') in fact gives the BCa intervals or that they at least disagree with percentile intervals?

If you run the example I included, you'll see that the BCa intervals differ from the simple percentile intervals. OTOH, I don't believe that I ever checked that the BCa intervals are correct for objects produced by bootSem() -- I did check, many years ago, against manual computations for a regression model fit by robust regression, but that's another matter. Best, John

Frank

John Fox wrote: Dear Frank, I'm not sure that it will help, but you might look at the bootSem() function in the sem package, which creates objects that inherit from boot. Here's an artificial example:

-- snip --
library(sem)
for (x in names(CNES)) CNES[[x]] <- as.numeric(CNES[[x]])
model.cnes <- specifyModel()
F -> MBSA2, lam1
F -> MBSA7, lam2
F -> MBSA8, lam3
F -> MBSA9, lam4
F <-> F, NA, 1
MBSA2 <-> MBSA2, the1
MBSA7 <-> MBSA7, the2
MBSA8 <-> MBSA8, the3
MBSA9 <-> MBSA9, the4

sem.cnes <- sem(model.cnes, data=CNES)
summary(sem.cnes)
set.seed(12345) # for reproducibility
system.time(boot.cnes <- bootSem(sem.cnes, R=5000))
class(boot.cnes)
boot.ci(boot.cnes)
-- snip --

I hope this helps, John

-Original Message- From: r-help-bounces@r-project.org On Behalf Of Frank Harrell Sent: Tuesday, March 12, 2013 11:27 AM To: r-help@r-project.org Subject: [R] Bootstrap BCa confidence limits with your own resamples

I like to bootstrap regression models, saving the entire set of bootstrapped regression coefficients for later use so that I can get confidence limits for a whole set of contrasts derived from the coefficients.
I'm finding that ordinary bootstrap percentile confidence limits can provide poor coverage for odds ratios for binary logistic models with small N, so I'm exploring BCa confidence intervals. Because the matrix of bootstrapped regression coefficients is generated outside of the boot package's boot() function, I need to see if it is possible to compute BCa intervals using boot's boot.ci() function after constructing a suitable boot object. The function below almost works, but it seems to return bootstrap percentile confidence limits for the BCa limits. The failure is probably due to my being new to BCa limits and not understanding the inner workings. Has anyone successfully done something similar to this? Thanks very much, Frank

bootBCa <- function(estimate, estimates, n, conf.int=0.95) {
  require(boot)
  if(!exists('.Random.seed')) runif(1)
  w <- list(sim = 'ordinary', stype = 'i', t0 = estimate,
            t = as.matrix(estimates), R = length(estimates),
            data = 1:n, strata = rep(1, n), weights = rep(1/n, n),
            seed = .Random.seed,
            statistic = function(...) 1e10,
            call = list('rigged', weights='junk'))
  np <- boot.ci(w, type='perc', conf=conf.int)$percent
  m <- length(np)
  np <- np[c(m-1, m)]
  bca <- tryCatch(boot.ci(w, type='bca', conf=conf.int),
                  error=function(...) list(fail=TRUE))
  if(length(bca$fail) && bca$fail) {
    bca <- NULL
    warning('could not obtain BCa bootstrap confidence interval')
  } else {
    bca <- bca$bca
    m <- length(bca)
    bca <- bca[c(m-1, m)]
  }
  list(np=np, bca=bca)
}

- Frank Harrell Department of Biostatistics, Vanderbilt University -- View this message in context: http://r.789695.n4.nabble.com/Bootstrap-BCa-confidence-limits-with-your-own-resamples-tp4661045.html Sent from the R help mailing list archive at Nabble.com.
- Frank Harrell Department of Biostatistics, Vanderbilt University -- View this message in context: http://r.789695.n4.nabble.com/Bootstrap-BCa-confidence-limits-with-your-own-resamples-tp4661045p4661077.html Sent from the R help mailing list archive at Nabble.com.
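John's point that the two interval types genuinely differ can be checked with a small self-contained sketch (not Frank's setup; just a bootstrap of the mean of skewed data, so BCa and percentile limits should not coincide):

```r
library(boot)
set.seed(1)
x <- rexp(30)                                 # skewed data
b <- boot(x, function(d, i) mean(d[i]), R = 2000)
ci <- boot.ci(b, type = c("perc", "bca"))
ci$percent[4:5]   # percentile limits
ci$bca[4:5]       # BCa limits, shifted relative to the percentile limits
```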
Re: [R] extract values
Hello, I don't understand. Your object is a vector (it has a names attribute, not a dim attribute), so x[, 1] is meaningless. And all the values are less than 1820:

any(x > 1820)  # FALSE

Given the names attribute of your data, is it the values of x corresponding to names greater than 1820 that you want? If so, try

y <- as.numeric(names(x))
x[y > 1820]

Hope this helps, Rui Barradas

On 12-03-2013 17:50, catalin roibu wrote: Hello all! I have a problem extracting values greater than, for example, 1820. I tried this code:

x[x[,1] > 1820,] -> x1

Please help me! Thank you! The data structure is: structure(c(2.576, 1.728, 3.434, 2.187, 1.928, 1.886, 1.2425, 1.23, 1.075, 1.1785, 1.186, 1.165, 1.732, 1.517, 1.4095, 1.074, 1.618, 1.677, 1.845, 1.594, 1.6655, 1.1605, 1.425, 1.099, 1.007, 1.1795, 1.3855, 1.4065, 1.138, 1.514, 1.4605, 1.756, 1.4165, 1.22, 1.825, 1.8365, 1.81, 1.818, 2.1085, 2.233, 2.5605, 2.285, 2.821, 2.16, 1.914, 1.747, 2.031, 1.847, 1.7715, 1.7925, 1.651, 1.4345, 1.291, 1.9895, 1.99, 1.73, 1.912, 1.776, 1.596, 1.6915, 1.8245, 1.773, 2.173, 2.2345, 2.105, 1.922, 1.802, 1.6385, 1.6545, 2.1785, 1.868, 2.1855, 2.5175, 2.025, 2.435, 1.809, 1.628, 1.327, 1.3485, 1.4335, 2.052, 2.2465, 2.151, 1.7945, 1.79, 1.6055, 1.616, 1.633, 1.665, 2.002, 2.152, 1.736, 1.7985, 1.9155, 1.7135, 1.548, 1.568, 1.713, 2.079, 1.875, 2.12, 2.072, 1.906, 1.4645, 1.3025, 1.407, 1.5445, 1.437, 1.463, 1.5235, 1.609, 1.738, 1.478, 1.573, 1.0465, 1.429, 1.632, 1.814, 1.933, 1.63, 1.482, 1.466, 1.4025, 1.6055, 1.279, 1.827, 1.201, 1.425, 1.678, 1.5535, 1.599, 1.826, 1.964, 1.68, 1.492, 1.509, 1.666, 1.5665, 1.666, 1.4885, 1.8205, 1.5965, 1.84, 1.551, 1.4835, 1.805, 1.7145, 1.902, 1.2085, 0.9095, 0.9325, 1.34, 1.6135, 1.5825, 1.757, 1.7105, 1.3115, 1.288, 1.567, 1.7795, 1.642, 1.4375, 1.4495, 1.4225, 1.4885, 1.251, 1.179, 1.188, 1.3605, 1.373, 1.2185, 1.405, 1.016, 0.979, 1.018, 1.0335, 1.39, 1.3005, 1.3955, 1.301, 1.6475, 1.1945, 1.3215, 1.0535, 1.1645, 1.0895, 1.041, 1.155, 1.322, 1.1615, 0.933, 1.1215, 1.022, 0.922, 0.8465, 1.103, 1.1375,
1.23, 1.289, 1.222, 1.4865, 1.4025, 1.4295, 1.156, 0.9085, 0.8755, 0.9135, 0.982, 1.145, 1.1295, 1.3475, 1.2415, 1.2505), .Names = c(1799, 1800, 1801, 1802, 1803, 1804, 1805, 1806, 1807, 1808, 1809, 1810, 1811, 1812, 1813, 1814, 1815, 1816, 1817, 1818, 1819, 1820, 1821, 1822, 1823, 1824, 1825, 1826, 1827, 1828, 1829, 1830, 1831, 1832, 1833, 1834, 1835, 1836, 1837, 1838, 1839, 1840, 1841, 1842, 1843, 1844, 1845, 1846, 1847, 1848, 1849, 1850, 1851, 1852, 1853, 1854, 1855, 1856, 1857, 1858, 1859, 1860, 1861, 1862, 1863, 1864, 1865, 1866, 1867, 1868, 1869, 1870, 1871, 1872, 1873, 1874, 1875, 1876, 1877, 1878, 1879, 1880, 1881, 1882, 1883, 1884, 1885, 1886, 1887, 1888, 1889, 1890, 1891, 1892, 1893, 1894, 1895, 1896, 1897, 1898, 1899, 1900, 1901, 1902, 1903, 1904, 1905, 1906, 1907, 1908, 1909, 1910, 1911, 1912, 1913, 1914, 1915, 1916, 1917, 1918, 1919, 1920, 1921, 1922, 1923, 1924, 1925, 1926, 1927, 1928, 1929, 1930, 1931, 1932, 1933, 1934, 1935, 1936, 1937, 1938, 1939, 1940, 1941, 1942, 1943, 1944, 1945, 1946, 1947, 1948, 1949, 1950, 1951, 1952, 1953, 1954, 1955, 1956, 1957, 1958, 1959, 1960, 1961, 1962, 1963, 1964, 1965, 1966, 1967, 1968, 1969, 1970, 1971, 1972, 1973, 1974, 1975, 1976, 1977, 1978, 1979, 1980, 1981, 1982, 1983, 1984, 1985, 1986, 1987, 1988, 1989, 1990, 1991, 1992, 1993, 1994, 1995, 1996, 1997, 1998, 1999, 2000, 2001, 2002, 2003, 2004, 2005, 2006, 2007, 2008, 2009, 2010, 2011)) __ R-help@r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
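Rui's suggestion in compact, runnable form (using a small made-up named vector rather than the full series above):

```r
# x is a named numeric vector; the names, not the values, carry the years
x <- c("1818" = 1.594, "1819" = 1.6655, "1820" = 1.1605,
       "1821" = 1.425, "1822" = 1.099)
y <- as.numeric(names(x))
x1 <- x[y > 1820]   # keeps only the 1821 and 1822 entries
x1
```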
[R] Fine control of plot
Hi everyone. I'm trying to create a graph where I could plot some lines on the right side. Here an example:

layout(matrix(c(1,2), 1, 2, byrow = TRUE), widths=c(6,2), heights=c(1,1))
x = 1:100
y = rnorm(x)+x
plot(x,y)
reg = lm(y~x)
abline(reg, col = "red")
plot(1, type="n", axes=F, xlab="", ylab="", xlim = c(-1,1), ylim = c(min(y), max(x)))
segments(-0.25,min(reg$fitted.values),0.25,min(reg$fitted.values))
segments(-0.25,max(reg$fitted.values),0.25,max(reg$fitted.values))
segments(0,min(reg$fitted.values),0,max(reg$fitted.values))

However, I can't figure out how to make it a bit nicer by removing the extra space to the right. Any help would be greatly appreciated. Regards, Phil [[alternative HTML version deleted]]
Re: [R] funtion equivalent of jitter to move figures on device
On 2013-03-12 08:04, Folkes, Michael wrote: hello all, I'm overlaying numerous scatter plots onto a map (done in PBSmodelling). In this case I'm placing each plot by setting par(fig) to the centroid of map polygons. The location/mapping part is not so important. There are cases of small overlaps for some plots (i.e. figures), so I'm keen to write or find a function that moves my small scatter plots so they don't overlap. A little like jitter, but not random in behaviour: it needs to move away from the plots it's overlapping. thanks to all Michael ___ Michael Folkes Salmon Stock Assessment Canadian Dept. of Fisheries & Oceans Pacific Biological Station

I would investigate the subplot() function in Greg Snow's TeachingDemos package, possibly in conjunction with his updateusr() function. For interactive placement of figures, it should be possible to use locator() to place each plot. Peter Ehlers
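A minimal sketch of the subplot() idea (assumes the TeachingDemos package is installed; positions and data are arbitrary stand-ins for the map and the per-polygon scatter plots):

```r
library(TeachingDemos)
plot(0:10, 0:10, type = "n", xlab = "", ylab = "")   # stand-in for the map
set.seed(1)
# place two small scatter plots at chosen (x, y) centres in user coordinates
subplot(plot(rnorm(20), rnorm(20), ann = FALSE), x = 3, y = 7, size = c(1, 1))
subplot(plot(rnorm(20), rnorm(20), ann = FALSE), x = 8, y = 3, size = c(1, 1))
```

Automating the overlap-avoidance would then amount to adjusting the (x, y) centres before calling subplot(), rather than calling locator().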
Re: [R] Troubleshooting code
Your code appears to be a load of dingos' kidneys, but in general when troubleshooting one can do worse than be guided by fortune("magnitude and direction"). cheers, Rolf Turner

On 03/13/2013 01:29 AM, Sahana Srinivasan wrote: Hi everyone, I am having trouble understanding where I went wrong with my code. It seems to logically be all there but the output says otherwise. I know this is a bit long but I can't seem to find the errors, so I would appreciate your help :) This is my program:

files <- Sys.glob("*.rescount.txt")
length <- length(files)  # selecting all files of a particular extension, saving in a list
a <- 1
while(a <= length)  # going through every element of the list
{
  df1 <- read.table(files[a])
  c.leng <- length(files[,1])
  r.leng <- length(files[1,])
  # creating data frame for output with the same dimensions as input
  opdf <- data.frame(matrix(rep(NA, nrow(df1)*ncol(df1)), nrow=nrow(df1)))
  opdf[,1] <- df1[,1]
  opdf[1,] <- df1[1,]  # copying the first row and first column so they have the same headers
  b <- 2
  while(b <= c.leng)  # working through each row of the input data frame
  {
    c <- 2
    while(c <= r.leng)  # working through each element of a particular row
    {
      n <- df1[c][b,]
      k <- 1
      while(k <= (n-1))  # inner loop to go through a value of 'k' variable
      {
        ...
        # working with the code to generate a value
        opdf[c][b,] <- sum  # [1]
        k <- k+1
      }
      c <- c+1
    }
    b <- b+1
  }
  fname <- strsplit(files[a], ".seq.ptseq.rescount.txt")  # generating unique file names based on the input file names
  ext <- ".zsc.txt"
  filename <- paste0(fname, ext)
  write.table(opdf, file=filename, row.names=FALSE, col.names=FALSE, quote=FALSE, sep="\t")
  a <- a+1
}

If the input data frame is supposed to be:

NAME  V1  V2  V3
V1'   10  12  45
V2'   56  34  79
V3'   34  67  87

the output data frame should be:

NAME  V1  V2  V3
V1'   x   y   z
V2'   a   b   c
V3'   n   p   q

(all the letters are various numbers generated by the program and filled in at the line marked #[1]). However my output file contains this: x
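For what it's worth, a hedged sketch of how the outer loop might be structured once cleaned up. Note in particular that the dimensions should come from df1, not from files (a likely cause of the odd output); the file-name suffix below just mirrors the poster's strsplit() call and is an assumption about the real extension:

```r
files <- Sys.glob("*.rescount.txt")
for (f in files) {
  df1 <- read.table(f)
  c.leng <- nrow(df1)   # dimensions of the data frame just read,
  r.leng <- ncol(df1)   # not of the character vector 'files'
  opdf <- df1           # same shape and headers; overwrite the numeric cells below
  # ... per-cell computation as in the original code would go here ...
  fname <- sub("\\.seq\\.ptseq\\.rescount\\.txt$", "", f)
  write.table(opdf, file = paste0(fname, ".zsc.txt"),
              row.names = FALSE, col.names = FALSE, quote = FALSE, sep = "\t")
}
```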
Re: [R] funtion equivalent of jitter to move figures on device
Thanks very much Peter, That may be a good start. Though I am aiming for automation and would prefer to avoid the locator function. I'll take a look... Michael -Original Message- From: Peter Ehlers [mailto:ehl...@ucalgary.ca] Sent: March 12, 2013 12:27 PM To: Folkes, Michael Cc: r-help@r-project.org Subject: Re: [R] funtion equivalent of jitter to move figures on device I would investigate the subplot() function in Greg Snow's TeachingDemos package, possibly in conjunction with his updateusr() function. For interactive placement of figures, it should be possible to use locator() to place each plot. Peter Ehlers
Re: [R] Fine control of plot
Hi, You posted in HTML by mistake, so your code was mangled: I'm trying to create a graph where I could plot some lines on the right side. Here an example: layout(matrix(c(1,2), 1, 2, byrow = TRUE), widths=c(6,2), heights=c(1,1)) x = 1:100y = rnorm(x)+xplot(x,y) reg = lm(y~x)abline(reg, col = red) plot(1, type=n, axes=F, xlab=, ylab=, xlim = c(-1,1), ylim = c(min(y), max(x)))segments(-0.25,min(reg$fitted.values),0.25,min(reg$fitted.values))segments(-0.25,max(reg$fitted.values),0.25,max(reg$fitted.values))segments(0,min(reg$fitted.values),0,max(reg$fitted.values)) I figured out where the linebreaks go, but I can't run this: y = rnorm(x)+xplot(x,y) What's xplot() doing here? However, I cant figure out how to make it a bit nicer by removing extra space to the right. Can you explain further what you're trying to do? Plot spacing is controlled with par() for base graphics, but I really don't understand what you're after. -- Sarah Goslee http://www.functionaldiversity.org __ R-help@r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
Re: [R] funtion equivalent of jitter to move figures on device
Hi Michael: I am not totally certain what you are trying to do, but Hadley Wickham and a student have been working on a package that allows a variety of plots to be added to maps. Examples can be found in this paper in Environmetrics : Glyph-maps for visually exploring temporal patterns in climate data and models(pages 382–393) Hadley Wickham, Heike Hofmann, Charlotte Wickham and Dianne Cook Article first published online: 5 JUL 2012 | DOI: 10.1002/env.2152 I believe an earlier version is available at Wickham's web page. When I asked him about the packages he used to make the plots, he wrote: All code and data for that article is available at https://github.com/ggobi/paper-climate. My student Garrett has also been working on a generalisation of the glyph map ideas. His R package is at https://github.com/garrettgman/ggplyr and I've attached the corresponding paper (in submission to JCGS). I mention it mainly because the code might be in a form that's easier to reuse for your purposes. HTH, -Roy On Mar 12, 2013, at 12:29 PM, Folkes, Michael michael.fol...@dfo-mpo.gc.ca wrote: Thanks very much Peter, That may be a good start. Though I am aiming for automation and would prefer to avoid the locator function. I'll take a look... Michael -Original Message- From: Peter Ehlers [mailto:ehl...@ucalgary.ca] Sent: March 12, 2013 12:27 PM To: Folkes, Michael Cc: r-help@r-project.org Subject: Re: [R] funtion equivalent of jitter to move figures on device I would investigate the subplot() function in Greg Snow's TeachingDemos package, possibly in conjunction with his updateusr() function. For interactive placement of figures, it should be possible to use locator() to place each plot. Peter Ehlers __ R-help@r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code. 
** The contents of this message do not reflect any position of the U.S. Government or NOAA. ** Roy Mendelssohn Supervisory Operations Research Analyst NOAA/NMFS Environmental Research Division Southwest Fisheries Science Center 1352 Lighthouse Avenue Pacific Grove, CA 93950-2097 e-mail: roy.mendelss...@noaa.gov (Note new e-mail address) voice: (831)-648-9029 fax: (831)-648-8440 www: http://www.pfeg.noaa.gov/ Old age and treachery will overcome youth and skill. From those who have been given much, much will be expected the arc of the moral universe is long, but it bends toward justice -MLK Jr. __ R-help@r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
Re: [R] Fine control of plot
Hi and thank you for your answer. Sorry for the HTML post; here's the code (a line break was missing between +x and plot(...)):

layout(matrix(c(1,2), 1, 2, byrow = TRUE), widths=c(6,2), heights=c(1,1))
x = 1:100
y = rnorm(x)+x
plot(x,y)
reg = lm(y~x)
abline(reg, col = "red")
plot(1, type="n", axes=F, xlab="", ylab="", xlim = c(-1,1), ylim = c(min(y), max(x)))
segments(-0.25,min(reg$fitted.values),0.25,min(reg$fitted.values))
segments(-0.25,max(reg$fitted.values),0.25,max(reg$fitted.values))
segments(0,min(reg$fitted.values),0,max(reg$fitted.values))

I hope my question is more obvious after you run this example. Regards, Phil

Date: Tue, 12 Mar 2013 15:33:40 -0400 Subject: Re: [R] Fine control of plot From: sarah.gos...@gmail.com To: pmassico...@hotmail.com CC: r-help@r-project.org

Hi, You posted in HTML by mistake, so your code was mangled: [...] I figured out where the linebreaks go, but I can't run this: y = rnorm(x)+xplot(x,y) What's xplot() doing here? [...] Can you explain further what you're trying to do? Plot spacing is controlled with par() for base graphics, but I really don't understand what you're after. -- Sarah Goslee http://www.functionaldiversity.org
Re: [R] funtion equivalent of jitter to move figures on device
I'm very grateful! Thanks Roy. -Original Message- From: Roy Mendelssohn - NOAA Federal [mailto:roy.mendelss...@noaa.gov] Sent: March 12, 2013 12:49 PM To: Folkes, Michael Cc: r-help@r-project.org Subject: Re: [R] funtion equivalent of jitter to move figures on device Hi Michael: I am not totally certain what you are trying to do, but Hadley Wickham and a student have been working on a package that allows a variety of plots to be added to maps. Examples can be found in this paper in Environmetrics : Glyph-maps for visually exploring temporal patterns in climate data and models(pages 382-393) Hadley Wickham, Heike Hofmann, Charlotte Wickham and Dianne Cook Article first published online: 5 JUL 2012 | DOI: 10.1002/env.2152 I believe an earlier version is available at Wickham's web page. When I asked him about the packages he used to make the plots, he wrote: All code and data for that article is available at https://github.com/ggobi/paper-climate. My student Garrett has also been working on a generalisation of the glyph map ideas. His R package is at https://github.com/garrettgman/ggplyr and I've attached the corresponding paper (in submission to JCGS). I mention it mainly because the code might be in a form that's easier to reuse for your purposes. HTH, -Roy On Mar 12, 2013, at 12:29 PM, Folkes, Michael michael.fol...@dfo-mpo.gc.ca wrote: Thanks very much Peter, That may be a good start. Though I am aiming for automation and would prefer to avoid the locator function. I'll take a look... Michael -Original Message- From: Peter Ehlers [mailto:ehl...@ucalgary.ca] Sent: March 12, 2013 12:27 PM To: Folkes, Michael Cc: r-help@r-project.org Subject: Re: [R] funtion equivalent of jitter to move figures on device I would investigate the subplot() function in Greg Snow's TeachingDemos package, possibly in conjunction with his updateusr() function. For interactive placement of figures, it should be possible to use locator() to place each plot. 
Peter Ehlers __ R-help@r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code. ** The contents of this message do not reflect any position of the U.S. Government or NOAA. ** Roy Mendelssohn Supervisory Operations Research Analyst NOAA/NMFS Environmental Research Division Southwest Fisheries Science Center 1352 Lighthouse Avenue Pacific Grove, CA 93950-2097 e-mail: roy.mendelss...@noaa.gov (Note new e-mail address) voice: (831)-648-9029 fax: (831)-648-8440 www: http://www.pfeg.noaa.gov/ Old age and treachery will overcome youth and skill. From those who have been given much, much will be expected the arc of the moral universe is long, but it bends toward justice -MLK Jr. __ R-help@r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
Re: [R] Zoo Data
It's much easier to use xts' time-of-day subsetting:

library(xts)
dat1 <- as.xts(read.zoo(text="TIME, Value1, Value2
01.08.2011 02:30:00, 4.4, 4.7
01.09.2011 03:00:00, 4.2, 4.3
01.11.2011 01:00:00, 3.5, 4.3
01.12.2011 01:40:00, 3.4, 4.5
01.01.2012 02:00:00, 4.8, 5.3
01.02.2012 02:30:00, 4.9, 5.2
01.08.2012 02:30:00, 4.1, 4.7
01.12.2012 03:00:00, 4.7, 4.3
01.01.2013 01:00:00, 3, 4.3
01.01.2013 01:30:00, 3.8, 4.1
01.01.2013 02:00:00, 3.8, 4.3
01.01.2013 02:30:00, 3.9, 4.2
01.01.2013 03:00:00, 3.7, 4.5
01.01.2013 03:30:00, 3.5, 4.1
01.02.2013 02:00:00, 3.8, 4.3
02.02.2013 04:30:00, 3.9, 4.2", sep=",", header=TRUE, FUN=as.POSIXct, format="%d.%m.%Y %H:%M:%S"))
dat1["T02:30/T03:00"]

Best, -- Joshua Ulrich | about.me/joshuaulrich FOSS Trading | www.fosstrading.com R/Finance 2013: Applied Finance with R | www.RinFinance.com

On Sat, Mar 9, 2013 at 11:59 AM, arun smartpink...@yahoo.com wrote: HI Jakob, If your data is based on a 30 min interval, this should work:

dat1 <- read.table(text="TIME, Value1, Value2
01.08.2011 02:30:00, 4.4, 4.7
01.09.2011 03:00:00, 4.2, 4.3
01.11.2011 01:00:00, 3.5, 4.3
01.12.2011 01:40:00, 3.4, 4.5
01.01.2012 02:00:00, 4.8, 5.3
01.02.2012 02:30:00, 4.9, 5.2
01.08.2012 02:30:00, 4.1, 4.7
01.12.2012 03:00:00, 4.7, 4.3
01.01.2013 01:00:00, 3, 4.3
01.01.2013 01:30:00, 3.8, 4.1
01.01.2013 02:00:00, 3.8, 4.3
01.01.2013 02:30:00, 3.9, 4.2
01.01.2013 03:00:00, 3.7, 4.5
01.01.2013 03:30:00, 3.5, 4.1
01.02.2013 02:00:00, 3.8, 4.3
02.02.2013 04:30:00, 3.9, 4.2", sep=",", header=TRUE, stringsAsFactors=FALSE)
dat1$TIME <- as.POSIXct(dat1$TIME, format="%d.%m.%Y %H:%M:%S")
library(zoo)
z1 <- zoo(dat1[,-1], dat1[,1])
Vec <- format(seq(from=as.POSIXct("2012-01-01 02:30:00", format="%Y-%m-%d %H:%M:%S"), length.out=3, by="30 min"), "%H:%M:%S")
z1[format(time(z1), "%H:%M:%S") %in% Vec,]
#                     Value1 Value2
# 2011-08-01 02:30:00    4.4    4.7
# 2011-09-01 03:00:00    4.2    4.3
# 2012-02-01 02:30:00    4.9    5.2
# 2012-08-01 02:30:00    4.1    4.7
# 2012-12-01 03:00:00    4.7    4.3
# 2013-01-01 02:30:00    3.9    4.2
# 2013-01-01 03:00:00    3.7    4.5
# 2013-01-01
03:30:00    3.5    4.1

A.K.

- Original Message - From: Jakob Hahn jakob.h...@stud.hs-esslingen.de To: arun smartpink...@yahoo.com Sent: Saturday, March 9, 2013 10:04 AM Subject: Re: Zoo Data

Hi Arun,

z1[seq(which(time(z1)==as.POSIXct("**.**.2012 02:30:00", format="%d.%m.%Y %H:%M:%S")), which(time(z1)==as.POSIXct("**.**.2012 03:30:00", format="%d.%m.%Y %H:%M:%S"))),]

Is there a possibility to get all data, e.g., between two times, not depending on a particular date - all values between 2:30 and 3:30? Thanks Jakob

--- On 08.03.2013 at 22:05, arun wrote: Hi, Try this:

z1[seq(which(time(z1)==as.POSIXct("01.01.2012 02:00:00", format="%d.%m.%Y %H:%M:%S")), which(time(z1)==as.POSIXct("01.01.2013 03:00:00", format="%d.%m.%Y %H:%M:%S"))),]
#                     Value1 Value2
# 2012-01-01 02:00:00    4.8    5.3
# 2012-02-01 02:30:00    4.9    5.2
# 2012-08-01 02:30:00    4.1    4.7
# 2012-12-01 03:00:00    4.7    4.3
# 2013-01-01 01:00:00    3.0    4.3
# 2013-01-01 01:30:00    3.8    4.1
# 2013-01-01 02:00:00    3.8    4.3
# 2013-01-01 02:30:00    3.9    4.2
# 2013-01-01 03:00:00    3.7    4.5

A.K.

- Original Message - From: Jakob Hahn jakob.h...@gmail.com To: arun smartpink...@yahoo.com Sent: Friday, March 8, 2013 3:40 PM Subject: Re: Zoo Data

Okay, but that works only by line, not by date - if I don't know which line my date is ;-) Jakob

On 08.03.2013 at 21:32, arun wrote: HI Jakob, No problem. Your statement: "Is it also possible to plot from e.g. 01.01.2013 01:30:00 to 01.01.2013 02:00:00 - generally speaking, not the whole file?" So, I subset the whole dataset to show that you can do the plot for a specific subset of your data. Depending upon the number of labels and their spacing, you can change it: ix <- seq(1, length(tt), ) Arun

- Original Message - From: Jakob Hahn jakob.h...@gmail.com To: arun smartpink...@yahoo.com Sent: Friday, March 8, 2013 3:27 PM Subject: Re: Zoo Data

That's great, thank you very much!!!
Needed such a long time to find some material on this topic - I am a beginner but have to work with R for my master thesis, so time is always against me ;-) Have a nice day, regards Jakob PS: What do you mean by the subset z2?

--- On 08.03.2013 at 21:11, arun wrote: Hi Jakob,

dat1 <- read.table(text="TIME, Value1, Value2
01.08.2011 02:30:00, 4.4, 4.7
01.09.2011 03:00:00, 4.2, 4.3
01.11.2011 01:00:00, 3.5, 4.3
01.12.2011 01:40:00, 3.4, 4.5
01.01.2012 02:00:00, 4.8, 5.3
01.02.2012 02:30:00, 4.9, 5.2
01.08.2012 02:30:00, 4.1, 4.7
01.12.2012 03:00:00, 4.7, 4.3
01.01.2013 01:00:00, 3, 4.3
01.01.2013 01:30:00, 3.8, 4.1
01.01.2013 02:00:00, 3.8, 4.3
01.01.2013 02:30:00, 3.9, 4.2
01.01.2013 03:00:00, 3.7, 4.5
01.01.2013 03:30:00, 3.5, 4.1
01.02.2013 02:00:00, 3.8, 4.3", sep=",", header=TRUE, stringsAsFactors=FALSE)
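The idea behind arun's answer, stripped down to base R (made-up index and values): format the POSIXct index to its time-of-day string and match against the wanted times, independent of the date.

```r
tt <- as.POSIXct(c("2013-01-01 02:00:00", "2013-01-01 02:30:00",
                   "2013-01-01 03:00:00", "2013-02-02 04:30:00"))
vals <- c(3.8, 3.9, 3.7, 3.9)
keep <- format(tt, "%H:%M:%S") %in% c("02:30:00", "03:00:00")
vals[keep]   # c(3.9, 3.7): the 02:30 and 03:00 observations, whatever the date
```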
[R] Cook's distance
Dear useRs, I have some trouble with the calculation of Cook's distance in R. The formula for Cook's distance can be found, for example, here: http://en.wikipedia.org/wiki/Cook%27s_distance I tried to apply it in R:

y <- (1:400)^2
x <- 1:100
lm(y~x) -> linmod # just for the sake of a simple example
linmod$residuals[1]^2/(2*mean(linmod$residuals^2))*(hatvalues(linmod)[1]/(1-hatvalues(linmod)[1])^2)
1 0.02503195
cooks.distance(linmod)[1]
1 0.02490679

Why do the two results differ? Thanks a lot if somebody has some instructions for me. Best wishes: Kolos
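The usual source of this discrepancy is the variance estimate: cooks.distance() uses the unbiased s^2 = RSS/(n - p), whereas 2*mean(residuals^2) plugs in RSS/n. (Indeed, the two numbers above differ by exactly the factor 0.995 = 398/400 = (n - 2)/n for n = 400, the length of y.) A self-contained check on clean data:

```r
set.seed(1)
x <- 1:100
y <- x + rnorm(100)
fit <- lm(y ~ x)
e <- residuals(fit); h <- hatvalues(fit)
p <- length(coef(fit))              # 2 parameters (intercept and slope)
s2 <- sum(e^2) / df.residual(fit)   # RSS/(n - p), not mean(e^2) = RSS/n
D <- e^2 / (p * s2) * h / (1 - h)^2
all.equal(unname(D), unname(cooks.distance(fit)))   # TRUE
```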
[R] Extracting the knots of a natural cubic spline fit
Dear list members, I am trying to fit a natural cubic spline to my dataset using the ns function in the splines package. Specifically, I do:

library(splines)
lm(y ~ ns(x, df=3), data = data)

How do I extract the values of the interior knots of the fitted spline? Thanks, Rajat - Rajat Tayal Ph.D Research Scholar Indira Gandhi Institute of Development Research, Gen A.K. Vaidya Marg, Goregaon (East) Mumbai
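A sketch of one way to get at them: the interior knots are stored as an attribute of the basis object that ns() returns (for df = 3 with the default intercept = FALSE, ns() places two interior knots, at the 33.3% and 66.7% quantiles of x). The variable names here are illustrative:

```r
library(splines)
set.seed(1)
x <- seq(0, 10, length.out = 101)
y <- sin(x) + rnorm(101, sd = 0.1)
fit <- lm(y ~ ns(x, df = 3))
basis <- ns(x, df = 3)            # rebuild the basis used in the fit
attr(basis, "knots")              # interior knots
attr(basis, "Boundary.knots")     # boundary knots (the range of x)
```

The basis matrix stored in the model frame (fit$model) carries the same attributes, so they can also be recovered from the fitted object itself.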
Re: [R] Fine control of plot
Okay, so what you really want to do is be able to set a wide right margin and draw some segments there? Using layout() is not the best way to go about this: as you've discovered, you can't control the area assigned. You can cheat with layout(), as in: layout(matrix(c(1,1,1,2), nrow=1)) but the better way is to see xpd within ?par as described here: https://stat.ethz.ch/pipermail/r-help/2009-July/206311.html along with par()$mai to set the margins appropriately. Sarah On Tue, Mar 12, 2013 at 3:50 PM, philippe massicotte pmassico...@hotmail.com wrote: Hi and thank you for your answer. Sorry for the html post, here's the code: (you missed a break line between +x and plot(...) layout(matrix(c(1,2), 1, 2, byrow = TRUE), widths=c(6,2), heights=c(1,1)) x = 1:100 y = rnorm(x)+x plot(x,y) reg = lm(y~x) abline(reg, col = red) plot(1, type=n, axes=F, xlab=, ylab=, xlim = c(-1,1), ylim = c(min(y), max(x))) segments(-0.25,min(reg$fitted.values),0.25,min(reg$fitted.values)) segments(-0.25,max(reg$fitted.values),0.25,max(reg$fitted.values)) segments(0,min(reg$fitted.values),0,max(reg$fitted.values)) I hope my question is more obvious after you urn this example. Regards, Phil Date: Tue, 12 Mar 2013 15:33:40 -0400 Subject: Re: [R] Fine control of plot From: sarah.gos...@gmail.com To: pmassico...@hotmail.com CC: r-help@r-project.org Hi, You posted in HTML by mistake, so your code was mangled: I'm trying to create a graph where I could plot some lines on the right side. 
Here an example: layout(matrix(c(1,2), 1, 2, byrow = TRUE), widths=c(6,2), heights=c(1,1)) x = 1:100y = rnorm(x)+xplot(x,y) reg = lm(y~x)abline(reg, col = red) plot(1, type=n, axes=F, xlab=, ylab=, xlim = c(-1,1), ylim = c(min(y), max(x)))segments(-0.25,min(reg$fitted.values),0.25,min(reg$fitted.values))segments(-0.25,max(reg$fitted.values),0.25,max(reg$fitted.values))segments(0,min(reg$fitted.values),0,max(reg$fitted.values)) I figured out where the linebreaks go, but I can't run this: y = rnorm(x)+xplot(x,y) What's xplot() doing here? However, I can't figure out how to make it a bit nicer by removing extra space to the right. Can you explain further what you're trying to do? Plot spacing is controlled with par() for base graphics, but I really don't understand what you're after. -- Sarah Goslee http://www.functionaldiversity.org
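Sarah's margin-plus-xpd suggestion can be sketched as below. This is a minimal illustration of the idea, not code from the thread; the margin width of 10 lines and the offset are arbitrary choices.

```r
# One plot with a wide right margin, then draw segments past the plot
# region by disabling clipping with xpd = NA (see ?par).
x <- 1:100
y <- rnorm(x) + x
op <- par(mar = c(5, 4, 4, 10))          # wide right margin (10 lines)
plot(x, y)
reg <- lm(y ~ x)
abline(reg, col = "red")
par(xpd = NA)                            # allow drawing outside the plot region
u <- par("usr")                          # current plot-region limits
x0 <- u[2] + 0.05 * diff(u[1:2])         # a point just right of the plot
segments(x0, min(fitted(reg)), x0, max(fitted(reg)))
par(op)
```

This avoids the second dummy plot() panel entirely: the segment is drawn in margin space of the single real plot.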
Re: [R] how to convert a data.frame to tree structure object such as dendrogram
On Mar 12, 2013, at 9:37 AM, Not To Miss wrote: Thanks. Is there any more elegant solution? What if I don't know how many levels of nesting ahead of time? It's even worse than what you now offer as a potential complication. You did not provide an example of a data object that would illustrate the complexity of the task, nor what you consider the correct procedure (i.e. the order of the columns to be used for splitting), nor the correct results. The task is woefully underspecified at the moment. It's a bit akin to asking "how do I do classification" without saying what you want to classify. -- David. On Tue, Mar 12, 2013 at 8:51 AM, Greg Snow 538...@gmail.com wrote: You can use the lapply or rapply functions on the resulting list to break each piece into a list itself, then apply the lapply or rapply function to those resulting lists, ... On Mon, Mar 11, 2013 at 3:41 PM, Not To Miss not.to.m...@gmail.com wrote: Thanks. That's just a simple example - what if there are more columns and more rows? Is there any easy way to create a nested list? Best, Zech On Mon, Mar 11, 2013 at 2:12 PM, MacQueen, Don macque...@llnl.gov wrote: You will have to decide what R data structure is a tree structure. But maybe this will get you started:

foo <- data.frame(x=c('A','A','B','B'), y=c('Ab','Ac','Ba','Bd'))
split(foo$y, foo$x)
$A
[1] Ab Ac
$B
[1] Ba Bd

I suppose it is at least a little bit tree-like.
-- Don MacQueen Lawrence Livermore National Laboratory 7000 East Ave., L-627 Livermore, CA 94550 925-423-1062 On 3/10/13 9:19 PM, Not To Miss not.to.m...@gmail.com wrote: I have a data.frame object like:

data.frame(x=c('A','A','B','B'), y=c('Ab','Ac','Ba','Bd'))
  x  y
1 A Ab
2 A Ac
3 B Ba
4 B Bd

how could I create a tree structure object like this:

     |---Ab
 A---|
_|   |---Ac
 |
 |   |---Ba
 B---|
     |---Bd

Thanks, Zech -- Gregory (Greg) L. Snow Ph.D. 538...@gmail.com David Winsemius Alameda, CA, USA
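Greg's lapply/split idea generalizes to arbitrary nesting depth with a small recursive helper. `nest_split` below is a hypothetical function sketched for illustration, not part of any package mentioned in the thread:

```r
# Recursively split a vector of leaf values by a list of key columns,
# giving a nested list of arbitrary depth (the lapply/rapply idea above).
nest_split <- function(values, keys) {
  if (length(keys) == 0) return(values)      # no keys left: this is a leaf
  lapply(split(seq_along(values), keys[[1]]),
         function(idx) nest_split(values[idx], lapply(keys[-1], `[`, idx)))
}

foo <- data.frame(x = c('A', 'A', 'B', 'B'), y = c('Ab', 'Ac', 'Ba', 'Bd'))
str(nest_split(foo$y, list(foo$x)))
```

With more grouping columns, pass them all: nest_split(foo$y, list(foo$x, foo$z)) would nest two levels deep, so the caller never needs to know the depth in advance.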
Re: [R] Fine control of plot
xpd=TRUE might work well. I'll give it a try. Thank you for your assistance, Phil Date: Tue, 12 Mar 2013 16:07:05 -0400 Subject: Re: [R] Fine control of plot From: sarah.gos...@gmail.com To: pmassico...@hotmail.com CC: r-help@r-project.org [quoted message snipped -- it repeats Sarah's reply, reproduced in full earlier in this thread]
Re: [R] Troubleshooting code
On 2013-03-12 12:28, Rolf Turner wrote: Your code appears to be a load of dingos' kidneys, but in general when troubleshooting one can do worse than be guided by fortune("magnitude and direction"). cheers, Rolf Turner Cool. I didn't know dingos had kidneys. I thought they just ate kids. Peter Ehlers
Re: [R] Cook's distance
Dear Kolos, You've neglected to correct the MSE for df. Modifying your example, so that it actually runs (your original regression doesn't work -- the lengths of x and y differ):

y <- (1:100)^2
x <- 1:100
lm(y ~ x) -> linmod # just for the sake of a simple example
linmod$residuals[1]^2/(2*sum(linmod$residuals^2)/98)*(hatvalues(linmod)[1]/(1-hatvalues(linmod)[1])^2)
         1
0.09853436
cooks.distance(linmod)[1]
         1
0.09853436

I hope this helps, John --- John Fox Senator William McMaster Professor of Social Statistics Department of Sociology McMaster University Hamilton, Ontario, Canada -Original Message- From: r-help-boun...@r-project.org [mailto:r-help-boun...@r-project.org] On Behalf Of kolos.agos...@gmx.net Sent: Tuesday, March 12, 2013 3:55 PM To: r-help@r-project.org Subject: [R] Cook's distance Dear useRs, I have some trouble with the calculation of Cook's distance in R. The formula for Cook's distance can be found for example here: http://en.wikipedia.org/wiki/Cook%27s_distance I tried to apply it in R:

y <- (1:400)^2
x <- 1:100
lm(y ~ x) -> linmod # just for the sake of a simple example
linmod$residuals[1]^2/(2*mean(linmod$residuals^2))*(hatvalues(linmod)[1]/(1-hatvalues(linmod)[1])^2)
         1
0.02503195
cooks.distance(linmod)[1]
         1
0.02490679

Why do the two results differ? Thanks a lot if somebody has some instructions for me. Best wishes: Kolos
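John's correction can be written out in general form. The sketch below recomputes Cook's distance for one observation using the df-corrected MSE and checks it against cooks.distance(); the variable names are mine, not from the thread:

```r
# Cook's distance for observation i:  D_i = e_i^2 / (p * s^2) * h_i / (1 - h_i)^2
# where s^2 = RSS / (n - p), the df-corrected MSE -- NOT mean(e^2), which
# was the source of the discrepancy in the original post.
y <- (1:100)^2
x <- 1:100
linmod <- lm(y ~ x)
i <- 1
p  <- length(coef(linmod))                              # parameters, here 2
s2 <- sum(residuals(linmod)^2) / df.residual(linmod)    # RSS / (n - p) = RSS / 98
h  <- hatvalues(linmod)[i]
D  <- residuals(linmod)[i]^2 / (p * s2) * h / (1 - h)^2
all.equal(unname(D), unname(cooks.distance(linmod)[i]))   # TRUE
```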
Re: [R] Extracting the knots of a natural cubic spline fit
On Mar 12, 2013, at 2:59 PM, Rajat Tayal ra...@igidr.ac.in wrote: Dear list members, I am trying to fit a natural cubic spline to my dataset using the ns function in the splines package. Specifically, I do: library(splines); lm(y ~ ns(x, df = 3), data = data) How do I extract the values of the interior knots of the fitted spline? Thanks, Rajat Using the example from ?ns:

require(splines)
fm1 <- lm(weight ~ ns(height, df = 5), data = women)
attr(terms(fm1), "predvars")
list(weight, ns(height, knots = c(60.8, 63.6, 66.4, 69.2), Boundary.knots = c(58, 72), intercept = FALSE))

or directly on the data:

attr(ns(women$height, df = 5), "knots")
 20%  40%  60%  80%
60.8 63.6 66.4 69.2
attr(ns(women$height, df = 5), "Boundary.knots")
[1] 58 72

Regards, Marc Schwartz
Re: [R] loading data frames and rbind them
The only real improvement I can see over Ivan's solution is to use lapply instead of the loop (this may just be personal preference though). Something like:

list_df <- lapply(lista_rea_c, function(x) read.xls(file = paste0(path, x, "/", x, ".xls"), 1, header = TRUE, as.data.frame = TRUE))
my_df <- do.call(rbind, list_df)

You could even do that as a single line if you really wanted to, but it would be less readable. You could also make it a little more readable by putting the paste on its own line to create all the path/filenames in a variable to pass to lapply. On Tue, Mar 12, 2013 at 9:06 AM, Ivan Calandra ivan.calan...@u-bourgogne.fr wrote: Hi Mario! I'm not really familiar with this kind of manipulations, but I think you can do it more or less like this (some people on this list might give a more detailed answer):

# Create an empty named list
list_df <- vector(mode = "list", length = length(lista_rea_c))
names(list_df) <- lista_rea_c  ## or some part of it using gsub or something similar
# Import
for (i in lista_rea_c) {
  list_df[[i]] <- read.xlsx(...)
}
# rbind
do.call(rbind, list_df)

This probably won't work like this exactly but you should be able to make the modifications. HTH, Ivan -- Ivan CALANDRA Université de Bourgogne UMR CNRS/uB 6282 Biogéosciences 6 Boulevard Gabriel 21000 Dijon, FRANCE +33(0)3.80.39.63.06 ivan.calan...@u-bourgogne.fr http://biogeosciences.u-bourgogne.fr/calandra Le 12/03/13 15:52, A M Lavezzi a écrit : Hello everybody I have the following problem. I have to load a number of xls files from different folders (each xls file has the same number of columns, and different numbers of rows). Each xls file is named with a number, i.e. 12345.xls, and is contained in a folder with the same name, say 12345. Once loaded, I want to rbind all of them to obtain a single database.
I think I successfully did the first part, using assign:

for (i in lista_rea_c){
  name = paste("reaNum", i, sep = "", collapse = NULL)
  assign(name, read.xlsx(file = paste(path, i, "/", i, ".xls", sep = "", collapse = NULL), 1, header = TRUE, as.data.frame = TRUE))
}

where lista_rea_c contains the numbers and is obtained as lista_rea_c = list.files(path = /Users/mario/Dropbox/..., and path is defined elsewhere. At this point I have a number of variables, named reaNum12345 etc. I would like to rbind all of them, but what I get is that I rbind "reaNum12345", i.e. the variable name and not the data it contains. Can anyone help? thanks! Mario -- Gregory (Greg) L. Snow Ph.D. 538...@gmail.com
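The pattern Greg describes -- build a list with lapply, then collapse it with do.call(rbind, ...) -- is shown self-contained below, with throwaway CSV files standing in for the thread's Excel files; the file names and contents are invented for the example:

```r
# lapply over file paths -> list of data frames -> one data frame via rbind.
# Temporary CSVs stand in for the gdata::read.xls() calls in the thread.
files <- file.path(tempdir(), paste0("rea", 1:3, ".csv"))
for (f in files) write.csv(data.frame(a = 1:2, b = c("x", "y")), f, row.names = FALSE)

list_df <- lapply(files, read.csv)
my_df <- do.call(rbind, list_df)
nrow(my_df)   # 6 rows: 2 from each of the 3 files
```

This sidesteps Mario's assign() problem entirely: the data frames live in one list, so there is never a character vector of variable names to rbind by mistake.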
[R] Running 32 bits code from within R 64 bits
Dear R colleagues, Is there any code I can use that allows an R 64bits to automatically run code on an adjoining 32bits version? Specifically, I am using the RODBC package on a 64 bits version and have a separate installation of R 32bits. I would like to automate data extraction from Microsoft Access and Excel [for which RODBC seems to require R 32 bits] from within my 64 bits version. Thank you in advance, -- Nuno Prista Programa de Amostragem a Bordo (PNAB/DCF) Instituto Português do Mar e da Atmosfera, I.P. Telef: +351 21 302 70 00 (ext. 1385)
[R] Stretch the y axis in levelplot
Hi - levelplot (package lattice) assumes that the matrix used is more or less square. But if the matrix is e.g. 5x400 you get only a bar:

require(lattice)
# create a nice matrix
dat <- as.data.frame(matrix(runif(2000), ncol = 5))
dat$V1 <- c(sin(1:400/80))
dat$V2 <- c(sin(1:400/75))
dat$V3 <- c(sin(1:400/70))
dat$V4 <- c(sin(1:400/65))
dat$V5 <- c(sin(1:400/60))
matrix <- abs(as.matrix(dat))
# and perform levelplot on it
levelplot(matrix)

Is there a (simple) possibility to stretch the y axis? Dieter
[R] [R-pkgs] reports 0.1.2 released
I'm very pleased to announce the release of reports: an R package to assist in the workflow of writing academic articles and other reports. This is a bug fix release of reports: http://cran.r-project.org/web/packages/reports/index.html The reports package assists in writing reports and presentations by providing a framework that brings together existing R, LaTeX/.docx and Pandoc tools. The package is designed to be used with RStudio, MiKTeX/TeX Live/LibreOffice, knitr, knitcitations, Pandoc and pander. The user will want to download these free programs/packages to maximize the effectiveness of the reports package. Functions with two letter names are general text formatting functions for copying text from articles for inclusion as a citation. Github development version: https://github.com/trinker/reports As reports is further developed the following are planned: (a) a help video section and (b) a vignette detailing workflow and use of reports. Tyler Rinker ___ R-packages mailing list r-packa...@r-project.org https://stat.ethz.ch/mailman/listinfo/r-packages
Re: [R] Running 32 bits code from within R 64 bits
On Mar 12, 2013, at 3:18 PM, Nuno Prista nmpri...@ipma.pt wrote: Dear R colleagues, Is there any code I can use that allows an R 64bits to automatically run code on an adjoining 32bits version? Specifically, I am using the RODBC package on a 64 bits version and have a separate installation of R 32bits. I would like to automate data extraction from Microsoft Access and Excel [for which RODBC seems to require R 32 bits] from within my 64 bits version. Thank you in advance, The short answer is No. The longer answer is that R, the ODBC drivers and the target application have to be of the same architecture. Additional information is contained in various locations: 1. Last paragraph in R Windows FAQ 2.28: http://cran.r-project.org/bin/windows/base/rw-FAQ.html#Should-I-run-32_002dbit-or-64_002dbit-R_003f 2. Run vignette("RODBC") and then look at: A. Lower half of page 23 B. Footnote 15 on page 26 Regards, Marc Schwartz
Re: [R] funtion equivalent of jitter to move figures on device
In addition to the other suggestions that you have received, you may want to look at the spread.labs function in the TeachingDemos package and the spread.labels function in the plotrix package. These only spread in 1 dimension, but may be able to do what you want. The thigmophobe.labels function in plotrix may be of help as well. If none of these functions does what you want then looking at the code may help with some ideas. Preventing overlap in 2 dimensions automatically while keeping things close to a certain location is not a simple task. I have written functions that do well for a particular dataset, but then fail miserably on a different one. If there is only a little overlap and that can be solved by moving just one subplot, then an algorithm that may work is to start with the smallest polygon and put its plot on the map and store the coordinates that it covers, then try the next smallest polygon, and if it overlaps, move it a little in the direction away from the center of the one it overlaps with until they don't overlap; keep going in the order smallest to largest (since larger polygons should have more flexibility in where you can move things). Another option that sometimes works (and sometimes fails with strange results) is to create a function that for a given set of centers will calculate the degree of overlap and the distance from the centers to the desired locations and returns a weighted sum of these values. You then use that function with the optim function and it can give you locations for the centers of the subplots. Getting the weighting correct is difficult: weight the overlap too much and it will spread the plots out way too much; don't give it enough weight and it will still leave you with some overlap. On Tue, Mar 12, 2013 at 9:04 AM, Folkes, Michael michael.fol...@dfo-mpo.gc.ca wrote: hello all, I'm overlaying numerous scatter plots onto a map (done in PBSmodelling).
In this case I'm placing each plot by setting par(fig) to the centroid of map polygons. The location/mapping part is not so important. There are cases of small overlaps for some plots (i.e. figures), so I'm keen to write or find a function that moves my small scatter plots so they don't overlap. A little like jitter, but not random in behaviour: it needs to move away from the plots it's overlapping. thanks to all Michael ___ Michael Folkes Salmon Stock Assessment Canadian Dept. of Fisheries & Oceans Pacific Biological Station -- Gregory (Greg) L. Snow Ph.D. 538...@gmail.com
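Greg's optim() idea -- minimize distance-to-target plus a weighted overlap penalty -- can be sketched in one dimension. The target positions, label width, and penalty weight below are arbitrary choices for illustration, not from the thread:

```r
# Place labels near desired centers while penalizing pairwise overlap.
desired <- c(1.0, 1.1, 1.15, 3.0)   # target centers; the first three collide
width   <- 0.2                      # nominal label width

cost <- function(pos) {
  gaps    <- diff(sort(pos))                 # distances between neighbours
  overlap <- sum(pmax(0, width - gaps))      # total overlapped length
  sum((pos - desired)^2) + 50 * overlap      # the weighted sum Greg describes
}

fit <- optim(desired, cost)
round(fit$par, 2)   # centers nudged apart; tune the weight (50) to taste
```

As Greg warns, the weight is the hard part: too large and the labels scatter, too small and some overlap remains.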
Re: [R] Stretch the y axis in levelplot
On 2013-03-12 13:27, Dieter Wirz wrote: Hi - levelplot (package lattice) assumes that the matrix used is more or less square. But if the matrix is e.g. 5x400 you get only a bar:

require(lattice)
# create a nice matrix
dat <- as.data.frame(matrix(runif(2000), ncol = 5))
dat$V1 <- c(sin(1:400/80))
dat$V2 <- c(sin(1:400/75))
dat$V3 <- c(sin(1:400/70))
dat$V4 <- c(sin(1:400/65))
dat$V5 <- c(sin(1:400/60))
matrix <- abs(as.matrix(dat))
# and perform levelplot on it
levelplot(matrix)

Is there a (simple) possibility to stretch the y axis? Dieter I'm not sure that this is what you want, but you could set the argument 'aspect' to 'fill' instead of 'iso'. Peter Ehlers
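Peter's suggestion applied to the example, as a minimal sketch:

```r
# A 400 x 5 matrix; with aspect = "fill" the panel stretches to fill the
# plotting region instead of keeping data-space distances isometric.
library(lattice)
m <- abs(sapply(c(80, 75, 70, 65, 60), function(k) sin(1:400 / k)))
levelplot(m, aspect = "fill")
```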
Re: [R] funtion equivalent of jitter to move figures on device
Thanks very much Greg. 'Fail miserably' - sounds like what I was expecting. I appreciate the ideas and hadn't considered any of them. I was looking to draw a line between the centroid of one polygon and the centroid of the common area with the overlapping polygon, then move the polygon further along the axis of that line I've defined. Multiple overlapping polygons will be ugly. I'll report back if I have any success. thank you all Michael From: Greg Snow [mailto:538...@gmail.com] Sent: March 12, 2013 2:46 PM To: Folkes, Michael Cc: r-help Subject: Re: [R] funtion equivalent of jitter to move figures on device [quoted message snipped -- Greg's reply appears in full earlier in this thread]
[R] big edge list to adjacency matrix
I have a huge list of edges with weights:

a1 b1 w1
a2 b2 w2
a3 b3 w3
a1 b1 w4
a3 b1 w5

I have to convert it into a 2-dim matrix:

   b1          b2  b3
a1 max(w1,w4)  0   0
a2 0           w2  0
a3 w5          0   w3

If edges are repeated, take the maximum weight. How do I do this efficiently without using a for loop? Any idea? thanks Avi
[R] Add a continuous color ramp legend to a 3d scatter plot
I have a 3 column dataset x, y, z, and I plotted a 3d scatter plot using: cols <- myColorRamp(c(topo.colors(10)), z); plot3d(x = x, y = y, z = z, col = cols) I wanted to add a legend to the 3d plot showing the color ramp. Any help will be greatly appreciated! thanks, Z
Re: [R] big edge list to adjacency matrix
Hello, The following is a bit convoluted but will do it.

dat <- read.table(text = "
a1 b1 1
a2 b2 2
a3 b3 3
a1 b1 4
a3 b1 5
")
xtabs(V3 ~ V1 + V2, data = aggregate(V3 ~ V1 + V2, data = dat, FUN = max))

Hope this helps, Rui Barradas Em 12-03-2013 21:45, avinash sahu escreveu: I have a huge list of edges with weights:

a1 b1 w1
a2 b2 w2
a3 b3 w3
a1 b1 w4
a3 b1 w5

I have to convert it into a 2-dim matrix:

   b1          b2  b3
a1 max(w1,w4)  0   0
a2 0           w2  0
a3 w5          0   w3

If edges are repeated, take the maximum weight. How do I do this efficiently without using a for loop? Any idea? thanks Avi
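Rui's aggregate-then-xtabs pipeline can also be done in one step with tapply(), which cross-tabulates and takes the per-cell maximum directly. A base-R sketch, using the numeric weights from his example:

```r
# Max weight per (from, to) pair in one call; NA marks absent edges.
dat <- data.frame(from = c("a1", "a2", "a3", "a1", "a3"),
                  to   = c("b1", "b2", "b3", "b1", "b1"),
                  w    = c(1, 2, 3, 4, 5))
adj <- tapply(dat$w, list(dat$from, dat$to), max)
adj[is.na(adj)] <- 0        # absent edges get weight 0
adj
#    b1 b2 b3
# a1  4  0  0
# a2  0  2  0
# a3  5  0  3
```

Like xtabs and dcast, this is vectorized, so it avoids the for loop the original poster wanted to escape.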
[R] Export R generated tables and figures to MS Word
Dear Rxperts, I am aware of Sweave that generates reports into a pdf, but do not know of any tools to export to an MS Word document... Is there a way to use R to generate report/publication quality tables and figures and export them to MS Word (for reporting purposes)? Thanks so much, Santosh
[R] Determining maximum hourly slope per day
Hello, I have a challenge! I have a large dataset with three columns: date, temp, location. date is in the format %m/%d/%y %H:%M, with a temp recorded every 10 minutes. These are surface temperatures and so fluctuate during the day, heating up and then cooling down, so the data is a series of peaks and troughs. I would like to develop a function that would go through a dataset consisting of many sequential dates and determine for each day the maximum hourly slope of temp~date for each site (the fastest hourly rate of heating). The output would be the date, the maximum hourly slope for that date, and the location. It would also be great if I could extract when during the day the maximum hourly slope occurred. I have been playing around with using the package lubridate to identify each hour of the day, using something like this to create a separate column grouping the data into hours:

library(lubridate)
data$date2 <- floor_date(data$date, "hour")

I was then imagining something like this, though this code doesn't work as written:

ddply(data, .(location, date2), function(d) max(rollapply(slope(d$temp ~ d$date, data = d))))

Essentially what I'm imagining is calculating the slope (though I'd have to write a quick slope function) of the date/temp relationship, using rollapply to apply this function across the dataset, and determining the maximum slope, grouped by location and hour (using date2). Hmm... and per day! This seems complicated. Can others think of a simpler, more elegant means of extracting this type of data? I struggled to put together a working example with a set of data, but if this doesn't make sense let me know and I'll see what I can do. Thanks, Nate
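Nate's plan (group into hours, fit a slope per hour, take the daily maximum per location) can be sketched in base R without lubridate or plyr. The simulated data and column names below are invented for the example:

```r
# One day of 10-minute surface temperatures at one (hypothetical) site.
set.seed(1)
d <- data.frame(
  date = seq(as.POSIXct("2013-03-01 00:00", tz = "UTC"), by = "10 min", length.out = 144),
  temp = sin(seq(0, 2 * pi, length.out = 144)) + rnorm(144, sd = 0.05),
  location = "site1"
)
d$day  <- as.Date(d$date)
d$hour <- format(d$date, "%Y-%m-%d %H")     # hourly grouping key, like date2

# Slope of temp ~ time (degrees per hour) within each location/hour group.
hr <- do.call(rbind, by(d, list(d$location, d$hour), function(g) {
  data.frame(location = g$location[1], day = g$day[1], hour = g$hour[1],
             slope = coef(lm(temp ~ I(as.numeric(date) / 3600), data = g))[2])
}))

# Maximum hourly slope per location and day, plus the hour it occurred in.
best <- do.call(rbind, by(hr, list(hr$location, hr$day), function(g)
  g[which.max(g$slope), ]))
best
```

The hour column in `best` answers the "when during the day" part for free; with many locations and days, the same two by() calls just produce more rows.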
Re: [R] big edge list to adjacency matrix
Hi, You could also do:

library(reshape2)
dcast(dat, V1 ~ V2, value.var = "V3", max, fill = 0)
#   V1 b1 b2 b3
# 1 a1  4  0  0
# 2 a2  0  2  0
# 3 a3  5  0  3

A.K. - Original Message - From: Rui Barradas ruipbarra...@sapo.pt To: avinash sahu avinash.s...@gmail.com Cc: r-help@r-project.org Sent: Tuesday, March 12, 2013 7:10 PM Subject: Re: [R] big edge list to adjacency matrix Hello, The following is a bit convoluted but will do it.

dat <- read.table(text = "
a1 b1 1
a2 b2 2
a3 b3 3
a1 b1 4
a3 b1 5
")
xtabs(V3 ~ V1 + V2, data = aggregate(V3 ~ V1 + V2, data = dat, FUN = max))

Hope this helps, Rui Barradas Em 12-03-2013 21:45, avinash sahu escreveu: I have a huge list of edges with weights: a1 b1 w1 / a2 b2 w2 / a3 b3 w3 / a1 b1 w4 / a3 b1 w5. I have to convert it into a 2-dim matrix, taking the maximum weight for repeated edges. How do I do this efficiently without using a for loop? Any idea? thanks Avi
Re: [R] Export R generated tables and figures to MS Word
Probably not the answer you're looking for, but I only use LaTeX. On Mar 12, 2013, at 8:02 PM, Santosh wrote: Dear Rxperts, I am aware of Sweave, which generates reports as PDF, but I don't know of any tools to export to an MS Word document... Is there a way to use R to generate report/publication-quality tables and figures and export them to MS Word (for reporting purposes)? Thanks so much, Santosh
Re: [R] Export R generated tables and figures to MS Word
knitr markdown + pandoc gives serviceable results, for low enough expectations. --- Jeff Newmiller. Sent from my phone. Please excuse my brevity. Santosh santosh2...@gmail.com wrote: Dear Rxperts, is there a way to use R to generate report/publication-quality tables and figures and export them to MS Word (for reporting purposes)? Thanks so much, Santosh
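A minimal sketch of the knitr + pandoc route mentioned here (my own sketch, assuming pandoc is installed and on the PATH; the filename report.Rmd is hypothetical):

```r
library(knitr)

# Write a tiny R Markdown source; normally this would be your own .Rmd file
writeLines(c(
  "# Example report",
  "",
  "```{r}",
  "summary(cars)",
  "```"
), "report.Rmd")

knit("report.Rmd")                         # runs the R chunks, produces report.md
system("pandoc report.md -o report.docx")  # converts the markdown to MS Word
```

The resulting .docx carries tables and figures as embedded content; fine typographic control is more limited than with LaTeX/Sweave.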
Re: [R] Export R generated tables and figures to MS Word
You are describing SWord, distributed at rcom.univie.ac.at. Sent from my iPhone. On Mar 12, 2013, at 20:02, Santosh santosh2...@gmail.com wrote: Dear Rxperts, is there a way to use R to generate report/publication-quality tables and figures and export them to MS Word (for reporting purposes)? Thanks so much, Santosh
[R] holding argument(s) fixed within lapply
Hello, Given a function with several arguments, I would like to perform an lapply (or equivalent) while holding one or more arguments fixed at some common value, and I would like to do it in as elegant a fashion as possible, without resorting to writing a separate wrapper for the function if possible. Moreover, I would also like it to work in cases where one or more arguments of the original function has a default binding.

# Here is an example; the original function
f <- function(w, y, z){ w + y + z }
# common value I would like y to take
Y <- 5
# I have a list of arguments for the lapply()
mylist <- list(one = list(w = 1, z = 2),
               two = list(w = 3, z = 4))
# one way to do it involves a custom wrapper; I do not like this method
ret <- lapply(FUN = function(x, ...) f(w = x$w, z = x$z, ...),
              X = mylist,
              y = Y)
# another way
ret <- lapply(FUN = with.default,
              X = mylist,
              expr = f(w, y = Y, z))
# yet another way
ret <- lapply(FUN = eval,
              X = mylist,
              expr = substitute(f(w, y = Y, z)))
# now, the part I'm stuck on is for a version of f where z has a default binding
f2 <- function(w, y, z = 0){ w + y + z }
# the same as mylist, but now z is optional
mylist2 <- list(one = list(w = 1),
                two = list(w = 3, z = 4))
# undesired result (first element has length 0)
ret2 <- lapply(FUN = function(x, ...) f2(w = x$w, z = x$z, ...),
               X = mylist2,
               y = Y)
# errors out ('z' not found)
ret2 <- lapply(FUN = with.default,
               X = mylist2,
               expr = f2(w, y = Y, z))
# errors out again
ret2 <- lapply(FUN = eval,
               X = mylist2,
               expr = substitute(f2(w, y = Y, z)))
# not quite...
ret2 <- lapply(FUN = gtools::defmacro(y = Y, expr = f2(w, y = Y, z)),
               X = mylist2)

It seems like there are many ways to skin this cat; open to any and all guidance others care to offer.

Regards, Ben
Re: [R] holding argument(s) fixed within lapply
Apologies; resending in plain text... Given a function with several arguments, I would like to perform an lapply (or equivalent) while holding one or more arguments fixed at some common value, and I would like to do it in as elegant a fashion as possible, without resorting to writing a separate wrapper for the function if possible. Moreover, I would also like it to work in cases where one or more arguments of the original function has a default binding.

# Here is an example; the original function
f <- function(w, y, z){ w + y + z }
# common value I would like y to take
Y <- 5
# I have a list of arguments for the lapply()
mylist <- list(one = list(w = 1, z = 2),
               two = list(w = 3, z = 4))
# one way to do it involves a custom wrapper; I do not like this method
ret <- lapply(FUN = function(x, ...) f(w = x$w, z = x$z, ...),
              X = mylist,
              y = Y)
# another way
ret <- lapply(FUN = with.default,
              X = mylist,
              expr = f(w, y = Y, z))
# yet another way
ret <- lapply(FUN = eval,
              X = mylist,
              expr = substitute(f(w, y = Y, z)))
# now, the part I'm stuck on is for a version of f where z has a default binding
f2 <- function(w, y, z = 0){ w + y + z }
# the same as mylist, but now z is optional
mylist2 <- list(one = list(w = 1),
                two = list(w = 3, z = 4))
# undesired result (first element has length 0)
ret2 <- lapply(FUN = function(x, ...) f2(w = x$w, z = x$z, ...),
               X = mylist2,
               y = Y)
# errors out ('z' not found)
ret2 <- lapply(FUN = with.default,
               X = mylist2,
               expr = f2(w, y = Y, z))
# errors out again
ret2 <- lapply(FUN = eval,
               X = mylist2,
               expr = substitute(f2(w, y = Y, z)))
# not quite...
ret2 <- lapply(FUN = gtools::defmacro(y = Y, expr = f2(w, y = Y, z)),
               X = mylist2)

It seems like there are many ways to skin this cat; open to any and all guidance others care to offer.
Regards, Ben
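One approach the thread doesn't mention (my own suggestion, not from the original posts): build the argument list per element and dispatch with do.call, which naturally respects default bindings — arguments simply absent from the list fall back to the function's defaults:

```r
f2 <- function(w, y, z = 0) { w + y + z }
Y  <- 5
mylist2 <- list(one = list(w = 1),
                two = list(w = 3, z = 4))

# Merge the fixed argument(s) into each element's argument list;
# z is omitted where absent, so f2's default z = 0 applies.
ret2 <- lapply(mylist2, function(args) do.call(f2, c(args, list(y = Y))))
# ret2$one == 1 + 5 + 0 == 6;  ret2$two == 3 + 5 + 4 == 12
```

Several fixed arguments can be held at once by extending the `list(y = Y)` term, and no custom wrapper around f2 itself is needed.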
Re: [R] Troubleshooting code
On 12 Mar 2013, at 20:45, Peter Ehlers ehl...@ucalgary.ca wrote: Cool. I didn't know dingos had kidneys. I thought they just ate kids. That would be the Australian ones, at least allegedly. The canonical UK reference to the cited anatomical features is Adams (1979). S Ellison

References: Adams, D. (1979) The Hitchhiker's Guide to the Galaxy, Pan Books. ISBN 0-330-25864-8. See also http://hitchhikers.wikia.com/wiki/Babel_Fish, under 'philosophical'. Regrettably it won't help the OP.
[R] Change the Chinese/English fonts in the Lattice graphic package
Hi, I am graphing with the following command from the lattice and latticeExtra packages:

xyplot(xts, lty = c(1, 2), col = c("blue", "red"), type = c("l", "g"),
       par.settings = list(layout.heights = list(panel = c(2, 2))),
       aspect = "xy", xlab = "", ylab = "%", key = key1,
       screen = list("a", "a", "b", "b", "c", "c", "d", "d"),
       layout = c(2, 2), scales = list(x = "same", y = "same"))

where a, b, c, d contain English and Chinese characters. How can I modify the preceding call to change the fonts? (1) I find from the manual that font = 2 represents boldface, while 3 represents italics. Where should I add it? (2) The default Chinese typeface seems to be 細明體 (MingLiU). How can I change it to Kai-Shu (楷書)? Thanks, Miao
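A sketch of one way to set face and family through par.settings (my own sketch, not from the thread; the family name "KaiTi" is an assumption — substitute a Kai-style font actually installed on your system, and note that family-name mapping is device-dependent, e.g. via windowsFonts() on Windows):

```r
library(lattice)

kai <- "KaiTi"  # hypothetical font family name; adjust to your system

xyplot(1:10 ~ 1:10, type = "l",
       xlab = "x", ylab = "%",
       par.settings = list(
         # font = 2 is bold, font = 3 is italic, as in the base graphics manual
         axis.text     = list(fontfamily = kai, font = 2),
         par.xlab.text = list(fontfamily = kai),
         par.ylab.text = list(fontfamily = kai),
         add.text      = list(fontfamily = kai)   # strip/panel text
       ))
```

str(trellis.par.get()) lists all the settings groups that accept fontfamily/font, which is the reliable way to find the one controlling a given piece of text.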
[R] different color indicates difference magnitude
Hi all: Is there a plotting tool that uses different colors to indicate different magnitudes of the data? The plot is in the attachment. Many thanks.
[R] why predict() gives zero length prediction?
I am new to R and have started to play around with the naive Bayes algorithm. But I have run into a problem which I cannot figure out: for the following sample data, the predict function (using a naive Bayes model) always gives me zero length. Do you have any hints? Thanks. Here is the code:

titanic_small
  survived pclass
1        0      3
2        1      1
3        1      3
4        1      1
5        0      3
6        0      3

model <- naiveBayes(survived ~ ., data = titanic_small)
prediction <- predict(model, titanic_small[, -1])
length(prediction)
[1] 0
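A likely cause, offered as an educated guess since the thread contains no answer: e1071's naiveBayes expects the class variable to be a factor, and with a numeric 0/1 response the predict method can return a zero-length result. Converting the columns to factors is the usual fix:

```r
library(e1071)  # provides naiveBayes()

# Same toy data, but with the class label (and the predictor) as factors
titanic_small <- data.frame(
  survived = factor(c(0, 1, 1, 1, 0, 0)),
  pclass   = factor(c(3, 1, 3, 1, 3, 3))
)

model      <- naiveBayes(survived ~ ., data = titanic_small)
# drop = FALSE keeps the single remaining column as a data frame
prediction <- predict(model, titanic_small[, -1, drop = FALSE])
length(prediction)  # 6, one predicted class per row
```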
[R] saving vector output as numeric
Hi everybody, I'm trying to create a numerical data frame on which to perform PRCC. So far I have created a data frame of function/vector output that displays in numerical form, but when I try to run PRCC (from the epiR package) I get the following error message:

Error in solve.default(C) : Lapack routine dgesv: system is exactly singular

It appears this is because the data frame is passing character strings rather than the numerical data to R. An example of my original data frame minmaxfunc is as follows:

               min          max
T1        1.50e+01     3.54e+01
SE1       0.00e+00     1.00e+00
PRE       0.00e+00     1.00e+00
WET       0.00e+00     5.98e+00
BE1       4.664642e+00 5.866620e+00
Kappa1    5.50e+03     2.00e+04
Kappa3    1.00e+04     2.00e+04

Then I created a Latin hypercube set using qunif(x[,i], minmaxfunc$min[i], minmaxfunc$max[i]). The new data frame looks as follows:

        T1          SE1          PRE         WET      BE1    Kappa1   Kappa3
1 31.35590 0.7066388715 0.8665111432 4.965701530 5.783424 12240.019 12675.12
2 28.27640 0.5442730461 0.7000693454 3.181014435 5.183708 16626.566 10759.27
3 28.14695 0.6295741145 0.7818034368 2.262515130 4.670685 16930.360 13857.44
4 30.51873 0.3983581045 0.4026640041 2.730221171 5.058697 19546.625 14408.89
5 16.03162 0.0440886703 0.9954737808 1.002989298 5.310149 13188.279 19500.85
6 19.48413 0.4280443098 0.8500412067 1.668042962 5.068510 11742.748 18891.87
7 36.44783 0.5033961511 0.8249423312 5.582521574 4.722634  8738.121 16457.21
8 39.76318 0.8805976090 0.3430379347 4.876022801 4.787737 19873.134 18660.02
9 39.99782 0.4109272317 0.6606016486 0.191627831 5.625588 11086.803 13569.30

Each cell contains a vector that computes the numerical output, but how do I save the Latin hypercube sampling data frame in numerical form? I've tried as.numeric, but the error message is "(list) object cannot be coerced to type 'double'", regardless of whether I'm using as.numeric with write.table, read.table, or as.data.frame. Sorry if this is information-dense, and thanks for any help you are able to give. Aimee.
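A sketch of one likely fix, offered as a guess since the original objects aren't available: as.numeric() cannot coerce a whole list, but each list-column can be unlisted individually — assuming every cell holds a length-1 numeric vector, as described above:

```r
# A stand-in data frame whose columns are lists of length-1 numerics,
# mimicking the hypercube output described in the post (made-up values)
df <- data.frame(a = I(list(1.5, 2.5)), b = I(list(10, 20)))
str(df$a)  # list-column, so as.numeric(df) fails with the reported error

# Coerce column by column: flatten each list-column, then make it numeric
df[] <- lapply(df, function(col) as.numeric(unlist(col)))
sapply(df, is.numeric)  # both columns are now plain numeric vectors
```

After this conversion the data frame is an ordinary numeric matrix-like object, which is what PRCC routines expect.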
Re: [R] different color indicates difference magnitude
Hi, The attachment has been deleted. Please be more specific. Regards, Pascal. On 13/03/13 10:20, meng wrote: Hi all: Is there a plotting tool that uses different colors to indicate different magnitudes of the data? The plot is in the attachment. Many thanks.
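Since the attachment was dropped by the list, here is a generic sketch (my own, on made-up data) of the most common way to encode magnitude as color in base R — index a color ramp by the binned values:

```r
# Made-up data: color each point by the magnitude of y
set.seed(42)
x <- 1:100
y <- rnorm(100)

# Build a blue-to-red palette and index it by the binned magnitude
pal  <- colorRampPalette(c("blue", "white", "red"))(10)
bins <- cut(y, breaks = 10, labels = FALSE)
plot(x, y, col = pal[bins], pch = 19)

# For gridded data, image() (or lattice::levelplot) maps values to color directly
z <- outer(1:20, 1:20, function(i, j) sin(i / 3) * cos(j / 3))
image(z, col = pal)
```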
[R] image color analysis
I am not sure if I should ask this question on this list, but I'll try. Currently I am trying to analyze images using EBImage and biOps. One of the features that I need to extract from various images is the color spectrum, namely which colors each image consists of. So each image, hopefully, can be converted into some sort of color histogram so that the color ingredients are easily comparable with each other. These packages (and others) provide so many functionalities, and I am hoping that someone can give me some guidelines for the analysis. Any suggestions? Thanks. ishida
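A minimal sketch of a quantized color histogram, assuming the image has been decoded into an H x W x 3 RGB array with values in [0, 1] (e.g. via EBImage::readImage — check its documentation for the exact return layout); a random array stands in for a real image here:

```r
set.seed(7)
# Stand-in for a decoded image: 100 x 100 pixels, 3 channels in [0, 1]
img <- array(runif(100 * 100 * 3), dim = c(100, 100, 3))

# Quantize each channel into 4 levels -> 4^3 = 64 joint color bins
levels <- 4
q   <- function(ch) pmin(floor(ch * levels), levels - 1)
bin <- q(img[, , 1]) * levels^2 + q(img[, , 2]) * levels + q(img[, , 3])

# Color histogram: proportion of pixels in each of the 64 bins
hist64 <- tabulate(bin + 1, nbins = levels^3) / length(bin)
sum(hist64)  # 1
```

Two images reduced to such histograms can then be compared with a standard measure, e.g. histogram intersection sum(pmin(h1, h2)) or a chi-squared distance.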
[R] ADCP data processing in R
Hello R Users, I have ADCP (Acoustic Doppler Current Profiler) measurements for a river, and I want to process these data using R. Is there an R package to handle ADCP data? Any suggestions are highly appreciated. Thanks. Janesh