Re: [R] SAS or R software
Hello,

BXC (Bendix Carstensen) schrieb: Two major advantages of SAS that seem to have been overlooked in the previous replies are: 1) The SAS data-step language for data manipulation is more human-readable than R code in general. R is not a definite write-only language like APL, but in data manipulation in particular it is easy to write code that is impossible to decipher after a few weeks.

Not quite sure if this is a valid point! Mostly you'll have to comment your code, as is taught in any programming course. If your comments are clear, there shouldn't be any problem understanding your code after weeks, months or maybe longer...

Thomas

__ [EMAIL PROTECTED] mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html
[R] Help with ooplot(gplots) and error bars
Dear All,

I am trying to graph a proportion and 95% CI by a factor with ooplot (any other better solution?). It works well until I try to add the confidence interval. This is the error message and a description of the data:

> dat1
      PointEst
TT1 1      3.6
TT2 2      5.0
TT3 3      5.8
TT4 4     11.5
TT5 5      7.5
TT6 6      8.7
TT7 7     17.4
> dat2
      Lower
TT1 1   1.0
TT2 2   2.2
TT3 3   2.7
TT4 4   6.7
TT5 5   3.9
TT6 6   4.6
TT7 7  11.5
> dat3
      Upper
TT1 1  12.3
TT2 2  11.2
TT3 3  12.1
TT4 4  19.1
TT5 5  14.2
TT6 6  15.6
TT7 7  25.6

> ooplot(dat1, type="barplot", col=rich.colors(7, "temperature"),
+        names.arg=c("X","Y","Z","A","B","C","D"), plot.ci=TRUE,
+        ci.l=dat2, ci.u=dat3, xlab="Treatment",
+        ylab="Percent Normalized Patients")
Error in ooplot.default(dat1, type = "barplot", col = rich.colors(7, "temperature"), :
        'height' and 'ci.u' must have the same dimensions.

I have tried various ways of supplying ci.l and ci.u (including a vector).

Thanks for the help that anyone can bring. Regards, JL
[R] Two factor ANOVA in lme
I want to specify a two-factor model in lme, which should be easy? Here's what I have:

factor 1 - treatment: FIXED (two levels)
factor 2 - genotype: RANDOM (160 genotypes in total)

I need a model that tells me whether the treatment, genotype and interaction terms are significant. I have been reading 'Mixed-Effects Models in S' but in all the examples the random factor is not in the main model - it is a nesting factor etc. used to specify the error structure. Here I need the random factor in the model. I have tried this:

height.aov <- lme(height ~ trt*genotype, data.reps,
                  random = ~1|genotype, na.action = na.exclude)

but the output is nothing like that from Minitab (my only previous experience of stats software). The results for the interaction term are the same but the F values for the factors alone are very different between Minitab and R. This is a very simple model but I can't figure out how to specify it. Help would be much appreciated.

As background: the data are from a QTL mapping population, which is why I must test to see if genotype is significant and also why genotype is a random factor. Thanks
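For what it's worth, the classical two-factor mixed-model F ratios (treatment tested against the treatment-by-genotype mean square, which is what Minitab-style mixed ANOVA reports) can also be obtained in base R with error strata. A minimal sketch on simulated stand-in data; the names height, trt, genotype and data.reps come from the post, but the treatment levels, genotype count and numbers here are made up:

```r
## Hypothetical data in the shape described: 2 treatments x 20
## genotypes x 3 replicates (the real data set has 160 genotypes)
set.seed(1)
data.reps <- expand.grid(trt      = factor(c("A", "B")),
                         genotype = factor(paste0("g", 1:20)),
                         rep      = 1:3)
data.reps$height <- rnorm(nrow(data.reps), mean = 10)

## Error(genotype/trt) puts genotype and trt:genotype into their own
## error strata, so trt is tested against the trt-within-genotype MS,
## as in the classical mixed ANOVA with a random genotype factor
fit <- aov(height ~ trt + Error(genotype/trt), data = data.reps)
summary(fit)
```

The summary reports one table per stratum (genotype, genotype:trt, and the within-cell residual), so the F ratio for trt can be compared directly with Minitab's mixed-model output.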
Re: [R] Help with ooplot(gplots) and error bars
Jean-Louis Abitbol wrote: Dear All, I am trying to graph a proportion and 95% CI by a factor with ooplot (any other better solution?). It works well until I try to add the confidence interval. [...] I have tried various ways of supplying ci.l and ci.u (including a vector).

One way is to look at the examples for Dotplot in the Hmisc package. Those examples display bootstrap percentile confidence intervals for a mean.

-- Frank E Harrell Jr Professor and Chair School of Medicine Department of Biostatistics Vanderbilt University
[R] Analysis of pre-calculated frequency distribution?
Sorry for the dumb question, but I can't work out how to do this.

Quick version: how can I re-bin a given frequency distribution using new breaks, without reference to the original data? The given distribution has integer-valued bins.

Long version: I am loading a frequency table into R from a file. The original data is very large, and it is a very simple process to get a frequency distribution from an SQL database, so all in all this is a convenient method for me. Point being, I don't start with 'raw' data. The data looks like this...

> dat
    COUNT FREQUENCY
1       1      5734
2       2      1625
3       3       793
4       4       480
5       5       294
6       6       237
7       7       205
8       8       200
9       9       123
10     10       108
11     11        90
12     12        62
13     13        60
14     14        68
15     15        64
16     16        56
17     17        68
18     18        45
19     19        38
20     20        37
21     21        29
22     22        39
23     23        35
24     24        33
25     25        36
...
148   153         5
149   156         2
150   157         3
151   158         2
152   159         2
153   162         1
154   163         3
155   164         3
156   165         2
157   166         1
158   168         2
159   169         4
160   170         1
...
354  2106         1
355  2189         1
356  2194         1
357  2217         1
358  2246         1
359  2474         1
360  2801         1
361  3697         1
362  3702         1
363  7353         1
364  8738         1
365  9442         1
366 12280         1

This is a typical count/frequency distribution in biology, where low counts of a certain property are very frequent (across genomes, proteins, ecosystems, etc.), and high counts of a certain property are very rare. In the above example a certain property occurs 12280 times with a frequency of 1, and another property occurs 9442 times with the same frequency. At the other extreme, a certain property occurs once with a frequency of 5734, and another property occurs twice with a frequency of 1625. This kind of distribution is variously known as a Zipf, power-law, Pareto, scale-free, heavy-tailed or 80:20 distribution, or colloquially the dominance of the few over the many. The term I choose is a log-linear distribution, because that makes no assumptions about the underlying cause of the overall shape. People typically quote the curve in the form y ~ Cx^(-a). I want to use the binning method of parameter estimation given here...
http://www.ece.uc.edu/~annexste/Courses/cs690/Zipf,%20Power-law,%20Pareto%20-%20a%20ranking%20tutorial.htm (bin the data with exponentially increasing bin widths within the data range). But I can't work out how to re-bin my existing frequency data.

Sorry for the long question. All the best, Dan.
Re: [R] Help with ooplot(gplots) and error bars
Jean-Louis Abitbol wrote: Dear Prof. Harrell, Thanks for your help on this occasion as well as with previous questions to the list. I just looked at the example in your intro doc. However, I am dealing with proportions (i.e. % of patients responding to a given treatment). In this case I am not sure I can use summarize and then xYplot. I am not aware of R graphing tools that can deal directly with proportions, adding CIs, not to mention producing by-factor/trellis plots. This is why I was trying to do it by hand (using binconf) with ooplot, without much success I am afraid. Best regards, JL Abitbol, MD

Jean-Louis,

Here is an example.

# Plot proportions and their Wilson confidence limits
set.seed(3)
d <- expand.grid(continent=c('USA','Europe'), year=1999:2001, reps=1:100)
# Generate binary events from a population probability of 0.2
# of the event, same for all years and continents
d$y <- ifelse(runif(6*100) <= .2, 1, 0)
s <- with(d,
  summarize(y, llist(continent, year),
            function(y) {
              n <- sum(!is.na(y))
              s <- sum(y, na.rm=TRUE)
              binconf(s, n)
            }, type='matrix'))
Dotplot(year ~ Cbind(y) | continent, data=s,
        ylab='Year', xlab='Probability')

I did have to temporarily override a function in Hmisc to fix a problem.
This will be corrected in an upcoming release of Hmisc:

mApply <- function(X, INDEX, FUN=NULL, ..., simplify=TRUE) {
  ## Matrix tapply
  ## X: matrix with n rows; INDEX: vector or list of vectors of length n
  ## FUN: function to operate on submatrices of x by INDEX
  ## ...: arguments to FUN; simplify: see sapply
  ## Modification of code by Tony Plate [EMAIL PROTECTED] 10Oct02
  ## If FUN returns more than one number, mApply returns a matrix with
  ## rows corresponding to unique values of INDEX
  nr <- nrow(X)
  if(!length(nr)) {  ## X not a matrix
    r <- tapply(X, INDEX, FUN, ..., simplify=simplify)
    if(is.matrix(r)) r <- drop(t(r))
    else if(simplify && is.list(r))
      r <- drop(matrix(unlist(r), nrow=length(r),
                       dimnames=list(names(r), names(r[[1]])),
                       byrow=TRUE))
  } else {
    idx.list <- tapply(1:nr, INDEX, c)
    r <- sapply(idx.list,
                function(idx, x, fun, ...) fun(x[idx, , drop=FALSE], ...),
                x=X, fun=FUN, ..., simplify=simplify)
    if(simplify) r <- drop(t(r))
  }
  dn <- dimnames(r)
  if(length(dn) && !length(dn[[length(dn)]])) {
    fx <- FUN(X, ...)
    dnl <- if(length(names(fx))) names(fx) else dimnames(fx)[[2]]
    dn[[length(dn)]] <- dnl
    dimnames(r) <- dn
  }
  if(simplify && is.list(r) && is.array(r)) {
    ll <- sapply(r, length)
    maxl <- max(ll)
    empty <- (1:length(ll))[ll==0]
    for(i in empty) r[[i]] <- rep(NA, maxl)
    ## unlist does not keep place for NULL entries for nonexistent categories
    first.not.empty <- ((1:length(ll))[ll > 0])[1]
    nam <- names(r[[first.not.empty]])
    dr <- dim(r)
    r <- aperm(array(unlist(r), dim=c(maxl, dr),
                     dimnames=c(list(nam), dimnames(r))),
               c(1 + seq(length(dr)), 1))
  }
  r
}

Frank
Re: [R] Error with strwidth after lattice graphic drawn
Uwe Ligges wrote: Prof Brian Ripley wrote: On Sat, 20 Nov 2004, Frank E Harrell Jr wrote: In R 2.0.1 (2004-11-15) on i386-pc-linux-gnu, I'm getting an error when using strwidth after a lattice graphic is drawn:

library(lattice)
xyplot(runif(20) ~ runif(20))
strwidth('xxx')
Error in strwidth("xxx") : invalid graphics state

Any help appreciated. I have version 2.0.1 of grid and version 0.10-14 of lattice.

The advice is `don't do that'! strwidth() is a base graphics command, and will only work if a device is currently plotting base graphics. Lattice is built on grid, which has stringWidth().

... and convertWidth() is useful to display stuff afterwards in an interpretable way ... Uwe Ligges

Thanks Uwe and Brian. Before the latest versions, grid would let me use ordinary graphics functions to deal with the currently rendered plot after I did par(usr=c(0,1,0,1)), so I was lazy. I changed my code to use specific grid functions.

-- Frank E Harrell Jr Professor and Chair School of Medicine Department of Biostatistics Vanderbilt University
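To make the grid-based replacements concrete, here is a minimal sketch of stringWidth()/convertWidth(); the conversion step needs an open device (the temporary PDF file name here is arbitrary):

```r
library(grid)

## stringWidth() returns a unit object rather than a number
w <- stringWidth("xxx")
is.unit(w)   # TRUE

## convertWidth() turns it into an absolute size once a device is open
pdf(tempfile(fileext = ".pdf"))
convertWidth(w, "inches", valueOnly = TRUE)
dev.off()
```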
Re: [R] ERROR: installing package indices failed
Sigal Blay wrote: gregmisc is installed yet the problem persists. I installed gregmisc using

install.packages(c("combinat", "gregmisc", "genetics"), lib='/home/sblay/lib')

(on the same library path where I am trying to install LDheatmap).

Have you set the environment variable R_LIBS appropriately? Uwe Ligges

> installed.packages(lib='/home/sblay/lib')
          Package     LibPath           Version Priority Bundle
combinat  "combinat"  "/home/sblay/lib" "0.0-5" NA       NA
gdata     "gdata"     "/home/sblay/lib" "2.0.0" NA       "gregmisc"
genetics  "genetics"  "/home/sblay/lib" "1.1.1" NA       NA
gmodels   "gmodels"   "/home/sblay/lib" "2.0.0" NA       "gregmisc"
gplots    "gplots"    "/home/sblay/lib" "2.0.0" NA       "gregmisc"
gtools    "gtools"    "/home/sblay/lib" "2.0.0" NA       "gregmisc"
LDheatmap "LDheatmap" "/home/sblay/lib" "1.0"   NA       NA
...

I am developing a package named LDheatmap. It depends on the genetics package and includes two data files and a demo file. When I'm trying to install it, I get the following messages:

* Installing *source* package 'LDheatmap' ...
** R
** data
** demo
** help
 Building/Updating help pages for package 'LDheatmap'
     Formats: text html latex example
  LDheatmap    text html latex example
  ldheatmap    text html latex example
Error: object 'reorder' not found whilst loading namespace 'gdata'
Error: package 'gdata' could not be loaded
Execution halted
ERROR: installing package indices failed

Any ideas?

Yes. You do not have gdata (part of gregmisc) installed, and genetics depends on it. How did you get genetics installed? A binary install? Install gregmisc.
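On Uwe's R_LIBS question: a private library can be put on the search path either via the environment variable or from within R with .libPaths(). A small sketch; the path is the one from the post, and note that .libPaths() silently drops directories that do not exist:

```r
## In the shell, before starting R:
##   export R_LIBS=/home/sblay/lib

## Or inside R, for the current session only:
.libPaths("/home/sblay/lib")   # prepends the path (if the directory exists)
.libPaths()                    # inspect the current library search path
```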
Re: [R] How to change the significant codes default?
(Ted Harding) wrote: On 20-Nov-04 Uwe Ligges wrote: Shigeru Mase wrote: Dear R experts, I am posting this question on behalf of a Japanese R user who wants to know how to change the significance codes default. As you know, R's default significance codes are:

Signif. codes: 0 `***' 0.001 `**' 0.01 `*' 0.05 `.' 0.1 ` ' 1

But he says that it is usual in economics to give codes such as `***' for 0.01, `**' for 0.05 and `*' for 0.10. I don't know if this is true (common) or not, but what he and I are puzzled by is that, apparently, there is no part of the code, say that of summary.lm, which produces these significance codes or the annotation above. A quick search of rking using keywords "significant codes star" gave me no information. Thanks in advance.

For example, calling summary(lmObject) dispatches on method summary.lm(), which creates an object of class summary.lm. The latter is printed by method print.summary.lm(), which calls printCoefmat(). The stars are hard-coded there, and I don't think anybody is going to change that. I suggest turning off the printing of significance codes by specifying print(summary(.), signif.stars = FALSE) or by setting the corresponding option(). Uwe Ligges

It would be possible to re-define 'printCoefmat' privately so as to change the lines

cutpoints = c(0, 0.001, 0.01, 0.05, 0.1, 1),
symbols = c("***", "**", "*", ".", " ")

towards the end of its code into whatever you prefer, e.g.

cutpoints = c(0, 0.01, 0.05, 0.1, 1),
symbols = c("***", "**", "*", " ")

or

cutpoints = c(0, 0.001, 0.01, 0.05, 0.1, 1),
symbols = c("***", "***", "**", "*", " ")

(both of which are compatible with your description of what is needed). The most straightforward way of redefining it is to copy the code for 'printCoefmat' into a file, e.g.

sink("printCoefmat.R")
printCoefmat
sink()

and then edit that file. NOTE that the code written to the file does not include the name of the function, i.e. it starts

function (x, digits = max(3, getOption("digits") - 2),

so the first modification has to be

printCoefmat <- function (x, digits = max(3, getOption("digits") - 2),

Then, when you want your private version, simply do source("printCoefmat.R") and it will overlay the original version. (Experts will have to advise whether this clashes with any namespace issues. On my reading of the code, it doesn't seem to; but I'm no expert!)

Ted, it clashes! Functions in the namespace are looked up first. Uwe

If your friend wants to use this new definition all the time, then one way to arrange this is to put the revised function definition (as in the edited file) into his .Rprofile, or put the command source("printCoefmat.R") into that file.

Best wishes, Ted.

E-Mail: (Ted Harding) [EMAIL PROTECTED] Fax-to-email: +44 (0)870 094 0861 [NB: New number!] Date: 20-Nov-04 Time: 19:13:23 -- XFMail --
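Worth noting: the stars that printCoefmat() prints are generated by symnum(), so a candidate cutpoints/symbols pair can be previewed directly before editing anything. A small sketch of the economics-style coding discussed above, with p-values chosen to land in each interval:

```r
## One p-value per interval: (0,0.01], (0.01,0.05], (0.05,0.1], (0.1,1]
p <- c(0.005, 0.03, 0.07, 0.5)
symnum(p,
       cutpoints = c(0, 0.01, 0.05, 0.1, 1),
       symbols   = c("***", "**", "*", " "))
```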
Re: [R] SAS or R software
I can't resist dipping my oar in here. For me, some significant advantages of SAS are:

- Ability to input data in almost *any* conceivable form using the combination of features available through input/infile statements, SAS informats and formats, data step programming, etc. Dataset manipulation (merge, join, stack, subset, summarize, etc.) also favors SAS in more complex cases, IMHO.

- Output delivery system (ODS): *Every* piece of SAS output is an output object that can be captured as a dataset, rendered in RTF, LaTeX, HTML, PDF, etc. with a relatively simple mechanism (including graphs):

ods pdf file='mystuff.pdf';
any sas stuff
ods pdf close;

- If you think the output tables are ugly, it is not difficult to define a template for *any* output to display it the way you like.

- ODS Graphics (new in SAS 9.1) will extend much of this so that statistical procedures will produce many graphics themselves with ease.

One significant disadvantage for me is the difficulty of composing multipanel graphic displays (trellis graphics, linked micromaps, etc.) due to the lack of an overall, top-down graphics environment. As well, there are a variety of kinds of graphics I've found extraordinarily frustrating to try to do in SAS because of a lack of coherence or generality in the output available from procedures --- an example would be effect displays, such as implemented in R in the effects package.

I can't agree, however, with Frank Harrell that SAS produces 'the worst graphics in the statistical software world.' One can get ugly graphs in R almost as easily as in SAS just by accepting the 80-20 rule: you can typically get 80% of what you want with 20% of the effort. To get what you really want takes the remaining 80% of effort.
On the other hand, the active hard work of many R developers has led to R graphics for which the *default* results for many graphs avoid many of the egregious design errors introduced in SAS in the days of pen-plotters (+ signs for points, cross-hatching for area fill).

-Michael

-- Michael Friendly  Email: [EMAIL PROTECTED]
Professor, Psychology Dept., York University  Voice: 416 736-5115 x66249  Fax: 416 736-5814
4700 Keele Street  http://www.math.yorku.ca/SCS/friendly.html
Toronto, ONT M3J 1P3 CANADA
RE: [R] Analysis of pre-calculated frequency distribution?
On 21-Nov-04 Dan Bolser wrote: Sorry for the dumb question, but I can't work out how to do this. Quick version: how can I re-bin a given frequency distribution using new breaks without reference to the original data? [...] People typically quote the curve in the form y ~ Cx^(-a). I want to use the binning method of parameter estimation given here... http://www.ece.uc.edu/~annexste/Courses/cs690/Zipf,%20Power-law,%20Pareto%20-%20a%20ranking%20tutorial.htm (bin the data with exponentially increasing bin widths within the data range). But I can't work out how to re-bin my existing frequency data.

Hi Dan,

Your starting point can be the fact that the number of cases with property i (in class i) is COUNT_i * FREQUENCY_i. So if you construct a vector with these numbers in it, you have in effect reconstructed the original data, i.e.

N[i] <- COUNT[i]*FREQUENCY[i]

which can be done in one stroke with

N <- COUNT*FREQUENCY

One way (and maybe others can suggest better) to bin these classes non-uniformly could be: say you have k upper breakpoints for your k bins, say BP, so that e.g. if BP[1] = 2 then there are N[1]+N[2] cases with class <= 2, and if BP[2] = 5 then there are N[3]+N[4]+N[5] cases with class > 2 and class <= 5, and so on. In your case BP[k] = 366. Let

csN <- cumsum(N)

Then (if I've not overlooked something)

diff(c(0, csN[BP]))

will give you the counts in your new bins. E.g.
(just to show it should work):

N <- rep(1,31)
BP <- c(1,3,7,15,31)
csN <- cumsum(N)
diff(c(0, csN[BP]))
[1]  1  2  4  8 16

BP <- c(2,3,5,9,17,31)
diff(c(0, csN[BP]))
[1]  2  1  2  4  8 14

I hope this matches the sort of thing you have in mind!

Ted.

E-Mail: (Ted Harding) [EMAIL PROTECTED] Fax-to-email: +44 (0)870 094 0861 [NB: New number!] Date: 21-Nov-04 Time: 16:47:05 -- XFMail --
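Putting Ted's recipe together on the first sixteen rows of Dan's table, with breakpoints whose widths double (the breakpoints and the per-class weighting N <- COUNT*FREQUENCY follow Ted's message; the truncated table rows are from Dan's post):

```r
## First 16 rows of the posted count/frequency table
dat <- data.frame(COUNT     = 1:16,
                  FREQUENCY = c(5734, 1625, 793, 480, 294, 237, 205, 200,
                                123, 108, 90, 62, 60, 68, 64, 56))

N   <- dat$COUNT * dat$FREQUENCY   # cases per integer class, per Ted's recipe
csN <- cumsum(N)
BP  <- c(1, 2, 4, 8, 16)           # upper breakpoints: bin widths double
rebinned <- diff(c(0, csN[BP]))
rebinned
## [1] 5734 3250 4299 5927 7509
```

If instead you want the number of *properties* per bin (rather than underlying cases), the same recipe works with N <- dat$FREQUENCY.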
Re: [R] Help with ooplot(gplots) and error bars
On Sun, 2004-11-21 at 12:31 +0100, Jean-Louis Abitbol wrote: Dear All, I am trying to graph a proportion and 95% CI by a factor with ooplot (any other better solution?). It works well until I try to add the confidence interval. [...] I have tried various ways of supplying ci.l and ci.u (including a vector).

It appears that ooplot() is built upon barplot2() to an extent. When I wrote barplot2(), in the case of plotting CIs, it expects that the primary data structure ('data' in ooplot, 'height' in barplot2) have the same dimensions as 'ci.l' and 'ci.u'. Thus, for example:

barplot2(dat1[, 2], col = rich.colors(7, "temperature"),
         names.arg = c("X", "Y", "Z", "A", "B", "C", "D"),
         plot.ci = TRUE, ci.l = dat2[, 2], ci.u = dat3[, 2],
         xlab = "Treatment", ylab = "Percent Normalized Patients")

will work. Note that I am explicitly passing the requisite data vectors for 'height' and the CIs to the function. In the case of ooplot(), it appears to require that the 'data' argument have at least two columns, which requires that you pass 'dat1' as a two-dimensional structure and not dat1[, 2] as a vector. In this case, since ooplot is built upon barplot2, your call to ooplot fails when the check of the 'data' and 'ci.u' dimensional structure takes place.
The reason for the failure (even though all three of your data structures appear to be of the same shape initially) is that ooplot does:

if (by.row) data <- as.matrix(data) else data <- t(as.matrix(data))

which results in your dat1 being changed from a 7 x 2 matrix to a 2 x 7 matrix. So 'data' now looks like:

> data
         TT1 TT2 TT3  TT4 TT5 TT6  TT7
         1.0   2 3.0  4.0 5.0 6.0  7.0
PointEst 3.6   5 5.8 11.5 7.5 8.7 17.4

ooplot then does:

height <- data[-1, , drop = FALSE]

to create 'height', which is used later in the function, as it is in barplot2(). So 'height' now looks like:

> height
         TT1 TT2 TT3  TT4 TT5 TT6  TT7
PointEst 3.6   5 5.8 11.5 7.5 8.7 17.4

The actual checks in the ooplot code (taken verbatim from barplot2) that compare the dimensions of the 'height' argument and the CIs are:

if (any(dim(height) != dim(ci.u)))
    stop("'height' and 'ci.u' must have the same dimensions.")
else if (any(dim(height) != dim(ci.l)))
    stop("'height' and 'ci.l' must have the same dimensions.")

Due to the transformations above, however, 'height' is now a 1 x 7 matrix, whereas your dat2 and dat3 are 7 x 2 matrices. Hence the failure.

So, I suspect that Greg (whom I have cc:'d here) needs to look at the ooplot code to make similar transformations on the 'ci.l' and 'ci.u' arguments as he does on the 'data' argument to remove the error.

Short term, you have (at least) four options:

1. Use barplot2() as I note above.

2. Modify your call to ooplot() by using t() on the 'ci.l' and 'ci.u' arguments as follows:

ooplot(dat1, type = "barplot", col = rich.colors(7, "temperature"),
       names.arg = c("X", "Y", "Z", "A", "B", "C", "D"), plot.ci = TRUE,
       ci.l = t(dat2[, 2]), ci.u = t(dat3[, 2]),
       xlab = "Treatment", ylab = "Percent Normalized Patients")

3. As Frank has mentioned, you can use his Dotplot() function.

4. Similar to Dotplot() in a fashion is the plotCI() function, which is also in Greg's gplots package.
If you stay with the barplot type of graph, you should consider changing your colors, as the CI's are difficult to discern in the first two columns at least.

HTH,

Marc Schwartz
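The transposition Marc walks through can be reproduced with plain matrices, no gplots required; a minimal illustration using the posted point estimates:

```r
## dat1 as posted: an index column plus the PointEst column (7 x 2)
dat1 <- cbind(1:7, c(3.6, 5.0, 5.8, 11.5, 7.5, 8.7, 17.4))
rownames(dat1) <- paste0("TT", 1:7)

data   <- t(as.matrix(dat1))        # what ooplot does when by.row is FALSE: 2 x 7
height <- data[-1, , drop = FALSE]  # first row dropped: 1 x 7
dim(height)                          # 1 7
dim(dat1)                            # 7 2 -- hence dim(height) != dim(ci.u)
```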
Re: [R] SAS or R software
Michael Friendly wrote: I can't resist dipping my oar in here: For me, some significant advantages of SAS are - Ability to input data in almost *any* conceivable form using the combination of features available through input/infile statements, SAS informats and formats, data step programming, etc. Dataset manipulation (merge, join, stack, subset, summarize, etc.) also favors SAS in more complex cases, IMHO.

I respectfully disagree, Michael, in cases where the file sizes are not enormous. And in many cases repetitive SAS data manipulation can be done much more easily using vectors (e.g., vectors of recodes) and matrices in R, as shown in the examples in Alzola & Harrell (http://biostat.mc.vanderbilt.edu/twiki/pub/Main/RS/sintro.pdf). We are working on a project in which a client sends us 50 SAS datasets. Not only can we do all the complex data manipulations needed in R, but we can treat the 50 datasets as an array (actually a list()) to handle repetitive operations over many datasets.

- Output delivery system (ODS): *Every* piece of SAS output is an output object that can be captured as a dataset, rendered in RTF, LaTeX, HTML, PDF, etc. with a relatively simple mechanism (including graphs)

ods pdf file='mystuff.pdf';
any sas stuff
ods pdf close;

- If you think the output tables are ugly, it is not difficult to define a template for *any* output to display it the way you like.

I would like to see SAS ODS duplicate Table 10 in http://biostat.mc.vanderbilt.edu/twiki/pub/Main/StatReport/summary.pdf

Cheers, Frank

- ODS Graphics (new in SAS 9.1) will extend much of this so that statistical procedures will produce many graphics themselves with ease. One significant disadvantage for me is the difficulty of composing multipanel graphic displays (trellis graphics, linked micromaps, etc.) due to the lack of an overall, top-down graphics environment.
As well, there are a variety of kinds of graphics I've found extraordinarily frustrating to try to do in SAS because of a lack of coherence or generality in the output available from procedures --- an example would be effect displays, such as implemented in R in the effects package. I can't agree, however, with Frank Harrell that SAS produces 'the worst graphics in the statistical software world.' One can get ugly graphs in R almost as easily as in SAS just by accepting the 80-20 rule: you can typically get 80% of what you want with 20% of the effort. To get what you really want takes the remaining 80% of effort. On the other hand, the active hard work of many R developers has led to R graphics for which the *default* results for many graphs avoid many of the egregious design errors introduced in SAS in the days of pen-plotters (+ signs for points, cross-hatching for area fill). -Michael

-- Frank E Harrell Jr Professor and Chair School of Medicine Department of Biostatistics Vanderbilt University
[R] sas vs. R
SAS
* Better manuals.
* Tech support for most universities contracted into the price, and thus available to researchers.
* Batch orientation. If you have to handle data sets that are as large as your memory, SAS generally does it better. It seems to be an n-pass design. Years ago, when memory was expensive, I could not use S/R even for simple problems: just a few simple operations, and I was disk thrashing.
* All sorts of corporate-oriented database and ready-to-go application stuff, often not statistical in nature at all.

R
* Actually, I believe that perl---which can be used as an R or SAS backend---beats even weird SAS input statements in its flexibility. Though don't get me going on how crazy it is not to have in-code data set embedding.
* A real programming language and a real graphics language.
* Some stuff (e.g., built-in statistical procedures) is a bit overly complex; other stuff is so beautifully simple and intuitive that it borders on a miracle.
* Interactive design.

Both suffer from weird mysteries---magic incantations that gurus know, and ordinary people cannot easily find. And let me say---despite Prof. Brian Ripley's occasional grumpiness ( ;-) ), he and the rest of the core R group have done absolutely amazing things for the community, both in building the program and in helping support it on this forum. I wish some of the corporations or universities that are using SAS would fund the R group a little, too.

regards, /ivo

--- ivo welch, professor of finance and economics, brown / nber / yale
Re: [R] How to change the significant codes default?
On 21-Nov-04 Uwe Ligges wrote: (Ted Harding) wrote: [...] Then, when you want your private version, simply do source("printCoefmat.R") and it will overlay the original version. (Experts will have to advise whether this clashes with any namespace issues. On my reading of the code, it doesn't seem to; but I'm no expert!) Ted, it clashes! Functions in the namespace are looked up first. Uwe

Oops! Thanks, Uwe; now I know!

Cheers, Ted.

E-Mail: (Ted Harding) [EMAIL PROTECTED] Fax-to-email: +44 (0)870 094 0861 [NB: New number!] Date: 21-Nov-04 Time: 17:01:35 -- XFMail --
RE: [R] How to change the significant codes default?
From: Uwe Ligges

(Ted Harding) wrote:
On 20-Nov-04 Uwe Ligges wrote:
Shigeru Mase wrote:

Dear R experts, I am posting this question on behalf of a Japanese R user who wants to know how to change the significance codes default. As you know, R's default significance codes are:

Signif. codes: 0 `***' 0.001 `**' 0.01 `*' 0.05 `.' 0.1 ` ' 1

But he says that it is usual in economics to give codes such as `***' for 0.01, `**' for 0.05 and `*' for 0.10. I don't know if this is true (common) or not, but what puzzles both of us is that, apparently, there is no part in the code, say that of summary.lm, which produces these significance codes or the annotation above. A quick search using the keywords "significant codes star" gave me no information. Thanks in advance.

For example, calling summary(lmObject) dispatches on method summary.lm(), which creates an object of class summary.lm. The latter is printed by method print.summary.lm(), which calls printCoefmat(). The stars are hard-coded there, and I don't think anybody is going to change that. I suggest turning off the printing of significance codes by specifying print(summary(.), signif.stars = FALSE) or by setting the corresponding option().

Uwe Ligges

It would be possible to re-define 'printCoefmat' privately so as to change the lines

    cutpoints = c(0, 0.001, 0.01, 0.05, 0.1, 1),
    symbols = c("***", "**", "*", ".", " ")

towards the end of its code into whatever you prefer, e.g.

    cutpoints = c(0, 0.01, 0.05, 0.1, 1),
    symbols = c("***", "**", "*", " ")

or

    cutpoints = c(0, 0.001, 0.01, 0.05, 0.1, 1),
    symbols = c("***", "***", "**", "*", " ")

(both of which are compatible with your description of what is needed).

The most straightforward way of redefining it is to copy the code for 'printCoefmat' into a file, e.g.

    sink("printCoefmat.R")
    printCoefmat
    sink()

and then edit that file. NOTE that the code written to the file does not include the name of the function, i.e.
it starts

    function (x, digits = max(3, getOption("digits") - 2), ...

so the first modification has to be

    printCoefmat <- function(x, digits = ...)

Then, when you want your private version, simply do

    source("printCoefmat.R")

and it will overlay the original version. (Experts will have to advise whether this clashes with any namespace issues. On my reading of the code, it doesn't seem to; but I'm no expert!)

Ted, it clashes! Functions in the namespace are looked up first.

Uwe

As well, try to count the number of times people modified functions in base without renaming, then asked if there's a bug when that was forgotten, or when the behavior of the function changed in a newer version of R...

Andy

If your friend wants to use this new definition all the time, then one way to arrange this is to put the revised function definition (as in the edited file) into his .Rprofile, or put the command source("printCoefmat.R") into that file.

Best wishes, Ted.

E-Mail: (Ted Harding) [EMAIL PROTECTED]
Fax-to-email: +44 (0)870 094 0861 [NB: New number!]
Date: 20-Nov-04 Time: 19:13:23 -- XFMail --
Re: [R] sas vs. R
On Sun, 2004-11-21 at 09:15 -0800, [EMAIL PROTECTED] wrote: I wish some of the corporations or universities that are using SAS would fund the R group a little, too.

We do, via the R Foundation! http://www.r-project.org/nosvn/foundation/memberlist.html Talk to the folks at your institutions...

Marc Schwartz
No longer a SAS user
RE: [R] Analysis of pre-calculated frequency distribution?
On Sun, 21 Nov 2004 [EMAIL PROTECTED] wrote:
On 21-Nov-04 Dan Bolser wrote:

Sorry for the dumb question, but I can't work out how to do this. Quick version: how can I re-bin a given frequency distribution using new breaks, without reference to the original data? The given distribution has integer-valued bins.

Long version: I am loading a frequency table into R from a file. The original data is very large, and it is a very simple process to get a frequency distribution from an SQL database, so in all this is a convenient method for me. Point being, I don't start with 'raw' data. The data looks like this...

    dat
        COUNT FREQUENCY
    1       1      5734
    2       2      1625
    [...]
    365   365     94421
    366   366    122801
    [...]

People typically quote the curve in the form y ~ C*x^(-a). I want to use the binning method of parameter estimation given here... http://www.ece.uc.edu/~annexste/Courses/cs690/Zipf,%20Power-law,%20Pareto%20-%20a%20ranking%20tutorial.htm (bin the data with exponentially increasing bin widths within the data range). But I can't work out how to re-bin my existing frequency data.

Hi Dan, your starting point can be the fact that the number of cases with property i (in class i) is COUNT_i * FREQUENCY_i. So if you construct a vector with these numbers in it, you have in effect reconstructed the original data, i.e.

    N[i] <- COUNT[i]*FREQUENCY[i]

Cheers for this. I was trying this, but my results looked wrong with respect to the data shown on the webpage cited above. Thanks to James Holtman for the other suggestion - my confusion was coming from thinking I had to use hist(), but in fact cut() + tapply() was the ticket. Cheers, Dan.

which can be done in one stroke with

    N <- COUNT*FREQUENCY

One way (and maybe others can suggest better) to bin these classes non-uniformly could be: say you have k upper breakpoints for your k bins, say BP, so that e.g. if BP[1] = 2 then there are N[1]+N[2] cases with class <= 2, and if BP[2] = 5 then there are N[3] + N[4] + N[5] cases with class > 2 and class <= 5, and so on.
In your case BP[k] = 366. Let

    csN <- cumsum(N)

Then (if I've not overlooked something)

    diff(c(0, csN[BP]))

will give you the counts in your new bins. E.g. (just to show it should work):

    N <- rep(1, 31)
    BP <- c(1, 3, 7, 15, 31)
    csN <- cumsum(N)
    diff(c(0, csN[BP]))
    [1] 1 2 4 8 16
    BP <- c(2, 3, 5, 9, 17, 31)
    diff(c(0, csN[BP]))
    [1] 2 1 2 4 8 14

I hope this matches the sort of thing you have in mind! Ted.

E-Mail: (Ted Harding) [EMAIL PROTECTED]
Fax-to-email: +44 (0)870 094 0861 [NB: New number!]
Date: 21-Nov-04 Time: 16:47:05 -- XFMail --
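Dan mentions that cut() + tapply() was "the ticket"; as a hedged sketch of what that approach might look like (the column names COUNT and FREQUENCY and the toy numbers here are assumptions for illustration, not Dan's actual data), one can assign each class value to a bin with cut() and then sum the frequencies per bin with tapply(), never touching the raw data:

```r
## Sketch of re-binning a frequency table without the raw data.
## Assumed layout: COUNT = class value, FREQUENCY = number of cases.
dat <- data.frame(COUNT = 1:8,
                  FREQUENCY = c(5734, 1625, 800, 400, 200, 100, 50, 25))

breaks <- c(0, 2^(0:3))          # exponentially widening bins: (0,1] (1,2] (2,4] (4,8]
bin <- cut(dat$COUNT, breaks)    # which bin each class value falls into
tapply(dat$FREQUENCY, bin, sum)  # total frequency per new bin
```

The same breaks vector can then be widened to cover the full range (up to 366 in the original question) without any change to the two lines that do the work.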
Re: [R] How to change the significant codes default?
Uwe Ligges ligges at statistik.uni-dortmund.de writes:
: [quoted discussion snipped: Shigeru Mase's question about changing the
: default significance codes, Uwe's note that the stars are hard-coded in
: printCoefmat(), and Ted Harding's recipe for privately redefining
: printCoefmat via sink()/source()]
:
: Ted, it clashes! Functions in the namespace are looked up first.

True, but one can still get the effect by using assignInNamespace. For example, run these two lines (the body(...) <- line is just for illustration here; you want to ultimately replace that line with your redefined printCoefmat: printCoefmat <- function... as discussed by Ted.)

    body(printCoefmat) <- parse(text = "cat('Greetings from printCoefmat!!!')")
    assignInNamespace("printCoefmat", printCoefmat, "stats")

Now running summary.lm as shown below displays the desired Greetings line:

    R> example(lm)
    ...snip...
    R> summary(lm.D90)

    Call: lm(formula = weight ~ group - 1)

    Residuals:
        Min      1Q  Median      3Q     Max
    -1.0710 -0.4938  0.0685  0.2462  1.3690

    Coefficients:
    Greetings from printCoefmat!!!

    Residual standard error: 0.6964 on 18 degrees of freedom
    Multiple R-Squared: 0.9818, Adjusted R-squared: 0.9798
    F-statistic: 485.1 on 2 and 18 DF, p-value: < 2.2e-16
[R] Re: [R-sig-finance] Question about Exponential Weighted Moving Average (EWMA) in rmetrics.
With the next release of Rmetrics the help page will be extended in the following way:

\item{lambda}{ a numeric value between zero and one giving the decay length of the exponential moving average. If an integer value greater than one is given, lambda is used as a lag of n periods to calculate the decay parameter. }

Please note that you will find many more (still undocumented) indicators in the example file xmpTradingIndicators.R. These include:

#accelTA #adiTA #adoscillatorTA #bollingerTA #chaikinoTA #chaikinvTA #garmanKlassTA #macdTA #medpriceTA #momentumTA #nviTA #obvTA #pviTA #pvtrendTA #rocTA #rsiTA #stochasticTA #typicalPriceTA #wcloseTA #williamsadTA #williamsrTA

If you have written functions for further trading indicators, please let me know, so I can add them to Rmetrics.

Diethelm Wuertz

German G. Creamer wrote: Please disregard the previous message. I realized that in the emaTA equation, a lambda greater than one is used as a lag of n periods to calculate the decay parameter. A lambda less than one is used directly as the decay parameter. So, the functions are consistent. Thanks anyway, German

___
[EMAIL PROTECTED] mailing list
https://stat.ethz.ch/mailman/listinfo/r-sig-finance
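The lambda convention described above can be sketched generically. This is an illustrative stand-in, not the Rmetrics emaTA source: the conversion lambda = 2/(n + 1) used below for integer lags is the common EMA convention and is an assumption about what the help text means.

```r
## Illustrative EWMA sketch (NOT the Rmetrics implementation):
## a lambda in (0, 1) is used directly as the decay parameter;
## a lambda > 1 is treated as a lag of n periods and converted,
## here via the common convention lambda = 2/(n + 1) (an assumption).
ewma <- function(x, lambda) {
  if (lambda > 1) lambda <- 2 / (lambda + 1)  # interpret as n-period lag
  out <- numeric(length(x))
  out[1] <- x[1]
  for (i in 2:length(x))
    out[i] <- lambda * x[i] + (1 - lambda) * out[i - 1]
  out
}

ewma(c(1, 2, 3, 4), 0.5)  # decay parameter used directly
ewma(c(1, 2, 3, 4), 3)    # lag of 3 periods -> same decay, 2/(3+1) = 0.5
```

Under this convention the two calls are equivalent, which is exactly the consistency German observed between the two ways of supplying lambda.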
[R] Your mail have been blocked
Your mail have been blocked
[R] RE : Create sequence for dataset
Dear members, I want to create a sequence of numbers for the multiple records of individual animal in my dataset. The SAS code below will do the trick, but I want to learn to do it in R. Can anyone help?

    data htssn;
      set htssn;
      by anml_key;
      if first.anml_key then do;
        seq_ht_rslt=0;
      end;
      seq_ht_rslt+1;

Thanks in advance.
Stella

___
This message, including attachments, is confidential. If you are not the intended recipient, please contact us as soon as possible and then destroy the message. Do not copy, disclose or use the contents in any way. The recipient should check this email and any attachments for viruses and other defects. Livestock Improvement Corporation Limited and any of its subsidiaries and associates are not responsible for the consequences of any virus, data corruption, interception or unauthorised amendments to this email. Because of the many uncertainties of email transmission we cannot guarantee that a reply to this email will be received even if correctly sent. Unless specifically stated to the contrary, this email does not designate an information system for the purposes of section 11(a) of the New Zealand Electronic Transactions Act 2002.
Re: [R] Location of grobs etc on lattice output
I'm puzzled about side effects of trellis.unfocus(): the following runs without problem, though grid.text() does not seem to do anything. (I'd thought that I had it working at one point.)

    library(DAAG); library(lattice); library(grid)
    cuckoos.strip <- stripplot(species ~ length, xlab="", data=cuckoos)
    cuckoos.bw <- bwplot(species ~ length, xlab="Length of egg (mm)", data=cuckoos)
    vp0 <- viewport(layout=grid.layout(2, 1))
    pushViewport(vp0)
    vp1 <- viewport(layout.pos.row=1)
    vp2 <- viewport(layout.pos.row=2)
    pushViewport(vp1)
    print(cuckoos.strip, newpage=FALSE)
    # trellis.focus("panel", row=1, column=1, clip.off=TRUE)
    grid.text("A", x=unit(0,"native"), y=unit(1.05,"native"), gp=gpar(fontsize=9))
    # trellis.unfocus()
    ## remove the following upViewport()
    upViewport()
    pushViewport(vp2)
    print(cuckoos.bw, newpage=FALSE)
    trellis.focus("panel", row=1, column=1, clip.off=TRUE)
    grid.text("B", x=unit(0,"native"), y=unit(1.05,"native"), gp=gpar(fontsize=9))
    trellis.unfocus()

If I remove the #'s, and remove the upViewport() that follows the second #, I seem to lose the current tree, as though the newpage=FALSE for the next print() is ignored. Should I be able to do something like this? Clearly I do not understand what happens when trellis.focus() is invoked. This seems an area where an effective GUI, with a graphical display of the viewport tree, could be very helpful.

John Maindonald email: [EMAIL PROTECTED]
phone: +61 2 (6125)3473 fax: +61 2 (6125)5549
Centre for Bioinformation Science, Room 1194, John Dedman Mathematical Sciences Building (Building 27), Australian National University, Canberra ACT 0200.

On 21 Nov 2004, at 3:41 PM, Deepayan Sarkar wrote: On Saturday 20 November 2004 19:41, John Maindonald wrote: Is there any way, after use of print.trellis(), to obtain the co-ordinates of the plot region, e.g., in what are then the native co-ordinates? Have you read help(trellis.focus)?
This is new in 2.0.0 and the recommended API for interacting with lattice plots (you can of course use grid tools directly, but details are more likely to change at that level). It hasn't had much testing, so I would appreciate reports of things that should be doable easily but aren't. E.g.

    library(DAAG)
    library(lattice); library(grid)
    data(cuckoos)
    pushViewport(viewport(layout=grid.layout(2, 1)))
    pushViewport(viewport(layout.pos.row=1))
    cuckoos.strip <- stripplot(species ~ length, data=cuckoos)
    print(cuckoos.strip, newpage=FALSE)
    grid.text("A", x=unit(0.18,"native"), y=unit(0.925,"native"))
    # This works, but is fiddly, and needs rejigging if width
    # or fontsize are changed.
    popViewport(1)

An alternative would of course be to access the co-ordinate system used by the lattice function for locating the panels, or for locating labelling. As in the example above, I have been using grid.text() to position text outside the plot region, but closer to the top axis than the legend parameter to the lattice function will allow.

    trellis.focus("panel", row=1, column=1, clip.off=TRUE)

will put you in the plot region (panel), but will switch off clipping so you can write text outside. You can also now control the amount of space between the axis and legend; see

    str(trellis.par.get("layout.heights"))

Deepayan
Re: [R] adjusting the map of France to 1830
Date: Fri, 19 Nov 2004 15:59:25 -0500 From: Michael Friendly [EMAIL PROTECTED]

Here's what I tried. I can plot a selection of regions, but I can't seem to remove an arbitrary list of region numbers, unless I've done something wrong by selecting the regions I want to plot with departements[-exclude].

I think here the problem is not using exact=T in the call to map(), see below.

I also get an error when I try to use map.text to label a map with only the regions I'm selecting.

    departements <- map('france', namesonly=T, plot=FALSE)  # returns a vector of names of regions
    exclude <- c(47,             # Alpes-Maritimes
                 66,             # Haute-Savoie
                 76,             # Savoie
                 95,             # Territoire-de-Belfort
                 109, 110, 111,  # Var: Iles d'Hyeres
                 49, 53, 54, 55, # Morbihan: Isles
                 62, 64,         # Vendee: Isles
                 72, 75          # Charente-Maritime: Isles
                 )
    depts <- departements[-exclude]
    gfrance <- map('france', regions=depts)
    labels <- (as.character(1:length(departements)))[-exclude]
    gfrance <- map.text('france', regions=depts, add=FALSE, labels=labels)
    Error in map.text("france", regions = depts, add = FALSE, labels = labels) :
      map object must have polygons (fill=TRUE)

That error message is issued when regions= specifies less than the whole database, in which case the alternate list of 'x', 'y', and 'names' obtained from a previous call to 'map' is required as a first parameter. So to do what you want (notwithstanding the next problem you raise), try the following as a demonstration:

    gfrance <- map('france', regions=depts, exact=T, fill=T, plot=F)
    map.text(gfrance, regions=depts, labels=labels)
    map('france', regions=departements[exclude], fill=T, col=1, add=T)

Another problem, potentially more difficult for mapping data on the map of France, is that the departements are actually just the polygons in the map, arbitrarily numbered from east to west and from north to south --- they don't correspond to the 'official' administrative region numbers.
As well, the departement names don't always match exactly (ignoring accents, e.g., Val-d'Oise vs. Val-Doise), so it would be another challenge to plot my historical data on the map of France.

Well, maps is a source package! [:-)]. You are most welcome to modify the source files maps/src/france.{gon,line,name} to reorder the polygons (and correct errors in the names). If the relationship between those 3 files is not obvious, contact me for further details. Also, I am happy to fold your changes back into the original maps package.

Ray Brownrigg
Re: [R] RE : Create sequence for dataset
[EMAIL PROTECTED] writes:

Dear members, I want to create a sequence of numbers for the multiple records of individual animal in my dataset. The SAS code below will do the trick, but I want to learn to do it in R. Can anyone help?

    data htssn;
      set htssn;
      by anml_key;
      if first.anml_key then do;
        seq_ht_rslt=0;
      end;
      seq_ht_rslt+1;

Thanks in advance.

Whoa. Who just said that SAS data step code was clearer than R? Quite a bit of implicit knowledge in that one.

Here's one way (someone please think up a better name for ave()...):

    x <- numeric(nrow(airquality))
    ave(x, airquality$Month, FUN=function(z) seq(along=z))
      [1]  1  2  3  4  5  6  7  8  9 10 11 12 13 14 15 16 17 18
     [19] 19 20 21 22 23 24 25 26 27 28 29 30 31  1  2  3  4  5
     [37]  6  7  8  9 10 11 12 13 14 15 16 17 18 19 20 21 22 23
     [55] 24 25 26 27 28 29 30  1  2  3  4  5  6  7  8  9 10 11
     [73] 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29
     [91] 30 31  1  2  3  4  5  6  7  8  9 10 11 12 13 14 15 16
    [109] 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31  1  2  3
    [127]  4  5  6  7  8  9 10 11 12 13 14 15 16 17 18 19 20 21
    [145] 22 23 24 25 26 27 28 29 30

or, same basic idea but a little less cryptic:

    tb <- table(airquality$Month)
    l <- lapply(tb, function(x) seq(length=x))
    unsplit(l, airquality$Month)
      [1]  1  2  3  4  5  6  7  8  9 10 11 12 13 14 15 16 17 18
     [19] 19 20 21 22 23 24 25 26 27 28 29 30 31  1  2  3  4  5
    (etc.)

or, brute force and ignorance:

    x <- numeric(nrow(airquality))
    for (i in unique(airquality$Month)) {
      ix <- airquality$Month == i
      x[ix] <- seq(along=x[ix])
    }
    x
      [1]  1  2  3  4  5  6  7  8  9 10 11 12 13 14 15 16 17 18
     [19] 19 20 21 22 23 24 25 26 27 28 29 30 31  1  2  3  4  5
    (etc.)

or, going to the opposite extreme (Gabor et al. are going to try and beat me on this...):

    seq.factor <- function(f) ave(rep(1,length(f)), f, FUN=cumsum)
    seq(as.factor(airquality$Month))
      [1]  1  2  3  4  5  6  7  8  9 10 11 12 13 14 15 16 17 18
     [19] 19 20 21 22 23 24 25 26 27 28 29 30 31  1  2  3  4  5
    (etc.)

--
Peter Dalgaard, Dept. of Biostatistics, University of Copenhagen
Blegdamsvej 3, 2200 Cph. N, Denmark
Ph: (+45) 35327918 ([EMAIL PROTECTED]) FAX: (+45) 35327907
Re: [R] RE : Create sequence for dataset
I think this might do it.

    x.1 <- data.frame(x=sample(1:3,20,T), y=sample(10:12,20,T))  # create test data
    x.1  # print it out
       x  y
    1  2 11
    2  3 11
    3  2 10
    4  1 12
    5  3 11
    6  1 10
    7  3 10
    8  1 11
    9  1 12
    10 1 11
    11 1 12
    12 1 12
    13 2 11
    14 3 11
    15 3 10
    16 3 10
    17 2 12
    18 2 10
    19 3 11
    20 2 11
    # split the data by the numbers in 'x' (would be your 'anml_key')
    # and add a column containing the sequence number
    x.s <- by(x.1, x.1$x, function(x){x$seq <- seq(along=x$x); x})
    # the result in 'x.s' is a list and the rows have to be recombined (rbind) to form the result
    x.s  # print out the data
    x.1$x: 1
       x  y seq
    4  1 12   1
    6  1 10   2
    8  1 11   3
    9  1 12   4
    10 1 11   5
    11 1 12   6
    12 1 12   7
    x.1$x: 2
       x  y seq
    1  2 11   1
    3  2 10   2
    13 2 11   3
    17 2 12   4
    18 2 10   5
    20 2 11   6
    x.1$x: 3
       x  y seq
    2  3 11   1
    5  3 11   2
    7  3 10   3
    14 3 11   4
    15 3 10   5
    16 3 10   6
    19 3 11   7
    do.call('rbind', x.s)  # bind the rows and print out the result
         x  y seq
    1.4  1 12   1
    1.6  1 10   2
    1.8  1 11   3
    1.9  1 12   4
    1.10 1 11   5
    1.11 1 12   6
    1.12 1 12   7
    2.1  2 11   1
    2.3  2 10   2
    2.13 2 11   3
    2.17 2 12   4
    2.18 2 10   5
    2.20 2 11   6
    3.2  3 11   1
    3.5  3 11   2
    3.7  3 10   3
    3.14 3 11   4
    3.15 3 10   5
    3.16 3 10   6
    3.19 3 11   7

James Holtman -- What is the problem you are trying to solve?
Executive Technical Consultant -- Office of Technology, Convergys
[EMAIL PROTECTED] +1 (513) 723-2929

[quoted original message and corporate disclaimer snipped]
Re: [R] Re: 3d Map with bars
From: [EMAIL PROTECTED] Date: Fri, 19 Nov 2004 16:53:12 -0500

Thanks for the reply. I need to first draw the map of the USA as a perspective plot. I guess that's where my problem was.

Try something like this:

    library(maps)
    states <- map("state", plot=F)
    x1 <- rep(0, 3)
    x2 <- rep(0, 3)
    maxz <- 1
    z <- matrix(c(0, 0, 0, 0, 0, 0, maxz, maxz, maxz), 3, 3)
    x1[-2] <- states$range[1:2]
    x2[-2] <- states$range[3:4]
    x1[2] <- x1[3] - 1e-6
    x2[2] <- x2[3] - 1e-6
    pmat <- persp(x1, x2, z, xlab = "", ylab = "", zlab = "", axes = F,
                  theta=0, phi=20, d=10)
    lines(trans3d(states$x, states$y, 0, pmat))
    latitude <- -90
    longitude <- 37
    myz <- 0.6
    lines(trans3d(rep(latitude, 2), rep(longitude, 2), c(0, myz), pmat), col=2)

where trans3d is as defined in the examples for persp(). Hope this helps, Ray Brownrigg

Uwe Ligges [EMAIL PROTECTED] 11/19/2004 04:33 PM To: [EMAIL PROTECTED] cc: [EMAIL PROTECTED] Subject: Re: 3d Map with bars

[EMAIL PROTECTED] wrote: Apologies in advance for the question. I am trying to draw a map of the US as a surface plot so that I would be able to drop bars on the different states (something like Uwe Ligges' scatterplot3d example 4). I am not sure where to start looking for such a beast. If anyone has any pointers or ideas, I will be grateful. TIA, Partha

How to drop bars with persp() has been described on R-help yesterday or today; please check the mailing list's archives.
Re: [R] Location of grobs etc on lattice output
On Sunday 21 November 2004 16:35, John Maindonald wrote:

I'm puzzled about side effects of trellis.unfocus(): the following runs without problem, though grid.text() does not seem to do anything. (I'd thought that I had it working at one point.)

    library(DAAG); library(lattice); library(grid)
    cuckoos.strip <- stripplot(species ~ length, xlab="", data=cuckoos)
    cuckoos.bw <- bwplot(species ~ length, xlab="Length of egg (mm)", data=cuckoos)
    vp0 <- viewport(layout=grid.layout(2, 1))
    pushViewport(vp0)
    vp1 <- viewport(layout.pos.row=1)
    vp2 <- viewport(layout.pos.row=2)
    pushViewport(vp1)
    print(cuckoos.strip, newpage=FALSE)
    # trellis.focus("panel", row=1, column=1, clip.off=TRUE)
    grid.text("A", x=unit(0,"native"), y=unit(1.05,"native"), gp=gpar(fontsize=9))

I think you want "npc" rather than "native" here. x=0 on the native scale is outside the device area.

    # trellis.unfocus()
    ## remove the following upViewport()
    upViewport()
    pushViewport(vp2)
    print(cuckoos.bw, newpage=FALSE)
    trellis.focus("panel", row=1, column=1, clip.off=TRUE)
    grid.text("B", x=unit(0,"native"), y=unit(1.05,"native"), gp=gpar(fontsize=9))
    trellis.unfocus()

If I remove the #'s, and remove the upViewport() that follows the second #, I seem to lose the current tree, as though the newpage=FALSE for the next print() is ignored. Should I be able to do something like this? Clearly I do not understand what happens when trellis.focus() is invoked.

This is a bug in trellis.unfocus, caused by my not reading the grid documentation carefully enough; I didn't notice that upViewport(0) jumps to the root viewport instead of going up 0 viewports. I'll post an update soon.
Quick fix:

    assignInNamespace("trellis.unfocus", ns = "lattice",
                      value = function() {
                          if (lattice:::lattice.getStatus("vp.highlighted")) {
                              grid.remove("lvp.highlight", warn = FALSE)
                              lattice:::lattice.setStatus(vp.highlighted = FALSE)
                          }
                          lattice:::lattice.setStatus(current.focus.column = 0,
                                                      current.focus.row = 0)
                          if (lattice:::lattice.getStatus("vp.depth") > 0)
                              upViewport(lattice:::lattice.getStatus("vp.depth"))
                          lattice:::lattice.setStatus(vp.depth = 0)
                          invisible()
                      })

This seems an area where an effective GUI, with a graphical display of the viewport tree, could be very helpful.

True, but it may be overkill for the amount of use it would get.

Deepayan
[R] adjacent category model in ordinal regression
Hi: I want to analyze some multinomial data, and the response has a natural ordinal structure. I want to fit an adjacent-category model to the data with logit, probit and complementary log-log link functions. I found a package, VGAM (www.stat.auckland.ac.nz/~yee/VGAM/), whose function acat can fit the adjacent-category model, but it only has log and identity link functions. Does anybody know of a package or function that can do the analysis I want? Thank you!

liu
[R] Installing rgl in R2.0.1
I'm running R 2.0.1 under Solaris 2.9 on a SunBlade 100. When I installed it, I set things up to use the Sun compilers cc, CC, f95 with the options recommended in the installation and administration guide. Until today, no worries. With all this discussion about R GUIs I thought I'd give R Commander a go. The web page said to install a bunch of packages first, so I did

    install.packages(c("abind", "car", "effects", "lmtest", "multcomp",
                       "mvtnorm", "relimp", "rgl", "sandwich", "strucchange", "zoo"),
                     dependencies = TRUE)

Again, all went well up to a certain point. That point was rgl.

    * Installing *source* package 'rgl' ...
    checking build system type... sparc-sun-solaris2.9
    checking host system type... sparc-sun-solaris2.9
    checking for gcc... gcc
    checking for C compiler default output file name... a.out
    checking whether the C compiler works... yes
    checking whether we are cross compiling... no
    checking for suffix of executables...
    checking for suffix of object files... o
    checking whether we are using the GNU C compiler... yes
    checking whether gcc accepts -g... yes
    checking for gcc option to accept ANSI C... none needed
    checking how to run the C preprocessor... gcc -E
    checking for X... libraries /usr/openwin/lib, headers /usr/openwin/include
    checking for libpng-config... yes
    configure: creating ./config.status
    config.status: creating src/Makevars
    ** libs
    CC -I/users/local/lib/R/include -I/usr/openwin/include -DHAVE_PNG_H -I/usr/local/include -Wall -pedantic -fno-exceptions -fno-rtti -KPIC -xO4 -xlibmil -dalign -c x11lib.cpp -o x11lib.o
    CC: Warning: Option -Wall passed to ld, if ld is invoked, ignored otherwise
    CC: Warning: Option -pedantic passed to ld, if ld is invoked, ignored otherwise
    CC: Warning: Option -fno-exceptions passed to ld, if ld is invoked, ignored otherwise
    ...
    CC: Warning: Option -fno-rtti passed to ld, if ld is invoked, ignored otherwise
    CC -G -L/usr/local/lib -o rgl.so x11lib.o x11gui.o types.o math.o fps.o pixmap.o gui.o api.o device.o devicemanager.o rglview.o scene.o glgui.o -L/usr/openwin/lib -L/users/local/lib -R/users/local/lib -lpng12 -lz -lm -lstdc++ -lX11 -lXext -lGL -lGLU -lpng12 -lz -lm
    ld: fatal: library -lstdc++: not found
    ld: fatal: File processing errors. No output written to rgl.so
    *** Error code 1
    make: Fatal error: Command failed for target `rgl.so'
    ERROR: compilation failed for package 'rgl'

Previous packages figured out, from whatever information the R installation squirrelled away, that they should use f95 (not g77) and cc (not gcc), and provided sensible options. However, the rgl installation has decided to do its own configuration, and has decided to use gcc. That would probably work, except that it is mixing up the Sun C++ compiler (CC) with the Gnu command line options (-Wall -pedantic -fno-exceptions ...) AND the Sun command line options (-xO4 -xlibmil -dalign). All my attempts to follow the http://wsopuppenkiste.wiso.uni-goettingen.de/~dadler/rgl link on the rgl catalogue card at CRAN have failed.
- Did I do something wrong?
- What if anything can I do about it?
Re: [R] How to change the significant codes default?
Shigeru Mase [EMAIL PROTECTED] wrote: I am posting this question on behalf of a Japanese R user who wants to know how to change the significance codes default.

It's the line

symbols = c("***", "**", "*", ".", " ")

in printCoefmat(), isn't it? (summary.lm makes an object, it doesn't do any printing; getAnywhere(print.summary.lm) turns out to call printCoefmat(); look inside printCoefmat and there it is.)
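To see how those codes map to p-value cutpoints without touching printCoefmat() itself, one can call symnum() (the helper printCoefmat uses) directly — a sketch with invented p-values:

```r
# The significance codes come from symnum() with these cutpoints;
# the p-values below are made up for illustration.
p <- c(0.0005, 0.005, 0.02, 0.08, 0.5)
symnum(p, corr = FALSE, na = FALSE,
       cutpoints = c(0, 0.001, 0.01, 0.05, 0.1, 1),
       symbols = c("***", "**", "*", ".", " "))
# 0.0005 falls in (0, 0.001] and is coded "***"; 0.5 falls in (0.1, 1]
```

Changing the symbols= argument in a direct symnum() call is easy; changing what summary.lm prints by default would mean masking printCoefmat.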
Re: [R] Running R from CD?
Better to install and run R from a USB flash drive. This will save you the trouble of re-writing the CD as you upgrade and install new packages. Also, you can simply copy the R installation from your work computer (no install rights needed); R will run. HTH, b.

From: Hans van Walen hans_at_vanwalen.com Date: Fri 27 Aug 2004 - 23:54:53 EST
At work I have no permission to install R. So, would anyone know whether it is possible to create a CD with a running R installation for a Windows (XP) PC? And of course, how to? Thank you for your help, Hans van Walen
Re: [R] RE : Create sequence for dataset
[EMAIL PROTECTED] (Stella) asked: I want to create a sequence of numbers for the multiple records of individual animals in my dataset. The SAS code below will do the trick, but I want to learn to do it in R. Can anyone help?

data htssn;
set htssn;
by anml_key;
if first.anml_key then do;
seq_ht_rslt=0;
end;
seq_ht_rslt+1;

Someone was saying how readable SAS data steps were. I must say that as someone who has written code in more than 160 programming languages I find this _completely_ unreadable. (Is the initial value for seq_ht_rslt 0 or 1?) So I'm going to have to guess what was intended.

Suppose you have a data.frame ht_ssn and want to add a sequence number column to it. That's easy:

ht_ssn$seqno <- seq(length = nrow(ht_ssn))

Now suppose that there is an ht_ssn$anml_key column which says which individual animal each row corresponds to, and many rows may correspond to the same animal.

data_sequence_number <- function (data, column = "anml_key") {
    # Extract the key column.
    # If it is not already a factor, make it one.
    # From this factor, extract the level numbers.
    as.numeric(as.factor(data[[column]]))
}
ht_ssn$seq_ht_rslt <- data_sequence_number(ht_ssn)

Probably I have completely misunderstood the question. One thing which will be different is the actual numeric values. If I've understood the SAS version, it will assign numbers to keys in the order in which the keys are encountered, while the R code above will assign numbers to keys in increasing order of key. So if the input contains just Sammy then Jumbo, the SAS version might assign numbers 1, 2 while the R version would assign 2, 1. If this really matters, use

x <- data[[column]]
as.numeric(factor(x, levels = unique(x)))
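For concreteness, here is a small sketch of that ordering difference, using a made-up key vector (the animal names are the ones from the example above):

```r
# Hypothetical key column: "Sammy" is encountered first, then "Jumbo".
animals <- c("Sammy", "Sammy", "Jumbo", "Jumbo", "Jumbo")

# Numbering by sorted factor levels (the data_sequence_number approach):
# "Jumbo" sorts before "Sammy", so Sammy rows get 2 and Jumbo rows get 1.
as.numeric(as.factor(animals))

# Numbering by order of first appearance (matching the SAS behaviour,
# if I've read the data step right): Sammy rows 1, Jumbo rows 2.
as.numeric(factor(animals, levels = unique(animals)))
```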
[R] How to correct this
Hi there, I tried to add a few circles to an existing figure using the following code:

grid.circle(x=0.5, y=0.5, r=0.1, draw=TRUE, gp=gpar(col=5))
grid.circle(x=0.5, y=0.5, r=0.3, draw=TRUE, gp=gpar(col=5))
grid.circle(x=0.5, y=0.5, r=0.5, draw=TRUE, gp=gpar(col=5))
points(0.5, 0.5, col = 5) # centre of the circle

but all circles moved away from the centre. Could we do any corrections to this? Thanks.

Regards, Jin
==
Jin Li, PhD
Climate Impacts Modeller
CSIRO Sustainable Ecosystems
Atherton, QLD 4883 Australia
Ph: 61 7 4091 8802
Email: [EMAIL PROTECTED]
==
Re: [R] Installing rgl in R2.0.1
On Mon, 22 Nov 2004 13:10:59 +1300 (NZDT), Richard A. O'Keefe [EMAIL PROTECTED] wrote:

I'm running R 2.0.1 under Solaris 2.9 on a SunBlade 100. When I installed it, I set things up to use the Sun compilers cc, CC, f95 with the options recommended in the installation and administration guide. Until today, no worries. With all this discussion about R GUIs I thought I'd give R Commander a go. The web page said to install a bunch of packages first, so I did

install.packages(c("abind", "car", "effects", "lmtest", "multcomp",
+     "mvtnorm", "relimp", "rgl", "sandwich", "strucchange", "zoo"),
+     dependencies = TRUE)

Again, all went well up to a certain point. That point was rgl.

* Installing *source* package 'rgl' ...
checking build system type... sparc-sun-solaris2.9
checking host system type... sparc-sun-solaris2.9
checking for gcc... gcc
checking for C compiler default output file name... a.out
checking whether the C compiler works... yes
checking whether we are cross compiling... no
checking for suffix of executables...
checking for suffix of object files... o
checking whether we are using the GNU C compiler... yes
checking whether gcc accepts -g... yes
checking for gcc option to accept ANSI C... none needed
checking how to run the C preprocessor... gcc -E
checking for X... libraries /usr/openwin/lib, headers /usr/openwin/include
checking for libpng-config... yes
configure: creating ./config.status
config.status: creating src/Makevars
** libs
CC -I/users/local/lib/R/include -I/usr/openwin/include -DHAVE_PNG_H -I/usr/local/include -Wall -pedantic -fno-exceptions -fno-rtti -KPIC -xO4 -xlibmil -dalign -c x11lib.cpp -o x11lib.o
CC: Warning: Option -Wall passed to ld, if ld is invoked, ignored otherwise
CC: Warning: Option -pedantic passed to ld, if ld is invoked, ignored otherwise
CC: Warning: Option -fno-exceptions passed to ld, if ld is invoked, ignored otherwise
CC: Warning: Option -fno-rtti passed to ld, if ld is invoked, ignored otherwise
CC -G -L/usr/local/lib -o rgl.so x11lib.o x11gui.o types.o math.o fps.o pixmap.o gui.o api.o device.o devicemanager.o rglview.o scene.o glgui.o -L/usr/openwin/lib -L/users/local/lib -R/users/local/lib -lpng12 -lz -lm -lstdc++ -lX11 -lXext -lGL -lGLU -lpng12 -lz -lm
ld: fatal: library -lstdc++: not found
ld: fatal: File processing errors. No output written to rgl.so
*** Error code 1
make: Fatal error: Command failed for target `rgl.so'
ERROR: compilation failed for package 'rgl'

Previous packages figured out, from whatever information the R installation squirrelled away, that they should use f95 (not g77) and cc (not gcc), and provided sensible options. However, the rgl installation has decided to do its own configuration, and has decided to use gcc. That would probably work, except that it is mixing up the Sun C++ compiler (CC) with the GNU command-line options (-Wall -pedantic -fno-exceptions ...) AND the Sun command-line options (-xO4 -xlibmil -dalign). All my attempts to follow the http://wsopuppenkiste.wiso.uni-goettingen.de/~dadler/rgl link on the rgl catalogue card at CRAN have failed.
- Did I do something wrong?
- What, if anything, can I do about it?

I don't think you did anything wrong, but I don't know what you can do to fix it. uni-goettingen.de hasn't been responding for a few days.

Duncan Murdoch
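One avenue worth trying (my suggestion, not something confirmed in the thread): since rgl's configure picks its own compiler, you can try forcing a single consistent toolchain for package compilation through a personal Makevars file, which R's build machinery reads. A sketch only — the flag values below are assumptions for a GNU-only setup, not a tested Solaris 2.9 configuration:

```make
## ~/.R/Makevars -- sketch; values are assumptions for a GNU-only
## toolchain, not a tested Solaris configuration
CXX = g++
CXXFLAGS = -O2 -fPIC
```

The idea is that with g++ doing the C++ compile and link, -lstdc++ would be resolved from the GNU installation instead of being handed to a linker that cannot find it, and the Sun-specific flags (-KPIC -xO4 -xlibmil -dalign) would no longer be mixed in.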
[R] rhyp function from fBasics
Dear R People: There is a function from the fBasics library to get the probability and quantiles for the hyperbolic probability function. Is there one that will estimate parms of the hyperbolic probability function from a data set, please? Thanks in advance! Sincerely, Erin Hodgess mailto: [EMAIL PROTECTED] R Version 2.0.1 windows
Re: [R] rhyp function from fBasics
On Sun, 21 Nov 2004, Erin Hodgess wrote: Dear R People: There is a function from the fBasics library to get the probability and quantiles for the hyperbolic probability function. Is there one that will estimate parms of the hyperbolic probability function from a data set, please?

Look at the package HyperbolicDist.

David Scott
_
David Scott
Department of Statistics, Tamaki Campus
The University of Auckland, PB 92019
Auckland, NEW ZEALAND
Phone: +64 9 373 7599 ext 86830 Fax: +64 9 373 7000
Email: [EMAIL PROTECTED]
Graduate Officer, Department of Statistics
Re: [R] How to correct this
[EMAIL PROTECTED] wrote: Hi there, I tried to add a few circles to an existing figure using the following code: grid.circle(x=0.5, y=0.5, r=0.1, draw=TRUE, gp=gpar(col=5)); grid.circle(x=0.5, y=0.5, r=0.3, draw=TRUE, gp=gpar(col=5)); grid.circle(x=0.5, y=0.5, r=0.5, draw=TRUE, gp=gpar(col=5)); points(0.5, 0.5, col = 5) # centre of the circle, but all circles moved away from the centre. Could we do any corrections to this? Thanks. Regards, Jin

If you are using lattice (where grid is a more natural fit) then you can do the following:

library(lattice)
library(grid)
xyplot(0.5 ~ 0.5,
       panel = function(x, y, ...) {
           grid.circle(x, y, 0.5, default.units = "native")
           panel.xyplot(x, y, ...)
       },
       xlim = c(0, 1), ylim = c(0, 1))

I'm not sure how grid is supposed to behave in a non-trellis device.

--sundar
Re: [R] rhyp function from fBasics
Erin Hodgess wrote: Dear R People: There is a function from the fBasics library to get the probability and quantiles for the hyperbolic probability function. Is there one that will estimate parms of the hyperbolic probability function from a data set, please? Thanks in advance! Sincerely, Erin Hodgess mailto: [EMAIL PROTECTED] R Version 2.0.1 windows

Erin,

You can use MASS::fitdistr to do this, I believe.

library(fBasics)
library(MASS)
x <- rhyp(1000, alpha = 2, beta = 1, delta = 1)
fitdistr(x, dhyp, start = list(alpha = 1, beta = 0.5, delta = 0.5))

--sundar
[R] variable object naming
Is it possible to give a temporary object a name that varies with each run of a for loop? For example, I want to fill a matrix every time I run a loop, and I want a new matrix with each run, with an appropriate new name. i.e.:

for(i in 1:5){... matrix.i <- some values ...}

so that in the end I would have:

matrix.1
matrix.2
matrix.3
matrix.4
matrix.5

Thanks, Ben Osborne
--
Botany Department
University of Vermont
109 Carrigan Drive
Burlington, VT 05405
[EMAIL PROTECTED]
phone: 802-656-0297
fax: 802-656-0440
RE: [R] How to correct this
Hi there, I would like to add a few circles to the following image:

x <- seq(0, 1, 0.2)
y <- x
pred <- matrix(c(0.5, 0.5, 0.5, 0.5, 0.5, 0.5,
                 0.5, 0.7, 0.7, 0.7, 0.7, 0.5,
                 0.5, 0.7, 0.9, 0.9, 0.7, 0.5,
                 0.5, 0.7, 0.9, 0.9, 0.7, 0.5,
                 0.5, 0.7, 0.7, 0.7, 0.7, 0.5,
                 0.5, 0.5, 0.5, 0.5, 0.5, 0.5), 6, 6)
image(x, y, pred, col = gray(20:100/100), asp = 's', axes = FALSE, xlab = "", ylab = "")
points(0.5, 0.5, col = 5) # the centre of the image

The centre of these circles needs to overlap with the centre of the image. Any help is greatly appreciated. Regards, Jin

-Original Message-
From: Mulholland, Tom [mailto:[EMAIL PROTECTED]]
Sent: Monday, 22 November 2004 12:29 P
To: Li, Jin (CSE, Atherton)
Subject: RE: [R] How to correct this

I think you need to create a complete set of code that can be replicated by anyone trying to help. I ran the three grid.circle commands on my current plot and it did what I expected it to do: it plotted three circles centred in the current viewport. See the jpeg. The last command using points makes me think that you need to understand about units and the setting up of viewports. I have not played around with this much, but I think the newsletter had an article which may be of use (although it uses old code, I think the differences are minor). Ciao, Tom

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]]
Sent: Monday, 22 November 2004 10:07 AM
To: [EMAIL PROTECTED]
Subject: [R] How to correct this

Hi there, I tried to add a few circles to an existing figure using the following code:

grid.circle(x=0.5, y=0.5, r=0.1, draw=TRUE, gp=gpar(col=5))
grid.circle(x=0.5, y=0.5, r=0.3, draw=TRUE, gp=gpar(col=5))
grid.circle(x=0.5, y=0.5, r=0.5, draw=TRUE, gp=gpar(col=5))
points(0.5, 0.5, col = 5) # centre of the circle

but all circles moved away from the centre. Could we do any corrections to this? Thanks.

Regards, Jin
==
Jin Li, PhD
Climate Impacts Modeller
CSIRO Sustainable Ecosystems
Atherton, QLD 4883 Australia
Ph: 61 7 4091 8802
Email: [EMAIL PROTECTED]
==
Re: [R] variable object naming
Benjamin M. Osborne Benjamin.Osborne at uvm.edu writes:
:
: Is it possible to give a temporary object a name that varies with each run of a
: for loop? For example, I want to fill a matrix every time I run a loop, and I
: want a new matrix with each run, with an appropriate new name.
: i.e.:
: for(i in 1:5){... matrix.i <- some values ...}
:
: so that in the end I would have:
: matrix.1
: matrix.2
: matrix.3
: matrix.4
: matrix.5

See 7.1 of the FAQ.
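In case the FAQ pointer is too terse: the usual advice boils down to either constructing names with assign()/get(), or — often cleaner — keeping the matrices in a list indexed by i. A minimal sketch (the matrix contents are invented):

```r
# (a) A list, usually the cleaner choice: mats[[i]] is "matrix number i".
mats <- vector("list", 5)
for (i in 1:5) mats[[i]] <- matrix(i, nrow = 2, ncol = 2)
mats[[3]]          # the third matrix

# (b) Constructed names via assign()/get(), giving matrix.1 ... matrix.5:
for (i in 1:5) assign(paste("matrix", i, sep = "."), matrix(i, 2, 2))
get("matrix.4")    # the fourth matrix
```

The list version keeps all five objects together, so later loops can iterate over them with lapply() instead of building names again.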
RE: [R] How to correct this
Taking note of the first post, this is what I assume you wish. Note Paul's caveat in the help file: if you resize the device, all bets are off!

require(gridBase)
x <- seq(0, 1, 0.2)
y <- x
pred <- matrix(c(0.5, 0.5, 0.5, 0.5, 0.5, 0.5,
                 0.5, 0.7, 0.7, 0.7, 0.7, 0.5,
                 0.5, 0.7, 0.9, 0.9, 0.7, 0.5,
                 0.5, 0.7, 0.9, 0.9, 0.7, 0.5,
                 0.5, 0.7, 0.7, 0.7, 0.7, 0.5,
                 0.5, 0.5, 0.5, 0.5, 0.5, 0.5), 6, 6)
image(x, y, pred, col = gray(20:100/100), asp = 's', axes = FALSE, xlab = "", ylab = "")
points(0.5, 0.5, col = 5) # the centre of the image
vps <- baseViewports()
pushViewport(vps$plot)
grid.circle(x=0.5, y=0.5, r=0.1, draw=TRUE, gp=gpar(col=5))
grid.circle(x=0.5, y=0.5, r=0.3, draw=TRUE, gp=gpar(col=5))
grid.circle(x=0.5, y=0.5, r=0.5, draw=TRUE, gp=gpar(col=5))

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]]
Sent: Monday, 22 November 2004 1:21 PM
To: [EMAIL PROTECTED]
Subject: RE: [R] How to correct this

Hi there, I would like to add a few circles to the image built with image(x, y, pred, ...) and points(0.5, 0.5, col = 5) as above. The centre of these circles needs to overlap with the centre of the image. Any help is greatly appreciated. Regards, Jin