Re: [R] Help with map
On Sat, 5 May 2007, Alberto Vieira Ferreira Monteiro wrote:

[for those who worry about these things, this _is_ a homework assignment. However, it's not an R homework, it's a Geography and History homework... and I want to use R to create a pretty map]

Roger Bivand wrote:

Is there any way to associate one colour to each country?

Try:

map_poly_obj <- map("worldHires", c("Argentina", "Brazil"), plot=FALSE, fill=TRUE)
str(map_poly_obj)

and you'll see that the component of interest is the named polygons, of which there are 28, namely

Ok, I guess I can see what you mean. It worked, but I don't think this is a practical way to draw things. For example, suppose [this would help the homework mentioned above] I want to draw a series of maps showing the evolution of Communism in the XX century. I would like to start with a 1917 map, showing most countries as in...

map("worldHires")

... but with the Soviet Union in red. I don't see how I could mix the two maps (BTW, there's no Russia in worldHires, but there is a USSR...)

map("worldHires"); map("worldHires", "USSR", col="red", fill=TRUE)

[Please note that the worldHires database refers to a particular time cross-section, probably late 1980s. The territorial extents of the former USSR in 1919, 1920, 1939, 1940, 1941, 1944, 1945, etc., etc. are not the same; the same consideration would apply to the PRC's actual control over Tibet. So to do this, you need a sequence of maps showing the marginal increments, with 1917 actually only colouring Petrograd/St Petersburg and perhaps some other cities. I'm not aware of any publicly available sequence of boundary files adequately representing the situation of, say, the Baltic states or Finland for the 1917-2007 period; if anyone has a suitable link, please say so. Geographical data are vintaged: not just where, but where when. Was, for example, Estonia occupied by the USSR 1940-1941 and 1944-1991, or was it part of the USSR for the purposes of this exercise?
Using disputed boundaries implies a choice of point of view, one that may not be intended.]

Roger

map_poly_obj$names

So you can build a matching colours vector, or:

library(sp)
library(maptools)
IDs <- sapply(strsplit(map_poly_obj$names, ":"), function(x) x[1])
SP_AB <- map2SpatialPolygons(map_poly_obj, IDs=IDs,
                             proj4string=CRS("+proj=longlat +datum=WGS84"))

but

plot(SP_AB, col=c("cyan", "green"))

still misses, because some polygons have their rings of coordinates in counter-clockwise order, so:

pl_new <- lapply(slot(SP_AB, "polygons"), checkPolygonsHoles)
slot(SP_AB, "polygons") <- pl_new
# please forget the assignment to the slot and do not do it unless you can
# replace what was there before

plot(SP_AB, col=c("cyan", "green"), axes=TRUE)

now works. Moreover, SP_AB is a SpatialPolygons object, which can be promoted to a SpatialPolygonsDataFrame object, for a data slot holding a data.frame with row names matching the Polygons ID values:

sapply(slot(SP_AB, "polygons"), function(x) slot(x, "ID"))

So adding a suitable data frame gets you to the lattice graphics method:

spplot(SP_AB, "my_var")

Hope this helps,

So, in the above-mentioned case, I could do something like:

library(mapdata)
commies <- c("USSR", "Mongolia")  # Mongolia was the 2nd communist country, in 1925
map_poly_obj <- map("worldHires", plot=FALSE)
map_poly_commies <- map("worldHires", commies, plot=FALSE, fill=TRUE)
plot(map_poly_obj, type="l")
polygon(map_poly_commies, col="red", border="black")

I guess I can keep going, unless there is a simpler solution.

Alberto Monteiro

--
Roger Bivand
Economic Geography Section, Department of Economics, Norwegian School of Economics and Business Administration, Helleveien 30, N-5045 Bergen, Norway.
voice: +47 55 95 93 55; fax +47 55 95 95 43
e-mail: [EMAIL PROTECTED]

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
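[Editor's aside, not from the thread: putting the two map() calls together, the usual idiom for overlaying the coloured countries on the base map is add=TRUE; a minimal sketch, assuming the maps and mapdata packages, and that "USSR" and "Mongolia" are whatever polygons the worldHires snapshot actually carries:]

```r
library(maps)
library(mapdata)
map("worldHires")                           # draw the base world map
map("worldHires", c("USSR", "Mongolia"),    # overlay the chosen countries in red
    col = "red", fill = TRUE, add = TRUE)   # add=TRUE keeps the base map underneath
```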
Re: [R] dynamically specifying regressors/RHS variables in a regression
Using the builtin data frame CO2, this regresses uptake, which is variable number 5, against each consecutive pair of variables:

for(i in 1:3) {
    idx <- c(i, i+1, 5)
    print(lm(uptake ~ ., CO2[idx]))
}

On 5/5/07, Victor Bennett [EMAIL PROTECTED] wrote:

Does anyone know if there is a way to specify regressors dynamically rather than explicitly? More specifically, I have a data set in long format that details a number of individuals and their responses to a question (which can be positive, negative, or no answer). Each individual answers as many questions as they want, so there are a different number of rows per individual. For each number of questions, I want to run a logit on all the individuals who saw that many questions, predicting whether they choose to answer any more afterwards by their choices on the earlier questions. The second logit would look like

answered_only_2 ~ answer_1_pos + answer_1_neg + answer_2_pos + answer_2_neg

This will result in over 100 regressions, with different numbers of RHS variables. I'd like to iterate over the sequence of numbers and run a logit for each, but I can't find any means for dynamically generating the RHS variables from an array or vector. The only way I can think of to write the function would still require the same number of RHS variables in each regression. Is there a way to dynamically generate the right-hand side?
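[Editor's aside, not from the thread: one common way to generate the right-hand side dynamically is to paste the term names together and convert with as.formula. A sketch using the variable names from the answered_only_2 example above; `dat` is a hypothetical data frame assumed to hold the response and the answer indicators:]

```r
k <- 2  # number of questions this group of respondents saw
# build "answer_1_pos + answer_1_neg + answer_2_pos + answer_2_neg"
rhs <- paste(paste0("answer_", rep(1:k, each = 2), c("_pos", "_neg")),
             collapse = " + ")
fml <- as.formula(paste0("answered_only_", k, " ~ ", rhs))
fit <- glm(fml, family = binomial, data = dat)  # 'dat' is an assumed data frame
```

Looping k over the observed question counts would then give one logit per group.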
Re: [R] save intermediate result
On Fri, May 04, 2007 at 04:55:36PM -0400, Weiwei Shi wrote:

sorry for my English, staying in US over 7 years makes me think I were ok, now :( anyway, here is an example and why I need that: suppose I have a function like the following:

f1 <- function(){
    line1 <- f2()
    # assume this line takes a very long time, like more than 30 min
    # then I need to save line1 as an object into the workspace b/c
    # the following lines after this might have some bugs
    # currently I use save(line1, file="line1.robj")
    # but I am thinking if there is another way, like an environment, instead of saving it into a file
    # the code that follows might have some bugs
    # blabla...
}

yes, that's what I thought you meant. so use

line1 <<- f2()

i.e. the super-assignment operator `<<-' instead of `<-'. after f2() is finished, the object `line1' is available from the R prompt, i.e. visible in the workspace (for details, cf. `help("<<-")').

if the code that follows has bugs, my f1 function cannot return anything, which means I have to re-run the time-consuming line again.

you probably should debug it using some dummy data instead of the f2() call...

thanks, Weiwei

On 5/4/07, Joerg van den Hoff [EMAIL PROTECTED] wrote:
On Fri, May 04, 2007 at 03:45:10PM -0400, Weiwei Shi wrote:

hi, is there a way to save an R object into the workspace instead of into a file during a run of a function?

if I understand the question correctly you want the 'super-assignment' operator `<<-' as in

---cut---
R> f <- function(x) y <<- 2*x
R> f(1)
R> y
[1] 2
R> f(10)
R> y
[1] 20
---cut---

i.e. y is assigned a value in the 'workspace'. be careful, though, with this kind of assignment. why not return the object you are interested in as a list component together with other results when the function call finishes?

--
Weiwei Shi, Ph.D
Research Scientist
GeneGO, Inc.

"Did you always know?" "No, I did not. But I believed..."
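[Editor's aside, not from the thread: an alternative to `<<-` is an explicit assign() into the global environment, which makes the side effect easier to spot. A sketch with a dummy f2 standing in for the slow step, in the spirit of Joerg's dummy-data suggestion:]

```r
f2 <- function() { Sys.sleep(1); 42 }            # dummy stand-in for the 30-minute step
f1 <- function() {
    line1 <- f2()
    assign("line1", line1, envir = .GlobalEnv)   # snapshot the intermediate result
    stop("simulated bug in the rest of f1")      # the later code may still fail...
}
try(f1())
line1  # ...but the intermediate result survives in the workspace
```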
---Matrix III
Re: [R] [SPAM] - Re: R package development in windows - BayesianFilter detected spam
On 04/05/2007 9:32 PM, Gabor Grothendieck wrote:

It certainly would be excellent if installing perl could be eliminated. One additional thing that I really dislike about the R installation is that one needs find on one's path, and that conflicts with find on Windows, so other applications unrelated to R that use scripts can suddenly break because of R. If that could be solved at the same time it would be nice.

At a minimum we should be able to wrap the calls to find in a macro, so you could change the macro in MkRules and rename your copy from Rtools to remove the conflict. I'll take a look.

Duncan Murdoch

On 5/4/07, Duncan Murdoch [EMAIL PROTECTED] wrote:
On 04/05/2007 4:25 PM, Greg Snow wrote:

I have used the pp/par combination for Perl before. It is pretty straightforward to convert an existing perl script into a stand-alone windows executable. Both the ActiveState licence and the Perl Artistic licence allow for embedding a script and perl interpreter together and distributing the result. The current perl script(s) used for the R package build process could easily be converted to a 'stand-alone' windows executable and be distributed with Rtools for those who do not want to install Perl themselves. The only drawback is that even a Hello World script will result in an executable of over a meg in size (due to the perl interpreter being included).

I took a quick look at the PAR page on CPAN, and it seems possible to build a DLL that incorporates the interpreter, and then each individual script .exe could be much smaller. I'll see if I can get that to work; it would be really nice to be able to drop the Perl requirement. If we could do that, I'd include the command line tools plus the compiled scripts with the basic R distribution, so you could easily build simple packages. The Rtools.exe installer would then just need to install the MinGW compilers for packages containing compiled code, and a few extras needed for building R.
I don't really know Perl, so I might be asking for advice if I get stuck.

Duncan Murdoch

From: [EMAIL PROTECTED] on behalf of Gabor Grothendieck
Sent: Fri 5/4/2007 11:55 AM
To: Doran, Harold
Cc: r-help@stat.math.ethz.ch; Duncan Murdoch
Subject: Re: [R] [SPAM] - Re: R package development in windows - BayesianFilter detected spam

Just googling I found this: http://www.perlmonks.org/?node_id=186402

On 5/4/07, Doran, Harold [EMAIL PROTECTED] wrote:

The best, of course, would be to get rid of Perl altogether. In Python, it is possible to make standalone executables. Is it possible to also do this in Perl? Then one could eliminate a Perl install. Or, is it possible to use Python to accomplish what Perl is currently doing? I may be getting in over my head here since I really don't know what Perl is doing under the hood.

Harold
Re: [R] [SPAM] - Re: R package development in windows - BayesianFilter detected spam
I think that should be the default in order to protect the user. Protecting the user from this sort of annoying conflict is important for a professionally working product that gets along with the rest of the Windows system.

On 5/5/07, Duncan Murdoch [EMAIL PROTECTED] wrote:
At a minimum we should be able to wrap the calls to find in a macro, so you could change the macro in MkRules and rename your copy from Rtools to remove the conflict. I'll take a look. [...]
Re: [R] [SPAM] - Re: R package development in windows - BayesianFilter detected spam
On 05/05/2007 8:00 AM, Gabor Grothendieck wrote:
I think that should be the default in order to protect the user. Protecting the user from this sort of annoying conflict is important for a professionally working product that gets along with the rest of the Windows system.

I don't, because R building requires simulation of a subset of a Unix environment, so in case of a Unix/Windows conflict, Unix should win. For example, none of the Makefiles use backslashes as path separators; they all use Unix-style forward slashes.

Duncan Murdoch
Re: [R] save intermediate result
Break your long-running function into two parts. The first will be the long-running part, and it can return a value that you can pass to the second part. If the second part fails, you still have the original value that was returned.

f1.1 <- function(){
    line1 <- f2()
    return(line1)
}

f1.2 <- function(line1){
    # rest of the original f1
}

origData <- f1.1()
nextData <- f1.2(origData)

On 5/4/07, Weiwei Shi [EMAIL PROTECTED] wrote: [...]
--
Jim Holtman
Cincinnati, OH
+1 513 646 9390

What is the problem you are trying to solve?
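[Editor's aside, not from the thread: Jim's two-part pattern can be sketched end to end with a dummy f2 standing in for the 30-minute computation; the names mirror his outline:]

```r
f2 <- function() { Sys.sleep(1); 1:10 }   # dummy stand-in for the long-running call
f1.1 <- function() {                      # part 1: only the expensive step
    line1 <- f2()
    return(line1)
}
f1.2 <- function(line1) sum(line1)        # part 2: the possibly buggy rest of f1
origData <- f1.1()          # computed once; kept even if the next step errors out
nextData <- f1.2(origData)  # can be re-run cheaply after fixing any bug
```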
Re: [R] [SPAM] - Re: R package development in windows - BayesianFilter detected spam
Surely the idea of having a separate Windows version of R is that it works in a very Windows-like way, and that would preclude having conflicts with standard utilities on Windows. To me this is one of the most annoying things about R, since I do use other Windows software, and that includes software that conflicts with R. In fact, one of the Linux distros I tried to install on top of Windows conflicted with R since its setup.bat file used find, and that's Linux! After spending quite a bit of time being frustrated with the installation I finally realized R was the culprit and was really cursing R for having wasted so much of my time. Windows should be setting the standard, not R. I don't have this sort of conflict problem with any of the other software I use except R.

One other point. The multiple-UNIX-tools-in-a-single-executable idea I mentioned is called busybox: http://busybox.net/ It's intended for embedded systems and specific to UNIX systems, although its web page claims it's not that hard to get it to work on Windows. For example, this UNIX-on-a-floppy distro, tomsrtbt, uses it: http://www.toms.net/rb/ I mention busybox because you indicated that you were concerned about the size of the R distro on Windows.

On 5/5/07, Duncan Murdoch [EMAIL PROTECTED] wrote: [...]
Re: [R] [SPAM] - Re: R package development in windows - BayesianFilter detected spam
I am glad to help. The pp program is the main tool to use to create the executable.

-----Original Message-----
From: Duncan Murdoch [EMAIL PROTECTED]
To: Greg Snow [EMAIL PROTECTED]
Cc: Gabor Grothendieck [EMAIL PROTECTED]; Doran, Harold [EMAIL PROTECTED]; r-help@stat.math.ethz.ch
Sent: 5/4/07 4:46 PM
Subject: Re: [R] [SPAM] - Re: R package development in windows - BayesianFilter detected spam

[...]
Re: [R] [SPAM] - Re: R package development in windows - BayesianFilter detected spam
If we go the route of converting Perl scripts into Windows executables, then there is the Perl Power Tools (ppt) project for Perl that aims to create a cross-platform set of common Unix tools (see http://sourceforge.net/projects/ppt/, or the current toolset can be downloaded from CPAN). The find utility has been included for a while, and I think we could get the author of that one to help. If all the Unix tools needed by R are included in ppt, then it may be possible to use those in Rtools for an overall smaller footprint. The find perl script could be compiled to an .exe and given a different name so that it would not conflict with the Windows find command. There would need to be some switch or something to indicate using these tools rather than the standard ones, for those that don't use the Rtools (but instead installed Perl and the other tools).

From: Duncan Murdoch [mailto:[EMAIL PROTECTED]]
Sent: Sat 5/5/2007 6:06 AM
To: Gabor Grothendieck
Cc: Greg Snow; Doran, Harold; r-help@stat.math.ethz.ch
Subject: Re: [R] [SPAM] - Re: R package development in windows - BayesianFilter detected spam

[...]
Duncan Murdoch

From: [EMAIL PROTECTED] on behalf of Gabor Grothendieck
Sent: Fri 5/4/2007 11:55 AM
To: Doran, Harold
Cc: r-help@stat.math.ethz.ch; Duncan Murdoch
Subject: Re: [R] [SPAM] - Re: R package development in windows - BayesianFilter detected spam

Just googling I found this: http://www.perlmonks.org/?node_id=186402

On 5/4/07, Doran, Harold [EMAIL PROTECTED] wrote: The best, of course, would be to get rid of Perl altogether. In Python, it is possible to make standalone executables. Is it possible to also do this in Perl? Then one could eliminate a Perl install. Or, is it possible to use Python to accomplish what Perl is currently doing? I may be getting in over my head here, since I really don't know what Perl is doing under the hood.

Harold

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.
[R] How to latex tables?
Suppose I have a table constructed from structable, or simply just an object of class table. How can I convert it to a LaTeX object? I looked in RSiteSearch, but only found info about matrices or data frames.

Steve

For example, here is a table t2:

str(t2)
 table [1:2, 1:2, 1:2] 6 8 594 592 57 ...
 - attr(*, "dimnames")=List of 3
  ..$ Hospital : chr [1:2] "A" "B"
  ..$ Survival : chr [1:2] "Died" "Survived"
  ..$ Condition: chr [1:2] "Good" "Poor"

Here's what happens with latex(t2):

latex(t2)
Error in x[, j] : incorrect number of dimensions

Next, here's what happens with a structable:

tab = structable(Hospital ~ Condition + Survival, data=t2)
tab
                   Hospital    A    B
Condition Survival
Good      Died                 6    8
          Survived           594  592
Poor      Died                57    8
          Survived          1443  192

If I use latex(tab) I get

Error in dimnames(cx) <- list(rnam, nam) :
  length of 'dimnames' [1] not equal to array extent
In addition: Warning messages: ...(deleted)...
[R] Response as matrix in MCMClogit?
I hope some of the authors of the MCMCpack package read this. I don't know of a way to set the response in the model formula of MCMClogit other than as a (numeric) response vector. I think MCMClogit needs some development so that the response in the formula could be given as a two-column matrix, with the columns giving the numbers of successes and failures (just as in glm), or at least so that weights could also be set when using a response vector. The advantage, when analysing large datasets (over 1 million rows), would be to have the data summarized and entered as a matrix, because otherwise it takes a huge amount of memory and crashes the program (when trying to expand the matrix into a response vector for the formula). After all, Bayesian logistic regression is especially useful on large datasets. Is there a way to do that? Thanks

--
View this message in context: http://www.nabble.com/Response-as-matrix-in-MCMClogit--tf3696522.html#a10336757
Sent from the R help mailing list archive at Nabble.com.
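The matrix-response form being asked for can be illustrated with glm(), which already accepts it. A sketch with hypothetical aggregated data (the data and the variable names are invented for illustration, not from the post) shows that the aggregated two-column fit reproduces the expanded 0/1 Bernoulli fit that MCMClogit currently requires:

```r
## Hypothetical aggregated data: successes/failures per covariate pattern
agg <- data.frame(xvar = c(0, 1), succ = c(30, 45), fail = c(70, 55))

## Two-column matrix response, as glm() accepts it
fit_agg <- glm(cbind(succ, fail) ~ xvar, family = binomial, data = agg)

## The equivalent expanded 0/1 response vector (200 rows instead of 2)
counts <- as.vector(t(as.matrix(agg[, c("succ", "fail")])))  # 30 70 45 55
long <- data.frame(y    = rep(c(1, 0, 1, 0), times = counts),
                   xvar = rep(agg$xvar, times = agg$succ + agg$fail))
fit_long <- glm(y ~ xvar, family = binomial, data = long)

## Both parameterizations give identical coefficient estimates
all.equal(coef(fit_agg), coef(fit_long))
```

This is exactly the memory saving the poster wants: the aggregated frame has one row per covariate pattern rather than one row per observation.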
Re: [R] [SPAM] - Re: R package development in windows - BayesianFilter detected spam
I think it's actually here: http://search.cpan.org/dist/ppt/ but that would have the significant disadvantage of deepening the use of Perl, whereas I think the direction should be to get rid of Perl.

On 5/5/07, Greg Snow [EMAIL PROTECTED] wrote:

<snip>
Re: [R] [SPAM] - Re: R package development in windows - BayesianFilter detected spam
On 05/05/2007 9:36 AM, Greg Snow wrote:

<snip>

The Rtools footprint from the command line utilities isn't a problem. The size comes from Perl and MinGW. This really isn't an issue.

I've just been trying out pp. I couldn't get it to build with ActivePerl (I don't have the MS compilers installed), but I did get it built in Strawberry Perl, which uses MinGW. It converts each of our Perl scripts into an .exe of about 2.5 MB (self-contained) or 1.4 MB (depending on perl58.dll, which is about 1.1 MB). I believe we have 10 Perl scripts that would need converting, so this would add about 15 MB to the footprint. That's a size that is feasible but wasteful, because there's a lot of duplication in these .exe's, with copies of all the Perl modules they use. Perhaps some additional work could reduce their size, or we could follow Gabor's suggestion and compile them into one big .exe instead. Which do you think would be easier?

Duncan Murdoch

<snip>
[R] (no subject)
Dear Mailing-List,

I think this is a newbie question. However, I would like to integrate a loop into the function below, so that the script calculates, for each value within the dataframe df1, the corresponding data in df2. Actually it takes only the first row. I hope that's clear. My goal is to apply the function for each value in df1. Many thanks in advance. An example is as follows:

df1 <- data.frame(b=c(1,2,3,4,5,5,6,7,8,9,10))
df2 <- data.frame(x=c(1,2,3,4,5), y=c(2,5,4,6,5), z=c(10, 8, 7, 9, 3))
attach(df2)
myfun <- function(yxz) (x + y)/(z * df1$b)
df1$goal <- apply(df2, 1, myfun)
df1$goal

regards, kay
Re: [R] nlme fixed effects specification
Ivo,

I don't know whether you ever got a proper answer to this question. Here is a kludgy one -- someone else can probably provide a more elegant version of my diffid function. What you want to do is sweep out the mean deviations from both y and x based on the factor fe, and then estimate the simple y on x linear model. I have an old function that was originally designed to do panel data models that looks like this:

diffid <- function(h, id) {
    if(is.vector(h)) h <- matrix(h, ncol = 1)
    Ph <- unique(id)
    Ph <- cbind(Ph, table(id))
    for(i in 1:ncol(h)) Ph <- cbind(Ph, tapply(h[, i], id, mean))
    is <- tapply(id, id)
    Ph <- Ph[is, -(1:2)]
    h - Ph
}

With this you can do:

set.seed(1)
fe <- as.factor(as.integer(runif(100)*10))
y <- rnorm(100); x <- rnorm(100)
summary(lm(diffid(y, fe) ~ diffid(x, fe)))

HTH,
Roger

On May 4, 2007, at 3:08 PM, ivo welch wrote:

hi doug: yikes. could I have done better? Oh dear. I tried to make my example clearer half-way through, but made it worse. I meant

set.seed(1)
fe <- as.factor(as.integer(runif(100)*10))
y <- rnorm(100); x <- rnorm(100)
print(summary(lm(y ~ x + fe)))

[deleted]

Coefficients:
            Estimate Std. Error t value Pr(>|t|)
(Intercept)   0.1128     0.3680    0.31     0.76
x             0.0232     0.0960    0.24     0.81
fe1          -0.6628     0.5467   -1.21     0.23

[deleted more fe's]

Residual standard error: 0.949 on 89 degrees of freedom
Multiple R-Squared: 0.0838, Adjusted R-squared: -0.0192
F-statistic: 0.814 on 10 and 89 DF, p-value: 0.616

I really am interested only in this linear specification, the coefficient on x (0.0232) and the R^2 of 8.38% (adjusted -1.92%). If I did not have so much data in my real application, I would never have to look at nlme or lme4. I really only want to be able to run this specification through lm with far more observations (100,000) and groups (10,000), and be done with my problem.
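Roger's diffid() sweeps the group means out of y and x; the same within-group demeaning can be done with base R's ave(), and by the Frisch-Waugh result the demeaned slope matches the slope on x from the dummy-variable regression. A quick check, offered as a sketch rather than part of the original exchange:

```r
set.seed(1)
fe <- as.factor(as.integer(runif(100) * 10))
y <- rnorm(100)
x <- rnorm(100)

## Within transformation: subtract the group mean of each variable
yd <- y - ave(y, fe)
xd <- x - ave(x, fe)

## Slope from the demeaned regression equals the slope on x
## in the regression with explicit fe dummies
b_within <- unname(coef(lm(yd ~ xd))["xd"])
b_dummy  <- unname(coef(lm(y ~ x + fe))["x"])
all.equal(b_within, b_dummy)
```

Because the demeaned regression never materializes the dummy-variable design matrix, it scales to the 100,000 observations and 10,000 groups mentioned below.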
now, with a few IQ points more, I would have looked at the lme function instead of the nlme function in library(nlme). [then again, I could understand stats a lot better with a few more IQ points.] I am reading the lme description now, but I still don't understand how to specify that I want to have dummies in my specification, plus the x variable, and that's it. I think I am not understanding the integration of fixed and random effects in the same R functions.

thanks for pointing me at your lme4 library. on linux, version 2.5.0, I did

R CMD INSTALL matrix*.tar.gz
R CMD INSTALL lme4*.tar.gz

and it installed painlessly. (I guess R install packages don't have knowledge of what they rely on; lme4 requires Matrix, which the docs state, but having gotten this wrong, I didn't get an error. no big deal. I guess I am too used to automatic resolution of dependencies from linux installers these days that I did not expect this.) I now tried your specification:

library(lme4)
Loading required package: Matrix
Loading required package: lattice
lmer(y ~ x + (1|fe))
Linear mixed-effects model fit by REML
Formula: y ~ x + (1 | fe)
 AIC BIC logLik MLdeviance REMLdeviance
 282 290   -138        270          276
Random effects:
 Groups   Name        Variance Std.Dev.
 fe       (Intercept) 0.0445   0.211
 Residual             0.889549 0.943159
number of obs: 100, groups: fe, 10

Fixed effects:
            Estimate Std. Error t value
(Intercept) -0.0188     0.0943  -0.199
x            0.0528     0.0904   0.585

Correlation of Fixed Effects:
  (Intr)
x -0.022

Warning messages:
1: Estimated variance for factor 'fe' is effectively zero in: `LMEoptimize<-`(`*tmp*`, value = list(maxIter = 200L, tolerance = 0.000149011611938477,
2: $ operator not defined for this S4 class, returning NULL in: x$symbolic.cor

Without being a statistician, I can still determine that this is not the model I would like to work with. The coefficient is 0.0528, not 0.0232. (I am also not sure why I am getting these warning messages on my system, but I don't think it matters.)
is there a simple way to get the equivalent specification for my simple model, using lmer or lme, which does not choke on huge data sets?

regards, /ivo
[R] loop in function
Dear Mailing-List,

I think this is a newbie question. However, I would like to integrate a loop into the function below, so that the script calculates, for each value within the dataframe df1, the corresponding data in df2. Actually it takes only the first row. I hope that's clear. My goal is to apply the function for each value in df1. Many thanks in advance. An example is as follows:

df1 <- data.frame(b=c(1,2,3,4,5,5,6,7,8,9,10))
df2 <- data.frame(x=c(1,2,3,4,5), y=c(2,5,4,6,5), z=c(10, 8, 7, 9, 3))
attach(df2)
myfun <- function(yxz) (x + y)/(z * df1$b)
df1$goal <- apply(df2, 1, myfun)
df1$goal

regards, kay
Re: [R] (no subject)
From your function, it looks like you want to return 5 values for each element in the vector df1$b. Is this what you want? I am not sure what you expect the output to look like.

df1 <- data.frame(b=c(1,2,3,4,5,5,6,7,8,9,10))
df2 <- data.frame(x=c(1,2,3,4,5), y=c(2,5,4,6,5), z=c(10, 8, 7, 9, 3))
goal <- list()
for (i in df1$b){
    goal[[i]] <- (df2$x + df2$y)/(df2$z * i)
}
goal
[[1]]
[1] 0.3000000 0.8750000 1.0000000 1.1111111 3.3333333
[[2]]
[1] 0.1500000 0.4375000 0.5000000 0.5555556 1.6666667
[[3]]
[1] 0.1000000 0.2916667 0.3333333 0.3703704 1.1111111
[[4]]
[1] 0.0750000 0.2187500 0.2500000 0.2777778 0.8333333
[[5]]
[1] 0.0600000 0.1750000 0.2000000 0.2222222 0.6666667
[[6]]
[1] 0.0500000 0.1458333 0.1666667 0.1851852 0.5555556
[[7]]
[1] 0.04285714 0.12500000 0.14285714 0.15873016 0.47619048
[[8]]
[1] 0.03750000 0.10937500 0.12500000 0.13888889 0.41666667
[[9]]
[1] 0.03333333 0.09722222 0.11111111 0.12345679 0.37037037
[[10]]
[1] 0.03000000 0.08750000 0.10000000 0.11111111 0.33333333

On 5/5/07, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:

<snip>

--
Jim Holtman
Cincinnati, OH
+1 513 646 9390

What is the problem you are trying to solve?
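The explicit loop above can also be avoided entirely: outer() computes the whole grid in one call, with one column per element of df1$b. A sketch, not from the original reply:

```r
df1 <- data.frame(b = c(1, 2, 3, 4, 5, 5, 6, 7, 8, 9, 10))
df2 <- data.frame(x = c(1, 2, 3, 4, 5),
                  y = c(2, 5, 4, 6, 5),
                  z = c(10, 8, 7, 9, 3))

## (x + y)/z is fixed per row of df2; dividing by each b gives the grid.
## Column j holds (df2$x + df2$y) / (df2$z * df1$b[j]).
goal <- outer((df2$x + df2$y) / df2$z, df1$b, "/")
dim(goal)  # 5 rows (one per row of df2) by 11 columns (one per b)
```

Note one difference from the loop: because the loop indexes goal[[i]] by the value of b, the duplicated b = 5 is computed twice into the same slot; the outer() form keeps all 11 columns.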
Re: [R] rgl install on rhel4 x86_64
Thanks. I did successfully install Friday afternoon using the configure script from version 0.68; however, the library would not load, so I removed the package. I will attempt to build a new configure script using autoconf on Monday morning. Thanks much for your feedback, I really appreciate it.

Troy

-----Original Message-----
From: Duncan Murdoch [mailto:[EMAIL PROTECTED]]
Sent: Fri 5/4/2007 7:00 PM
To: Smith, Troy (NIH/NCI) [C]
Cc: r-help@stat.math.ethz.ch
Subject: Re: [R] rgl install on rhel4 x86_64

On 04/05/2007 3:43 PM, Smith, Troy (NIH/NCI) [C] wrote: I'm trying to install rgl 0.71 on a Red Hat Enterprise 4, x86_64 host. I have tried using R 2.2.1, 2.3.1, and 2.5.0. I have successfully installed this version of rgl using R 2.2.1 on an rhel4 i386 host. On the x86_64 host, I receive the following configuration error:

I don't think I can help much: I know very little about configure et al. A few suggestions:

- look in config.log to see a few more details
- run autoconf on your system to build a new configure script from the configure.ac input
- ask Redhat for help
- try previous versions of the configure script (available from the svn server; see rgl.neoscientists.org for details). There have been a lot of changes to it lately to try to get it to work on systems I have access to; unfortunately yours isn't one of them. Perhaps a previous iteration would have worked for you.

If you can figure out what's wrong, please let me know, and if it doesn't break other systems, it might make it into 0.72.

Duncan Murdoch

checking GL/gl.h usability... no
checking GL/gl.h presence... yes
configure: WARNING: GL/gl.h: present but cannot be compiled
configure: WARNING: GL/gl.h: check for missing prerequisite headers?
configure: WARNING: GL/gl.h: see the Autoconf documentation
configure: WARNING: GL/gl.h: section "Present But Cannot Be Compiled"
configure: WARNING: GL/gl.h: proceeding with the preprocessor's result
configure: WARNING: GL/gl.h: in the future, the compiler will take precedence
configure: WARNING:     ## ------------------------------------------ ##
configure: WARNING:     ## Report this to the AC_PACKAGE_NAME lists.  ##
configure: WARNING:     ## ------------------------------------------ ##
checking for GL/gl.h... yes
checking GL/glu.h usability... no
checking GL/glu.h presence... yes
configure: WARNING: GL/glu.h: present but cannot be compiled
configure: WARNING: GL/glu.h: check for missing prerequisite headers?
configure: WARNING: GL/glu.h: see the Autoconf documentation
configure: WARNING: GL/glu.h: section "Present But Cannot Be Compiled"
configure: WARNING: GL/glu.h: proceeding with the preprocessor's result
configure: WARNING: GL/glu.h: in the future, the compiler will take precedence
configure: WARNING:     ## ------------------------------------------ ##
configure: WARNING:     ## Report this to the AC_PACKAGE_NAME lists.  ##
configure: WARNING:     ## ------------------------------------------ ##
checking for GL/glu.h... yes
checking for glEnd in -lGL... no
configure: error: missing required library GL
ERROR: configuration failed for package 'rgl'
** Removing '/usr/local/R-2.3.1/library/rgl'

I have verified the libraries are in the correct locations. Both hosts were built using an almost identical kickstart configuration file, the only difference being the installation of 64-bit versions of packages on the x86_64 host (i386 versions of many packages are installed on the x86_64 host also). The x86_64 host has slightly newer versions of some packages, as it is ahead of the i386 host in our patch rotation. The OpenGL libraries are installed with xorg-x11-Mesa-libGL, xorg-x11-Mesa-libGLU, and xorg-x11-devel. Freeglut and freeglut-devel are also installed. I have tried pointing the installation in various directions for headers and libraries. I apologize if I've missed any important/necessary information, and will provide any additional info required.
Any and all assistance is greatly appreciated.

Thanks

Troy
National Cancer Institute Center for Bioinformatics
Re: [R] How to latex tables?
On Sat, 2007-05-05 at 09:43 -0400, steve wrote:

<snip>

You are trying to apply the latex() function to a 3-dimensional table. I don't know that any of the generally available R functions to generate LaTeX markup (e.g. latex() or xtable()) have methods that support 3D tables. You could either generate multiple 2D tables and convert each separately, or write your own function to generate the LaTeX markup in a format that you find suitable for your application.

One other possible option, which would still require some tweaking depending upon your needs, would be to use ftable() to format and convert the 3D table to a 2D table, and latex() that. For example, using the UCBAdmissions dataset:

str(UCBAdmissions)
 table [1:2, 1:2, 1:6] 512 313 89 19 353 207 17 8 120 205 ...
 - attr(*, "dimnames")=List of 3
  ..$ Admit : chr [1:2] "Admitted" "Rejected"
  ..$ Gender: chr [1:2] "Male" "Female"
  ..$ Dept  : chr [1:6] "A" "B" "C" "D" ...
ftable(UCBAdmissions)
                Dept   A   B   C   D   E   F
Admit    Gender
Admitted Male        512 353 120 138  53  22
         Female       89  17 202 131  94  24
Rejected Male        313 207 205 279 138 351
         Female       19   8 391 244 299 317

library(Hmisc)
latex(ftable(UCBAdmissions), file = "")
% latex.default(ftable(UCBAdmissions), file = "")
%
\begin{table}[!tbp]
\begin{center}
\begin{tabular}{rrrrrr}\hline\hline
\multicolumn{1}{c}{}&\multicolumn{1}{c}{}&\multicolumn{1}{c}{}&\multicolumn{1}{c}{}&\multicolumn{1}{c}{}&\multicolumn{1}{c}{}\\
\hline
$512$&$353$&$120$&$138$&$ 53$&$ 22$\\
$ 89$&$ 17$&$202$&$131$&$ 94$&$ 24$\\
$313$&$207$&$205$&$279$&$138$&$351$\\
$ 19$&$  8$&$391$&$244$&$299$&$317$\\
\hline
\end{tabular}
\end{center}
\end{table}

HTH,

Marc Schwartz
Re: [R] How to latex tables?
On Sat, 2007-05-05 at 11:35 -0500, Marc Schwartz wrote:

<snip>

Steve, I located a couple of posts by David Whiting regarding converting an ftable() object to LaTeX that you might find helpful:

http://tolstoy.newcastle.edu.au/R/help/05/08/11245.html
http://tolstoy.newcastle.edu.au/R/help/05/08/11251.html

HTH,

Marc
[R] Convert Time from HH.mm to Decimal hours?
I have data in Unixtime that I wish to plot several ways. I have learned to convert to a vector of time strings, and for the sake of plotting times of day vs. heights of tide (to examine the seasonality component) I can extract HH:mm pretty swiftly and readily from Unixtime using emacs calc. However, I have had trouble with the decimal hours that are needed (or perhaps not) for at least one of the polar plotting packages. Is there an easy way to do this in R? Thank you for any ideas. (I have learned how to do simple polar plots by way of a couple of people on this list. Thanks very much.)

Alan

--
Alan Davis, Kagman High School, Saipan [EMAIL PROTECTED]

I consider that the golden rule requires that if I like a program I must share it with other people who like it. -- Richard Stallman

Every great advance in natural knowledge has involved the absolute rejection of authority. -- Thomas H. Huxley
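For the conversion itself, one way in base R (a sketch, not from the thread; the sample times are invented) is to split each "HH:mm" string and combine the pieces arithmetically:

```r
## Convert "HH:mm" strings to decimal hours
times <- c("00:30", "06:15", "18:45")
parts <- strsplit(times, ":", fixed = TRUE)
dec_hours <- vapply(parts,
                    function(p) as.numeric(p[1]) + as.numeric(p[2]) / 60,
                    numeric(1))
dec_hours  # 0.50 6.25 18.75
```

The resulting numeric vector can be fed directly to a polar plotting function, e.g. after scaling by 2*pi/24 to get angles.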
[R] pseudo-R2 or GOF for regression trees?
Hello,

Is there an accepted way to convey, for regression trees, something akin to R-squared? I'm developing regression trees for a continuous y variable and I'd like to say how well they are doing. In particular, I'm analyzing the results of a simulation model having highly non-linear behavior, and asking what characteristics of the inputs are related to a particular output measure. I've got a very large number of points: n=4000. I'm not able to do a model sensitivity analysis because of the large number of inputs and the model run time.

I've been googling around both on the archives and on the rest of the web for several hours, but I'm still having trouble getting a firm sense of the state of the art. Could someone help me to quickly understand what strategy, if any, is acceptable to say something like "The regression tree in Figure 3 captures 42% of the variance"? The target audience is readers who will be interested in the subsequent verbal explanation of the relationship, but only once they are comfortable that the tree really does capture something. I've run across methods to say how well a tree does relative to a set of trees on the same data, but that doesn't help much unless I'm sure the trees in question are really capturing the essence of the system. I'm happy to be pointed to a web site or to a thread I may have missed that answers this exact question.

Thanks very much,
Jeff

--
Prof. Jeffrey Cardille [EMAIL PROTECTED]
professeur adjoint / assistant professor
Département de Géographie, Université de Montréal
520, chemin de la Côte-Ste-Catherine, Montréal, QC
Web: http://www.geog.umontreal.ca/geog/cardille.htm
Re: [R] nlme fixed effects specification
On 5/4/07, ivo welch [EMAIL PROTECTED] wrote:

hi doug: yikes. could I have done better? Oh dear. I tried to make my example clearer half-way through, but made it worse. I meant

set.seed(1)
fe <- as.factor(as.integer(runif(100)*10))
y <- rnorm(100); x <- rnorm(100)
print(summary(lm(y ~ x + fe)))

[deleted]
Coefficients:
            Estimate Std. Error t value Pr(>|t|)
(Intercept)   0.1128     0.3680    0.31     0.76
x             0.0232     0.0960    0.24     0.81
fe1          -0.6628     0.5467   -1.21     0.23
[deleted more fe's]

Residual standard error: 0.949 on 89 degrees of freedom
Multiple R-Squared: 0.0838, Adjusted R-squared: -0.0192
F-statistic: 0.814 on 10 and 89 DF, p-value: 0.616

I really am interested only in this linear specification, the coefficient on x (0.0232) and the R^2 of 8.38% (adjusted -1.92%). If I did not have so much data in my real application, I would never have to look at nlme or lme4. I really only want to be able to run this specification through lm with far more observations (100,000) and groups (10,000), and be done with my problem.

OK, I understand now. You really do want to accommodate the levels of the factor as fixed effects, not as random effects. The lme and lmer functions fit a more complicated model, in which the variance of the random effects is chosen to maximize the log-likelihood or the restricted log-likelihood, but they don't give the results that are of interest to you. As Roger indicated in another reply, you should be able to obtain the results you want by sweeping out the means of the groups from both x and y. However, I tried Roger's function and a modified version that I wrote and could not show this. I'm not sure what I am doing wrong. I enclose a transcript that shows that I can reproduce the result from Roger's function, but it doesn't do what either of us think it should. BTW, I realize that the estimate for the Intercept should be zero in this case.
now, with a few IQ points more, I would have looked at the lme function instead of the nlme function in library(nlme). [then again, I could understand stats a lot better with a few more IQ points.] I am reading the lme description now, but I still don't understand how to specify that I want to have dummies in my specification, plus the x variable, and that's it. I think I am not understanding the integration of fixed and random effects in the same R functions.

thanks for pointing me at your lme4 library. on linux, version 2.5.0, I did

R CMD INSTALL Matrix*.tar.gz
R CMD INSTALL lme4*.tar.gz

and it installed painlessly. (I guess R install packages don't have knowledge of what they rely on; lme4 requires Matrix, which the docs state, but having gotten this wrong, I didn't get an error. no big deal. I guess I am so used to automatic resolution of dependencies from linux installers these days that I did not expect this.) I now tried your specification:

> library(lme4)
Loading required package: Matrix
Loading required package: lattice
> lmer(y ~ x + (1|fe))
Linear mixed-effects model fit by REML
Formula: y ~ x + (1 | fe)
 AIC BIC logLik MLdeviance REMLdeviance
 282 290   -138        270          276
Random effects:
 Groups   Name        Variance  Std.Dev.
 fe       (Intercept) 0.0445    0.211
 Residual             0.8895485 0.9431588
number of obs: 100, groups: fe, 10

Fixed effects:
            Estimate Std. Error t value
(Intercept)  -0.0188     0.0943  -0.199
x             0.0528     0.0904   0.585

Correlation of Fixed Effects:
  (Intr)
x -0.022
Warning messages:
1: Estimated variance for factor 'fe' is effectively zero
 in: `LMEoptimize<-`(`*tmp*`, value = list(maxIter = 200L, tolerance = 0.000149011611938477,
2: $ operator not defined for this S4 class, returning NULL in: x$symbolic.cor

Without being a statistician, I can still determine that this is not the model I would like to work with. The coefficient is 0.0528, not 0.0232. (I am also not sure why I am getting these warning messages on my system, either, but I don't think it matters.)
is there a simple way to get the equivalent specification for my simple model, using lmer or lme, which does not choke on huge data sets?

regards, /ivo

> set.seed(1)
> y <- rnorm(100); x <- rnorm(100)
> fe <- gl(10, 10)
> head(coef(summary(lm(y ~ x + fe))))
               Estimate Std. Error     t value  Pr(>|t|)
(Intercept)  0.12203179  0.2955477  0.41290050 0.6806726
x            0.02927071  0.1053909  0.27773478 0.7818601
fe2          0.13032049  0.4176603  0.31202505 0.7557513
fe3         -0.25552441  0.4164178 -0.61362502 0.5410283
fe4          0.01123732  0.4227301  0.02658272 0.9788521
fe5          0.02836583  0.4255255  0.06666070 0.9470013
> coef(summary(lm(diffid(y, fe) ~ diffid(x, fe))))
                  Estimate Std. Error      t value  Pr(>|t|)
(Intercept)   9.802350e-18 0.08837912 1.109125e-16 1.0000000
diffid(x, fe) 2.927071e-02 0.10043495 2.914394e-01 0.7713312
> diffid1 <- function(h,
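For readers following the thread, a minimal sketch of the "sweep out the group means" idea Doug and Roger are discussing (the helper name demean() is mine, not from the thread): by the Frisch-Waugh-Lovell result, demeaning both y and x within groups and regressing the residuals reproduces the slope on x from the dummy-variable regression, without ever forming the dummy columns.

```r
## Sketch of the within-group (fixed-effects) transformation.
## demean() is a hypothetical helper, not code from the thread.
demean <- function(v, g) v - ave(v, g)   # subtract each group's mean

set.seed(1)
y  <- rnorm(100); x <- rnorm(100)
fe <- gl(10, 10)

coef(lm(y ~ x + fe))["x"]                     # dummy-variable estimate
coef(lm(demean(y, fe) ~ demean(x, fe) - 1))   # within estimate: same slope
```

The point estimates agree; only the reported standard errors differ, because the within regression does not account for the degrees of freedom absorbed by the group means.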
[R] pseudo-R2 or GOF for regression trees?
All--

Apologies if I have inadvertently posted this message twice; I just joined the list today, after trying to post once. Thanks- Jeff

# r-help message is below #

Hello,

Is there an accepted way to convey, for regression trees, something akin to R-squared? I'm developing regression trees for a continuous y variable and I'd like to say how well they are doing. In particular, I'm analyzing the results of a simulation model having highly non-linear behavior, and asking what characteristics of the inputs are related to a particular output measure. I've got a very large number of points: n=4000. I'm not able to do a model sensitivity analysis because of the large number of inputs and the model run time. I've been googling around both on the archives and on the rest of the web for several hours, but I'm still having trouble getting a firm sense of the state of the art. Could someone help me to quickly understand what strategy, if any, is acceptable to say something like "The regression tree in Figure 3 captures 42% of the variance"? The target audience is readers who will be interested in the subsequent verbal explanation of the relationship, but only once they are comfortable that the tree really does capture something. I've run across methods to say how well a tree does relative to a set of trees on the same data, but that doesn't help much unless I'm sure the trees in question are really capturing the essence of the system. I'm happy to be pointed to a web site or to a thread I may have missed that answers this exact question. I've seen similar postings, but nothing that's an unequivocal answer; any help would be greatly appreciated!

Thanks very much,
Jeff

--
Prof. Jeffrey Cardille [EMAIL PROTECTED]
professeur adjoint / assistant professor
Département de Géographie, Université de Montréal
Web: http://www.geog.umontreal.ca/geog/cardille.htm
Re: [R] pseudo-R2 or GOF for regression trees?
Prof. Jeffrey Cardille wrote:

Hello, Is there an accepted way to convey, for regression trees, something akin to R-squared? I'm developing regression trees for a continuous y variable and I'd like to say how well they are doing. In particular, I'm analyzing the results of a simulation model having highly non-linear behavior, and asking what characteristics of the inputs are related to a particular output measure. I've got a very large number of points: n=4000. I'm not able to do a model sensitivity analysis because of the large number of inputs and the model run time. I've been googling around both on the archives and on the rest of the web for several hours, but I'm still having trouble getting a firm sense of the state of the art. Could someone help me to quickly understand what strategy, if any, is acceptable to say something like "The regression tree in Figure 3 captures 42% of the variance"? The target audience is readers who will be interested in the subsequent verbal explanation of the relationship, but only once they are comfortable that the tree really does capture something. I've run across methods to say how well a tree does relative to a set of trees on the same data, but that doesn't help much unless I'm sure the trees in question are really capturing the essence of the system. I'm happy to be pointed to a web site or to a thread I may have missed that answers this exact question. Thanks very much, Jeff -- Prof. Jeffrey Cardille [EMAIL PROTECTED]

Ye (below) has a method to get a nearly unbiased estimate of R^2 from recursive partitioning. In his examples the result was similar to using the formula for adjusted R^2 with regression degrees of freedom equal to about 3n/4. You can also use something like 10-fold cross-validation repeated 20 times to get a fairly precise and unbiased estimate of R^2.
Frank

@ARTICLE{ye98mea,
  author  = {Ye, Jianming},
  year    = 1998,
  title   = {On measuring and correcting the effects of data mining and model selection},
  journal = JASA,
  volume  = 93,
  pages   = {120-131},
  annote  = {generalized degrees of freedom; GDF; effective degrees of freedom;
             data mining; model selection; model uncertainty; overfitting;
             nonparametric regression; CART; simulation setup}
}

--
Frank E Harrell Jr
Professor and Chair, Department of Biostatistics
School of Medicine, Vanderbilt University
Re: [R] pseudo-R2 or GOF for regression trees?
On Sat, 5 May 2007, Prof. Jeffrey Cardille wrote: Hello, Is there an accepted way to convey, for regression trees, something akin to R-squared? Why not use R-squared itself for your purposes? Just get the fitted values from however you do the fit, and compute R-squared from the basic formula (the one which compares with an intercept only: all regression trees extend that model). Now, R-squared has lots of problems of its own (to the extent that it is only mentioned as something to avoid in some statistical texts) and these are worse here as the number of parameters fitted is unquantifiable. But as a factual summary it does mean what you quote. Whether any model of comparable complexity would also explain 42% of the variance is a much harder question. (Small anecdote: one of my first experiences of this was a psychologist who had funded a research project to relate personality/intelligence tests to 20-odd measurements on facial profiles by (stepwise) linear regression. My contribution was to point out that the R^2 produced was less for every one of the responses than one would expect on average for the same number of random unrelated regressors. To be systematically worse than such a straw man takes some achieving, and I have always suspected a bug in the fitting software.) I'm developing regression trees for a continuous y variable and I'd like to say how well they are doing. In particular, I'm analyzing the results of a simulation model having highly non-linear behavior, and asking what characteristics of the inputs are related to a particular output measure. I've got a very large number of points: n=4000. I'm not able to do a model sensitivity analysis because of the large number of inputs and the model run time. I've been googling around both on the archives and on the rest of the web for several hours, but I'm still having trouble getting a firm sense of the state of the art. 
Could someone help me to quickly understand what strategy, if any, is acceptable to say something like "The regression tree in Figure 3 captures 42% of the variance"? The target audience is readers who will be interested in the subsequent verbal explanation of the relationship, but only once they are comfortable that the tree really does capture something. I've run across methods to say how well a tree does relative to a set of trees on the same data, but that doesn't help much unless I'm sure the trees in question are really capturing the essence of the system. I'm happy to be pointed to a web site or to a thread I may have missed that answers this exact question.

Thanks very much,
Jeff

--
Brian D. Ripley, [EMAIL PROTECTED]
Professor of Applied Statistics, http://www.stats.ox.ac.uk/~ripley/
University of Oxford,        Tel: +44 1865 272861 (self)
1 South Parks Road,               +44 1865 272866 (PA)
Oxford OX1 3TG, UK           Fax: +44 1865 272595
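The suggestion above (compute R-squared from the fitted values, against the intercept-only model) can be sketched in a few lines; rpart and the built-in trees data set are used purely for illustration, since the original poster did not say which tree package he uses:

```r
## Sketch: R-squared for a regression tree from the basic formula,
## 1 - SS_residual / SS_total, comparing against the intercept-only
## model. rpart and the 'trees' data set are illustrative choices.
library(rpart)
fit  <- rpart(Volume ~ Girth + Height, data = trees)
pred <- predict(fit)
y    <- trees$Volume
R2   <- 1 - sum((y - pred)^2) / sum((y - mean(y))^2)
R2   # proportion of variance captured by the tree's fitted values
```

As Ripley notes, this is an in-sample figure with an unquantifiable number of fitted parameters behind it, so it should be read as a factual summary, not an honest estimate of out-of-sample performance.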
[R] the ifelse function
Hi Everyone,

I think I found a problem with the ifelse function: if the condition argument is NA, it treats it as true. Anyone agree or disagree with this?

Jen

--
Jennifer Dillon
Doctoral Student
Harvard Biostatistics
Room 414B, Building 1
Re: [R] the ifelse function
Jennifer Dillon wrote:

Hi Everyone, I think I found a problem with the ifelse function: if the condition argument is NA, it treats it as true. Anyone agree or disagree with this?

I disagree.

> ifelse(c(5, 4, NA) == 5, 1, 0)
[1]  1  0 NA

Jen

--
Chuck Cleland, Ph.D.
NDRI, Inc.
71 West 23rd Street, 8th floor
New York, NY 10010
tel: (212) 845-4495 (Tu, Th)
tel: (732) 512-0171 (M, W, F)
fax: (917) 438-0894
Re: [R] the ifelse function
Can you provide the example you are talking about? Here is what I got:

> ifelse(NA, 1, 2)
[1] NA

On 5/5/07, Jennifer Dillon [EMAIL PROTECTED] wrote:

Hi Everyone, I think I found a problem with the ifelse function: if the condition argument is NA, it treats it as true. Anyone agree or disagree with this?

Jen

--
Jim Holtman
Cincinnati, OH
+1 513 646 9390

What is the problem you are trying to solve?
Re: [R] Convert Time from HH.mm to Decimal hours?
Is this what you want?

> x <- seq(as.POSIXct("2006-05-05 12:00"), by="17 min", length=20)
> x
 [1] "2006-05-05 12:00:00 GMT" "2006-05-05 12:17:00 GMT" "2006-05-05 12:34:00 GMT"
 [4] "2006-05-05 12:51:00 GMT" "2006-05-05 13:08:00 GMT" "2006-05-05 13:25:00 GMT"
 [7] "2006-05-05 13:42:00 GMT" "2006-05-05 13:59:00 GMT" "2006-05-05 14:16:00 GMT"
[10] "2006-05-05 14:33:00 GMT" "2006-05-05 14:50:00 GMT" "2006-05-05 15:07:00 GMT"
[13] "2006-05-05 15:24:00 GMT" "2006-05-05 15:41:00 GMT" "2006-05-05 15:58:00 GMT"
[16] "2006-05-05 16:15:00 GMT" "2006-05-05 16:32:00 GMT" "2006-05-05 16:49:00 GMT"
[19] "2006-05-05 17:06:00 GMT" "2006-05-05 17:23:00 GMT"
> # convert to decimal hours -- assuming all times are in the same day
> x.lt <- as.POSIXlt(x)
> x.hhdd <- x.lt$hour + x.lt$min/60 + x.lt$sec/3600
> x.hhdd
 [1] 12.00000 12.28333 12.56667 12.85000 13.13333 13.41667 13.70000 13.98333
 [9] 14.26667 14.55000 14.83333 15.11667 15.40000 15.68333 15.96667 16.25000
[17] 16.53333 16.81667 17.10000 17.38333

On 5/5/07, Alan E. Davis [EMAIL PROTECTED] wrote:

I have data in Unix time that I wish to plot several ways. I have learned to convert it to a vector of time strings, and for the sake of plotting times of day vs. heights of tide (to examine the seasonal component) I can extract HH:mm pretty swiftly and readily from Unix time using emacs calc. However, I have had trouble with the decimal hours that are needed (or perhaps not) for at least one of the polar plotting packages. Is there an easy way to do this in R? Thank you for any ideas.

Alan

--
Jim Holtman
Cincinnati, OH
+1 513 646 9390

What is the problem you are trying to solve?
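If the data are still raw Unix timestamps (seconds since 1970-01-01 00:00 UTC), the decimal hour of day can also be had without going through strings at all; this shortcut is my addition, not from the thread, and it yields UTC hours, so adjust via as.POSIXlt() if local time is wanted:

```r
## Sketch (not from the thread): decimal hour-of-day straight from
## numeric Unix time. The example timestamps are arbitrary.
unixtime  <- c(1146830400, 1146831420)
dec_hours <- (unixtime %% 86400) / 3600  # seconds into the UTC day -> hours
dec_hours                                # 12.00000 12.28333
```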
[R] NA in wilcox.test
Hello,

I'm trying to compare the allozyme data from two taxa. I have several columns of data (19 loci) for each species. I want to do a Mann-Whitney U-test, i.e. wilcox.test (two-sample Wilcoxon). When I try to run my code (the first two columns are 1: name of the species, 2: name of the individual) I get the error message:

Error in wilcox.test.default(CaScSc, CaScCo, alternative = "two.sided", :
  'x' must be numeric

I do have several NAs in the data, which is all I can figure that is non-numeric. Any suggestions as to the problem? Is it a problem with having several columns/sets of info for each individual? Thanks for any help anyone can give. (I've also used Arlequin and GDA, but want non-parametric tests.)

Michelle DePrenger-Levin

My code is:

scirconv2 = read.csv("CaScSc070420_2.csv", na.strings="?")
CaScSc = scirconv2[1:250, 3:21]
CaScCo = scirconv2[251:475, 3:21]
ScCoMWU = wilcox.test(CaScSc, CaScCo, alternative = "two.sided", mu = 0.5,
                      paired = FALSE, exact = NULL, correct = TRUE,
                      conf.int = TRUE, conf.level = 0.95)

some of the data (columns ADH TPI1 TPI2 SOD DIA1 MNR1 DIA2 MNR2 DIA3 ME AAT1 AAT2 G3PDH SDH SDH2 PGI2 PGD PGM2 MDH1 MDH3 IDH2; column alignment lost in transmission):

251 111 1111 111 1 11 2 14114
252 NA NA NA NA NA NA 1 NA NA 2 1 1 NA 2 NA NA NA NA NA
253 111 1112 112 1 11 2 14124
254 111 1111 111 1 11 2 14114
255 111 1112 112 1 11 2 14124
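No reply appears in this digest, but the error most likely arises because CaScSc and CaScCo are data frames (several columns each), while wilcox.test() expects numeric vectors for x and y. A hedged sketch of one way past the error message; whether pooling all loci into one vector is statistically sensible is a separate question:

```r
## Sketch: wilcox.test() wants numeric vectors; a multi-column data
## frame slice is not one. Flattening with unlist() resolves the error.
## Toy data stand in for the poster's CSV; pooling columns like this is
## shown only to illustrate the mechanics.
df1 <- data.frame(locus1 = c(1, 2, NA), locus2 = c(4, 5, 6))
df2 <- data.frame(locus1 = c(2, 2, 3),  locus2 = c(3, 5, 5))
x <- as.numeric(unlist(df1))   # numeric vector; NAs are dropped by the test
y <- as.numeric(unlist(df2))
wilcox.test(x, y, alternative = "two.sided", paired = FALSE)
```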
[R] Creating contingency table from mixed data
Hi,

I am new in R. Please help me in the following case. I have data in hand: http://www.nabble.com/file/8225/Data.txt Data.txt

There are some categorical (binary and nominal) and continuous variables. How can I get a generic RxC contingency table from this table? My main objective is to find the count in each cell and the mean of the continuous variables in each cell. Please reply. Thanks in advance.

--
View this message in context: http://www.nabble.com/Creating-contingency-table-from-mixed-data-tf3698055.html#a10341180
Sent from the R help mailing list archive at Nabble.com.
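No answer to this question appears in the digest, but the usual base-R tools for it are table() for the RxC cell counts and tapply() for per-cell means of a continuous variable. A hedged sketch with invented column names, since the linked Data.txt is not available here:

```r
## Sketch with made-up data: cross two categorical variables to get
## cell counts, then the mean of a continuous variable in each cell.
d <- data.frame(sex   = c("M", "F", "M", "F", "M"),
                group = c("a", "a", "b", "b", "b"),
                score = c(10, 12, 9, 14, 11))
table(d$sex, d$group)                        # RxC contingency counts
tapply(d$score, list(d$sex, d$group), mean)  # mean of score per cell
```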
[R] simple table ordering question
Hi all,

I'm sure this is simple but I don't get it. I have a table

mytable <- c(rep("Disagree", 37), rep("Agree", 64))
table(mytable)

this gives me

   Agree Disagree
      64       37

but I didn't ask for it to be in alphabetic order. How can I get it in the original order?

Disagree    Agree
      37       64

Thanks,
Jeff

Jeffrey M. Miller, PhD
Statistics & Data Analysis Consultant
CEO - AlphaPoint05, Inc
2792 SE 27th Ave, Gainesville, FL 32641
www.alphapoint05.net
mobile: 352-359-6611
Re: [R] simple table ordering question
Convert them to factors in which you specify the order:

> mytable <- factor(c(rep("Disagree", 37), rep("Agree", 64)),
+                   levels = c("Disagree", "Agree"))
> table(mytable)
mytable
Disagree    Agree
      37       64

On 5/5/07, AP05 [EMAIL PROTECTED] wrote:

Hi all, I'm sure this is simple but I don't get it. I have a table

mytable <- c(rep("Disagree", 37), rep("Agree", 64))
table(mytable)

this gives me Agree 64, Disagree 37, but I didn't ask for it to be in alphabetic order. How can I get it in the original order?

Thanks,
Jeff

--
Jim Holtman
Cincinnati, OH
+1 513 646 9390

What is the problem you are trying to solve?
Re: [R] loop in function
Actually I am not sure what you want exactly, but is it

df1 <- data.frame(b=c(1,2,3,4,5,5,6,7,8,9,10))
df2 <- data.frame(x=c(1,2,3,4,5), y=c(2,5,4,6,5), z=c(10, 8, 7, 9, 3))
df1 <- cbind(df1, `colnames<-`(sapply(with(df2, (x+y)/z),
                                      function(a, b) a/b, b=df1$b),
                               paste("goal", seq(nrow(df2)), sep="")))
round(df1, 2)
    b goal1 goal2 goal3 goal4 goal5
1   1  0.30  0.88  1.00  1.11  3.33
2   2  0.15  0.44  0.50  0.56  1.67
3   3  0.10  0.29  0.33  0.37  1.11
4   4  0.07  0.22  0.25  0.28  0.83
5   5  0.06  0.17  0.20  0.22  0.67
6   5  0.06  0.17  0.20  0.22  0.67
7   6  0.05  0.15  0.17  0.19  0.56
8   7  0.04  0.12  0.14  0.16  0.48
9   8  0.04  0.11  0.12  0.14  0.42
10  9  0.03  0.10  0.11  0.12  0.37
11 10  0.03  0.09  0.10  0.11  0.33

Each "goal" column corresponds to a row of df2. Alternatively, the sapply() call can be rewritten with apply():

apply(df2, 1, function(a, b) (a["x"] + a["y"])/(a["z"] * b), b=df1$b)

Hope this answered your question...

--- [EMAIL PROTECTED] wrote:

Dear Mailing-List,

I think this is a newbie question. However, I would like to integrate a loop in the function below, so that the script calculates, for each value in the data frame df1, the corresponding result from df2. Actually it takes only the first row. I hope that's clear. My goal is to apply the function for each data point in df1. Many thanks in advance. An example is as follows:

df1 <- data.frame(b=c(1,2,3,4,5,5,6,7,8,9,10))
df2 <- data.frame(x=c(1,2,3,4,5), y=c(2,5,4,6,5), z=c(10, 8, 7, 9, 3))
attach(df2)
myfun <- function(yxz) (x + y)/(z * df1$b)
df1$goal <- apply(df2, 1, myfun)
df1$goal

regards, kay
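The same all-pairs computation can be written more directly with outer(), which makes the structure explicit: entry [i, j] combines row i of df1 with row j of df2. This alternative is my sketch, not from the thread:

```r
## Sketch (not from the thread): outer() computes every combination of
## df1's b with df2's per-row ratio (x + y)/z in one step.
df1 <- data.frame(b = c(1, 2, 3, 4, 5, 5, 6, 7, 8, 9, 10))
df2 <- data.frame(x = c(1, 2, 3, 4, 5),
                  y = c(2, 5, 4, 6, 5),
                  z = c(10, 8, 7, 9, 3))
goal <- outer(1 / df1$b, with(df2, (x + y) / z))  # 11 x 5 matrix
colnames(goal) <- paste("goal", seq_len(nrow(df2)), sep = "")
round(cbind(df1, goal), 2)   # same table as in the reply above
```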