Re: [R] artificial data matrix with 100000 rows
On 9/10/07, Martin Maechler [EMAIL PROTECTED] wrote:

PS On 9/9/07, kevinchang [EMAIL PROTECTED] wrote: I tried to make a matrix of this size with either matrix() or array(). However, there seems to be a default limit on the number of rows. I got a sort of error message from R. To be specific,

  m <- matrix(ncol=3, nrow=100000)
  [ reached getOption("max.print") -- omitted 66667 rows ]

or

  a <- array(dim=c(10000,3,10))
  [ reached getOption("max.print") -- omitted 6667 row(s) and 6 matrix slice(s) ]

PS That is not an error message, I guess.

Definitely not, thank you, Paul! Also, they were not produced by what Kevin showed (namely assignments) but rather when he *prints* the contents of his huge matrix / array.

PS When the matrices are huge, R is unable to print them
PS totally on the screen, but all data are present.

Not at all unable!! R protects you from accidentally overflowing your console with a huge amount of nonsensical output. As the message above mentions, you should look at

  ?getOption
  ?options

and particularly the 'max.print' option. Is "[ reached getOption("max.print") ]" too difficult to read? You *can* increase the 'max.print' option as much as you like, and that's why I said "not at all unable" above.

Regards, Martin

PS For instance,
PS   m[(nrow(m)-10):nrow(m),]
PS         [,1] [,2] [,3]
PS    [1,]   NA   NA   NA
PS    [2,]   NA   NA   NA
PS    [3,]   NA   NA   NA
PS    [4,]   NA   NA   NA
PS    [5,]   NA   NA   NA
PS    [6,]   NA   NA   NA
PS    [7,]   NA   NA   NA
PS    [8,]   NA   NA   NA
PS    [9,]   NA   NA   NA
PS   [10,]   NA   NA   NA
PS   [11,]   NA   NA   NA

or rather just tail(m), tail(m, 11), head(m), str(m), etc.

PS See
PS   ?getOption

Yes indeed. Thanks, Martin, for your detailed comments. I have learned something from them. Paul

__ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
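[Editorial sketch of Martin's point, runnable on any recent R: the data are fully present in memory even when printing is truncated at the 'max.print' cap.]

```r
# The matrix is complete; only *printing* is capped at
# getOption("max.print") entries (99999 by default).
m <- matrix(ncol = 3, nrow = 100000)
nrow(m)                     # all 100000 rows exist
getOption("max.print")      # the default print cap
tail(m, 3)                  # inspect the end without a full print
options(max.print = 1e6)    # raise the cap if you really want full output
```

head(), tail() and str() are almost always preferable to raising the cap.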
Re: [R] artificial data matrix with 100000 rows
On 9/9/07, kevinchang [EMAIL PROTECTED] wrote: I tried to make a matrix of this size with either matrix() or array(). However, there seems to be a default limit on the number of rows. I got a sort of error message from R. To be specific,

  m <- matrix(ncol=3, nrow=100000)
  [ reached getOption("max.print") -- omitted 66667 rows ]

or

  a <- array(dim=c(10000,3,10))
  [ reached getOption("max.print") -- omitted 6667 row(s) and 6 matrix slice(s) ]

That is not an error message, I guess. When the matrices are huge, R is unable to print them totally on the screen, but all data are present. For instance,

  m[(nrow(m)-10):nrow(m),]
        [,1] [,2] [,3]
   [1,]   NA   NA   NA
   [2,]   NA   NA   NA
   [3,]   NA   NA   NA
   [4,]   NA   NA   NA
   [5,]   NA   NA   NA
   [6,]   NA   NA   NA
   [7,]   NA   NA   NA
   [8,]   NA   NA   NA
   [9,]   NA   NA   NA
  [10,]   NA   NA   NA
  [11,]   NA   NA   NA

See ?getOption. Paul
[R] argument 'lib' is missing: using '/usr/lib/R/library'
Dear All, When installing packages, I get the following warning:

  install.packages("sqldf")
  Warning in install.packages("sqldf") :
    argument 'lib' is missing: using '/usr/lib/R/library'

Any ideas? The details of my R installation are:

  platform       i386-redhat-linux-gnu
  arch           i386
  os             linux-gnu
  system         i386, linux-gnu
  status
  major          2
  minor          5.1
  year           2007
  month          06
  day            27
  svn rev        42083
  language       R
  version.string R version 2.5.1 (2007-06-27)

R is here installed on Fedora 7. Thanks in advance, Paul
Re: [R] argument 'lib' is missing: using '/usr/lib/R/library'
On 9/8/07, Gabor Csardi [EMAIL PROTECTED] wrote: Paul, what is the question? If the question is why you get this warning message, the reason is that the 'lib' argument is missing and install.packages is using '/usr/lib/R/library'. If you want to get rid of the warning, supply the 'lib' argument.

Thanks, Gabor. Yes, I want to get rid of the warning message. Since the warning did not appear before, even without supplying the 'lib' argument, I thought that something was wrong here. Paul

On Sat, Sep 08, 2007 at 11:26:44AM +0100, Paul Smith wrote: Dear All, When installing packages, I get the following warning:

  install.packages("sqldf")
  Warning in install.packages("sqldf") :
    argument 'lib' is missing: using '/usr/lib/R/library'

Any ideas? The details of my R installation: R version 2.5.1 (2007-06-27), platform i386-redhat-linux-gnu, svn rev 42083. R is here installed on Fedora 7. Thanks in advance, Paul

-- Csardi Gabor [EMAIL PROTECTED] MTA RMKI, ELTE TTK
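[Editorial sketch of Gabor's suggestion. The exact library path varies per system; .libPaths() lists the library trees R currently knows about, and its first entry is where install.packages() installs by default.]

```r
# Where would packages go? .libPaths()[1] is the default target.
.libPaths()

# Supplying 'lib' explicitly avoids the "argument 'lib' is missing"
# warning. Not run here, since it needs network access:
# install.packages("sqldf", lib = .libPaths()[1])
```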
Re: [R] Running a PERL script from R
Dear Ken. You could also try and use RSPerl (http://www.omegahat.org/RSPerl/). It allows one to use R commands in Perl and vice-versa. regards, Paul

Pierce, Ken schreef: Is there a way to run a simple perl script from R?

Kenneth B. Pierce Jr. Research Ecologist Landscape Ecology, Modeling, Mapping and Analysis Team PNW Research Station - USDA-FS 3200 SW Jefferson Way, Corvallis, OR 97331 [EMAIL PROTECTED] 541 750-7393 http://www.fsl.orst.edu/lemma/gnnfire http://www.fsl.orst.edu/R_users/index.php

-- Drs. Paul Hiemstra Department of Physical Geography Faculty of Geosciences University of Utrecht Heidelberglaan 2 P.O. Box 80.115 3508 TC Utrecht Phone: +31302535773 Fax: +31302531145 http://intamap.geo.uu.nl/~paul
[R] Automatic detachment of dependent packages
Dear All, When one loads certain packages, some other dependent packages are loaded as well. Is there some way of detaching them automatically when one detaches the first package loaded? For instance,

  library(sqldf)
  Loading required package: RSQLite
  Loading required package: DBI
  Loading required package: gsubfn
  Loading required package: proto

but

  detach(package:sqldf)
  search()
   [1] ".GlobalEnv"        "package:gsubfn"    "package:proto"
   [4] "package:RSQLite"   "package:DBI"       "package:stats"
   [7] "package:graphics"  "package:grDevices" "package:utils"
  [10] "package:datasets"  "package:methods"   "Autoloads"
  [13] "package:base"

The packages RSQLite, DBI, gsubfn and proto were not detached. Thanks in advance, Paul
Re: [R] Automatic detachment of dependent packages
On 9/7/07, Barry Rowlingson [EMAIL PROTECTED] wrote: When one loads certain packages, some other dependent packages are loaded as well. Is there some way of detaching them automatically when one detaches the first package loaded? For instance, library(sqldf) loads RSQLite, DBI, gsubfn and proto, but after detach(package:sqldf), search() shows that RSQLite, DBI, gsubfn and proto were not detached.

The danger here is that after attaching sqldf you might attach some other package that needs, say, DBI; then when your cleanup routine detaches DBI, that other package dies because DBI isn't there. The way to do it would be to detach any packages that are only depended on by the package you are detaching. You'd have to call packageDescription("foo", fields="Depends") for currently attached packages to build the dependency tree and then work out which ones you can remove... There's a bit of recursive tree-walking in there, but it should be simple... Ummm...

Thanks, Barry and Gabor. Please, look at the following:

  library(sqldf)
  Loading required package: RSQLite
  Loading required package: DBI
  Loading required package: gsubfn
  Loading required package: proto
  packageDescription("sqldf", fields="Depends")
  [1] "R (>= 2.5.1), RSQLite (>= 0.5-5), gsubfn"

packageDescription does not mention the packages DBI and proto. Paul
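[Editorial sketch of the first step of Barry's idea: read an installed package's Depends field and split it into bare package names. The helper name deps_of is hypothetical, and this only handles the *direct* dependencies — the recursive tree-walk and the "is anything else still using it?" check are left out.]

```r
# Parse the Depends field of an installed package into package names,
# stripping version requirements such as "(>= 2.5.1)".
deps_of <- function(pkg) {
  d <- packageDescription(pkg, fields = "Depends")
  if (is.na(d)) return(character(0))   # no Depends field at all
  d <- strsplit(d, ",")[[1]]
  d <- sub("\\(.*\\)", "", d)          # drop version requirements
  d <- trimws(d)
  setdiff(d, "R")                      # "R" itself is not detachable
}

deps_of("stats")
```

Note that this misses packages pulled in via Imports (DBI and proto in the sqldf example), which is exactly the gap Paul observed.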
Re: [R] Delete query in sqldf?
On 9/7/07, Gabor Grothendieck [EMAIL PROTECTED] wrote: Yes, but delete does not return anything, so it's not useful on its own. In the devel version of sqldf you can pass multiple commands, so try this using the builtin data frame BOD, noting that the record with demand = 8.3 was removed:

  library(sqldf)
  Loading required package: RSQLite
  Loading required package: DBI
  Loading required package: gsubfn
  Loading required package: proto
  # overwrite with devel version of the sqldf.R file
  source("http://sqldf.googlecode.com/svn/trunk/R/sqldf.R")
  sqldf(c("delete from BOD where demand = 8.3", "select * from BOD"))
    Time__1 demand
  1       2   10.3
  2       3   19.0
  3       4   16.0
  4       5   15.6
  5       7   19.8

I see, Gabor, but I would find it more natural to have

  sqldf("delete from BOD where demand = 8.3")

working, with no second command. Paul

On 9/7/07, Paul Smith [EMAIL PROTECTED] wrote: Dear All, Is sqldf equipped with delete queries? I have tried delete queries but with no success. Thanks in advance, Paul
Re: [R] Optimization under an absolute value constraint
On 9/7/07, Phil Xiang [EMAIL PROTECTED] wrote: I need to optimize a multivariate function f(w, x, y, z, ...) under an absolute value constraint. For instance:

  min { (2x+y)(w-z) }

under the constraint: |w| + |x| + |y| + |z| = 1.0. Is there any R function that does this? Thank you for your help!

I think that the minimum value of the function f(x) := -2*x*(1-x), with 0 <= x <= 1, is also the minimum value of the objective function of your problem (but correct me if I am wrong). Thus,

     x   y     w     z
  -0.5   0   0.0  -0.5
  -0.5   0   0.1  -0.4
  -0.5   0   0.3  -0.2
  -0.5   0   0.5   0.0
   0.5   0  -0.5   0.0
   0.5   0  -0.4   0.1
   0.5   0  -0.2   0.3
   0.5   0   0.0   0.5

are all solutions for your problem. Paul
Re: [R] order intervals in a data.frame
On 9/6/07, João Fadista [EMAIL PROTECTED] wrote: I would like to know how I can order a data.frame by increasing dat$Interval (dat$Interval is a factor). There is an example below.

Original data.frame:

  dat
  Interval Number_reads
  0-100             685
  200-300           744
  100-200          1082
  300-400          4213

Desired_dat:

  Interval Number_reads
  0-100             685
  100-200          1082
  200-300           744
  300-400          4213

What about

  Desired_dat <- dat[match(sort(dat$Interval), dat$Interval),]

? Paul
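[Editorial sketch of an alternative not in the original reply: ordering by the numeric start of each interval via order(). This avoids relying on the factor's alphabetical level order, which would put, say, "1000-1100" before "200-300".]

```r
# Rebuild the example and sort by the number before the dash.
dat <- data.frame(Interval     = c("0-100", "200-300", "100-200", "300-400"),
                  Number_reads = c(685, 744, 1082, 4213))
start <- as.numeric(sub("-.*", "", dat$Interval))  # 0, 200, 100, 300
dat[order(start), ]
```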
Re: [R] Embedding Audio Files in Interactive Graphs
Hi Michael Lawrence wrote: On 9/4/07, Sam Ferguson [EMAIL PROTECTED] wrote: Thanks for your reply Bruno. No - as I said, I know how to do that - the movie15 and the multimedia package are basically the same, and it is relatively straightforward to get an audio file into a pdf with them. However, real interactivity is not easily achieved in latex IMO (as it's not its purpose). At least I'm hoping for a bit more flexibility. R seems like a better place to do interactivity, and with the field of information visualisation pointing out that interactivity is a very useful element for investigation of data it seems that clicking around graphical displays may become more and more popular in time. In my field I'm interested in audio data, and so simple interactive visual and auditory displays would be great. A (very useful) start would be 5 separate waveform plots that would play their appropriate sounds when clicked. More complex figures could plot in a 2d space and allow selection of data points or ranges perhaps. I love R for graphics and for Sweave though, and would like to use it if possible - ideally it would be to produce a figure that included the appropriate audiofiles and interactive scripts, which could then be incorporated into a latex document \includegraphics. However, from the deafening silence on this list it seems like I may be attempting to push a square block through a round hole unfortunately. Seems I am back to Matlab and handle graphics - but it won't do this properly either. Lots of things can be embedded into PDF documents, like javascript, flash and svg. Maybe it would be feasible to use the gridSVG package to output some graphics as svg with javascript to play the sounds and embed that into a pdf? The short answer is that R cannot do this sort of thing and is unlikely to be able to do it anytime soon. 
The basic problem is that core R graphics has no concept of animation, audio, hyperlinks, etc., AND there is no way to access these features on devices that do have these concepts (e.g., PDF). It would be nice to change that, but it is a large redesign problem that is not anywhere near the top of anyone's todo list (to my knowledge). As Michael mentioned, the gridSVG package allows you to draw using the grid package and access some of the fancier SVG features (including embedding scripts). It has its own problems of course, but may be worth trying. Paul

Cheers Sam On 03/09/2007, at 5:39 PM, Bruno C.. wrote: Are you asking how to include an audio file into a pdf? This is already feasible via latex and the movie15 package ;) Ciao

Hi R-ers, I'm wondering if anyone has investigated a method for embedding audio files in R graphs (pdf format), and allowing their playback to be triggered interactively (by clicking on a graph element for instance). I know how to do this in latex pdfs with the multimedia package, but it seems that R would provide a more appropriate platform for many reasons. Thanks for any help you can provide. Sam Ferguson Faculty of Architecture, Design and Planning The University of Sydney

-- Sam Ferguson Faculty of Architecture The University of Sydney [EMAIL PROTECTED] +61 2 93515910 0410 719535
-- Dr Paul Murrell Department of Statistics The University of Auckland Private Bag 92019 Auckland New Zealand 64 9 3737599 x85392 [EMAIL PROTECTED] http://www.stat.auckland.ac.nz/~paul/
Re: [R] length of a string
On 9/5/07, João Fadista [EMAIL PROTECTED] wrote: I would like to know how I can compute the length of a string in a dataframe. Example:

  SEQUENCE              ID
  TGCTCCCATCTCCACGG     HR04FS00645
  ACTGAACTCCCATCTCCAAT  HR0595847847

I would like to know how to compute the length of each SEQUENCE.

Maybe the following code?

  data
                      var1 var2
  1       This is a string   12
  2 This is another string   34
  nchar(data[,1])
  [1] 16 22

Paul
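[Editorial sketch applying the reply to the poster's own data. The two-row data frame below is rebuilt from the question; stringsAsFactors = FALSE ensures nchar() sees character strings rather than factor codes (a real pitfall in R versions of that era).]

```r
# nchar() returns the number of characters in each SEQUENCE.
df <- data.frame(
  SEQUENCE = c("TGCTCCCATCTCCACGG", "ACTGAACTCCCATCTCCAAT"),
  ID       = c("HR04FS00645", "HR0595847847"),
  stringsAsFactors = FALSE
)
nchar(df$SEQUENCE)  # one length per row: 17 and 20
```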
Re: [R] how to sub-sample a variable on another file coordinates
Dear Yogesh, This question seems more appropriate for the r-sig-geo mailing list (https://stat.ethz.ch/mailman/listinfo/r-sig-geo). The sampling of the netcdf files can be done by using the overlay function from the sp package (available on CRAN). You would have to read both the netcdf file and the ASCII file into the spatial classes provided by sp (SpatialGrid for the CO2 data (I assume that it is a grid) and SpatialPoints for the ASCII data). This could be done using the rgdal package. Good luck! Paul

Yogesh Tiwari schreef: Hello 'R' Users, I have a monthly mean CO2 netcdf data file defined on a 1x1 lat by lon grid. I want to sub-sample this variable CO2 at the coordinates of another ASCII data file. The coordinates of the ASCII data file are:

  -24.01 152.06
  -18.58 150.19
  -13.46 148.35
   -8.29 147.03
   -3.14 146.19
    1.53 145.59
    7.08 145.33
   12.25 145.02
   17.46 144.31
   22.44 142.35
   27.53 141.26
   33.04 140.15
  -23.49 152.07
  -18.56 150.18
  -13.41 150.14
   -8.24 150.04
   -3.07 149.21
    2.05 148.31
    7.19 147.45
   12.37 147.03
   17.53 146.21
   22.56 144.47
   28.04 143.38
   32.54 142.26

Can anybody kindly help with this? Many thanks, Cheers, Yogesh

-- Drs. Paul Hiemstra Department of Physical Geography Faculty of Geosciences University of Utrecht Heidelberglaan 2 P.O. Box 80.115 3508 TC Utrecht Phone: +31302535773 Fax: +31302531145 http://intamap.geo.uu.nl/~paul
Re: [R] Function modification: how to calculate values for every combination?
On 9/2/07, Lauri Nikkinen [EMAIL PROTECTED] wrote: I have a function like this:

  fun <- function(x, y) {
    a <- log(10)*y
    b <- log(15)*x
    extr <- a-b
    extr
  }
  fun(2,3)
  [1] 1.491655
  x <- c(1,2,3)
  y <- c(4,5,6)
  fun(x, y)
  [1] 6.502290 6.096825 5.691360

How do I have to modify my function so that I can calculate results using every combination of x and y? I would like to produce a matrix which includes the calculated values in every cell, with names(x) and names(y) as row and column headers, respectively. Is the outer function a way to a solution?

Try the following code and adapt it to fill the matrix:

  fun <- function(x, y) {
    a <- log(10)*y
    b <- log(15)*x
    extr <- a-b
    extr
  }
  x <- c(1,2,3)
  y <- c(4,5,6)
  combs <- expand.grid(x,y)
  for (i in 1:nrow(combs)) cat(fun(combs[i,1], combs[i,2]), "\n")

Paul
Re: [R] Function modification: how to calculate values for every combination?
On 9/2/07, Paul Smith [EMAIL PROTECTED] wrote: [I wrote:] Try the following code and adapt it to fill the matrix:

  fun <- function(x, y) {
    a <- log(10)*y
    b <- log(15)*x
    extr <- a-b
    extr
  }
  x <- c(1,2,3)
  y <- c(4,5,6)
  combs <- expand.grid(x,y)
  for (i in 1:nrow(combs)) cat(fun(combs[i,1], combs[i,2]), "\n")

The complete code can be:

  fun <- function(x, y) {
    a <- log(10)*y
    b <- log(15)*x
    extr <- a-b
    extr
  }
  x <- c(1,2,3)
  y <- c(4,5,6)
  combs <- expand.grid(x,y)
  a <- vector()
  for (i in 1:nrow(combs)) a[i] <- fun(combs[i,1], combs[i,2])
  m <- matrix(a,3,3)
  rownames(m) <- x
  colnames(m) <- y
  m

Paul
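[Editorial sketch of the outer() route the original poster asked about: since fun is already vectorised, outer() builds the full x-by-y grid and applies the function to every pair, replacing the explicit loop above.]

```r
# outer(x, y, fun) evaluates fun at every (x, y) combination and
# returns a length(x) by length(y) matrix.
fun <- function(x, y) log(10) * y - log(15) * x
x <- c(1, 2, 3)
y <- c(4, 5, 6)
m <- outer(x, y, fun)
dimnames(m) <- list(x, y)  # x values as row names, y values as column names
m
```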
Re: [R] Synchronzing workspaces
Thanks for sharing your experience. In my case, the involved machines are Windows Vista, XP and 2000. Not sure whether that contributes to my problem or not. I will look into this further. I just noticed the two arguments ascii and compress for save. However, my .RData file was created by q() with "yes". The manual says that q() is equivalent to save(list = ls(all=TRUE), file = ".RData"). There seems to be no way to set ascii or compression of save through the q function, unless the q call is replaced explicitly with save(list = ls(all=TRUE), file = ".RData", ascii = TRUE). Paul.

- Original Message From: Gabor Grothendieck [EMAIL PROTECTED] To: Paul August [EMAIL PROTECTED] Cc: r-help@stat.math.ethz.ch Sent: Thursday, August 30, 2007 11:24:31 PM Subject: Re: [R] Synchronzing workspaces I haven't had a similar experience, but note that save has ascii= and compress= arguments. You could check if varying those parameter values makes a difference.

On 8/30/07, Paul August [EMAIL PROTECTED] wrote: I used to work on several computers and to use a flash drive to synchronize the workspace on each machine before starting to work on it. I found that .RData always caused some trouble: often it is corrupted even though there is no error in the copying process. Does anybody have a similar experience? Paul.

- Original Message From: Barry Rowlingson [EMAIL PROTECTED] To: Eric Turkheimer [EMAIL PROTECTED] Cc: r-help@stat.math.ethz.ch Sent: Wednesday, August 22, 2007 9:43:57 AM Subject: Re: [R] Synchronzing workspaces Eric Turkheimer wrote: How do people go about synchronizing multiple workspaces on different workstations? I tend to wind up with projects spread around the various machines I work on. I find that placing the directories on a server and reading them remotely tends to slow things down. If R were to store all its workspace data objects in individual files instead of one big .RData file, then you could use a revision control system like SVN.
Check out the data, work on it, check it in, then on another machine just update to get the changes. However, SVN doesn't work too well for binary files - conflicts being hard to resolve without someone backing down - so maybe it's not such a good idea anyway... On unix boxes and derivatives, you can keep things in sync efficiently with the 'rsync' command. I think there are GUI addons for it, and Windows ports. Barry
[R] Studio 11 compiling of Rmath standalone versus compiling R
Hi, I have a UNIX-Solaris-Sparc-Studio 11 compilers question. Since it's not exactly a programming question, I'm posting it here instead of to the developers list. Here's the relevant info: 1) I'm using SunOS 5.9 on a 32 bit machine without parallel processing. The chip is Sparc. 2) I've installed the Sun Studio 11 compilers. 3) I've successfully built and tested R 2.5.1 with Studio 11--all the make checks were successful as were my own tests. 4) The README to the Rmath standalone library warns that mlutils.c _cannot_ be compiled with Sun's cc compiler because it can't do compile-time floating point arithmetic. This is confirmed by the output I get when I run the make (after the full R package was built and installed): (cd ../../include; make Rmath.h) `Rmath.h' is up to date. Copying source files cc -I. -I../../../src/include -I../../../src/include -I./.. -I/usr/local/include -DHAVE_CONFIG_H -DMATHLIB_STANDALONE -g -c mlutils.c -o mlutils.o /usr/ucb/cc: language optional software package not installed *** Error code 1 make: Fatal error: Command failed for target `mlutils.o' Current working directory /export/home/ploua/R_HOME/R-2.5.1/src/nmath/standalone *** Error code 1 make: Fatal error: Command failed for target `static' 5) My question: Why does the full R package compile and run without difficulty using the same Studio 11 compilers? I'd like to be able to build the Rmath standalone library without mixing compilers on this machine, i.e., I don't want to use gcc and Studio 11 together. Is there some way to get Studio 11 to compile the standalone library by altering the makefile? Thanks for any advice/explanation people can offer. I'm a newcomer to compiling code on UNIX. 
Paul Louisell 650-833-6254 [EMAIL PROTECTED] Associate Predictive Modeler (Statistician) Modeling Data Analytics ARPC
[R] R CMD BATCH: cat does not print
Dear All, I am trying to write my first R script. The code is simply

  cat("Hello!\n")

However, when I run

  $ R CMD BATCH myscript.R

I do not see Hello! on the console. I am using Fedora 7 (Linux) and R-2.5.1. Any ideas? Thanks in advance, Paul
Re: [R] R CMD BATCH: cat does not print
On 8/30/07, Barry Rowlingson [EMAIL PROTECTED] wrote: I am trying to write my first R script. The code is simply cat("Hello!\n"). However, when I run $ R CMD BATCH myscript.R, I do not see Hello! on the console. I am using Fedora 7 (Linux) and R-2.5.1. Any ideas?

You shouldn't see it on the console! BATCH writes its output to a file. You should find a file called myscript.Rout that does contain the 'Hello!'.

Thanks, Barry. Indeed, the file myscript.Rout exists and contains the output of cat. I was expecting behavior similar to bash scripts. And by the way, can't an R script write only to the console, and just what one tells it to write, like a bash script? Paul
Re: [R] R CMD BATCH: cat does not print
On 8/30/07, Vladimir Eremeev [EMAIL PROTECTED] wrote: Use Rscript:

  Rscript myscript.R

or

  Rscript -e 'cat("Hello!\n")'

will show Hello! on the console. R CMD BATCH writes its output to the file myscript.Rout.

Thanks, Vladimir. Rscript is exactly what I was looking for! Paul

Paul Smith wrote: Dear All, I am trying to write my first R script. The code is simply cat("Hello!\n"). However, when I run $ R CMD BATCH myscript.R, I do not see Hello! on the console. I am using Fedora 7 (Linux) and R-2.5.1. Any ideas? Thanks in advance, Paul
Re: [R] Nested functions.
nalluri pratap schreef: Hi All, I have two variables X, Y. The question is: if the value of X is equal to one, then the values in Y have to be reversed; otherwise it should not perform any action. I think this should be done using the lapply function? Example Y values: 1 2 3 NA X Y (ORIGINAL) Y (REVERSED) 1 NA 1 0 --- 1 2 3 1 1 4 1 3 2 ... Can anyone provide a solution to this? Thanks, Pratap

Dear Pratap, You could try something like this:

  x = c(1,0,1,1)
  y = c(1,2,3,NA)
  y_rev = rev(y)
  ifelse(x == 1, y_rev, y)

hope this helps, Paul

-- Drs. Paul Hiemstra Department of Physical Geography Faculty of Geosciences University of Utrecht Heidelberglaan 2 P.O. Box 80.115 3508 TC Utrecht Phone: +31302535773 Fax: +31302531145 http://intamap.geo.uu.nl/~paul
Re: [R] R and Web Applications
Dear Chris, I use Python (http://www.python.org) in combination with Rpy (http://rpy.sourceforge.net/). Rpy enables you to use R commands inside Python, not the other way around. Works quite well, also for different R versions (I currently run R 2.5.1 under Linux). cheers, Paul Chris Parkin schreef: Hello, I'm curious to know how people are calling R from web applications (I've been looking for Perl but I'm open to other languages). After doing a search, I came across the R package RSPerl, but I'm having difficulties getting it installed (on Mac OSX). I believe the problem probably has to do with changes in R since the package release. Below you will see where the installation process comes to an end. Does anyone have any suggestions, or perhaps a direction to point me in? Thanks in advance for your insight! Chris * Installing to library '/Library/Frameworks/R.framework/Resources/library' * Installing *source* package 'RSPerl' ... checking for perl... /usr/bin/perl No support for any of the Perl modules from calling Perl from R. * Set PERL5LIB to /Library/Frameworks/R.framework/Versions/2.5/Resources/library/RSPerl/perl * Testing: -F/Library/Frameworks/R.framework/.. -framework R Using '/usr/bin/perl' as the perl executable Perl modules (no): Adding R package to list of Perl modules to enable callbacks to R from Perl Creating the C code for dynamically loading modules with native code for Perl: R modules: R; linking: checking for gcc... gcc checking for C compiler default output file name... a.out checking whether the C compiler works... yes checking whether we are cross compiling... no checking for suffix of executables... checking for suffix of object files... o checking whether we are using the GNU C compiler... yes checking whether gcc accepts -g... yes checking for gcc option to accept ISO C89... 
none needed Support R in Perl: yes configure: creating ./config.status config.status: creating src/Makevars config.status: creating inst/scripts/RSPerl.csh config.status: creating inst/scripts/RSPerl.bsh config.status: creating src/RinPerlMakefile config.status: creating src/Makefile.PL config.status: creating cleanup config.status: creating src/R.pm config.status: creating R/perl5lib.R making target all in RinPerlMakefile RinPerlMakefile:5: /Library/Frameworks/R.framework/Resources/etc/Makeconf: No such file or directory make: *** No rule to make target `/Library/Frameworks/R.framework/Resources/etc/Makeconf'. Stop. [[alternative HTML version deleted]] __ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code. -- Drs. Paul Hiemstra Department of Physical Geography Faculty of Geosciences University of Utrecht Heidelberglaan 2 P.O. Box 80.115 3508 TC Utrecht Phone: +31302535773 Fax:+31302531145 http://intamap.geo.uu.nl/~paul __ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
Re: [R] Synchronizing workspaces
I used to work on several computers, using a flash drive to synchronize the workspace on each machine before starting work on it. I found that the .RData file always caused some trouble: it was often corrupted even though there was no error in the copying process. Does anybody have similar experience? Paul. - Original Message From: Barry Rowlingson [EMAIL PROTECTED] To: Eric Turkheimer [EMAIL PROTECTED] Cc: r-help@stat.math.ethz.ch Sent: Wednesday, August 22, 2007 9:43:57 AM Subject: Re: [R] Synchronizing workspaces Eric Turkheimer wrote: How do people go about synchronizing multiple workspaces on different workstations? I tend to wind up with projects spread around the various machines I work on. I find that placing the directories on a server and reading them remotely tends to slow things down. If R were to store all its workspace data objects in individual files instead of one big .RData file, then you could use a revision control system like SVN. Check out the data, work on it, check it in, then on another machine just update to get the changes. However, SVN doesn't work too well for binary files - conflicts being hard to resolve without someone backing down - so maybe it's not such a good idea anyway... On unix boxes and derivatives, you can keep things in sync efficiently with the 'rsync' command. I think there are GUI addons for it, and Windows ports. Barry
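Barry's one-file-per-object idea can be sketched in a few lines of R. This is an illustration, not an existing package: the helper names are made up, and it assumes a reasonably recent R with saveRDS()/readRDS().

```r
# hypothetical helpers: keep each workspace object in its own .rds file,
# so a version-control system or rsync can track them individually
save_workspace_files <- function(dir = "workspace") {
  dir.create(dir, showWarnings = FALSE)
  for (nm in ls(envir = .GlobalEnv))
    saveRDS(get(nm, envir = .GlobalEnv), file.path(dir, paste0(nm, ".rds")))
}

load_workspace_files <- function(dir = "workspace") {
  for (f in list.files(dir, pattern = "\\.rds$", full.names = TRUE))
    assign(sub("\\.rds$", "", basename(f)), readRDS(f), envir = .GlobalEnv)
}
```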
Re: [R] Q: how to interrupt long calculation?
On 8/29/07, D. R. Evans [EMAIL PROTECTED] wrote: The subject says it all, really: I've tried hitting Control-C (multiple times), but that doesn't seem to be a reliable way to interrupt a long calculation. What is the right way to interrupt a calculation that has been proceeding for several minutes and shows no sign of finishing soon? I forgot to mention that this is on an amd64 system running Dapper (in case that makes any difference). Assuming that you are using Linux, try the command killall R. Every running R process will be killed immediately, and then you can start R again. Paul
[R] Nodes edges with similarity matrix
Hello, I apologise if someone has already answered this, but I searched and googled and didn't find anything. I have a matrix which gives me the similarity of each item to each other. I would like to turn this matrix into something like what they have in the graph package, with nodes and edges: http://cran.r-project.org/doc/packages/graph.pdf . However, I cannot find a method to convert my matrix to an object that graph can use. My similarity matrix looks like: sim[1:4,] a b c d [a] 1.0 0.0223676 0.9723831 0.3943310 [b] 0.325141612 1.000 0.9644216 0.5460461 [c] 0.002109751 0.3426540 1.000 0.7080224 [d] 0.250153137 0.1987485 0.7391222 1.000 Please don't get caught up in the numbers; I simply made these up to illustrate. I have not yet produced the code to make my similarity matrix. Does anyone know of a method to do this, or do I have to write something myself? :( If I do, any starter code would be appreciated :D jj. If I've read something wrong or misunderstood, my apologies. cheers, Paul -- Research Technician Mass Spectrometry The Scripps Research Institute
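One route (a sketch of mine, not from the thread) is the igraph package rather than graph: its graph_from_adjacency_matrix() (graph.adjacency() in older igraph versions) builds a weighted graph directly from a square matrix. Thresholding weak similarities keeps the plot readable; the 0.5 cutoff here is arbitrary.

```r
library(igraph)

set.seed(1)
# toy similarity matrix, standing in for 'sim' from the post
sim <- matrix(runif(16), 4, 4, dimnames = list(letters[1:4], letters[1:4]))

adj <- (sim + t(sim)) / 2   # symmetrise: similarity is treated as undirected
adj[adj < 0.5] <- 0         # drop weak similarities (arbitrary cutoff)
diag(adj) <- 0              # no self-loops

g <- graph_from_adjacency_matrix(adj, mode = "undirected", weighted = TRUE)
plot(g, edge.width = 3 * E(g)$weight)   # thicker edges = more similar
```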
Re: [R] Forall symbol with plotmath/grid
Hi Michael Hoffman wrote: I am trying to get the forall symbol (upside-down A) as part of the label of a lattice plot. Is there an easy way to do this? It is easy enough to produce a forall on its own ... library(grid) grid.text("\042", gp=gpar(fontfamily="symbol")) ... but combining that with other text is trickier. One approach is to draw separate pieces of text next to each other ... grid.newpage() tg <- textGrob("whatever ", just=c("left", "bottom")) grid.draw(tg) grid.text("\042", x=unit(.5, "npc") + grobWidth(tg), just=c("left", "bottom"), gp=gpar(fontfamily="symbol")) ... but that can get tedious. A better solution is to use plotmath, but the problem there is that you cannot access the forall symbol (even though it is in the symbol font). I have added a feature to plotmath in the development version (to be R 2.6.0) to make it easier to get at any symbol you want. In the next version of R, you will be able to do something like this ... grid.newpage() grid.text(expression("whatever " * symbol("\042"))) Paul -- Dr Paul Murrell Department of Statistics The University of Auckland Private Bag 92019 Auckland New Zealand 64 9 3737599 x85392 [EMAIL PROTECTED] http://www.stat.auckland.ac.nz/~paul/
Re: [R] figure-definition and heatmap question
Hi Antje wrote: Is there no one who could help me with this? Antje schrieb: Hello, I have two questions. I'd like to visualize data with a heatmap and I have the following testcase: x <- rnorm(256) nx <- x + abs(min(x)) nnx <- 255/max(nx) * nx x <- matrix(nnx, 16, 16) rownames(x) <- c("A","B","C","D","E","F","G","I","H","J","K","L","M","N","O","P") par(fin=c(8.0,8.0)) cp <- colorRampPalette(c("white","springgreen","darkgreen"), space="Lab") heatmap(x, Rowv = NA, Colv = NA, scale="none", col=cp(200)) I defined the figure region to make sure that each position is a square. But with these settings I get the following output (though it looks nice): x <- rnorm(256) nx <- x + abs(min(x)) nnx <- 255/max(nx) * nx x <- matrix(nnx, 16, 16) rownames(x) <- c("A","B","C","D","E","F","G","I","H","J","K","L","M","N","O","P") par(fin=c(8.0,8.0)) cp <- colorRampPalette(c("white","springgreen","darkgreen"), space="Lab") heatmap(x, Rowv = NA, Colv = NA, scale="none", col=cp(200)) Fehler in par(op) : ungültiger Wert für den Grafikparameter "fig" spezifiziert (i.e., Error in par(op): invalid value specified for the graphics parameter "fig") par(fin=c(8.0,8.0)) cp <- colorRampPalette(c("white","springgreen","darkgreen"), space="Lab") heatmap(x, Rowv = NA, Colv = NA, scale="none", col=cp(200)) Fehler in par(op) : ungültiger Wert für den Grafikparameter "fig" spezifiziert par()$fig [1] 7.267443e-04 1.305448e-02 -2.862294e-17 9.876543e-01 par()$fin [1] 0.08834875 7.06790021 Why do I get this error? Why do the parameters have these strange values (though I set the fin parameter before...)? You are getting the error because you are setting the figure region to be larger than the current device (typically 6 or 7 inches wide/high). You SHOULD be getting the error when you try par(fin), BUT there is a check missing in the C code, so what happens is that heatmap saves your par settings and then tries to reset them (this is where par(op) comes from), and because it saves BOTH par(fig) and par(fin) it resets both of them. When it resets par(fig) there IS a check on the values; the values are larger than the current device, so you get the error.
Now, because there is an error in resetting par(fig), that parameter is not reset, so when you type par()$fin (or, equivalently, par("fin")) after the heatmap() call, you get the last setting that heatmap() did, which was from a layout inside heatmap, and so par(fig) is NOT what you set. Finally, there is no point in setting par(fig) before heatmap() because heatmap() is one of those functions that takes over the whole device anyway, so your par(fig), even if it was valid, would have no effect. If you want to make the heatmap() plot take up less of the page, you could set outer margins (see par(oma)), e.g., ... par(oma=rep(4, 4)) heatmap(x, Rowv = NA, Colv = NA, scale="none", col=cp(200)) If you want to make sure that each position in the heatmap is square, DO NOTHING, because the layout that heatmap() sets up is using respect, so the image will be square no matter what you do. Paul And another question concerning the heatmap: may I force the function to plot A1 at the upper left corner instead of the lower left? I'll be glad about any idea how to solve these problems... Ciao, Antje -- Dr Paul Murrell Department of Statistics The University of Auckland Private Bag 92019 Auckland New Zealand 64 9 3737599 x85392 [EMAIL PROTECTED] http://www.stat.auckland.ac.nz/~paul/
Re: [R] figure-definition and heatmap question
Hi Antje wrote: Hi Paul, You are getting the error because you are setting the figure region to be larger than the current device (typically 6 or 7 inches wide/high). You SHOULD be getting the error when you try par(fin), BUT there is a check missing in the C code, so what happens is that heatmap saves your par settings and then tries to reset them (this is where par(op) comes from), and because it saves BOTH par(fig) and par(fin) it resets both of them. When it resets par(fig) there IS a check on the values; the values are larger than the current device, so you get the error. Now, because there is an error in resetting par(fig), that parameter is not reset, so when you type par()$fin (or, equivalently, par("fin")) after the heatmap() call, you get the last setting that heatmap() did, which was from a layout inside heatmap, and so par(fig) is NOT what you set. Finally, there is no point in setting par(fig) before heatmap() because heatmap() is one of those functions that takes over the whole device anyway, so your par(fig), even if it was valid, would have no effect. If you want to make the heatmap() plot take up less of the page, you could set outer margins (see par(oma)), e.g., ... par(oma=rep(4, 4)) heatmap(x, Rowv = NA, Colv = NA, scale="none", col=cp(200)) Thank you very much for the explanation. Now I understand at least the strange fig/fin values ;) There is only one question left: If you want to make sure that each position in the heatmap is square, DO NOTHING, because the layout that heatmap() sets up is using respect, so the image will be square no matter what you do. Okay, for the example I've chosen, that might be true. My initial reason to try to force it to squares was the visualization of a matrix which is not square (e.g. 12x8). In this case heatmap stretches the coloured areas into rectangles. I planned to set the figure region with the same length/width ratio as the matrix to get squares...
If I understood everything now, I have to think about something other than heatmap to make sure I get squares, right? The original heatmap() author may need to confirm this, but from my look at the code, yes. Paul Thanks, Jim, I'll test this method for my purpose :) Ciao, Antje Paul And another question concerning the heatmap: may I force the function to plot A1 at the upper left corner instead of the lower left? I'll be glad about any idea how to solve these problems... Ciao, Antje -- Dr Paul Murrell Department of Statistics The University of Auckland Private Bag 92019 Auckland New Zealand 64 9 3737599 x85392 [EMAIL PROTECTED] http://www.stat.auckland.ac.nz/~paul/
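If square cells for a non-square matrix are the goal, one lower-level sketch (mine, not from the thread; it gives up heatmap()'s dendrograms and layout) uses image() with an aspect ratio of 1:

```r
# 12 x 8 example matrix, like Antje's case
x  <- matrix(rnorm(96), nrow = 12, ncol = 8)
cp <- colorRampPalette(c("white", "springgreen", "darkgreen"))

# image(x, y, z) wants z with dim c(length(x), length(y)), so transpose;
# flipping the row order puts row 1 of the matrix at the top, and
# asp = 1 keeps every cell square regardless of the device shape
image(1:ncol(x), 1:nrow(x), t(x[nrow(x):1, ]), asp = 1,
      col = cp(200), axes = FALSE, xlab = "", ylab = "")
box()
```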
Re: [R] Calculating diameters of cirkels in a picture.
On 8/23/07, Bart Joosen [EMAIL PROTECTED] wrote: Maybe this is more a programming question than a specific R-project question, but maybe there is someone who can point me in the right direction. I have a picture of circles which I took with a digital camera. Now I want to use the diameters of the circles in the picture for analysis in R. I can use pixmap to import the picture, but how do I find the outside circles and calculate the diameter? It was pointed out that I could use the edci package, but then I need to preprocess the data to reduce the number of points; otherwise it takes a long time and my computer crashes. If you want to see such a picture, I cropped a larger one and highlighted the circle which is of interest. In the real world, this is a plate with 36 circles, which should all be measured. www.users.skynet.be/fa244930/fotos/outlined.jpg You mean the diameter measured in number of pixels? Paul
Re: [R] Formatting Sweave in R-News
Hi Arjun Ravi Narayan wrote: Hi, I am editing a document for submission to the R News newsletter, and in my article my Sweave code inserts a dynamically generated PDF report that my R program generates. However, when I insert the PDF using the following Sweave code: \newpage \includegraphics[scale=1.0]{\Sexpr{print(location)}} \newpage (in TeX this looks like): \newpage \includegraphics[scale=1.0]{/home/arjun/sample.pdf} \newpage Try putting your image in a figure* environment (it should go the full width of the page). Paul However, the R News style package over-rides everything that I can set (including using the minipage option) to make my included PDF small. Part of the problem is that the R News style specifies two-column formatting, and so the PDF is shrunk to fit in one column. How can I, for just one page, over-ride the styles to include the PDF? Even if I hard-hack the graphics to be scaled up in size, that does not get rid of the vertical line between the two columns, which breaks my image. I realise that this is not an R problem but more a LaTeX problem, but I am hoping that somebody has faced similar problems with the R News styles and has an idea of how to do this. Thank you, Yours sincerely, -- Dr Paul Murrell Department of Statistics The University of Auckland Private Bag 92019 Auckland New Zealand 64 9 3737599 x85392 [EMAIL PROTECTED] http://www.stat.auckland.ac.nz/~paul/
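Paul's figure* suggestion might look like this in the article source (a sketch; the file path is taken from the question, and the [t] placement option is an assumption):

```latex
% full-width float: spans both columns of the two-column R News layout
\begin{figure*}[t]
  \centering
  \includegraphics[width=\textwidth]{/home/arjun/sample.pdf}
  \caption{The dynamically generated report.}
\end{figure*}
```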
Re: [R] Evaluating f(x(2,3)) on a function f- function(a,b){a+b}
On 8/22/07, Søren Højsgaard [EMAIL PROTECTED] wrote: I have a function and a vector, say f <- function(a,b){a+b} x <- c(2,3) I want to evaluate f on x in the sense of computing f(x[1],x[2]). I would like it to be so that I can write f(x). (I know I can write a wrapper function g <- function(x){f(x[1],x[2])}, but this is not really what I am looking for). Is there a general way of doing this (programmatically)? (E.g. by unpacking the elements of x and putting them in the right places when calling f...) I've looked under formals, alist etc. but so far without luck. I hope that the following helps: f <- function(x) {sum(x)} f(c(2,3)) [1] 5 f(c(2,3,5)) [1] 10 Paul
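Rewriting f to take a single vector, as above, works for sums, but the general mechanism Søren asked about (unpacking a vector into the argument list of an unchanged f) is do.call(); a short sketch:

```r
f <- function(a, b) a + b
x <- c(2, 3)

# do.call() applies f to a list of arguments, so converting the
# vector to a list "unpacks" its elements into a and b
do.call(f, as.list(x))   # same as f(2, 3), i.e. 5
```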
Re: [R] Differentiation
On 8/20/07, nalluri pratap [EMAIL PROTECTED] wrote: Try library(numDeriv) ?genD Pratap Shubha Vishwanath Karanth [EMAIL PROTECTED] wrote: Hi, Could anyone tell me what commands are used in R to do 1. Differentiation 2. the Newton-Raphson method (numerical analysis in general...)? Are there any packages specifically for this? In addition, you can use Ryacas: «Ryacas (google code name ryacas) is an R package that allows R users to access the yacas computer algebra system from within R. It can be used for computer algebra, exact arithmetic, ASCII pretty printing and R to TeX output.» Paul
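Two further points worth noting (my additions, not from the thread): base R can differentiate simple expressions symbolically with D()/deriv(), and Newton-Raphson takes only a few lines to write by hand.

```r
# symbolic derivative of a simple expression (base R)
dfdx <- D(expression(x^2 - 2), "x")   # gives the expression 2 * x

# a minimal Newton-Raphson iteration: x_{n+1} = x_n - f(x_n) / f'(x_n)
newton <- function(f, df, x0, tol = 1e-8, maxit = 100) {
  x <- x0
  for (i in seq_len(maxit)) {
    step <- f(x) / df(x)
    x <- x - step
    if (abs(step) < tol) break
  }
  x
}

newton(function(x) x^2 - 2, function(x) 2 * x, x0 = 1)   # approximately sqrt(2)
```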
Re: [R] Help with npmc
I can't seem to get npmc to make a comparison to a control level: summary(npmc(brain), type="BF", control=1) $`Data-structure` group.index class.level nobs c 1 c 30 l 2 l 30 r 3 r 30 $`Results of the multiple Behrens-Fisher-Test` cmp effect lower.cl upper.cl p.value.1s p.value.2s 1 1-2 0.643 0.4610459 0.8256208 0.08595894 0.14750647 2 1-3 0.444 0.2576352 0.6312537 0.99636221 0.75376639 3 2-3 0.328 0.1602449 0.4964218 1. 0.04476692 What elementary error am I making? Thanks, Paul
Re: [R] diffusing GIS data in maps
Hi Lawrence, You could use the gstat (geostatistics) package to perform an interpolation; it also requires the package sp. It offers inverse distance interpolation and several forms of kriging. Making an interpolation would look something like: library(gstat) # Also loads sp coordinates(geocode) = ~LAT + LONG # make a spatial object out of geocode grid = spsample(geocode, type = "regular", n = 1000) # make a map of target locations: a regular grid. n is the number of cells; increase this for more detail in the resulting map gridded(grid) = TRUE # Identifying this object as a grid interpolated = krige(VALUE ~ 1, geocode, grid) # Do inverse distance interpolation; data is 'geocode', target locations are 'grid'. spplot(interpolated, "var1.pred", sp.layout = list("sp.points", geocode)) # Plot the predictions (grid) and the points (geocode) hope this helps, Paul Lawrence D. Brenninkmeyer schreef: Hi- I am trying to find a way to diffuse GIS data on a European map. I have a dataset consisting of particular locations scattered across Europe, along with magnitude and value information. I can plot these as discrete points with something like the following: geocode is a dataframe with four columns: LAT; LONG; MAGNITUDE; VALUE. library(maps) library(mapdata) map("worldHires", regions=c("Germany", "Belgium", "Netherlands")) points(geocode$LONG, geocode$LAT, cex=geocode$MAGNITUDE / 2500, col=rainbow(length(geocode$VALUE), start=0, end=.4)[rank(geocode$VALUE)]) This gives me a map of Europe with my datapoints highlighted in two ways: magnitude is represented by the size of the point, and value is represented by the color. However, what I would really like is for there to be some sort of diffusion, such that instead of discrete points, the European map is covered in color so I can see more clearly whether there are regional patterns (something that will presumably look like this contour chart: http://addictedtor.free.fr/graphiques/RGraphGallery.php?graph=20 only on the European map).
I have absolutely no idea where to start because I can't find a function that will allow me to diffuse the datapoints on a map. Thank you for any help, ldb -- Drs. Paul Hiemstra Department of Physical Geography Faculty of Geosciences University of Utrecht Heidelberglaan 2 P.O. Box 80.115 3508 TC Utrecht Phone: +31302535773 Fax: +31302531145 http://intamap.geo.uu.nl/~paul
Re: [R] labelling plots with ancillary data in data.frame
Hi Wesley, Try the text() function. An example: a = rep(10,10) b = seq(1,10) plot(a,b) text(a, b, labels = b, pos = 4, offset = 0.7) ?text hth, Paul Wesley Roberts schreef: Hi All, I am busy using R to do some regression modelling and have been using plot(x,y,...) to visualise my variables. I would now like to label my points using data stored in the data.frame used for the regression analysis. For example, each of my data points is made up of a field-measured forest volume value and a remotely sensed vegetation estimate (NDVI). Each point is an enumeration plot, and I would like to label each of the points in the xy-scatterplot with their respective plot numbers. Is this possible in R, and if so, how do I go about doing it? Many thanks for your help, Wesley Wesley Roberts MSc. Researcher: Forest Assessment (Remote Sensing GIS) Forestry and Forest Products Research Centre CSIR Tel: +27 (31) 242-2353 Fax: +27 (31) 261-1216 http://ffp.csir.co.za/ To know the road ahead, ask those coming back. - Chinese proverb -- Drs. Paul Hiemstra Department of Physical Geography Faculty of Geosciences University of Utrecht Heidelberglaan 2 P.O. Box 80.115 3508 TC Utrecht Phone: +31302535773 Fax: +31302531145 http://intamap.geo.uu.nl/~paul
[R] rates and rate ratios for survey-weighted data
Hello, I am looking at data on child survival from a survey with a stratified two-stage design. I have been trying out the excellent survey package, with which I have done a descriptive analysis and Cox models. I need to calculate rates (with a child-year denominator) and various rate ratios plus confidence intervals (basically, something like the Stata commands strate and stmh used after svyset and stset), but I haven't been able to find a convenient way of doing this in R despite much searching. Is this possible, either with the survey package or any other way? Many thanks for any advice. Paul -- Dr. Paul Cleary, Specialist Registrar in Public Health, Chester Health Protection Unit.
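No reply is recorded here, but one hedged possibility is the survey package's ratio estimator: a rate with a person-time denominator is a ratio of two totals, which svyratio() estimates with design-based standard errors. A sketch under assumed names ('dat', 'psu', 'stratum', 'wt', 'deaths', 'childyears' are all hypothetical):

```r
library(survey)

# hypothetical design: data frame 'dat' with cluster, stratum, and weight
# columns, plus event counts ('deaths') and exposure ('childyears')
des <- svydesign(ids = ~psu, strata = ~stratum, weights = ~wt,
                 data = dat, nest = TRUE)

# rate = total events / total child-years, with a design-based SE
rate <- svyratio(~deaths, ~childyears, des)
confint(rate)
```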
Re: [R] invert 160000x160000 matrix
I don't think you can define a matrix this large in R, even if you have the memory. And even if you could, inverting it is another matter; other programs may have similar limitations. Paul Jiao Yang wrote: Can R invert a 160000x160000 matrix with all positive numbers? Thanks a lot!
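A quick back-of-the-envelope calculation supports this: the element count alone exceeds the 2^31 - 1 length limit on R vectors at the time, and one dense copy in double precision needs on the order of 200 GB.

```r
n <- 160000
n^2              # 2.56e10 elements: more than 2^31 - 1 (about 2.1e9)
n^2 * 8 / 2^30   # bytes for doubles -> roughly 190 gigabytes per copy
```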
Re: [R] Help using gPath
Hi Emilio Gagliardi wrote: haha Paul, It's important not only to post code, but also to make sure that other people can run it (i.e., include real data or have the code generate data or use one of R's predefined data sets). Oh, I hadn't thought of using the predefined datasets, thats a good idea! Also, isn't this next time ? :) By next time I meant, when I ask a question in the future, I didn't think you'd respond! So here is some code! library(reshape) library(ggplot2) theme_t - list(grid.fill=white,grid.colour=lightgrey,background.colour=black,axis.colour=dimgrey) ggtheme(theme_t) grp - c(2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3) time - c(1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2) cc - c(0.7271,0.7563,0.6979,0.8208,0.7521,0.7875,0.7563,0.7771,0.8208,0.7938,0.8083,0.7188,0.7521,0.7854,0.7979,0.7583,0.7646,0.6938,0.6813,0.7708,0.7375,0.8104,0.8104,0.7792,0.7833,0.8083,0.8021,0.7313,0.7958,0.7021 ,0.8167,0.8167,0.7583,0.7167,0.6563,0.6896,0.7333,0.8208,0.7396,0.8063,0.7083,0.6708,0.7292,0.7646,0.7667,0.775,0.8021,0.8125,0.7646,0.6917,0.7458,0.7833,0.7396,0.7229,0.7708,0.7729,0.8083,0.7771,0.6854,0.8417,0.7667,0.7063 ,0.75,0.7813,0.8271,0.7896,0.7979,0.625,0.7938,0.7583,0.7396,0.7583,0.7938,0.7333,0.7875,0.8146) data - as.data.frame(cbind(time,grp,cc)) data$grp - factor(data$grp,labels=c(Group A,Group B)) data$time - factor(data$time,labels=c(Pre-test,Post-test)) boxplot - qplot(grp, cc, data=data, geom=boxplot, orientation=horizontal, ylim=c(0.5,1), main=Hello World!, xlab=Label X, ylab=Label Y, facets=.~time, colour=red, size=2) boxplot + geom_jitter(aes(colour=steelblue)) + scale_colour_identity() + scale_size_identity() grid.gedit(ylabel, gp=gpar(fontsize=16)) Great. Thanks. Some example code of my own below ... 
There's a book that provides a full explanation and the (basic) grid chapter is online (see http://www.stat.auckland.ac.nz/~paul/RGraphics/rgraphics.html http://www.stat.auckland.ac.nz/~paul/RGraphics/rgraphics.html) Awesome, I'll check that out. Yep, the facilities for investigating the viewport and grob tree are basically inadequate. Based on some work Hadley did for ggplot, the development version of R has a slightly better tool called grid.ls() that can show how the grob tree and the viewport tree intertwine. That would allow you to see which viewport each grob was drawn in, which would help you, for example, to know which viewport you had to go to to replace a rectangle you want to remove. okie dokie, I'm ready to be amazed! hehe. Or perhaps amused ... Here's a partial extract from a sample session after running your code (NOTE this is using the development version of R; grid.ls() does not exist in R 2.5.1 or earlier): Inspect the grob tree with grid.ls() (similar to Hadley's current.grobTree(), but with different formatting) ... grid.ls() plot-surrounds GRID.cellGrob.118 background GRID.cellGrob.119 plot.gTree.113 background guide.gTree.90 background.rect.80 minor-horizontal.segments.82 minor-vertical.segments.84 # OUTPUT TRUNCATED ... It is not necessarily obvious which grob is which, but a little trial and error (e.g., grid.edit() to change the colour of a grob) shows that the border on the first panel is 'guide.rect.92', which is a child of 'plot.gTree.113' (NOTE the numbers come from a fresh R session). Use grid.get() to grab that gTree and inspect that further using grid.ls(), this time also showing the viewports involved ... 
grid.ls(grid.get(plot.gTree.113), viewports=TRUE, fullNames=TRUE) gTree[plot.gTree.113] viewport[layout] viewport[strip_h_1_1] upViewport[1] viewport[strip_h_1_2] upViewport[1] viewport[strip_v_1_1] upViewport[1] viewport[axis_h_1_1] upViewport[1] viewport[axis_h_1_2] upViewport[1] viewport[axis_v_1_1] upViewport[1] viewport[panel_1_1] upViewport[1] viewport[panel_1_2] upViewport[2] rect[background] downViewport[layout] downViewport[panel_1_1] gTree[guide.gTree.90] rect[background.rect.80] segments[minor-horizontal.segments.82] # OUTPUT TRUNCATED grid.ls(grid.get(plot.gTree.113), viewports=TRUE, print=grobPathListing) |plot.gTree.113 |plot.gTree.113::background layout::panel_1_1|plot.gTree.113::guide.gTree.90 layout::panel_1_1|plot.gTree.113::guide.gTree.90::background.rect.80 layout::panel_1_1|plot.gTree.113::guide.gTree.90::minor-horizontal.segments.82 layout::panel_1_1|plot.gTree.113::guide.gTree.90::minor-vertical.segments.84
Re: [R] Any special interest in R/pic interface?
Hi A couple of comments/ideas/rambles: (o) I've read about pic, but never used it, but what I saw looked very cool. (i) In a way, this sounds a bit like the R2HTML package, but instead of taking R objects from data analysis and converting them to HTML, you take R objects from plotting and convert them to pic. So you'd end up with, say, a picify() method for different R objects. This assumes that you have an R object to work with, but in the case of lattice plots that is true. You would have to do a lot of work yourself though (replicating work that lattice does) to determine where to draw each piece of the puzzle. (ii) In another way, this also sounds a bit like the gridSVG package (http://www.stat.auckland.ac.nz/~paul/index.html), but instead of converting grid objects to SVG, you convert them to pic. This does avoid going through the R graphics engine (C code), which munches everything into tiny little pieces, plus the objects you deal with already contain the information about where to draw, but the amount of semantic information that is retained can still be low. For example, lattice produces a pretty flat hierarchy of grid objects, so a lot of the information like this rectangle is actually a histogram bar is lost. OTOH, ggplot produces much richer grid objects from which some semantic information could possibly be retrieved. (iii) On a completely different tack, some pic-like operations can be done in R (grid) these days. The grobX(), grobY, grobWidth(), and grobHeight() functions, plus grid.curve() allow drawing to occur relative to the location and size of other objects in the scene. For example, see slides 14 and 24 in http://www.stat.auckland.ac.nz/~paul/Talks/rgraphs.pdf Paul (Ted Harding) wrote: Hi Folks, I'm wondering if there are people out there who would be interested in what would be involved in developing an interface between R graphics and the 'pic' language. 
Explanation: 'pic' has been part of the Unix 'troff' typesetting suite since very early days (the 1970s), and also of the GNU troff, 'groff'. Its function is to act as a preprocessor, translating textual descriptions of graphical displays into the formatting language used by troff. Example:

.PS
## Need x- and y-scale factors to exist before referring to them
xsc=1.0 ; ysc=1.0
## Define the basic graphics object: the histogram bar;
## uses positional parameters $1, $2, $3, $4 in the data line
define bar {
  box width ($2 - $1)*xsc height $3*ysc \
    with .sw at ($1*xsc,0) fill $4
}
## Draw the basic histogram
xsc=1.0 ; ysc=0.75
copy thru bar until EOT
-2.5   2.5  31.0  0.25
 2.5   7.5  69.0  0.25
 7.5  12.5  50.0  0.25
12.5  17.5   0.0  0.25
17.5  22.5   0.0  0.25
22.5  27.5   0.0  0.25
27.5  32.5   0.0  0.25
32.5  37.5   1.0  0.25
37.5  42.5   1.0  0.25
42.5  47.5   0.0  0.25
EOT
.PE

which will draw the histogram bars, each with a black border and grey-filled at grey level 0.25. The above is readily entered by hand (with the data copied from R or imported from a file), but it is a very simple example. Each bar is a 'pic' box object, with width equal to the difference between its upper and lower breakpoints (so variable widths can be accommodated), scaled by factor 'xsc'. The placement of the box is set by specifying that its SouthWest corner .sw is at (x,0), where 'x' is the lower of the two breakpoints for the box. When a histogram (one of my histograms, MH, in this case) has been constructed by

MH <- hist(..., plot=FALSE)

the first two columns above are available as MH$breaks[1:10] and MH$breaks[2:11]. The height of each box is set to the value of the 3rd column, scaled by factor 'ysc'; the values in the 3rd column are available in MH$counts. The 4th column (grey-level) is whatever you like.
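The R side of this is easy to sketch. The following is a hypothetical helper (writePic() is a made-up name; the file path, grey level, and fixed scale factors are my own choices, and only the bar block is emitted, not the axes):

```r
## Sketch: emit a 'pic' histogram block (as above) from a hist() result.
writePic <- function(MH, path = "hist.pic", greylevel = 0.25) {
  n <- length(MH$counts)
  ## columns: lower break, upper break, count, grey level
  dat <- cbind(MH$breaks[1:n], MH$breaks[2:(n + 1)], MH$counts, greylevel)
  con <- file(path, "w")
  writeLines(c(".PS",
               "xsc=1.0 ; ysc=0.75",
               "define bar { box width ($2 - $1)*xsc height $3*ysc with .sw at ($1*xsc,0) fill $4 }",
               "copy thru bar until EOT"), con)
  write.table(dat, con, row.names = FALSE, col.names = FALSE)
  writeLines(c("EOT", ".PE"), con)
  close(con)
}

MH <- hist(rnorm(100), plot = FALSE)
writePic(MH)   # writes hist.pic in the working directory
```

The resulting file can then be pasted into a troff source document, exactly as described above.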
The whole array can be constructed in R using a simple cbind(), and it is easy to see how to write an R routine which would output the whole of the above code block to a file, which could then be copied into a troff source document (which is ASCII text throughout). The above example is of course the bare basics. You would need to add extra 'pic' statements to draw the axes, with coordinate values as annotations (this is a straightforward loop in 'pic'). You could amend the code to cause the count-values (when non-zero) to be placed on the tops of the bars as follows:

define bar {
  box width ($2 - $1)*xsc height $3*ysc \
    with .sw at ($1*xsc,0) fill $4
  if ($3 > 0) then { sprintf("%.0f",$3) "" at top of last box }
}

The top of the box is the midpoint of its top side, and the function sprintf("%.0f",$3) does what R users would expect, producing a text object which is then stacked on top of the empty text object "", the whole being vertically and horizontally centred at the top of the box (this ensures that the visible text is just above the top side; otherwise it would be vertically centred on, i.e. cut by, this line). You
Re: [R] Help using gPath
Hi Emilio Gagliardi wrote: Hi Paul, I'm sorry for not posting code; I wasn't sure if it would be helpful without the data. Should I post the code and a sample of the data? I will remember to do that next time! It's important not only to post code, but also to make sure that other people can run it (i.e., include real data, or have the code generate data, or use one of R's predefined data sets). Also, isn't this "next time"? :)

grid.gedit(gPath("ylabel.text.382"), gp=gpar(fontsize=16))

OK, I think my confusion comes from the notation that current.grobTree() produces and what strings are required in order to make changes to the underlying grobs. But, from what you've provided, it looks like I can access each grob with its unique name, regardless of which parent it is nested in... that helps Yes. By default, grid will search the tree of all grobs to find the name you provide. You can even just provide part of the name and it will find partial matches (depending on argument settings). On the other hand, by specifying a path that specifies parent and child grobs, you can make sure you get exactly the grob you want. like to remove the left border on the first panel. I'd like to adjust the I'd guess you'd have to remove the grob background.rect.345 and then draw in just the sides you want, which would require getting to the right viewport, for which you'll need to study the viewport tree (see current.vpTree()) I did some digging into this and it seems pretty complicated; is there an example anywhere that makes sense to the beginner? The whole viewport/grob relationship is not clear to me. So, accessing viewports and removing objects and drawing new ones is beyond me at this point. I can get my mind around your example below because I can see the object I want to modify in the viewer, and the code changes a property of that object, click enter, and bang, the object changes.
When you start talking external pointers and finding viewports and pushing and popping grobs, I just get lost. I found the viewports for the grobTree; it looks like this: There's a book that provides a full explanation and the (basic) grid chapter is online (see http://www.stat.auckland.ac.nz/~paul/RGraphics/rgraphics.html)

viewport[ROOT]->(viewport[layout]->(viewport[axis_h_1_1]->(viewport[bottom_axis]->(viewport[labels], viewport[ticks])), viewport[axis_h_1_2]->(viewport[bottom_axis]->(viewport[labels], viewport[ticks])), viewport[axis_v_1_1]->(viewport[left_axis]->(viewport[labels], viewport[ticks])), viewport[panel_1_1], viewport[panel_1_2], viewport[strip_h_1_1], viewport[strip_h_1_2], viewport[strip_v_1_1]))

at that point I was like, ok, I'm done. :S Yep, the facilities for investigating the viewport and grob trees are basically inadequate. Based on some work Hadley did for ggplot, the development version of R has a slightly better tool called grid.ls() that can show how the grob tree and the viewport tree intertwine. That would allow you to see which viewport each grob was drawn in, which would help you, for example, to know which viewport you have to go to in order to replace a rectangle you want to remove. Something like ...

grid.gedit("geom_bar.rect", gp=gpar(col="green"))

Again, it would really help to have some code to run. My apologies, I thought the grobTree was sufficient in this case. Thanks very much for your help. Sorry to harp on about it, but if I had your code I could show you an example of how grid.ls() might help. Paul -- Dr Paul Murrell Department of Statistics The University of Auckland Private Bag 92019 Auckland New Zealand 64 9 3737599 x85392 [EMAIL PROTECTED] http://www.stat.auckland.ac.nz/~paul/ __ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
Re: [R] Positioning text in top left corner of plot
Hi Daniel Brewer wrote: Thanks for the replies, but I still cannot get what I want. I do not want the label inside the plot area, but in the top left of the paper, I suppose in the margins. When I try to use text to do this, it does not seem to plot outside the plot area. I have also tried mtext, but that does not really cut it, as I cannot get the label in the correct position. Ideally, it would be best if I could use legend but have it outside the plot area. Any ideas?

plot(1:10)
library(grid)
grid.text("What do we want? Text in the corner!\nWhere do we want it? Here!",
          x=unit(2, "mm"), y=unit(1, "npc") - unit(2, "mm"),
          just=c("left", "top"))

Paul

Thanks Benilton Carvalho wrote: maybe this is what you want?

plot(rnorm(10))
legend("topleft", "A)", bty="n")

? b On Aug 7, 2007, at 11:08 AM, Daniel Brewer wrote: Simple question: how can you position text in the top left hand corner of a plot? I am plotting multiple plots using par(mfrow=c(2,3)) and all I want to do is label these plots a), b), c) etc. I have been fiddling around with both text and mtext but without much luck. text is fine, but each plot has a different scale on the axes, so this makes it problematic. What is the best way to do this? Many thanks Dan
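For the original a), b), c) labelling task, a base-graphics sketch that does not depend on each panel's data coordinates (the layout and labels here are illustrative): mtext() with adj=0 writes at the left end of the top margin, whatever the axis scales are.

```r
## Label each panel of an mfrow layout at the left of its top margin;
## mtext(adj = 0) is independent of the panel's axis scales.
par(mfrow = c(2, 3))
labels <- paste0(letters[1:6], ")")
for (i in seq_along(labels)) {
  plot(rnorm(10))
  mtext(labels[i], side = 3, adj = 0, line = 1)
}
```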
Re: [R] Help using gPath
Hi Emilio Gagliardi wrote: Hi everyone, I'm trying to figure out how to use gPath and the documentation is not very helpful :( I have the following plot object:

plot-surrounds:: background
plot.gTree.378:: background
  guide.gTree.355:: (background.rect.345, minor-horizontal.segments.347, minor-vertical.segments.349, major-horizontal.segments.351, major-vertical.segments.353)
  guide.gTree.356:: (background.rect.345, minor-horizontal.segments.347, minor-vertical.segments.349, major-horizontal.segments.351, major-vertical.segments.353)
  yaxis.gTree.338:: ticks.segments.321
    labels.gTree.335:: (label.text.324, label.text.326, label.text.328, label.text.330, label.text.332, label.text.334)
  xaxis.gTree.339:: ticks.segments.309
    labels.gTree.315:: (label.text.312, label.text.314)
  xaxis.gTree.340:: ticks.segments.309
    labels.gTree.315:: (label.text.312, label.text.314)
  strip.gTree.364:: (background.rect.361, label.text.363)
  strip.gTree.370:: (background.rect.367, label.text.369)
  guide.rect.357
  guide.rect.358
  boxplots.gTree.283::
    geom_boxplot.gTree.273:: (GRID.segments.267, GRID.segments.268, geom_bar.rect.270, geom_bar.rect.272)
    geom_boxplot.gTree.281:: (GRID.segments.275, GRID.segments.276, geom_bar.rect.278, geom_bar.rect.280)
  boxplots.gTree.301::
    geom_boxplot.gTree.291:: (GRID.segments.285, GRID.segments.286, geom_bar.rect.288, geom_bar.rect.290)
    geom_boxplot.gTree.299:: (GRID.segments.293, GRID.segments.294, geom_bar.rect.296, geom_bar.rect.298)
  geom_jitter.points.303
  geom_jitter.points.305
  guide.rect.357
  guide.rect.358
  ylabel.text.382
  xlabel.text.380
  title

It would be easier to help if we also had the code used to produce this plot, but in the meantime ... Could someone be so kind and create the proper call to grid.gedit() to access a couple of different aspects of this graph? I tried:

grid.gedit(gPath("ylabel.text.382", "labels"), gp=gpar(fontsize=16))  # error

That is looking for a grob called "labels" that is the child of a grob called "ylabel.text.382".
I can see a grob called ylabel.text.382, but it has no children. Try just ...

grid.gedit(gPath("ylabel.text.382"), gp=gpar(fontsize=16))

I'd like to change the margins on the label for the yaxis (not the tick marks) to put more space between the label and the tick marks. I'd also Margins may be tricky because it likely depends on a layout generated by ggplot; Hadley Wickham may have to help us out with a ggplot argument here ... (?) like to remove the left border on the first panel. I'd like to adjust the I'd guess you'd have to remove the grob background.rect.345 and then draw in just the sides you want, which would require getting to the right viewport, for which you'll need to study the viewport tree (see current.vpTree()) size of the font for the axis labels independently of the tick marks. I'd That's the one we've already done, right? like to change the color of the lines that make up the boxplots. Plus, I'd Something like ...

grid.gedit("geom_bar.rect", gp=gpar(col="green"))

...? Again, it would really help to have some code to run. Paul like to change the margins of the strip labels. If you could show me a couple of examples I'm sure I could get the rest working. Thanks so much, emilio
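In the absence of the poster's code, a self-contained sketch of how gPath() addresses a named child inside a named gTree (the scene below is made up, not the ggplot object above):

```r
library(grid)  # gTree, gPath, grid.gedit

grid.newpage()
## a parent gTree with one named child grob
grid.draw(gTree(name = "parent",
                children = gList(textGrob("hello", name = "child"))))
## address the child via the path "parent::child" and edit its gpar
grid.gedit(gPath("parent", "child"), gp = gpar(fontsize = 16))
```

The same two-element path form, gPath("parent", "child"), is what the failing call above was attempting, except that "ylabel.text.382" has no child called "labels".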
[R] S4 methods: unable to find an inherited method
Hello all, I consider myself pretty new to the whole OO-based programming, so I'm sorry if I'm doing something stupid.

> xml <- read.metlin(url)
Error in function (classes, fdef, mtable) :
  unable to find an inherited method for function "read.metlin", for signature "url"

> read.metlin
standardGeneric for "read.metlin" defined from package ".GlobalEnv"
function (xml, ...) standardGeneric("read.metlin")
<environment: 0x83a8ae4>
Methods may be defined for arguments: xml

> url
description: "http://metlin.scripps.edu/download/MSMS_test.XML"
class: "url"  mode: "r"  text: "text"  opened: "closed"  can read: "yes"  can write: "no"

I defined my methods as:

if (!isGeneric("read.metlin"))
  setGeneric("read.metlin", function(xml) standardGeneric("read.metlin"))

setMethod("read.metlin", "xcmsRaw", function(xml) {
  # Parsing the METLIN XML file
  reading <- readLines(xml)
  # do rest of script
})

Any help as to why I'm getting the inherited method error would be great. Cheers, Paul
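For what it's worth, a minimal sketch of S4 dispatch (with a made-up generic, not the metlin code): a method only fires when the argument's class matches, or inherits from, the signature it was defined for. In the post above the method is defined for class "xcmsRaw", while the object passed is a connection of class "url", so no (inherited) method is found.

```r
## Illustrative S4 generic and method; "describe" is a made-up name
setGeneric("describe", function(x) standardGeneric("describe"))
setMethod("describe", "numeric", function(x) "a numeric")

describe(1:3)      # works: integer inherits from numeric
## describe("a")   # would raise "unable to find an inherited method"
```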
[R] Optimal control package?
Dear All, Is there some package to do optimal control? Thanks in advance, Paul
[R] Sorting data for multiple regressions
So I am trying to perform a robust regression (fastmcd in the robust package) on a dataset, and I want to perform individual regressions based on the groups within the data. We have over 300 sites, and we want to perform a regression based on the day of week and the hour for every site. I was wondering if anyone knows of a 'by' command similar to the one used in SAS that automatically groups the data for the regressions. If not, does anyone have any tips on how to split the data into smaller sets and then perform the regression on each set? I am new to R, so I don't know all of the common workarounds and such. At the moment the only method I can think of is to split the data using condition statements and manually run the regression on each set. Thanks for your help -Paul
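R does have such facilities: by(), or equivalently split() plus lapply(), applies a model fit to each group. A sketch with made-up data, and lm() standing in for the robust fit (I don't have the poster's data; the actual fastmcd call would slot in where lm() is):

```r
## Hypothetical data: two sites, five hours, a predictor and a response
set.seed(1)
dat <- data.frame(site = rep(c("A", "B"), each = 50),
                  hour = rep(1:5, 20),
                  x    = rnorm(100))
dat$y <- 2 * dat$x + rnorm(100)

## One regression per site/hour combination (lm() as a stand-in
## for the robust regression):
fits <- lapply(split(dat, list(dat$site, dat$hour)),
               function(d) lm(y ~ x, data = d))
length(fits)   # one fitted model per group
```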
Re: [R] constraint in constrOptim
On 8/2/07, André Rossi [EMAIL PROTECTED] wrote: I'm using the function constrOptim together with the SANN method, and my objective function (f) has two parameters. One of the parameters needs to be in the (2^(-10), 2^4) range and the other in the (2^(-2), 2^12) range. How can I do it using constrOptim? I think that you, André, do not need constrOptim; optim is enough. See ?optim Paul
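A sketch of what that looks like (the objective f below is a stand-in, not André's function; the bounds are the ones from the question): optim()'s "L-BFGS-B" method takes box constraints directly via lower/upper. Note that method "SANN" does not handle bounds.

```r
## Stand-in objective; the bounds are 2^-10..2^4 and 2^-2..2^12
f <- function(x) sum((x - 1)^2)

res <- optim(c(1, 1), f,
             method = "L-BFGS-B",
             lower  = c(2^-10, 2^-2),
             upper  = c(2^4, 2^12))
res$par   # the constrained minimiser
```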
Re: [R] constraint in constrOptim
On 8/2/07, Paul Smith [EMAIL PROTECTED] wrote: I'm using the function constrOptim together with the SANN method and my objective function (f) has two parameters. One of the parameters needs to be in the (2^(-10), 2^4) range and the other in the (2^(-2), 2^12) range. How can I do it using constrOptim? I think that you, André, do not need constrOptim; optim is enough. See ?optim The following example shows how to use optim to calculate the maximum of f(x,y) := (x-y)^2 s.t. 0 <= x, y <= 1. Paul

---

myfunc <- function(x) {
  x1 <- x[1]
  x2 <- x[2]
  (x1-x2)^2
}

## gradient of (x1-x2)^2 is c(2*(x1-x2), -2*(x1-x2))
mygrad <- function(x) {
  x1 <- x[1]
  x2 <- x[2]
  c(2, -2) * (x1 - x2)
}

optim(c(0.5,0.5), myfunc, mygrad, lower=c(0,0), upper=c(1,1),
      method="L-BFGS-B", control=list(fnscale=-1))
[R] RDCOM and R versions
Dear R-helpers, I have several versions of R installed on my computer, and I cannot do without any of them. However, RDCOM seems to allow only one installed version. Do you know any means to overcome this problem? Thank you very much for your response. Paul Poncet
Re: [R] memory error with 64-bit R in linux
You might try running top while R runs, to get a better idea of what is happening. 64-bit R takes more memory than 32-bit (longer pointers), and for a large problem I would say that 2 GB of RAM is a minimum if you want any speed. Slowness is likely related to needing to use swap space. The "cannot allocate" error means you have run out of both RAM and swap. If you are close to finishing your calculation, you may resolve things by increasing swap, but don't expect it to be fast. There is also a possibility that your user id is restricted, but I'm not sure how that works anymore. It used to be controlled by ulimit, but that does not seem to be the case in newer versions of Linux. If you are still debugging your code, and there is some chance you are just gobbling up memory endlessly until it runs out, then you can speed things up (i.e. fail more quickly) by turning swap off. There are debugging situations where this turns out to be useful. HTH, Paul

zhihua li wrote: Hi netters, I'm using the 64-bit R-2.5.0 on an x86-64 CPU, with 2 GB of RAM. The operating system is SUSE 10. The system information is:

$ uname -a
Linux someone 2.6.13-15.15-smp #1 SMP Mon Feb 26 14:11:33 UTC 2007 x86_64 x86_64 x86_64 GNU/Linux

I used heatmap to process a matrix of dim [16000,100]. After 3 hours of desperate waiting, R told me: cannot allocate vector of size 896 MB. I know the matrix is very big, but since I have 2 GB of RAM on a 64-bit system, there should be no problem dealing with a vector smaller than 1 GB? (I was not running any other applications on my system.) Does anyone know what's going on? Is there a hardware limit where I have to add more RAM, or is there some way to resolve it in software? Also, is it possible to speed up the computation? (I don't want to wait another 3 hours just to get another error message.) Thank you in advance!
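A back-of-the-envelope sketch of where the memory goes (assuming heatmap() clusters the rows via dist()/hclust(), which it does by default): the input matrix is modest, but the pairwise distance object over 16000 rows is on the order of 1 GB in a single vector, which is consistent with an allocation of that size failing once RAM and swap are exhausted.

```r
## The input matrix itself is small:
m <- matrix(0, nrow = 16000, ncol = 100)
print(object.size(m), units = "MB")   # about 12 MB

## But row clustering needs all pairwise distances:
## 16000 * 15999 / 2 doubles at 8 bytes each
dist_bytes <- 16000 * 15999 / 2 * 8
dist_bytes / 1024^3                   # roughly 0.95 GB in one vector
```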
Re: [R] Optimization
On 7/16/07, massimiliano.talarico [EMAIL PROTECTED] wrote: I need a suggestion to obtain the max of this function:

Max x1*0.021986 + x2*0.000964 + x3*0.02913

with these conditions:

x1 + x2 + x3 = 1;
radq((x1*0.114434)^2 + (x2*0.043966)^2 + (x3*0.100031)^2) <= 0.04;
x1 >= 0; x1 <= 1;
x2 >= 0; x2 <= 1;
x3 >= 0; x3 <= 1;

Any suggestions? What do you mean by 'radq', Massimiliano? Paul
[R] Antwort: Re: pgup/pgdown in R Graphics Window under Linux ['Watchdog': checked]
Dear Prof. Ripley, Prof Brian Ripley [EMAIL PROTECTED] schrieb am 05.07.2007 21:46:20: Dear S-users. This is the help forum for R users Indeed. (How embarrassing not to be able to spell a one-letter word correctly...) How do I change pages on an X11 graphics device under linux? It is baffling, rather than easy. What did you find in your homework that told you that the X11() device had 'pages' and responded to those keys? My first experience with R was on a Windows box (which accepts page-up/-down for flipping pages in a graphics device). Then I read the statements re. platform independence found on R-project.org ("R is a free software [...]. It compiles and runs on a wide variety of UNIX platforms, Windows and MacOS."), and assumed that "runs" implied that most important features are implemented independently of the underlying OS. My homework included a rather large number of variations over the following theme: http://www.google.com/search?q=linux+r-project+xyplot+pgup Finally, I turned to the r-help mailing list for help... After reading your reply, I am surprised that the current implementation of the X11 device apparently renders Z in xyplot(..., layout=c(X,Y,Z)) quite useless? (I'm sure you'll not hesitate to correct me if I'm wrong?) KR, PMD.

*** Abbott GmbH & Co. KG *** Registered office: Wiesbaden, Amtsgericht Wiesbaden HRA 4888. General partner: Abbott Management GmbH, registered office Wiesbaden, Amtsgericht Wiesbaden HRB 12889. Managing directors: Siegfried Brune, Jaime Contreras, Rodolfo Viana. Chairman of the supervisory board: John Landgraf. *** Legal Disclaimer *** [Translated from German:] The content of this message is confidential, may be subject to legal provisions, may contain confidential information, and is intended only for the direct recipient. It is the property of Abbott Laboratories or the relevant affiliate. Unauthorized use, disclosure, or copying of any part of this information is strictly prohibited and may be unlawful. If you have received this message in error, please notify Abbott Laboratories immediately by returning the email, and then destroy it together with all associated copies or attachments.
[R] Antwort: Re: pgup/pgdown in R Graphics Window under Linux ['Watchdog': checked]
Hi Deepayan, Deepayan Sarkar [EMAIL PROTECTED] schrieb am 06.07.2007 02:05:02: On 7/5/07, Paul Matthias Diderichsen [EMAIL PROTECTED] wrote: library(lattice); xyplot(speed~dist|speed, data=cars, layout=c(3,3)) If this is your use case, you might be interested in http://cran.r-project.org/src/contrib/Descriptions/plotAndPlayGTK.html Thanks a lot for the pointer; this package seems to be very useful when coding your own plots. However, it's not exactly my use case, but rather an example to illustrate that the X11 graphics device is apparently not too useful for multi-page plots. The motivation for my question was that I want to use xpose4 (http://xpose.sourceforge.net/) under linux. Xpose is a program that provides functions for producing diagnostic plots for population PKPD model evaluation. I am not able to rewrite the entire package, wrapping every call to multi-page plot functions with plotAndPlayGTK. That's why I was hoping that there exists some obscure configuration option for X11 (which seems not to be the case, cf. Prof Ripley's reply) or an alternative graphics device that runs under linux. Thank you for your suggestion anyway! KR, PMD.
[R] pgup/pgdown in R Graphics Window under Linux
Dear S-users. This should be an easy one: How do I change pages on an X11 graphics device under linux? I thought that the page-up/page-down keys were supposed to do the trick, but the frame (window) seems to be kind of immune to any kind of keyboard input. The only reaction I ever see is that the mouse pointer changes to a + when moved into the frame. I issue these commands:

-- QUOTE START --
[EMAIL PROTECTED] ~]$ R

R version 2.5.1 (2007-06-27)
Copyright (C) 2007 The R Foundation for Statistical Computing
ISBN 3-900051-07-0

R is free software and comes with ABSOLUTELY NO WARRANTY.
You are welcome to redistribute it under certain conditions.
Type 'license()' or 'licence()' for distribution details.

Natural language support but running in an English locale

R is a collaborative project with many contributors.
Type 'contributors()' for more information and
'citation()' on how to cite R or R packages in publications.

Type 'demo()' for some demos, 'help()' for on-line help, or
'help.start()' for an HTML browser interface to help.
Type 'q()' to quit R.

> library(lattice)
> xyplot(speed~dist|speed, data=cars, layout=c(3,3))
> sessionInfo()
R version 2.5.1 (2007-06-27)
i686-redhat-linux-gnu

locale:
LC_CTYPE=en_US.UTF-8;LC_NUMERIC=C;LC_TIME=en_US.UTF-8;LC_COLLATE=en_US.UTF-8;LC_MONETARY=en_US.UTF-8;LC_MESSAGES=en_US.UTF-8;LC_PAPER=en_US.UTF-8;LC_NAME=C;LC_ADDRESS=C;LC_TELEPHONE=C;LC_MEASUREMENT=en_US.UTF-8;LC_IDENTIFICATION=C

attached base packages:
[1] stats graphics grDevices utils datasets methods
[7] base

other attached packages:
lattice 0.15-11
-- QUOTE END --

Which produces the following frame: [screenshot not included] Is it necessary to initialize the graphics device so that old pages are stored and accessible for paging? Or am I just pressing the wrong buttons? Any input is appreciated! Please let me know if further info re. versions/installed packages/etc. is needed. Thanks, PMD.
[R] Adding data to existing plot with new=TRUE does not appear to work
Dear all, I am trying to shove a number of cmdscale() results into a single plot (k=1, so I'm trying to get multiple columns in the plot). From ?par I learned that I can/should set new=TRUE in either par() or the plot function itself. However, with the following reduced code, I get only a plot with a column of data points at x==2.

plot(1, 10, xlim=range(0,3), ylim=range(0,10), type='n')
aa <- rep(1,10)
bb <- 1:10
plot(aa, bb, xlim=range(0,3), ylim=range(0,10), new=TRUE)
aa <- rep(2,10)
plot(aa, bb, xlim=range(0,3), ylim=range(0,10), new=TRUE)

Also, when I insert op <- par(new=TRUE) either before or immediately after the first plot statement (the type='n' one) in the above code fragment, the resulting graph still only shows one column of data. Have I misinterpreted the instructions or the functionality of new=TRUE? Thank you, Paul Lemmens
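For reference, a sketch of the usual base-graphics idiom for this (my reading, not a reply from the thread): new is really a par() setting, its meaning is inverted ("do not clean the frame before the next high-level plot"), and it has to be set before each subsequent plot() call; passing new=TRUE to plot() itself is not reliable. Often the simplest route is to add to the existing frame with points().

```r
## Set up the full frame once, then add each column with points()
plot(1, 10, xlim = c(0, 3), ylim = c(0, 10), type = "n")
bb <- 1:10
points(rep(1, 10), bb)
points(rep(2, 10), bb)

## The par() route: set new=TRUE *before* each further plot() call so
## the page is not cleared (suppress axes to avoid overplotting them)
par(new = TRUE)
plot(rep(1.5, 10), bb, xlim = c(0, 3), ylim = c(0, 10),
     axes = FALSE, ann = FALSE)
```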
Re: [R] Fine tunning rgenoud
On 7/4/07, RAVI VARADHAN [EMAIL PROTECTED] wrote: Here is another approach: I wrote an R function that would generate interior points as starting values for constrOptim. This might work better than the LP approach, since the LP approach gives you a starting value that is on the boundary of the feasible region, i.e. a vertex of the polyhedron, whereas this new approach gives you points in the interior. You can generate as many points as you wish, but the approach is brute-force and is very inefficient: it takes on the order of 1000 tries to find one feasible point. Thanks again, Ravi. Actually, the LP approach also works here. Let g(X) >= k be the constraints. Then solving an LP problem with the constraints g(X) >= (k+0.2) returns an interior starting value for constrOptim. I am aware that the new set of constraints may correspond to an infeasible linear system, but it works in many cases. Paul
Paul -Original Message- From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On Behalf Of Paul Smith Sent: Tuesday, July 03, 2007 5:10 PM To: R-help Subject: Re: [R] Fine tuning rgenoud On 7/3/07, Ravi Varadhan [EMAIL PROTECTED] wrote: You had indicated in your previous email that you are having trouble finding a feasible starting value for constrOptim(). So, you basically need to solve a system of linear inequalities to obtain a starting point. Have you considered using linear programming? Either simplex() in the boot package or solveLP() in linprog would work. It seems to me that you could use any linear objective function in solveLP to obtain a feasible starting point. This is not the most efficient solution, but it might be worth a try. I am aware of other methods for generating n-tuples that satisfy linear inequality constraints, but AFAIK those are not available in R. Thanks, Ravi. I had already conceived the solution that you suggest, actually using lpSolve. I am able to get a solution for my problem with constrOptim, but I am not confident enough that the solution is right. That is why I am trying to get a solution with rgenoud, but unsuccessfully until now. Paul -Original Message- From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On Behalf Of Paul Smith Sent: Tuesday, July 03, 2007 4:10 PM To: R-help Subject: [R] Fine tuning rgenoud Dear All, I am trying to solve the following maximization problem, but I cannot get rgenoud to give me a reliable solution. Any ideas?
Thanks in advance, Paul library(rgenoud) v <- 0.90 O1 <- 10 O2 <- 20 O0 <- v*O1+(1-v)*O2 myfunc <- function(x) { U0 <- x[1] U1 <- x[2] U2 <- x[3] q0 <- x[4] q1 <- x[5] q2 <- x[6] p <- x[7] if (U0 < 0) return(-1e+200) else if (U1 < 0) return(-1e+200) else if (U2 < 0) return(-1e+200) else if ((U0-(U1+(O1-O0)*q1)) < 0) return(-1e+200) else if ((U0-(U2+(O2-O0)*q2)) < 0) return(-1e+200) else if ((U1-(U0+(O0-O1)*q0)) < 0) return(-1e+200) else if ((U1-(U2+(O2-O1)*q2)) < 0) return(-1e+200) else if ((U2-(U0+(O0-O2)*q0)) < 0) return(-1e+200) else if ((U2-(U1+(O1-O2)*q1)) < 0) return(-1e+200) else if (p < 0) return(-1e+200) else if (p > 1) return(-1e+200) else if (q0 < 0) return(-1e+200) else if (q1 < 0) return(-1e+200) else if (q2 < 0) return(-1e+200) else return(p*(sqrt(q0)-(O0*q0+U0))+(1-p)*(v*(sqrt(q1)-(O1*q1+U1))+(1-v)*(sqrt(q2)-(O2*q2+U2)))) } genoud(myfunc,nvars=7,max=T,pop.size=6000,starting.values=runif(7),wait.generations=150,max.generations=300,boundary.enforcement=2)
Re: [R] Adding data to existing plot with new=TRUE does not appear to work
Hi Petr, On 7/4/07, Petr PIKAL [EMAIL PROTECTED] wrote: par(new=T) plot(aa,bb, xlim=range(0,3), ylim=range(0,10), new=TRUE) So I need to activate the par(new=T) really just ahead of the time when I need it, not as a sort of general clause at the beginning of my script? However, you can get a similar result by using points Yes, I knew that, but I wanted to try and go without an if() for deciding between the first and consecutive columns. Thnx for helping out! Paul Lemmens
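To illustrate Petr's point with a hedged sketch (not code from the thread): 'new' is one of the parameters that can only be set via par(), and each completed high-level plot switches it back to FALSE, so par(new=TRUE) has to be issued immediately before every overlaid plot() call.

```r
plot(rep(1, 10), 1:10, xlim = c(0, 3), ylim = c(0, 10))
par(new = TRUE)   # affects only the very next high-level plot
plot(rep(2, 10), 1:10, xlim = c(0, 3), ylim = c(0, 10),
     axes = FALSE, xlab = "", ylab = "")  # avoid redrawing axes and labels
```

After the second plot() completes, par("new") is FALSE again, which is why a single par(new=TRUE) at the top of a script only affects the first subsequent plot.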
Re: [R] Fine tuning rgenoud
On 7/4/07, RAVI VARADHAN [EMAIL PROTECTED] wrote: My point is that it might be better to try multiple (feasible) starting values for constrOptim to ensure that you have a good local minimum, since it appears that constrOptim converges to a boundary solution where the gradient is non-zero. That is why my code could be useful. Thanks, Ravi. I have used your function, which works pretty fine. However, constrOptim returns solutions markedly different, depending on the starting values. That is true that I am expecting a solution in the boundary, but should not constrOptim find boundary solutions correctly? The set of solution that I got is below. Paul 2.67682495728743e-080.676401684216637 5.18627076390355e-09 0.00206463986063195 0.871859686128364.32039325909089e-11 0.9996234 3.71711020733097e-080.539853580957444 1.82592937615235e-08 0.00206941041763503 0.933052503934472.08076621230984e-11 0.9995774 1.55648443014316e-080.356047772992972 8.61341165816411e-09 0.00207149128044574 0.939531540703735 2.55211186629222e-12 0.424 2.20685747493755e-070.575689534431218 5.30976753476747e-08 0.00210500604605837 0.588947341576757 3.1310360048386e-10 0.9998789 1.92961662926727e-080.773588030510204 1.04841835042200e-08 0.00206723852358352 0.816755014708394 3.89478290348532e-11 0.9997794 0.0002798240512890820.0003109923855228861.01467522935252e-06 3.11645639181419e-050.00249801538651552 3.0978819115532e-05 7.11821104872585e-06 2.81901448690893e-070.381718731525906 4.72860507882539e-08 0.00206807672109157 0.769178513763055 1.39278079797628e-09 0.9967123 5.58938545019597e-050.00171253668169328 4.54005998518212e-09 0.00165663757292733 0.00247994862102590 6.20992250482468e-06 0.419169641865998 1.03300938985890e-080.438357835603591 6.89854079723234e-09 0.00206693286138396 0.977554885433201 1.17209206267609e-10 0.996921 7.63336821363444e-050.00177141538041517 1.88050423143828e-10 0.00169507950991094 0.00249739505142207 8.4814984916537e-06 0.470929220605509 9.16005846107533e-090.682179815036755 
1.63255733785783e-09 0.00206922107327189 0.919323193130209 5.71436138398897e-11 0.999629 1.40968913167328e-080.343606628343661 1.33227447885302e-08 0.00206789984370423 0.343671264496824 1.11679312116211e-11 0.822 4.76054734844857e-090.593022549313178 2.28102966623129e-09 0.00206625165098398 0.947562121256448 8.9437610753173e-11 0.992 1.96950784184139e-070.579488113726155 1.61915231214025e-07 0.00208000350528798 1.008913405950401.22248906754713e-10 0.9996493 8.1448937742933e-09 0.441088618716555 4.54846390087941e-09 0.00207634940425852 0.446155700100820 4.81439647816238e-12 0.39 4.82439218405912e-080.557771049256698 3.53737879481732e-08 0.0020663035737319 0.588137767965923 2.6568947800491e-11 0.9988615 2.43086751126363e-080.522927598354163 2.26886829089137e-08 0.00206533531066324 0.611696593543814 4.51226610050184e-11 0.087 3.05498959434100e-080.465522202845817 1.09246302124670e-08 0.00207004066920179 0.465583376966915 3.24213847202457e-11 0.9997366 1.88687179088788e-070.783614197203923 4.51346471059839e-08 0.00222403775221293 0.786422171740329 8.17865794171933e-10 0.9986103 1.0154423824979e-08 0.30265579883 9.06923080122203e-09 0.00206615353968094 0.359722316646974 8.27866320956902e-12 0.998461 8.91008717665837e-080.0020661526864997 3.08619455858999e-09 0.00206579199039568 0.00275523149199496 9.55650084108725e-09 0.985185595958656 1.25320647920029e-070.635217955401437 7.44627883600107e-08 0.00206656250455391 0.855937507707323 3.70326032870889e-10 0.9998375 2.57618374406559e-080.636499151952225 1.09822023878715e-08 0.00206677354204888 0.772636071860102 8.99370944431481e-11 0.9978744 1.09474196877990e-080.501469973722704 1.19992915868609e-10 0.00206117941606503 0.501594064757161 1.34320044786225e-11 0.9991232 5.24203710193977e-050.0001279983401441093.33258623630601e-09 7.55779680724378e-050.00248898574263025 5.82411313482383e-06 0.0221497278110802 3.80217498132259e-070.576645687031891.01755510162620e-08 0.00207232950382402 0.944031557945531 5.30703662426069e-10 
0.9995957 1.45159816281038e-09
Re: [R] Fine tuning rgenoud
On 7/4/07, Paul Smith [EMAIL PROTECTED] wrote: On 7/4/07, RAVI VARADHAN [EMAIL PROTECTED] wrote: My point is that it might be better to try multiple (feasible) starting values for constrOptim to ensure that you have a good local minimum, since it appears that constrOptim converges to a boundary solution where the gradient is non-zero. That is why my code could be useful. Thanks, Ravi. I have used your function, which works pretty well. However, constrOptim returns markedly different solutions, depending on the starting values. It is true that I am expecting a solution on the boundary, but should constrOptim not find boundary solutions correctly? The set of solutions that I got is in my previous message. Unless, that is, there are many locally optimal solutions... Paul
[R] Fine tuning rgenoud
Dear All, I am trying to solve the following maximization problem, but I cannot get rgenoud to give me a reliable solution. Any ideas? Thanks in advance, Paul library(rgenoud) v <- 0.90 O1 <- 10 O2 <- 20 O0 <- v*O1+(1-v)*O2 myfunc <- function(x) { U0 <- x[1] U1 <- x[2] U2 <- x[3] q0 <- x[4] q1 <- x[5] q2 <- x[6] p <- x[7] if (U0 < 0) return(-1e+200) else if (U1 < 0) return(-1e+200) else if (U2 < 0) return(-1e+200) else if ((U0-(U1+(O1-O0)*q1)) < 0) return(-1e+200) else if ((U0-(U2+(O2-O0)*q2)) < 0) return(-1e+200) else if ((U1-(U0+(O0-O1)*q0)) < 0) return(-1e+200) else if ((U1-(U2+(O2-O1)*q2)) < 0) return(-1e+200) else if ((U2-(U0+(O0-O2)*q0)) < 0) return(-1e+200) else if ((U2-(U1+(O1-O2)*q1)) < 0) return(-1e+200) else if (p < 0) return(-1e+200) else if (p > 1) return(-1e+200) else if (q0 < 0) return(-1e+200) else if (q1 < 0) return(-1e+200) else if (q2 < 0) return(-1e+200) else return(p*(sqrt(q0)-(O0*q0+U0))+(1-p)*(v*(sqrt(q1)-(O1*q1+U1))+(1-v)*(sqrt(q2)-(O2*q2+U2)))) } genoud(myfunc,nvars=7,max=T,pop.size=6000,starting.values=runif(7),wait.generations=150,max.generations=300,boundary.enforcement=2)
Re: [R] Fine tuning rgenoud
On 7/3/07, Ravi Varadhan [EMAIL PROTECTED] wrote: You had indicated in your previous email that you are having trouble finding a feasible starting value for constrOptim(). So, you basically need to solve a system of linear inequalities to obtain a starting point. Have you considered using linear programming? Either simplex() in the boot package or solveLP() in linprog would work. It seems to me that you could use any linear objective function in solveLP to obtain a feasible starting point. This is not the most efficient solution, but it might be worth a try. I am aware of other methods for generating n-tuples that satisfy linear inequality constraints, but AFAIK those are not available in R. Thanks, Ravi. I had already conceived the solution that you suggest, actually using lpSolve. I am able to get a solution for my problem with constrOptim, but I am not confident enough that the solution is right. That is why I am trying to get a solution with rgenoud, but unsuccessfully until now. Paul -Original Message- From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On Behalf Of Paul Smith Sent: Tuesday, July 03, 2007 4:10 PM To: R-help Subject: [R] Fine tuning rgenoud Dear All, I am trying to solve the following maximization problem, but I cannot get rgenoud to give me a reliable solution. Any ideas? Thanks in advance, Paul library(rgenoud) v <- 0.90 O1 <- 10 O2 <- 20 O0 <- v*O1+(1-v)*O2 myfunc <- function(x) { U0 <- x[1] U1 <- x[2] U2 <- x[3] q0 <- x[4] q1 <- x[5] q2 <- x[6] p <- x[7] if (U0 < 0) return(-1e+200) else if (U1 < 0) return(-1e+200) else if (U2 < 0) return(-1e+200) else if ((U0-(U1+(O1-O0)*q1)) < 0) return(-1e+200) else if ((U0-(U2+(O2-O0)*q2)) < 0) return(-1e+200) else if ((U1-(U0+(O0-O1)*q0)) < 0) return(-1e+200) else if ((U1-(U2+(O2-O1)*q2)) < 0) return(-1e+200) else if ((U2-(U0+(O0-O2)*q0)) < 0) return(-1e+200) else if ((U2-(U1+(O1-O2)*q1)) < 0) return(-1e+200) else if (p < 0) return(-1e+200) else if (p > 1) return(-1e+200) else if (q0 < 0) return(-1e+200) else if (q1 < 0) return(-1e+200) else if (q2 < 0) return(-1e+200) else return(p*(sqrt(q0)-(O0*q0+U0))+(1-p)*(v*(sqrt(q1)-(O1*q1+U1))+(1-v)*(sqrt(q2)-(O2*q2+U2)))) } genoud(myfunc,nvars=7,max=T,pop.size=6000,starting.values=runif(7),wait.generations=150,max.generations=300,boundary.enforcement=2)
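As a concrete, hedged sketch of the LP route discussed here, using simplex() from the boot package (one of the two options Ravi names; the tiny constraint system is illustrative, not Paul's actual one): minimise any linear objective over the constraint region and use the optimum as a starting point.

```r
library(boot)  # 'boot' ships with R; simplex() solves small LPs

# Illustrative region: x >= 0, y >= 0 (implicit in simplex()) and x + y <= 1.
res <- simplex(a = c(1, 1),           # any linear objective will do
               A1 = rbind(c(1, 1)),   # rows of A1 %*% x <= b1
               b1 = 1)
start <- res$soln                     # a vertex of the feasible region
```

As Ravi notes, an LP optimum is a vertex, i.e. on the boundary, so for constrOptim() (which wants a strictly interior start) one may still need to nudge it inward, e.g. with the k+0.2 trick mentioned later in the thread.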
Re: [R] Fine tuning rgenoud
On 7/4/07, Ravi Varadhan [EMAIL PROTECTED] wrote: It should be easy enough to check that your solution is valid (i.e. a local minimum): first, check to see if the solution satisfies all the constraints; secondly, check to see if it is an interior point (i.e. none of the constraints become equalities); and finally, if the solution is an interior point, check to see whether the gradient there is close to zero. Note that if the solution is one of the vertices of the polyhedron, then the gradient may not be zero. That is a very good idea, Ravi! Thanks! Paul -Original Message- From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On Behalf Of Paul Smith Sent: Tuesday, July 03, 2007 5:10 PM To: R-help Subject: Re: [R] Fine tuning rgenoud On 7/3/07, Ravi Varadhan [EMAIL PROTECTED] wrote: You had indicated in your previous email that you are having trouble finding a feasible starting value for constrOptim(). So, you basically need to solve a system of linear inequalities to obtain a starting point. Have you considered using linear programming? Either simplex() in the boot package or solveLP() in linprog would work. It seems to me that you could use any linear objective function in solveLP to obtain a feasible starting point. This is not the most efficient solution, but it might be worth a try. I am aware of other methods for generating n-tuples that satisfy linear inequality constraints, but AFAIK those are not available in R. Thanks, Ravi. I had already conceived the solution that you suggest, actually using lpSolve. I am able to get a solution for my problem with constrOptim, but I am not confident enough that the solution is right. That is why I am trying to get a solution with rgenoud, but unsuccessfully until now. Paul -Original Message- From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On Behalf Of Paul Smith Sent: Tuesday, July 03, 2007 4:10 PM To: R-help Subject: [R] Fine tuning rgenoud Dear All, I am trying to solve the following maximization problem, but I cannot get rgenoud to give me a reliable solution. Any ideas? Thanks in advance, Paul library(rgenoud) v <- 0.90 O1 <- 10 O2 <- 20 O0 <- v*O1+(1-v)*O2 myfunc <- function(x) { U0 <- x[1] U1 <- x[2] U2 <- x[3] q0 <- x[4] q1 <- x[5] q2 <- x[6] p <- x[7] if (U0 < 0) return(-1e+200) else if (U1 < 0) return(-1e+200) else if (U2 < 0) return(-1e+200) else if ((U0-(U1+(O1-O0)*q1)) < 0) return(-1e+200) else if ((U0-(U2+(O2-O0)*q2)) < 0) return(-1e+200) else if ((U1-(U0+(O0-O1)*q0)) < 0) return(-1e+200) else if ((U1-(U2+(O2-O1)*q2)) < 0) return(-1e+200) else if ((U2-(U0+(O0-O2)*q0)) < 0) return(-1e+200) else if ((U2-(U1+(O1-O2)*q1)) < 0) return(-1e+200) else if (p < 0) return(-1e+200) else if (p > 1) return(-1e+200) else if (q0 < 0) return(-1e+200) else if (q1 < 0) return(-1e+200) else if (q2 < 0) return(-1e+200) else return(p*(sqrt(q0)-(O0*q0+U0))+(1-p)*(v*(sqrt(q1)-(O1*q1+U1))+(1-v)*(sqrt(q2)-(O2*q2+U2)))) } genoud(myfunc,nvars=7,max=T,pop.size=6000,starting.values=runif(7),wait.generations=150,max.generations=300,boundary.enforcement=2)
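Ravi's three checks can be scripted directly. A hedged sketch in constrOptim's own convention (feasible means ui %*% theta - ci >= 0), with a toy objective and a finite-difference gradient standing in for the real problem:

```r
f  <- function(x) sum((x - 2)^2)   # illustrative objective (minimisation)
ui <- diag(2)                      # constraints: x >= 0, y >= 0
ci <- c(0, 0)
x  <- c(2, 2)                      # candidate solution to be validated

slack    <- drop(ui %*% x - ci)
feasible <- all(slack >= -1e-8)    # 1. all constraints satisfied?
interior <- all(slack >  1e-8)     # 2. no constraint active (interior)?
grad <- sapply(seq_along(x), function(i) {   # 3. central-difference gradient
  h <- 1e-6
  e <- replace(numeric(length(x)), i, h)
  (f(x + e) - f(x - e)) / (2 * h)
})
c(feasible = feasible, interior = interior, flat = max(abs(grad)) < 1e-4)
```

If the candidate is interior but the gradient is clearly non-zero, as Paul reports, the point is not a local optimum of the smooth problem.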
Re: [R] Fine tuning rgenoud
On 7/4/07, Ravi Varadhan [EMAIL PROTECTED] wrote: It should be easy enough to check that your solution is valid (i.e. a local minimum): first, check to see if the solution satisfies all the constraints; secondly, check to see if it is an interior point (i.e. none of the constraints become equalities); and finally, if the solution is an interior point, check to see whether the gradient there is close to zero. Note that if the solution is one of the vertices of the polyhedron, then the gradient may not be zero. I am having bad luck: all constraints are satisfied, but the solution given by constrOptim is not interior; the gradient is not equal to zero. Paul -Original Message- From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On Behalf Of Paul Smith Sent: Tuesday, July 03, 2007 5:10 PM To: R-help Subject: Re: [R] Fine tuning rgenoud On 7/3/07, Ravi Varadhan [EMAIL PROTECTED] wrote: You had indicated in your previous email that you are having trouble finding a feasible starting value for constrOptim(). So, you basically need to solve a system of linear inequalities to obtain a starting point. Have you considered using linear programming? Either simplex() in the boot package or solveLP() in linprog would work. It seems to me that you could use any linear objective function in solveLP to obtain a feasible starting point. This is not the most efficient solution, but it might be worth a try. I am aware of other methods for generating n-tuples that satisfy linear inequality constraints, but AFAIK those are not available in R. Thanks, Ravi. I had already conceived the solution that you suggest, actually using lpSolve. I am able to get a solution for my problem with constrOptim, but I am not confident enough that the solution is right. That is why I am trying to get a solution with rgenoud, but unsuccessfully until now. Paul -Original Message- From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On Behalf Of Paul Smith Sent: Tuesday, July 03, 2007 4:10 PM To: R-help Subject: [R] Fine tuning rgenoud Dear All, I am trying to solve the following maximization problem, but I cannot get rgenoud to give me a reliable solution. Any ideas? Thanks in advance, Paul library(rgenoud) v <- 0.90 O1 <- 10 O2 <- 20 O0 <- v*O1+(1-v)*O2 myfunc <- function(x) { U0 <- x[1] U1 <- x[2] U2 <- x[3] q0 <- x[4] q1 <- x[5] q2 <- x[6] p <- x[7] if (U0 < 0) return(-1e+200) else if (U1 < 0) return(-1e+200) else if (U2 < 0) return(-1e+200) else if ((U0-(U1+(O1-O0)*q1)) < 0) return(-1e+200) else if ((U0-(U2+(O2-O0)*q2)) < 0) return(-1e+200) else if ((U1-(U0+(O0-O1)*q0)) < 0) return(-1e+200) else if ((U1-(U2+(O2-O1)*q2)) < 0) return(-1e+200) else if ((U2-(U0+(O0-O2)*q0)) < 0) return(-1e+200) else if ((U2-(U1+(O1-O2)*q1)) < 0) return(-1e+200) else if (p < 0) return(-1e+200) else if (p > 1) return(-1e+200) else if (q0 < 0) return(-1e+200) else if (q1 < 0) return(-1e+200) else if (q2 < 0) return(-1e+200) else return(p*(sqrt(q0)-(O0*q0+U0))+(1-p)*(v*(sqrt(q1)-(O1*q1+U1))+(1-v)*(sqrt(q2)-(O2*q2+U2)))) } genoud(myfunc,nvars=7,max=T,pop.size=6000,starting.values=runif(7),wait.generations=150,max.generations=300,boundary.enforcement=2)
Re: [R] Print grid/ggplot to a metafile
Hi. Dieter Menne wrote: Dear UseRs called Hadley, or Paul, I am trying to print an edited ggplot2/grid graphic to a metafile. With the commented line below it works, but when I edit the plot by uncommenting the line, it fails, because it's illegal to have 2 graphics in a metafile. It works with pdf, but even then I get two plots, which is a nuisance. I found a workaround by using windows(); savePlot, but it only works in interactive mode, not when called with something like (Windows) rterm --no-save printit.r Any ideas? You can capture the ggplot drawing as a grid grob (gTree), edit that (no drawing occurs to this point), and then draw it ... gridggplot <- grid.grabExpr(print(ggplot(mtcars, aes(x=cyls)) + geom_bar())) modgridggplot <- editGrob(gridggplot, "xaxis::labels::label.text", just=c("center","center"), grep=TRUE, global=TRUE) win.metafile(file="bar.emf") grid.draw(modgridggplot) dev.off() Paul Dieter #-- library(ggplot2) win.metafile(file="bar.emf") mtcars$cyls = factor(mtcars$cyl, labels=c("four\ncylinders","six\ncylinders","eight\ncylinders")) ggplot(mtcars, aes(x=cyls)) + geom_bar() #grid.gedit("xaxis::labels::label.text", just=c("center","center")) dev.off() -- Dr Paul Murrell Department of Statistics The University of Auckland Private Bag 92019 Auckland New Zealand 64 9 3737599 x85392 [EMAIL PROTECTED] http://www.stat.auckland.ac.nz/~paul/
[R] Looking for easy way of getting the starting value for constrOptim
Dear All, I am using constrOptim, but facing a difficulty: how can I get the starting value? Is there some automatic way of getting it? Since I have many inequalities as constraints (all linear), it is quite difficult to find a feasible value. I have tried to use the rgenoud solution as a feasible value, but rgenoud does not seem to be converging. Any ideas? Thanks in advance, Paul
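The brute-force generator Ravi describes elsewhere in this thread can be sketched as rejection sampling; this is an illustrative reconstruction, not his actual code. Candidates are drawn in a box and kept only if they strictly satisfy the linear constraints ui %*% x > ci:

```r
feasible_start <- function(ui, ci, lower = 0, upper = 1, tries = 10000) {
  for (i in seq_len(tries)) {
    x <- runif(ncol(ui), lower, upper)
    if (all(ui %*% x > ci)) return(x)   # strictly interior point found
  }
  stop("no interior point found; widen the box or increase 'tries'")
}

set.seed(1)
ui <- rbind(c(1, 0), c(0, 1), c(-1, -1))  # x > 0, y > 0, x + y < 1
ci <- c(0, 0, -1)
x0 <- feasible_start(ui, ci)              # usable as a constrOptim() start
```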
[R] Determining whether a function's return value is assigned
Dear all, Does R offer a means by which a function can determine whether its return value is assigned? I am using R 2.4.1 for Windows. Suppose what I am looking for is called return.value.assigned. Then one might use it like this myfunction <- function () { # Create bigobject here if (return.value.assigned()) { bigobject } else { summary(bigobject) } } and x <- myfunction() # bigobject is assigned myfunction() # summary of bigobject is printed Octave and MATLAB have the nargout function that does what I want, and Perl has the wantarray function for detecting the context in which a function is called. Perhaps match.call() can be made to do what I want, but, if so, I don't see it in reading the documentation. Sincerely, Paul Laub
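As far as I know, R has no analogue of nargout/wantarray: a function cannot see whether its value will be assigned. A common idiom that gets close (a sketch, with illustrative names) is to return the full object with a class whose print method shows only a summary; assignment keeps everything silently, while an unassigned call auto-prints the summary.

```r
myfunction <- function() {
  bigobject <- list(data = 1:1000000)       # stand-in for the big result
  structure(bigobject, class = "myresult")  # returned visibly, full object
}

print.myresult <- function(x, ...) {        # what auto-printing shows
  cat("myresult with", format(length(x$data), scientific = FALSE), "values\n")
  invisible(x)
}

x <- myfunction()   # assigned: nothing printed, full object kept
myfunction()        # not assigned: prints "myresult with 1000000 values"
```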
[R] cmdscale(eurodist)
Dear all, I have a question regarding the 'y <- -loc[,2]' in ?cmdscale. Although I see that the plot is more sensible when using '-loc' instead of just 'y <- loc[,2]', I don't understand whether there is a statistical reason to do '-loc[,2]'. So is this just to make the graph look better, or should I always use -loc for the y-axis of a similar plot for a completely different data set? Kind regards, Paul Lemmens
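To the statistical question: cmdscale() coordinates are only determined up to rotation and reflection, so negating an axis is purely cosmetic (for eurodist it makes north point up); it is not something that must be done for other data sets. A small check that the flip changes no interpoint distance:

```r
loc <- cmdscale(eurodist)   # classical MDS, 2 dimensions
d1  <- dist(loc)
loc2 <- loc
loc2[, 2] <- -loc2[, 2]     # reflect the second axis
d2  <- dist(loc2)
all.equal(c(d1), c(d2))     # TRUE: distances are unchanged by the reflection
```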
Re: [R] Prediction Intervals
Hi. In part answer to my own question, I just found this link: https://stat.ethz.ch/pipermail/r-help/2003-August/037654.html which explains how one might calculate an approximation to the variance of a non-linear function and use that to calculate a confidence/prediction interval. If anyone knows an easier way, then I'd still be glad to hear from you. Thanks, Paul On 21/06/07, Paul Suckling [EMAIL PROTECTED] wrote: Hello. Does anyone out there know if there is a function (or functions) that will enable me to calculate prediction and confidence intervals from a non-linear least squares model? (predict.nls does not do this.) Thank you, Paul
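For the record, that approach (a first-order delta method) can be sketched as follows; the model, data and hand-written gradient are illustrative, and this gives a confidence interval for the fitted mean, not a full prediction interval (which would also add the residual variance):

```r
set.seed(42)
x <- 1:20
y <- 5 * exp(-0.2 * x) + rnorm(20, sd = 0.1)
fit <- nls(y ~ a * exp(-b * x), start = list(a = 4, b = 0.1))

x0   <- 10                                         # where to predict
est  <- coef(fit)
pred <- est[["a"]] * exp(-est[["b"]] * x0)
g <- c(exp(-est[["b"]] * x0),                      # d pred / d a
       -est[["a"]] * x0 * exp(-est[["b"]] * x0))   # d pred / d b
se <- sqrt(drop(t(g) %*% vcov(fit) %*% g))         # delta-method std. error
ci <- pred + c(-1, 1) * qt(0.975, df.residual(fit)) * se
```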
[R] Prediction Intervals
Hello. Does anyone out there know if there is a function (or functions) that will enable me to calculate prediction and confidence intervals from a non-linear least squares model? (predict.nls does not do this.) Thank you, Paul
Re: [R] help with using grid to modify ggplot/lattice plots
Hi. Vikas Rawal wrote: I want to use grid to modify some boxplots made using ggplot. I would really appreciate if somebody could guide me to a resource on how to use grid to modify such graphics. I guess the basic approach will be similar to using grid to modify lattice graphics. To that extent something that explains use of grid to modify lattice graphics may also be useful. I have gone through vignettes in the grid package but am somehow not able to understand the overall approach. It would be useful if there is something more specific that deals with using grid to modify such graphics. A couple of suggestions: - look at Hadley Wickham's online book draft for ggplot2 http://had.co.nz/ggplot2/ - look at the online chapter on grid from my book http://www.stat.auckland.ac.nz/~paul/RGraphics/chapter5.pdf Paul Vikas Rawal Associate Professor Centre for Economic Studies and Planning Jawaharlal Nehru University New Delhi -- Dr Paul Murrell Department of Statistics The University of Auckland Private Bag 92019 Auckland New Zealand 64 9 3737599 x85392 [EMAIL PROTECTED] http://www.stat.auckland.ac.nz/~paul/
Re: [R] Large Binary file reader for Simple minds
Hi. Todd Remund wrote: I'm more like a caveman when it comes to programming tools. So, with that in mind, is there a way to use readBin in a batch format to read in pieces of a large binary file? Thank you for the consideration of my question. The 'hexView' package might be useful to you. See "Viewing Binary Files with the hexView Package" in R News 7/1. Paul Todd Remund -- Dr Paul Murrell Department of Statistics The University of Auckland Private Bag 92019 Auckland New Zealand 64 9 3737599 x85392 [EMAIL PROTECTED] http://www.stat.auckland.ac.nz/~paul/
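A hedged sketch of the batch pattern itself: open the file as a connection once, and each readBin() call then resumes where the last one stopped, so a large file can be processed in fixed-size pieces. The demo file here is created on the fly.

```r
tmp <- tempfile()
writeBin(as.double(1:100000), tmp)     # demo file: 100,000 doubles

con   <- file(tmp, "rb")               # open the connection once
total <- 0
repeat {
  chunk <- readBin(con, what = "double", n = 10000)  # one batch of records
  if (length(chunk) == 0) break                      # end of file reached
  total <- total + sum(chunk)                        # process this piece
}
close(con)
unlink(tmp)
total == sum(as.double(1:100000))      # TRUE: every record was seen once
```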
[R] data type for block data?
Dear All, I have a matrix with data that is not organised. I would like to go through this and extract it. Each feature has 2 vectors which express the data. I also have an index of the places where the data should be cut. E.g.

class(cc)
[1] "matrix"

cc
      [,1] [,2]
 [1,]    1   26
 [2,]    2   27
 [3,]    3   28
 [4,]    4   29
 [5,]    5   30
 [6,]    6   31
 [7,]    7   32
 [8,]    8   33
 [9,]    9   34
[10,]    1   27
[11,]    1   28
[12,]    2   30
[13,]    3   34
etc.

index
[1] 10 40

Is there a way to take cc[i:index[i-1],] to another format so that each block could be worked on separately? I.e., one block would be rows 1:10, the next block would be rows 11:40, and so on. Thanks, Paul -- Research Technician Mass Spectrometry o The / o Scripps \ o Research / o Institute
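One way to do this (a sketch with a stand-in matrix; 'index' holds the last row of each block, as in the question) is to expand the end-row index into a grouping vector and split the row numbers:

```r
cc    <- cbind(1:40, 26:65)   # stand-in for the real 2-column matrix
index <- c(10, 40)            # each block ends at these rows

grp    <- rep(seq_along(index), times = diff(c(0, index)))  # 1,1,...,2,2,...
blocks <- lapply(split(seq_len(nrow(cc)), grp),
                 function(i) cc[i, , drop = FALSE])

nrow(blocks[[1]])   # rows 1:10 in the first block
nrow(blocks[[2]])   # rows 11:40 in the second
```

Each element of 'blocks' is then an ordinary matrix that can be processed independently, e.g. with lapply().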
Re: [R] [OT] 'gv' and fractional points
Hi If I understand correctly, this is something that the 'grImport' package might be very useful for. You can import the PostScript image into R, which means that you can draw the image, but you also have the locations of everything that is drawn as numeric values so you should be able (probably after a bit of transformation) to extract the values to very good accuracy. If you can provide me with an example file, I'd be happy to play around and see if I could get this to work. Paul (Ted Harding) wrote: On 15-Jun-07 16:29:53, Ted Harding wrote: [...] However, as a follow-up, I've since found that one can (somewhat tediously) do what I was asking with the GIMP. As well as the awkwardness of doing it the GIMP way, I've discovered another disadvantage. I'd previously tried it on a rather small image (175x70 points, = 2.43x0.97 inches). I then tried it on a full A4 page. Even at a GIMP Scale factor of 300, this leads to a 50MB temporary file being created. At 1000, this would rise to some 550MB, as I found out after this attempt filled up the limited spare space I have on the disk drive in question ... No doubt Scale=300 (as opposed to the default of 100) may be ample for most purposes, but the overhead is still unpleasant! Hence I'm once again hankering after something which will display a PS file as efficiently as 'gv', but will report the cursor position in fractions of a point! Best wishes to all, Ted. E-Mail: (Ted Harding) [EMAIL PROTECTED] Fax-to-email: +44 (0)870 094 0861 Date: 15-Jun-07 Time: 19:18:48 -- XFMail -- E-Mail: (Ted Harding) [EMAIL PROTECTED] Fax-to-email: +44 (0)870 094 0861 Date: 15-Jun-07 Time: 20:33:19 -- XFMail -- __ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code. 
-- Dr Paul Murrell Department of Statistics The University of Auckland Private Bag 92019 Auckland New Zealand 64 9 3737599 x85392 [EMAIL PROTECTED] http://www.stat.auckland.ac.nz/~paul/ __ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
[R] random effects in logistic regression (lmer)-- identification question
[model output truncated]
3.5889e-01   1.000
number of obs: 6201, groups: NonwhiteID, 1736; PLACE, 33

The variance component estimated for NonwhiteID means that the variance observed among Nonwhite respondents is not substantially different from the implicit, unestimated individual-level random error. However, it still appears that there is a substantial place effect, for Nonwhites only. Do I understand that right?

Well, thanks in advance, as usual.

--
Paul E. Johnson
Professor, Political Science
1541 Lilac Lane, Room 504
University of Kansas

______
R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
[R] Can I access the filename of the active editor window?
When I run a script from an open editor window (using Ctrl-A, Ctrl-R), I would like the filename of the script to be automatically written into the program output, to keep up with frequent version changes. Is there a way to access the filename (+ path) of the open script (the active one, if there is more than one editor window open)?

I'm using R 2.4.1 on Windows XP.

Paul

______
R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
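One partial workaround, not from the thread and untested against the Windows GUI's Ctrl-A/Ctrl-R path (which may send lines to the console rather than go through source()): when a script is run via source(), the sourcing frame holds the file name in a local variable `ofile`, which a helper can fish out. The helper name `current.script` is made up for this sketch:

```r
## Sketch: relies on source() keeping the file name in its frame as 'ofile'.
## Works only when the script is executed via source(), not when the editor
## pastes lines into the console.
current.script <- function() {
  for (f in rev(sys.frames())) {        # innermost frames first
    if (!is.null(f$ofile)) return(f$ofile)
  }
  NULL                                  # not being source()d
}
```

A script could then start with something like `cat("Running:", current.script(), "\n")` to stamp its own name into the output.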
Re: [R] Venn diagram
I'm really glad to see this topic come up. Area-proportional Venn diagrams are a fantastic way to visualize agreement. For the 2-rater scenario this is straightforward; here are two examples from my own work that were done in R. (I'm far too embarrassed to enclose the code.)

http://myweb.dal.ca/partes/venn_example.jpg

For 3 (or more!) variables, it becomes tricky, but Chow and Ruskey have solved this recently (see link below).

http://www.cs.uvic.ca/~ruskey/Publications/VennArea/VennArea.html

I think there is even a Java applet for demonstration purposes. It would be fantastic to have a good R implementation of this... Unfortunately my own skills are several log units below what's required. I have written to Chow and Ruskey before but unfortunately not heard anything. Anyone up for the job??

Best wishes
Paul

Nina Hubner wrote:
> Hello, I am a total beginner with "R" and found a package "venn" to create
> a Venn diagram. The problem is, I cannot create the vectors required for
> the diagram. The manual says:
>
> > venn(accession, libname, main = "All samples")
>
> where accession was a vector containing the codes identifying the RNA
> sequences, and libname was a vector containing the codes identifying the
> tissue sample (library). The structure of my data is as follows:
>
> > structure(list(cyto = c("A", "B", "C", "D"), nuc = c("A", "B", "E", ""),
> +   chrom = c("B", "F", "", "")), .Names = c("cyto", "nuc", "chrom"))
>
> accession should be "A", "B", ... and libname should be cyto, nuc and
> chrom, as I understand it... Could you help me? Sorry, that might be a
> very simple question, but I am a total beginner as said before! The
> question has already been asked, but unfortunately there was no answer...
>
> Thank you a lot,
> Nina Hubner

______
R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
-- View this message in context: http://www.nabble.com/Venn-diagram-tf3846402.html#a10897668 Sent from the R help mailing list archive at Nabble.com. __ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
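For the two-circle case mentioned above, an area-proportional diagram only needs the centre distance d that produces the desired overlap. The area of the lens formed by two overlapping circles is a standard geometric formula, so d can be found numerically with uniroot(). A sketch, not from the thread; the radii and target overlap below are illustrative inputs:

```r
## Area of the lens formed by circles of radius r1, r2 with centres d apart
lens.area <- function(d, r1, r2) {
  r1^2 * acos((d^2 + r1^2 - r2^2) / (2 * d * r1)) +
  r2^2 * acos((d^2 + r2^2 - r1^2) / (2 * d * r2)) -
  0.5 * sqrt((-d + r1 + r2) * (d + r1 - r2) * (d - r1 + r2) * (d + r1 + r2))
}

## Centre distance that yields a given overlap area (lens.area is
## monotone decreasing in d, so the root is unique)
venn.distance <- function(overlap, r1 = 1, r2 = 1) {
  uniroot(function(d) lens.area(d, r1, r2) - overlap,
          lower = abs(r1 - r2) + 1e-9, upper = r1 + r2 - 1e-9,
          tol = 1e-9)$root
}

## Two unit circles at distance 1 overlap in area 2*pi/3 - sqrt(3)/2:
venn.distance(2 * pi / 3 - sqrt(3) / 2)   # approximately 1
```

With d in hand, the two circles can be drawn with symbols() or polygon(), with radii chosen so the circle areas are proportional to the marginal counts.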
[R] 3D plots with data.frame
Dear all,

Thank you for any help. I have a data.frame and would like to plot it in 3D. I have tried wireframe() and cloud(); I got

> scatterplot3d(xs)
Error: could not find function "scatterplot3d"
> wireframe(xs)
Error in wireframe(xs) : no applicable method for "wireframe"
> persp(x=x, y=y, z=xs)
Error in persp.default(x = x, y = y, z = xs) :
  (list) object cannot be coerced to 'double'
> class(xs)
[1] "data.frame"

where x and y were sequences from the min to the max (by 50) of xs[,1] and xs[,2]. My data is/looks like:

> dim(xs)
[1] 400   4
> xs[1:5,]
        x       y Z1  Z2
1 27172.4 19062.4  0 128
2 27000.9 19077.8  0   0
3 27016.8 19077.5  0   0
4 27029.5 19077.3  0   0
5 27045.4 19077.0  0   0

Cheers,
Paul

--
Research Technician, Mass Spectrometry
The Scripps Research Institute

______
R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
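The errors above are mostly interface issues: scatterplot3d() lives in the 'scatterplot3d' package (which must be installed and loaded), and the lattice functions want a formula plus a data frame, not the bare data frame. A sketch with made-up data in the same x/y/Z1/Z2 shape as the poster's xs:

```r
library(lattice)

## Made-up stand-in with the same 400 x 4 shape as the poster's xs
xs <- expand.grid(x = seq(0, 10, length.out = 20),
                  y = seq(0, 10, length.out = 20))
xs$Z1 <- sin(xs$x) * cos(xs$y)
xs$Z2 <- xs$Z1^2

p1 <- wireframe(Z1 ~ x * y, data = xs)   # surface, specified by formula
p2 <- cloud(Z1 ~ x * y, data = xs)       # 3D scatterplot
## print(p1) draws it on the current device

## persp() wants z as a matrix over a regular x/y grid, not a data frame:
z <- matrix(xs$Z1, nrow = 20)
## persp(unique(xs$x), unique(xs$y), z)
```

Note that wireframe() and persp() only make sense if (x, y) form a regular grid; for irregular (x, y) locations, cloud() or an interpolation step is needed first.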
Re: [R] quartz() on MAC OSX
On Tue, 22 May 2007, hadley wickham wrote:
> On 5/22/07, Rolf Turner wrote:
> > On 22/5/07 6:48 AM, hadley wickham wrote:
> > > Two possible solutions:
> > > * DISPLAY=0.0.0.0:0 R - and then X11() should work without having to
> > >   use xterm
> > > * install.packages("CarbonEL"); library(CarbonEL); quartz()
> >
> > It is clear that life is determined to frustrate me. I had a look at
> > CRAN just now and could find no sign of a package called CarbonEL. The
> > list jumps from car to cat --- no Carbon of any flavour. What gives?
> > Also I tried setting the DISPLAY (probably incorrectly, since I don't
> > understand what's going on). I used
> >   Sys.setenv(DISPLAY="0.0.0.0:0")
> >   X11()
> > and got the error message
> >   Error in X11() : X11 module cannot be loaded
>
> Sorry, just type DISPLAY=0.0.0.0:0 R at the command prompt

Edit ~/.Rprofile (create if necessary) as below:

$ cat > ~/.Rprofile <<NOMORE
Sys.putenv(DISPLAY=":0.0")
NOMORE
$

You should start the X11 server first before expecting the functionality to work, as below:

[[ Terminal.app ]]
$ open /Applications/Utilities/X11.app
$ R
> X11()

Discussion of Mac-specific issues belongs in r-sig-mac, not r-help.

--
SIGSIG -- signature too long (core dumped)

______
R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
Re: [R] R2 always increases as variables are added?
On 5/21/07, Alberto Monteiro wrote:
> Paul Lynch wrote:
> > I don't think it makes sense to compare models with and without an
> > intercept term. (Also, I don't know what the point of using a model
> > without an intercept term would be, but that is probably just my
> > ignorance.)
>
> Suppose that you are 100% sure that the intercept term is zero, or so
> insignificantly small as not to matter. For example, if you are measuring
> the density of some material, and you determine a lot of pairs
> (mass, volume), you know that mass = density * volume, with intercept zero.

In that case, you are 100% sure that the intercept *should* be zero, but you aren't 100% sure that the measurements have a best fit with intercept zero. There could have been some systematic error that is throwing things off. It seems safer to leave the intercept in and let the data show that the intercept is insignificantly small. However, I don't really know enough to know whether that is always the best approach. (And given that R provides a facility for excluding the intercept, I suspect there must be some good reason for doing so in some circumstances.)

--
Paul Lynch
Aquilent, Inc.
National Library of Medicine (Contractor)

______
R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
Re: [R] R2 always increases as variables are added?
Junjie,

First, a disclaimer: I am not a statistician, and have only taken one statistics class, but I just took it this Spring, so the concepts of linear regression are relatively fresh in my head and hopefully I will not be too inaccurate.

According to my statistics textbook, when selecting variables for a model, the intercept term is always present. The variables under consideration do not include the constant 1 that multiplies the intercept term. I don't think it makes sense to compare models with and without an intercept term. (Also, I don't know what the point of using a model without an intercept term would be, but that is probably just my ignorance.)

Similarly, the formula you were using for R**2 seems to only be useful in the context of a standard linear regression (i.e., one that includes an intercept term). As your example shows, it is easy to construct a fit (e.g. y = 10,000,000*x) so that SSR > SST if one is not deriving the fit from the regular linear regression process.

--Paul

On 5/19/07, 李俊杰 wrote:
> I know that -1 indicates removal of the intercept term. But my question is
> why the intercept term CANNOT be treated as a variable term, as when we
> place a column of 1s in the predictor matrix. If I stick to making a
> comparison between a model with an intercept and one without on the
> adjusted R2, I now think the strategy is always to use another definition
> of r-square or adjusted r-square, in which
> r-square = sum((y.hat)^2)/sum((y)^2). Am I on the right track?
>
> Thanks
> Li Junjie
>
> 2007/5/19, Paul Lynch:
> > In case you weren't aware, the meaning of the -1 in y ~ x - 1 is to
> > remove the intercept term that would otherwise be implied. --Paul

On 5/17/07, 李俊杰 wrote:
> Hi, everybody, 3 questions about R-square:
>
> (1) Does R2 always increase as variables are added?
> (2) Can R2 be greater than 1?
> (3) How is R2 in summary(lm(y~x-1))$r.squared calculated? It is different
>     from r.square = sum((y.hat - mean(y))^2) / sum((y - mean(y))^2).
>
> I will illustrate these problems with the following code:
>
> ### (1) R2 doesn't always increase as variables are added
> > x = matrix(rnorm(20), ncol=2)
> > y = rnorm(10)
> > lm = lm(y~1)
> > y.hat = rep(1*lm$coefficients, length(y))
> > (r.square = sum((y.hat-mean(y))^2)/sum((y-mean(y))^2))
> [1] 2.646815e-33
> > lm = lm(y~x-1)
> > y.hat = x %*% lm$coefficients
> > (r.square = sum((y.hat-mean(y))^2)/sum((y-mean(y))^2))
> [1] 0.4443356
> > # This is the biggest model, but its R2 is not the biggest, why?
> > lm = lm(y~x)
> > y.hat = cbind(rep(1,length(y)), x) %*% lm$coefficients
> > (r.square = sum((y.hat-mean(y))^2)/sum((y-mean(y))^2))
> [1] 0.2704789
>
> ### (2) R2 can be greater than 1
> > x = rnorm(10)
> > y = runif(10)
> > lm = lm(y~x-1)
> > y.hat = x*lm$coefficients
> > (r.square = sum((y.hat-mean(y))^2)/sum((y-mean(y))^2))
> [1] 3.513865
>
> ### (3) How is R2 in summary(lm(y~x-1))$r.squared calculated? It is
> ### different from r.square = sum((y.hat-mean(y))^2)/sum((y-mean(y))^2)
> > x = matrix(rnorm(20), ncol=2)
> > xx = cbind(rep(1,10), x)
> > y = x %*% c(1,2) + rnorm(10)
> > ### r2 calculated by lm(y~x)
> > lm = lm(y~x)
> > summary(lm)$r.squared
> [1] 0.9231062
> > ### r2 calculated by lm(y~xx-1)
> > lm = lm(y~xx-1)
> > summary(lm)$r.squared
> [1] 0.9365253
> > ### r2 calculated by me
> > y.hat = xx %*% lm$coefficients
> > (r.square = sum((y.hat-mean(y))^2)/sum((y-mean(y))^2))
> [1] 0.9231062
>
> Thanks a lot for any clue :)
>
> --
> Junjie Li, [EMAIL PROTECTED]
> Undergraduate in DEP of Tsinghua University

--
Paul Lynch
Aquilent, Inc.
National Library of Medicine (Contractor)

______
R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
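Regarding question (3) above: for a no-intercept fit, summary.lm() defines R² against the uncentred total sum of squares sum(y²) rather than sum((y - mean(y))²), which is exactly why the hand computation in the thread disagrees. A small check (a sketch with simulated data, not from the thread):

```r
set.seed(42)
x <- rnorm(10)
y <- runif(10)
fit <- lm(y ~ x - 1)               # no intercept
y.hat <- fitted(fit)

## summary()'s definition for a model WITHOUT an intercept compares the
## fitted sum of squares against the uncentred total sum of squares:
r2.uncentred <- sum(y.hat^2) / sum(y^2)
all.equal(r2.uncentred, summary(fit)$r.squared)   # TRUE
```

The identity holds because OLS residuals are orthogonal to the fitted values, so sum(y²) = sum(fitted²) + sum(residual²); with an intercept the same decomposition holds around mean(y), which recovers the familiar centred formula.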
[R] stacked barplot with positive and negatvie values
Hello,

I'm trying to create a barplot with a couple of stacked positive values and with one negative value for each group. Example:

trees <- c(20,30,10)
shrubs <- c(12,23,9)
veg <- c(2,3,4)
soil <- c(-100,-123,-89)
example1 <- t(cbind(trees,shrubs,veg))
barplot(example1)   # this works so far

# but now:
example2 <- t(cbind(trees,shrubs,veg,soil))
barplot(example2)

This shows no more stacked bars. But I want to keep the bars like example1 and just add the negative values, which have another scale downwards. So I tried:

barplot(example1, axes=F)
barplot(example2["soil",], add=T, axes=F)
axis(side=2, at=c(-150,-100,-50,0,10,20,30))

But it still does not work for the axis. I would appreciate any kind of hint.

Greetings,
Paul Magdon

--
BSc. Paul Magdon - Research Assistant -
Institute of Forest Management
Forest Assessment, Remote Sensing, Forest Growth, Forest Planning
Faculty of Forest Sciences and Forest Ecology
Georg-August-University Göttingen
Phone +49 551 39 3573
[EMAIL PROTECTED] / skype: paul.magdon

______
R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
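One way to get the picture (a sketch, not from the thread): draw the stacked positive part first with a ylim that leaves room below zero, then overlay the negative series with add=TRUE. This assumes both barplot() calls use the default bar widths and spacing, so the bars line up:

```r
trees  <- c(20, 30, 10)
shrubs <- c(12, 23,  9)
veg    <- c( 2,  3,  4)
soil   <- c(-100, -123, -89)

## First pass: stacked positive components, leaving room below zero
barplot(rbind(trees, shrubs, veg), ylim = c(-150, 50), axes = FALSE)

## Second pass: the negative component, drawn into the same coordinates
barplot(rbind(soil), add = TRUE, axes = FALSE, col = "tan")

axis(side = 2, at = c(-150, -100, -50, 0, 25, 50))
```

The single shared axis means both parts use the same scale; giving the negative values a genuinely different scale would require rescaling soil before plotting and relabelling the lower axis ticks by hand.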
Re: [R] Follow-up about ordinal logit with mixtures: how about 'continuation ratio' strategy?
On 5/10/07, Frank E Harrell Jr wrote:
> Paul Johnson wrote:
> > This is a follow up to the message I posted 3 days ago about how to
> > estimate mixed ordinal logit models. I hope you don't mind that I am
> > just pasting in the code and comments from an R file for your feedback.
> > Actual estimates are at the end of the post. . . .
>
> Paul, lrm does not give an incorrect sign on the intercepts. Just look at
> how it states the model, in terms of Prob(Y >= j), so that its
> coefficients are consistent with the way people state binary models. I'm
> not clear on your generation of simulated data. I specify the population
> logit, anti-logit that, and generate binary responses with those
> probabilities. I don't use rlogis.

Thank you. I don't think I'm telling you anything you don't already know, but for the record, here goes.

I think the difference in signs is just convention within fields. In choice models (the econometric tradition), we usually write that the response is in a higher category if

  eta + random > cutpoint

and that's how I created the data; rlogis supplies the random noise. Then

  eta - cutpoint > -random,  i.e.  cutpoint - eta < random

and so Prob(higher outcome) = Prob(random > cutpoint - eta). In the docs on polr from MASS, V&R say they have the logit equal to cutpoint - eta, so their parameterization is consistent with mine. On the other hand, your approach is to say the response is in a higher category if

  intercept + eta > random

where I think your intercept is -cutpoint. So the signs in your results are reversed:

  -cutpoint + eta > random.

But this is aside from the major question I am asking. Do we think that the augmented data frame approach described in Cole, Allison, and Ananth is a good alternative to maximum likelihood estimation of ordinal logit models, whether they are interpreted as proportional odds, continuation, or stopping models?

In the cases I've tested, the parameter estimates from the augmented data frame are consistent with polr or lrm, but the standard errors and other diagnostic information are quite different. I do not think I can follow your suggestion to use penalties in lrm because I have to allow for the possibility that there are random effects across clusters of observations, possibly including random slope effects, but certainly including random intercepts for 2 levels of groupings (in the HLM sense of these things). Meanwhile, I'm studying how to use optim and numeric integration to see if the results are comparable.

pj

> See if using the PO model with lrm with penalization on the factor does
> what you need. lrm is not set up to omit an intercept with the -1
> notation. My book goes into details about the continuation ratio model.
>
> Frank Harrell

--
Paul E. Johnson
Professor, Political Science
1541 Lilac Lane, Room 504
University of Kansas

______
R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
[R] Nonlinear constraints with optim
Dear All,

I am dealing at the moment with optimization problems with nonlinear constraints. Rgenoud is quite apt to solve that kind of problem, but the precision of the optimal values for the parameters is sometimes far from what I need. Optim seems to be more precise, but it can only accept box-constrained optimization problems. I read in the list archives that optim can also be used with nonlinear constraints through penalizations. However, I am not familiar with the technique of penalizations. Could someone please indicate to me a site or a book to learn about that penalization technique?

Thanks in advance,

Paul

______
R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
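The penalization idea in code, on a toy problem (a sketch, not from the thread): maximise x1 + x2 subject to x1² + x2² ≤ 1 by adding a squared-violation penalty, which keeps the objective continuously differentiable so optim's derivative-based methods behave:

```r
## Quadratic (squared-hinge) penalty: zero inside the feasible region,
## and growing smoothly as the constraint is violated.
penalised <- function(x, rho = 1e6) {
  viol <- max(0, x[1]^2 + x[2]^2 - 1)   # constraint violation, if any
  -(x[1] + x[2]) + rho * viol^2         # minus sign: optim() minimises
}

fit <- optim(c(0.5, 0.5), penalised, method = "BFGS")
fit$par    # close to the analytic optimum c(sqrt(2)/2, sqrt(2)/2)
```

In practice one typically starts with a modest rho and increases it until the solution stabilises; the returned point sits slightly outside the constraint, with an error that shrinks as rho grows.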
Re: [R] Increasing precision of rgenoud solutions
Thanks, Jasjeet, for your reply, but maybe I was not clear enough. The analytical solution for the optimization problem is the pair (sqrt(2)/2, sqrt(2)/2), which, approximately, is (0.707106781186548, 0.707106781186548). The solution provided by rgenoud, with solution.tolerance=0.1, was

$par
[1] 0.7090278 0.7051806

which is not very precise compared with the values of the (analytical) solution. Is it possible to increase the degree of closeness of the rgenoud solutions to the analytical ones?

Paul

On 5/10/07, Jasjeet Singh Sekhon wrote:
> Hi Paul,
>
> solution.tolerance is the right way to increase precision. In your
> example, extra precision *is* being obtained, but it is just not displayed
> because the number of digits which get printed is controlled by the
> options("digits") variable. But the requested solution precision is in the
> object returned by genoud(). For example, if I run
>
> a <- genoud(myfunc, nvars=2, Domains=rbind(c(0,1),c(0,1)), max=TRUE,
>             boundary.enforcement=2, solution.tolerance=0.01)
>
> I get
>
> a$par
> [1] 0.7062930 0.7079196
>
> But if I set options(digits=12), and without rerunning anything I check
> a$par again, I observe that:
>
> a$par
> [1] 0.706293049455 0.707919577559
>
> Cheers,
> Jas.
>
> ===
> Jasjeet S. Sekhon
> Associate Professor
> Survey Research Center
> UC Berkeley
> http://sekhon.berkeley.edu/
> V: 510-642-9974  F: 617-507-5524
> ===
>
> Paul Smith writes:
> > Dear All
> >
> > I am using rgenoud to solve the following maximization problem:
> >
> > myfunc <- function(x) {
> >   x1 <- x[1]
> >   x2 <- x[2]
> >   if (x1^2 + x2^2 > 1) return(-999) else x1 + x2
> > }
> >
> > genoud(myfunc, nvars=2, Domains=rbind(c(0,1),c(0,1)), max=TRUE,
> >        boundary.enforcement=2, solution.tolerance=0.01)
> >
> > How can one increase the precision of the solution
> >
> > $par
> > [1] 0.7072442 0.7069694
> >
> > ? I have tried solution.tolerance but without a significant improvement.
> > Any ideas?
Thanks in advance, Paul __ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
Re: [R] Increasing precision of rgenoud solutions
Thanks a lot, Jasjeet. That is it.

Paul

On 5/10/07, Jasjeet Singh Sekhon wrote:
> Hi Paul,
>
> I see. You want to increase the population size (pop.size) option; of
> lesser importance are the max.generations, wait.generations and P9
> options. For more details, see
> http://sekhon.berkeley.edu/papers/rgenoudJSS.pdf. For example, if I run
>
> a <- genoud(myfunc, nvars=2, Domains=rbind(c(0,1),c(0,1)), max=TRUE,
>             boundary.enforcement=2, solution.tolerance=0.001,
>             pop.size=6000, P9=50)
> options(digits=12)
>
> I obtain:
>
> # approx analytical solution
> sum(c(0.707106781186548, 0.707106781186548))
> [1] 1.41421356237
> # genoud solution
> a$value
> [1] 1.41421344205
> # difference
> a$value - sum(c(0.707106781186548, 0.707106781186548))
> [1] -2.91195978441e-09
>
> If that's not enough precision, increase the options (and the run-time).
> This would be faster with analytical derivatives.
>
> Cheers,
> Jas.
>
> ===
> Jasjeet S. Sekhon
> Associate Professor
> Travers Department of Political Science
> Survey Research Center
> UC Berkeley
> http://sekhon.berkeley.edu/
> V: 510-642-9974  F: 617-507-5524
> ===
>
> Paul Smith writes:
> > Thanks, Jasjeet, for your reply, but maybe I was not clear enough. The
> > analytical solution for the optimization problem is the pair
> > (sqrt(2)/2, sqrt(2)/2), which, approximately, is
> > (0.707106781186548, 0.707106781186548). The solution provided by
> > rgenoud, with solution.tolerance=0.1, was
> >
> > $par
> > [1] 0.7090278 0.7051806
> >
> > which is not very precise compared with the values of the (analytical)
> > solution. Is it possible to increase the degree of closeness of the
> > rgenoud solutions to the analytical ones?
> >
> > Paul
> >
> > Paul Smith writes:
> > > Dear All
> > >
> > > I am using rgenoud to solve the following maximization problem:
> > >
> > > myfunc <- function(x) {
> > >   x1 <- x[1]
> > >   x2 <- x[2]
> > >   if (x1^2 + x2^2 > 1) return(-999) else x1 + x2
> > > }
> > >
> > > genoud(myfunc, nvars=2, Domains=rbind(c(0,1),c(0,1)), max=TRUE,
> > >        boundary.enforcement=2, solution.tolerance=0.01)
> > >
> > > How can one increase the precision of the solution
> > >
> > > $par
> > > [1] 0.7072442 0.7069694
> > >
> > > ?
I have tried solution.tolerance but without a significant improvement. Any ideas? Thanks in advance, Paul __ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
Re: [R] Nonlinear constraints with optim
On 5/10/07, Spencer Graves wrote:
> I don't know of any sources, but the idea is quite simple. For each
> constraint that is broken, the penalty is the amount by which the
> constraint is broken times a penalty rate. The total penalty to add to the
> objective is the sum of penalties over all constraints.
>
> There is a catch or two when using this with derivative-based optimizers.
> The objective typically becomes non-differentiable at the boundary, and
> optimizers can get confused. I believe I've gotten good results with
> penalties that are the SQUARE of the amount by which the constraints were
> violated. These are continuously differentiable and so don't confuse the
> derivative-based optimizers much. Also, I start with a small penalty, then
> increase the penalty until I get a solution that seems sensible. If you
> can't handle a solution just a little outside your constraints, shrink a
> little the place at which the penalty starts.
>
> Hope this helps.
> Spencer Graves
>
> > They might be less confused with smaller penalty rates. However, if the
> > penalty rate is too small, then you can get a solution that breaks one
> > or more constraints. Starting from a solution given by Rgenoud or its
> > ilk is probably a good idea.
> >
> > Patrick Burns
> > [EMAIL PROTECTED]
> > +44 (0)20 8525 0696
> > http://www.burns-stat.com
> > (home of S Poetry and A Guide for the Unwilling S User)
> >
> > Paul Smith wrote:
> > > Dear All
> > >
> > > I am dealing at the moment with optimization problems with nonlinear
> > > constraints. Rgenoud is quite apt to solve that kind of problem, but
> > > the precision of the optimal values for the parameters is sometimes
> > > far from what I need. Optim seems to be more precise, but it can only
> > > accept box-constrained optimization problems. I read in the list
> > > archives that optim can also be used with nonlinear constraints
> > > through penalizations. However, I am not familiar with the technique
> > > of penalizations. Could someone please indicate to me a site or a
> > > book to learn about that penalization technique?
> > >
> > > Thanks in advance,

Thanks to all. I have meanwhile had a look at Fletcher's book (as suggested by Ravi) and I think that now I understand the idea behind using penalties in constrained optimization problems.

Paul

______
R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
[R] Increasing precision of rgenoud solutions
Dear All,

I am using rgenoud to solve the following maximization problem:

myfunc <- function(x) {
  x1 <- x[1]
  x2 <- x[2]
  if (x1^2 + x2^2 > 1) return(-999) else x1 + x2
}

genoud(myfunc, nvars=2, Domains=rbind(c(0,1),c(0,1)), max=TRUE,
       boundary.enforcement=2, solution.tolerance=0.01)

How can one increase the precision of the solution

$par
[1] 0.7072442 0.7069694

? I have tried solution.tolerance but without a significant improvement. Any ideas?

Thanks in advance,

Paul

______
R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
Re: [R] Unit Testing Frameworks: summary and brief discussion
Tony,

Thanks for the summary. My ad hoc system is pretty good for catching flagged errors, and numerical errors when I have a check. Could you (or someone else) comment on how easy it would be with one of these more formal frameworks to do three things I have not been able to accomplish easily:

- My code gives error and warning messages in some situations. I want to test that the errors and warnings work, but these flags are the correct response to the test. In fact, it is an error if I don't get the flag. How easy is it to set up automatic tests to check that warning and error messages work?

- For some things it is the printed format that matters. How easy is it to set up a test of the printed output? (Something like the Rout files used in R CMD check.) I think this is what Tony Plate is calling transcript file tests, and I guess it is not automatically available. I am not really interested in something I would have to change with each new release of R, and I need it to work cross-platform. I want to know when something has changed, in R or my own code, without having to examine the output carefully.

- (And now the hard one.) For some things it is the plotted output that matters. Is it possible to set up automatic tests of plotting? I can already test that plots run. I want to know if they look very different. And no, I don't have a clue where to start on this one.

Paul Gilbert

[EMAIL PROTECTED] wrote:
> Greetings - I'm finally finished review, here's what I heard:
>
> from Tobias Verbeke:
>
> [EMAIL PROTECTED] wrote:
> > Greetings! After a quick look at current programming tools, especially
> > with regards to unit-testing frameworks, I've started looking at both
> > butler and RUnit. I would be grateful to receive real-world development
> > experience and opinions with either/both. Please send to me directly
> > (yes, this IS my work email), I will summarize (named or anonymous, as
> > contributors desire) to the list.
I'm founding member of an R Competence Center at an international consulting company delivering R services mainly to the financial and pharmaceutical industries. Unit testing is central to our development methodology and we've been systematically using RUnit with great satisfaction, mainly because of its simplicity. The presentation of test reports is basic, though. Experiences concerning interaction with the RUnit developers are very positive: gentle and responsive people. We've never used butler. I think it is not actively developed (even if the developer is very active). It should be said that many of our developers (including myself) have backgrounds in statistics (more than in cs or software engineering) and are not always acquainted with the functionality in other unit testing frameworks and the way they integrate in IDEs as is common in these other languages. I'll soon be personally working with a JUnit guru and will take the opportunity to benchmark RUnit/ESS/emacs against his toolkit (Eclipse with JUnit- and other plugins, working `in perfect harmony' (his words)). Even if in my opinion the philosophy of test-driven development is much more important than the tools used, it is useful to question them from time to time and your message reminded me of this... I'll keep you posted if it interests you. Why not work out an evaluation grid / check list for unit testing frameworks ? Totally unrelated to the former, it might be interesting to ask oneself how ESS could be extended to ease unit testing: after refactoring a function some M-x ess-unit-test-function automagically launches the unit-test for this particular function (based on the test function naming scheme), opens a *test report* buffer etc. Kind regards, Tobias from Tony Plate: Hi, I've been looking at testing frameworks for R too, so I'm interested to hear of your experiences perspective. 
Here's my own experiences perspective: The requirements are: (1) it should be very easy to construct and maintain tests (2) it should be easy to run tests, both automatically and manually (3) it should be simple to look at test results and know what went wrong where I've been using a homegrown testing framework for S-PLUS that is loosely based on the R transcript style tests (run *.R and compare output with *.Rout.save in 'tests' dir). There are two differences between this test framework and the standard R one: (1) the output to match and the input commands are generated from an annotated transcript (annotations can switch some tests in or out depending on the version used) (2) annotations can include text substitutions (regular expression style) to be made on the output before attempting to match (this helps make it easier to construct tests that will match across different versions that might have minor cosmetic differences in how output is formatted). We use this test framework for both
Re: [R] Unit Testing Frameworks: summary and brief discussion
Hi. Paul Gilbert wrote: Tony, thanks for the summary. My ad hoc system is pretty good at catching flagged errors, and numerical errors when I have a check. Could you (or someone else) comment on how easy it would be, with one of these more formal frameworks, to do three things I have not been able to accomplish easily? - My code gives error and warning messages in some situations. I want to test that the errors and warnings work: these flags are the correct response to the test; in fact, it is an error if I don't get the flag. How easy is it to set up automatic tests that check the warning and error messages work? - For some things it is the printed format that matters. How easy is it to set up a test of the printed output? (Something like the Rout files used in R CMD check.) I think this is what Tony Plate is calling transcript-file tests, and I guess it is not automatically available. I am not really interested in something I would have to change with each new release of R, and I need it to work cross-platform. I want to know when something has changed, in R or in my own code, without having to examine the output carefully. - (And now the hard one.) For some things it is the plotted output that matters. Is it possible to set up automatic tests of plotting? I can already test that plots run. I want to know if they look very different. And no, I don't have a clue where to start on this one. For text-based graphics formats, you can just use diff; for raster formats, you can do per-pixel comparisons. These days there is ImageMagick to do a compare, and it will even produce an image of the difference. I have an old package called graphicsQC (not on CRAN) that implemented some of these ideas (there was a talk at DSC 2003; see http://www.stat.auckland.ac.nz/~paul/index.html). A student worked on a much better approach more recently, but I haven't put that up on the web yet. 
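On the first question above (verifying that errors and warnings fire), a small base-R sketch, not from the thread, of how such checks can be automated with tryCatch() and withCallingHandlers(); the function f() is a made-up example whose *correct* behaviour is to signal conditions:

```r
## Hypothetical function under test: it must stop() and warning() as shown.
f <- function(x) {
  if (x < 0) stop("x must be non-negative")
  if (x == 0) warning("x is zero; result may be degenerate")
  sqrt(x)
}

## Test that the error is raised (the handler captures its message;
## if f(-1) returned normally, err would be NULL and the check would fail)
err <- tryCatch({ f(-1); NULL }, error = function(e) conditionMessage(e))
stopifnot(identical(err, "x must be non-negative"))

## Test that the warning is raised, without discarding the return value
warn <- NULL
res <- withCallingHandlers(f(0), warning = function(w) {
  warn <<- conditionMessage(w)
  invokeRestart("muffleWarning")
})
stopifnot(!is.null(warn), res == 0)
```

RUnit provides checkException() for the error case; the warning check has to be hand-rolled roughly as above.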
Let me know if you'd like to take a look at the newer package (it would help to have somebody nagging me to get it finished off). Paul Paul Gilbert [EMAIL PROTECTED] wrote: Greetings - I've finally finished my review; here's what I heard: from Tobias Verbeke: [EMAIL PROTECTED] wrote: Greetings! After a quick look at current programming tools, especially with regard to unit-testing frameworks, I've started looking at both butler and RUnit. I would be grateful to receive real-world development experience and opinions with either/both. Please send to me directly (yes, this IS my work email); I will summarize (named or anonymous, as contributors desire) to the list. I'm a founding member of an R Competence Center at an international consulting company delivering R services mainly to the financial and pharmaceutical industries. Unit testing is central to our development methodology and we've been systematically using RUnit with great satisfaction, mainly because of its simplicity. The presentation of test reports is basic, though. Our experience interacting with the RUnit developers is very positive: gentle and responsive people. We've never used butler. I think it is not actively developed (even if the developer is very active). It should be said that many of our developers (including myself) have backgrounds in statistics (more than in CS or software engineering) and are not always acquainted with the functionality in other unit-testing frameworks and the way they integrate into IDEs, as is common in those other languages. I'll soon be personally working with a JUnit guru and will take the opportunity to benchmark RUnit/ESS/Emacs against his toolkit (Eclipse with JUnit and other plugins, working `in perfect harmony' (his words)). Even if, in my opinion, the philosophy of test-driven development is much more important than the tools used, it is useful to question them from time to time, and your message reminded me of this... I'll keep you posted if it interests you. 
Why not work out an evaluation grid / checklist for unit-testing frameworks? Totally unrelated to the former, it might be interesting to ask oneself how ESS could be extended to ease unit testing: after refactoring a function, some M-x ess-unit-test-function automagically launches the unit test for this particular function (based on the test-function naming scheme), opens a *test report* buffer, etc. Kind regards, Tobias. From Tony Plate: Hi, I've been looking at testing frameworks for R too, so I'm interested to hear of your experience and perspective. Here's my own perspective. The requirements are: (1) it should be very easy to construct and maintain tests; (2) it should be easy to run tests, both automatically and manually; (3) it should be simple to look at test results and know what went wrong where. I've been using a homegrown testing framework for S-PLUS that is loosely based on the R transcript-style tests.
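For readers unfamiliar with the RUnit conventions discussed in this thread, a minimal sketch (assuming the RUnit package is installed; the function under test, cumProd, is a made-up example):

```r
library(RUnit)

## Hypothetical function under test
cumProd <- function(x) Reduce(`*`, x, accumulate = TRUE)

## RUnit discovers test functions by a configurable naming pattern,
## conventionally names starting with "test."
test.cumProd <- function() {
  checkEquals(cumProd(1:4), c(1, 2, 6, 24))
  checkTrue(length(cumProd(numeric(0))) == 0)
  checkException(sum("a"))  # an expression that must signal an error
}
```

A suite is then built with defineTestSuite(), pointing at the directory of test files, and executed with runTestSuite(), which produces the (admittedly basic) report mentioned above.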
[R] Follow-up about ordinal logit with mixtures: how about 'continuation ratio' strategy?
This is a follow up to the message I posted 3 days ago about how to estimate mixed ordinal logit models. I hope you don't mind that I am just pasting in the code and comments from an R file for your feedback. Actual estimates are at the end of the post. ### Subject: mixed ordinal logit via augmented data setup. ### I've been interested in estimating an ordinal logit model with ### a random parameter. I asked in r-help about it. It appears to be ### a difficult problem because even well established commercial ### programs like SAS are prone to provide unreliable estimates. ### So far, I've found 3 avenues for research. 1) Go Bayesian and use ### MCMC to estimate the model. 2) Specify a likelihood function and ### then use R's optim function (as described in Laura A. Thompson, ### 2007, S-PLUS (and R) Manual to Accompany Agresti's Categorical ### Data Analysis (2002) 2nd edition). My guess is that either of ### those approaches would be worth the while, but I might have ### trouble persuading a target audience that they have good ### properties. 3) Adapt a continuation ratio approach. ### This latter approach was suggested by a post in r-help by Daniel ### Farewell farewelld_at_Cardiff.ac.uk ### http://tolstoy.newcastle.edu.au/R/help/06/08/32398.html#start ### It pointed me in the direction of continuation ratio logit models ### and one way to estimate an ordinal logit model with random ### parameters. ### Farewell's post gives working example code that shows a way to ### convert a K category ordinal variable into K-1 dichotomous ### indicators (a continuation ratio model). Those K-1 indicators ### can be stacked into one column and then a logistic regression ### program that is written for a two-valued output can be used. ### Farewell reasoned that one might then use a program for two-valued ### outputs including mixed effects. 
In his proposal, one would use ### the program lmer (package: lme4) ( a binomial family with a logit ### link) to estimate parameters for a dichotomous logit model with ### random parameters. ### This is the sort of magic trick I had suspected might be possible. ### Still, it is hard to believe it would work. But in the r-help ### response to the post by Farewell, there is no general objection ### against his modeling strategy. ### I had not studied continuation ratio logit models before, so I ### looked up a few articles on estimation of ordinal models by ### re-coding the output as a sequence of binary comparisons (stop ### ratios, continuation ratios, etc). The article that is most clear ### on how this can be done to estimate a proportional odds logistic ### model is ### Stephen R. Cole, Paul D. Allison, and Cande V. Ananth, ### Estimation of Cumulative Odds Ratios ### Ann Epidemiol 2004;14:172–178. ### They claim that one can recode an n-chotomy into n-1 dichotomous ### indicators. Each observation in the original dataset begets n-1 ### lines in the augmented version. After creating the dichotomous ### indicator, one uses an ordinary dichotomous logit model to ### estimate parameters and cutpoints for an ordinal logit ### model. Cole, et al., are very clear. ### There is an additional benefit to the augmented data approach. ### One can explicitly test the proportional odds assumption by checking ### for interactions between the included independent variables and the ### level of the dependent variable being considered. The Cole ### article shows some examples where the proportional assumption appears ### to be violated. ### To test it, I created the following example. This shows the ### results of maximum likelihood estimation with the programs polr ### (package:MASS) and lrm (package: Design). The estimates from ### the augmented data approach are not exactly the same as polr or ### lrm, but they are close. 
It appears to me the claims about the ### augmented data approach are mostly correct. The parameter ### estimates are pretty close to the true values, while the estimates ### of the ordinal cutpoints are a bit difficult to interpret. ### I don't know what to make of the model diagnostics for the augmented ### data model. Should I have confidence in the standard errors? How to interpret the degrees of freedom when 3 lines ### of data are manufactured from 1 observation? Are likelihood-ratio ### (anova) tests valid in this context? Are these estimates from the ### augmented data equivalent to maximum likelihood? What does it ### mean that the t-ratios are so different? That seems to be prima-facie ### evidence that the estimates based on the augmented data set are not ### trustworthy. ### Suppose I convince myself that the estimates of the ordinal model ### are as good as maximum likelihood. Is it reasonable to take the ### next step, and follow Farewell's idea of using this kind of model ### to estimate a mixture model? There are K-1 lines per case ### in the augmented data set. Suppose the observations were grouped ### into M sets
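A minimal sketch of the data-augmentation step described above (my own illustration, not Cole et al.'s code): each observation of a K-category ordinal response becomes K-1 rows, each carrying a binary indicator for "y exceeds cut k" and a factor recording which cut is being modelled:

```r
## Expand a data frame with an ordinal response (integer codes 1..K)
## into the augmented form used to fit cumulative odds via binary logit.
augment_ordinal <- function(df, yname = "y") {
  K <- max(df[[yname]])
  out <- do.call(rbind, lapply(seq_len(K - 1), function(k) {
    d <- df
    d$cut  <- factor(k, levels = seq_len(K - 1))  # which threshold this row tests
    d$ybin <- as.integer(df[[yname]] > k)         # 1 if y exceeds cut k
    d
  }))
  rownames(out) <- NULL
  out
}

set.seed(1)
dat <- data.frame(y = sample(1:4, 10, replace = TRUE), x = rnorm(10))
aug <- augment_ordinal(dat)

## Ordinary binary logit on the augmented data; the 'cut' factor plays
## the role of the K-1 ordinal cutpoints (no common intercept).
fit <- glm(ybin ~ cut + x - 1, family = binomial, data = aug)
```

As the post itself asks, the naive standard errors from such a fit are suspect, because the K-1 rows generated from one observation are not independent.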
Re: [R] Bad optimization solution
It seems that there is a problem of reliability here, as one never knows whether the solution provided by R is correct or not. In the case that I reported, it is fairly simple to see that the solution provided by R (without any warning!) is incorrect, but, in general, that is not so simple, and one may take a wrong solution as a correct one. Paul On 5/8/07, Ravi Varadhan [EMAIL PROTECTED] wrote: Your function, (x1-x2)^2, has zero gradient at all the starting values such that x1 = x2, which means that the gradient-based search methods will terminate there because they have found a critical point, i.e. a point at which the gradient is zero (which can be a maximum or a minimum or a saddle point). However, I do not know why optim converges to the boundary maximum when the analytic gradient is supplied (as shown by Sundar). Ravi. --- Ravi Varadhan, Ph.D. Assistant Professor, The Center on Aging and Health, Division of Geriatric Medicine and Gerontology, Johns Hopkins University. Ph: (410) 502-2619 Fax: (410) 614-9625 Email: [EMAIL PROTECTED] Webpage: http://www.jhsph.edu/agingandhealth/People/Faculty/Varadhan.html -----Original Message----- From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Paul Smith Sent: Monday, May 07, 2007 6:26 PM To: R-help Subject: Re: [R] Bad optimization solution On 5/7/07, Paul Smith [EMAIL PROTECTED] wrote: I think the problem is the starting point. I do not remember the details of the BFGS method, but I am almost sure the (.5, .5) starting point is suspect, since the abs function is not differentiable at 0. If you perturb the starting point even slightly you will have no problem. Paul Smith wrote (05/07/2007 04:30 PM), Subject: [R] Bad optimization solution: Dear All, I am trying to perform the optimization problem below, but am getting (0.5,0.5) as the optimal solution, which is wrong; the correct solution should be (1,0) or (0,1). Am I doing something wrong? 
I am using R 2.5.0 on Fedora Core 6 (Linux). Thanks in advance, Paul -- myfunc <- function(x) { x1 <- x[1]; x2 <- x[2]; abs(x1-x2) } optim(c(0.5,0.5), myfunc, lower=c(0,0), upper=c(1,1), method="L-BFGS-B", control=list(fnscale=-1)) Yes, with (0.2,0.9), a correct solution comes out. However, how can one be sure in general that the solution obtained by optim is correct? The help page ?optim says: Method 'L-BFGS-B' is that of Byrd _et al._ (1995) which allows _box constraints_, that is each variable can be given a lower and/or upper bound. The initial value must satisfy the constraints. This uses a limited-memory modification of the BFGS quasi-Newton method. If non-trivial bounds are supplied, this method will be selected, with a warning. which only demands that the initial value satisfy the constraints. Furthermore, (x1-x2)^2 is everywhere differentiable, and yet the reported problem still occurs with myfunc <- function(x) { x1 <- x[1]; x2 <- x[2]; (x1-x2)^2 } optim(c(0.2,0.2), myfunc, lower=c(0,0), upper=c(1,1), method="L-BFGS-B", control=list(fnscale=-1)) Paul __ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
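For reference, a sketch of what supplying the analytic gradient (mentioned by Ravi as Sundar's variant) looks like for this problem; whether the search escapes the x1 = x2 ridge depends entirely on the start:

```r
## Maximize (x1 - x2)^2 on the unit box, with an analytic gradient.
myfunc <- function(x) (x[1] - x[2])^2
mygrad <- function(x) c(2 * (x[1] - x[2]), -2 * (x[1] - x[2]))

## From a start off the ridge, a boundary maximum such as (0,1)
## should be found:
optim(c(0.2, 0.9), myfunc, gr = mygrad,
      lower = c(0, 0), upper = c(1, 1),
      method = "L-BFGS-B", control = list(fnscale = -1))

## From any start with x[1] == x[2] the gradient is exactly zero,
## so L-BFGS-B declares convergence at the (non-maximal) start.
```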
Re: [R] A function for raising a matrix to a power?
Ravi Varadhan wrote: Paul, Your solution based on SVD does not work Ooops. I really am getting rusty. The idea is based on eigenvalues which, of course, are not always the same as singular values. Paul even for the matrix in your example (the reason it worked for e=3 is that 3 is odd: P has a negative eigenvalue, whose sign the singular values discard, so odd powers come out right but even powers do not). However, a spectral decomposition will work for all symmetric matrices (SVD-based exponentiation does not work in general). Here is the function to do exponentiation based on the spectral decomposition: expM.sd <- function(X, e) { Xsd <- eigen(X); Xsd$vec %*% diag(Xsd$val^e) %*% t(Xsd$vec) } P <- matrix(c(.3,.7, .7, .3), ncol=2) P [,1] [,2] [1,] 0.3 0.7 [2,] 0.7 0.3 P %*% P [,1] [,2] [1,] 0.58 0.42 [2,] 0.42 0.58 expM(P,2) [,1] [,2] [1,] 0.42 0.58 [2,] 0.58 0.42 expM.sd(P,2) [,1] [,2] [1,] 0.58 0.42 [2,] 0.42 0.58 Exponentiation based on the spectral decomposition should generally be more accurate (not sure about the speed). A caveat is that it will only work for real symmetric or complex Hermitian matrices. It will not work for asymmetric matrices. It would work for an asymmetric matrix if the eigenvectors are made to be orthonormal. Ravi. -----Original Message----- From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Paul Gilbert Sent: Monday, May 07, 2007 5:16 PM To: Atte Tenkanen Cc: r-help@stat.math.ethz.ch Subject: Re: [R] A function for raising a matrix to a power? You might try this, from 9/22/2006 with subject line Exponentiate a matrix: I am getting a bit rusty on some of these things, but I seem to recall that there is a numerical advantage (speed and/or accuracy?) 
to diagonalizing: expM <- function(X, e) { v <- La.svd(X); v$u %*% diag(v$d^e) %*% v$vt } P <- matrix(c(.3,.7, .7, .3), ncol=2) P %*% P %*% P [,1] [,2] [1,] 0.468 0.532 [2,] 0.532 0.468 expM(P,3) [,1] [,2] [1,] 0.468 0.532 [2,] 0.532 0.468 I think this also works for non-integer, negative, large, and complex exponents: expM(P, 1.5) [,1] [,2] [1,] 0.3735089 0.6264911 [2,] 0.6264911 0.3735089 expM(P, 1000) [,1] [,2] [1,] 0.5 0.5 [2,] 0.5 0.5 expM(P, -3) [,1] [,2] [1,] -7.3125 8.3125 [2,] 8.3125 -7.3125 expM(P, 3+.5i) [,1] [,2] [1,] 0.4713+0.0141531i 0.5287-0.0141531i [2,] 0.5287-0.0141531i 0.4713+0.0141531i Paul Gilbert Doran, Harold wrote: Suppose I have a square matrix P P <- matrix(c(.3,.7, .7, .3), ncol=2) I know that P * P returns the element-by-element product, whereas P %*% P returns the matrix product. Now, P^2 also returns the element-by-element product. But is there a slick way to write P %*% P %*% P? Obviously, P^3 does not return the result I expect. Thanks, Harold Atte Tenkanen wrote: Hi, Is there a function for raising a matrix to a power? For example, if you like to compute A%*%A%*%A, is there an abbreviation similar to A^3? Atte Tenkanen A = rbind(c(1,1),c(-1,-2)) A [,1] [,2] [1,] 1 1 [2,] -1 -2 A^3 [,1] [,2] [1,] 1 1 [2,] -1 -8 But: A%*%A%*%A [,1] [,2] [1,] 1 2 [2,] -2 -5
Re: [R] Bad optimization solution
Thanks, Ravi, for your clear explanation! Paul On 5/8/07, RAVI VARADHAN [EMAIL PROTECTED] wrote: Paul, The problem lies neither with R nor with numerical methods. The onus is always on the user to understand what the numerical schemes can do and what they can't do. One should never blindly take the results given by a numerical scheme and run with them. In your example, the optimization method is doing what it was designed to do: find a critical point of a function where the gradient is zero. It is your responsibility to ensure that the result makes sense, and if it doesn't, to understand why it doesn't make sense. In your problem, the maxima ((1,0) and (0,1)) lie on the boundary of the parameter space, and the gradient at the maxima (defined as the limit from within the domain) is clearly not zero. Another problem with your example is that the Hessian for your function is singular; it has eigenvalues of 0 and 2. In short, there is no substitute for using your analytic powers! Ravi. - Original Message - From: Paul Smith [EMAIL PROTECTED] Date: Tuesday, May 8, 2007 4:33 am Subject: Re: [R] Bad optimization solution To: R-help r-help@stat.math.ethz.ch It seems that there is a problem of reliability here, as one never knows whether the solution provided by R is correct or not. In the case that I reported, it is fairly simple to see that the solution provided by R (without any warning!) is incorrect, but, in general, that is not so simple, and one may take a wrong solution as a correct one. Paul On 5/8/07, Ravi Varadhan [EMAIL PROTECTED] wrote: Your function, (x1-x2)^2, has zero gradient at all the starting values such that x1 = x2, which means that the gradient-based search methods will terminate there because they have found a critical point, i.e. a point at which the gradient is zero (which can be a maximum or a minimum or a saddle point). However, I do not know why optim converges to the boundary maximum when the analytic gradient is supplied (as shown by Sundar). Ravi. 
-----Original Message----- From: [EMAIL PROTECTED] On Behalf Of Paul Smith Sent: Monday, May 07, 2007 6:26 PM To: R-help Subject: Re: [R] Bad optimization solution On 5/7/07, Paul Smith [EMAIL PROTECTED] wrote: I think the problem is the starting point. I do not remember the details of the BFGS method, but I am almost sure the (.5, .5) starting point is suspect, since the abs function is not differentiable at 0. If you perturb the starting point even slightly you will have no problem. Paul Smith wrote (05/07/2007 04:30 PM), Subject: [R] Bad optimization solution: Dear All, I am trying to perform the optimization problem below, but am getting (0.5,0.5) as the optimal solution, which is wrong; the correct solution should be (1,0) or (0,1). Am I doing something wrong? I am using R 2.5.0 on Fedora Core 6 (Linux). Thanks in advance, Paul -- myfunc <- function(x) { x1 <- x[1]; x2 <- x[2]; abs(x1-x2) } optim(c(0.5,0.5), myfunc, lower=c(0,0), upper=c(1,1), method="L-BFGS-B", control=list(fnscale=-1)) Yes, with (0.2,0.9), a correct solution comes out. However, how can one be sure in general that the solution obtained by optim is correct? The help page ?optim says: Method 'L-BFGS-B' is that of Byrd _et al._ (1995) which allows _box constraints_, that is each variable can be given a lower and/or upper bound. The initial value must satisfy the constraints. This uses a limited-memory modification of the BFGS quasi-Newton method. If non-trivial bounds are supplied, this method will be selected, with a warning. which only demands that the initial value satisfy the constraints. Furthermore, (x1-x2)^2 is everywhere differentiable
Re: [R] A function for raising a matrix to a power?
You might try this, from 9/22/2006 with subject line Exponentiate a matrix: I am getting a bit rusty on some of these things, but I seem to recall that there is a numerical advantage (speed and/or accuracy?) to diagonalizing: expM <- function(X, e) { v <- La.svd(X); v$u %*% diag(v$d^e) %*% v$vt } P <- matrix(c(.3,.7, .7, .3), ncol=2) P %*% P %*% P [,1] [,2] [1,] 0.468 0.532 [2,] 0.532 0.468 expM(P,3) [,1] [,2] [1,] 0.468 0.532 [2,] 0.532 0.468 I think this also works for non-integer, negative, large, and complex exponents: expM(P, 1.5) [,1] [,2] [1,] 0.3735089 0.6264911 [2,] 0.6264911 0.3735089 expM(P, 1000) [,1] [,2] [1,] 0.5 0.5 [2,] 0.5 0.5 expM(P, -3) [,1] [,2] [1,] -7.3125 8.3125 [2,] 8.3125 -7.3125 expM(P, 3+.5i) [,1] [,2] [1,] 0.4713+0.0141531i 0.5287-0.0141531i [2,] 0.5287-0.0141531i 0.4713+0.0141531i Paul Gilbert Doran, Harold wrote: Suppose I have a square matrix P P <- matrix(c(.3,.7, .7, .3), ncol=2) I know that P * P returns the element-by-element product, whereas P %*% P returns the matrix product. Now, P^2 also returns the element-by-element product. But is there a slick way to write P %*% P %*% P? Obviously, P^3 does not return the result I expect. Thanks, Harold Atte Tenkanen wrote: Hi, Is there a function for raising a matrix to a power? For example, if you like to compute A%*%A%*%A, is there an abbreviation similar to A^3? Atte Tenkanen A = rbind(c(1,1),c(-1,-2)) A [,1] [,2] [1,] 1 1 [2,] -1 -2 A^3 [,1] [,2] [1,] 1 1 [2,] -1 -8 But: A%*%A%*%A [,1] [,2] [1,] 1 2 [2,] -2 -5 
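As an alternative to decomposition-based exponentiation (which, as the thread shows, is fragile for matrices with negative or complex eigenvalues), a sketch of integer matrix powers by repeated squaring; this is my own illustration, not code from the thread:

```r
## Raise a square matrix to a non-negative integer power using
## repeated squaring: O(log e) matrix multiplications, no decomposition.
matpow <- function(A, e) {
  stopifnot(nrow(A) == ncol(A), e >= 0, e == round(e))
  result <- diag(nrow(A))          # identity matrix
  while (e > 0) {
    if (e %% 2 == 1) result <- result %*% A
    A <- A %*% A
    e <- e %/% 2
  }
  result
}

A <- rbind(c(1, 1), c(-1, -2))
all.equal(matpow(A, 3), A %*% A %*% A)  # TRUE
```

The expm package's %^% operator provides essentially this; repeated squaring works for any square matrix but only for integer exponents, unlike the decomposition approach.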
[R] Bad optimization solution
Dear All, I am trying to perform the optimization problem below, but am getting (0.5,0.5) as the optimal solution, which is wrong; the correct solution should be (1,0) or (0,1). Am I doing something wrong? I am using R 2.5.0 on Fedora Core 6 (Linux). Thanks in advance, Paul -- myfunc <- function(x) { x1 <- x[1]; x2 <- x[2]; abs(x1-x2) } optim(c(0.5,0.5), myfunc, lower=c(0,0), upper=c(1,1), method="L-BFGS-B", control=list(fnscale=-1))
Re: [R] Bad optimization solution
On 5/7/07, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote: I think the problem is the starting point. I do not remember the details of the BFGS method, but I am almost sure the (.5, .5) starting point is suspect, since the abs function is not differentiable at 0. If you perturb the starting point even slightly you will have no problem. Paul Smith wrote (05/07/2007 04:30 PM), Subject: [R] Bad optimization solution: Dear All, I am trying to perform the optimization problem below, but am getting (0.5,0.5) as the optimal solution, which is wrong; the correct solution should be (1,0) or (0,1). Am I doing something wrong? I am using R 2.5.0 on Fedora Core 6 (Linux). Thanks in advance, Paul -- myfunc <- function(x) { x1 <- x[1]; x2 <- x[2]; abs(x1-x2) } optim(c(0.5,0.5), myfunc, lower=c(0,0), upper=c(1,1), method="L-BFGS-B", control=list(fnscale=-1)) Yes, with (0.2,0.9), a correct solution comes out. However, how can one be sure in general that the solution obtained by optim is correct? The help page ?optim says: Method 'L-BFGS-B' is that of Byrd _et al._ (1995) which allows _box constraints_, that is each variable can be given a lower and/or upper bound. The initial value must satisfy the constraints. This uses a limited-memory modification of the BFGS quasi-Newton method. If non-trivial bounds are supplied, this method will be selected, with a warning. which only demands that the initial value satisfy the constraints. Paul
Re: [R] Bad optimization solution
On 5/7/07, Paul Smith [EMAIL PROTECTED] wrote: I think the problem is the starting point. I do not remember the details of the BFGS method, but I am almost sure the (.5, .5) starting point is suspect, since the abs function is not differentiable at 0. If you perturb the starting point even slightly you will have no problem. Paul Smith wrote (05/07/2007 04:30 PM), Subject: [R] Bad optimization solution: Dear All, I am trying to perform the optimization problem below, but am getting (0.5,0.5) as the optimal solution, which is wrong; the correct solution should be (1,0) or (0,1). Am I doing something wrong? I am using R 2.5.0 on Fedora Core 6 (Linux). Thanks in advance, Paul -- myfunc <- function(x) { x1 <- x[1]; x2 <- x[2]; abs(x1-x2) } optim(c(0.5,0.5), myfunc, lower=c(0,0), upper=c(1,1), method="L-BFGS-B", control=list(fnscale=-1)) Yes, with (0.2,0.9), a correct solution comes out. However, how can one be sure in general that the solution obtained by optim is correct? The help page ?optim says: Method 'L-BFGS-B' is that of Byrd _et al._ (1995) which allows _box constraints_, that is each variable can be given a lower and/or upper bound. The initial value must satisfy the constraints. This uses a limited-memory modification of the BFGS quasi-Newton method. If non-trivial bounds are supplied, this method will be selected, with a warning. which only demands that the initial value satisfy the constraints. Furthermore, (x1-x2)^2 is everywhere differentiable, and yet the reported problem still occurs with myfunc <- function(x) { x1 <- x[1]; x2 <- x[2]; (x1-x2)^2 } optim(c(0.2,0.2), myfunc, lower=c(0,0), upper=c(1,1), method="L-BFGS-B", control=list(fnscale=-1)) Paul
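One pragmatic answer to "how can one be sure the solution is correct" (a common practice, not something proposed in the thread) is to rerun the optimizer from many random feasible starts and keep the best result:

```r
myfunc <- function(x) (x[1] - x[2])^2

set.seed(42)
best <- NULL
for (i in 1:25) {
  p0 <- runif(2)   # random start inside the unit box
  fit <- optim(p0, myfunc, lower = c(0, 0), upper = c(1, 1),
               method = "L-BFGS-B", control = list(fnscale = -1))
  if (is.null(best) || fit$value > best$value) best <- fit
}
best$par    # should be (close to) a corner such as (0,1) or (1,0)
best$value  # should be close to the true maximum, 1
```

A start that happens to satisfy x[1] == x[2] still stalls on the zero-gradient ridge, but with continuous random starts that event has probability zero, so the multi-start best is a far safer answer than any single run.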
[R] ordered logistic regression with random effects. Howto?
I'd like to estimate an ordinal logistic regression with a random effect for a grouping variable. I have not found a pre-packaged algorithm for this. I've found that the methods glmmML (package: glmmML) and lmer (package: lme4) both work fine with dichotomous dependent variables. I'd like a model similar to polr (package: MASS) or lrm (package: Design) that allows random effects. I was thinking there might be a trick that would allow me to use a program written for a dichotomous dependent variable with a mixed effect to estimate such a model. The proportional-odds logistic regression is often written as a sequence of dichotomous comparisons. But it seems to me that, if it would work, then somebody would have proposed it already. I've found some commentary about methods of fitting ordinal logistic regression with other procedures, however, and I would like to ask for your advice and experience with this. In this article: Ching-Fan Sheu, Fitting mixed-effects models for repeated ordinal outcomes with the NLMIXED procedure, Behavior Research Methods, Instruments, & Computers, 2002, 34(2): 151-157, the author gives an approach that works in SAS's NLMIXED procedure. In this approach, one explicitly writes down the probability that each level will be achieved (using the linear predictor and constants for each level). I THINK I could find a way to translate this approach into a model that can be fitted with either nlme or lmer. Has someone done it? What other strategies for fitting mixed ordinal models are there in R? Finally, a definitional question. Is a multi-category logistic regression (either ordered or not) a member of the GLM family? I had thought the answer is no, mainly because glm and other R functions for GLMs never mention multi-category qualitative dependent variables, and also because the distribution does not seem to fall into the exponential family. However, some textbooks do include the multi-category models in the GLM treatment. -- Paul E. 
Johnson, Professor, Political Science, 1541 Lilac Lane, Room 504, University of Kansas
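For context, the fixed-effects baseline the poster refers to, as a sketch using the classic example from the MASS package's own documentation:

```r
library(MASS)  # provides polr() and the 'housing' data frame

## Proportional-odds logistic regression, no random effects:
## satisfaction modelled from influence, housing type, and contact,
## with cell frequencies as weights.
fit <- polr(Sat ~ Infl + Type + Cont, weights = Freq, data = housing)
summary(fit)  # coefficients plus the K-1 ordinal cutpoints ("intercepts")
```

The open question in the post is how to add a random effect for a grouping variable to such a model, which is what the continuation-ratio augmentation discussed elsewhere in this digest tries to enable.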
Re: [R] to draw a smooth arc
Hi Paulo Barata wrote: Dr. Snow and Prof. Ripley, Dr. Snow's suggestion, using clipplot (package TeachingDemos), is maybe a partial solution to the problem of drawing an arc of a circle (as long as the line width of the arc is not that large, as pointed out by Prof. Ripley). If the arc is symmetrical around a vertical line, then it is not so difficult to draw it that way. But an arc that does not have this kind of symmetry would possibly require some geometrical computations to find the proper rectangle to be used for clipping. I would like to suggest that in a future version of R some function be included in the graphics package to draw smooth arcs with given center, radius, initial and final angles. I suppose that the basic ingredients are available in function symbols (graphics). Just to back up a few previous posts ... There is something like this facility already available via the grid.xspline() function in the grid package. This provides very flexible curve drawing (including curves very close to Bezier curves) based on the X-Splines implemented in xfig. The grid.curve() function provides a convenience layer that allows for at least certain parameterisations of arcs (you specify the arc end points and the angle). These functions are built on functionality within the core graphics engine, so exposing a similar interface (e.g., an xspline() function) within traditional graphics would be relatively straightforward. The core functionality draws the curves as line segments (but automatically figures out how many segments to use so that the curve looks smooth); it does NOT call curve-drawing primitives in the graphics device (like PostScript's curveto). 
In summary: there is some support for smooth curves, but we could still benefit from a specific arc() function with the standard centre-radius-angle parameterisation, and we could also benefit from exposing the native strengths of different graphics devices (rather than the current lowest-common-denominator approach). Paul Thank you very much. Paulo Barata (Rio de Janeiro - Brazil) --- Prof Brian Ripley wrote: On Tue, 1 May 2007, Greg Snow wrote: Here is an approach that clips the circle you like from symbols down to an arc (this will work as long as the arc is less than half a circle; for arcs greater than half a circle, you could draw the whole circle and then use this to draw an arc of the background color over the section you don't want): library(TeachingDemos) plot(-5:5, -5:5, type='n') clipplot( symbols(0, 0, circles=2, add=TRUE), c(0,5), c(0,5) ) I had considered this approach: clipping a circle to a rectangle isn't strictly an arc, as will be clear if the line width is large. Consider clipplot(symbols(0, 0, circles=2, add=TRUE, lwd=5), c(-1,5), c(-1,5)) Note too that what happens with clipping is device-dependent. If R's internal clipping is used, the part-circle is converted to a polygon.
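Until such an arc() function exists in traditional graphics, a small sketch of the workaround the graphics engine itself effectively uses (my own code, not from the thread): approximate the arc with many short line segments via lines():

```r
## Draw an arc of a circle with given centre (x0, y0), radius r, and
## angle range [from, to] in radians, approximated by n line segments.
arc <- function(x0, y0, r, from, to, n = 100, ...) {
  theta <- seq(from, to, length.out = n + 1)
  lines(x0 + r * cos(theta), y0 + r * sin(theta), ...)
}

plot(-5:5, -5:5, type = "n", asp = 1)      # asp = 1 keeps circles circular
arc(0, 0, 2, from = 0, to = pi / 2, lwd = 5)  # a thick quarter-circle
```

With enough segments the result looks smooth at any angle, and unlike the clipping trick it handles asymmetric arcs and large line widths without extra geometry; the output remains a polyline rather than a device-native curve, which is the lowest-common-denominator limitation noted above.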
Re: [R] to draw a smooth arc
Hi

Paulo Barata wrote:
Dr. Murrell and all,

One final suggestion: a future arc() function in package graphics, with centre-radius-angle parameterisation, could also include an option to draw arrows at either end of the arc, as one can find in the arrows() function.

... and in grid.xspline() and grid.curve().

Paul

Thank you.

Paulo Barata

[rest of the earlier messages quoted in full; snipped -- see above]
--
Dr Paul Murrell
Department of Statistics
The University of Auckland
Private Bag 92019
Auckland
New Zealand
64 9 3737599 x85392
[EMAIL PROTECTED]
http://www.stat.auckland.ac.nz/~paul/
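To make the suggestion concrete, a centre-radius-angle arc() for traditional graphics could be sketched roughly as follows. The function name and its arguments are hypothetical (no such function exists in the graphics package); it simply draws the arc as many short line segments, much as the graphics engine does internally:

```r
## Hypothetical arc() sketch for traditional graphics.
## x, y: centre; r: radius; from, to: angles in radians;
## arrow: optionally put an arrow head at the 'to' end.
arc <- function(x, y, r, from, to, n = 100, arrow = FALSE, ...) {
  theta <- seq(from, to, length.out = n)
  px <- x + r * cos(theta)
  py <- y + r * sin(theta)
  lines(px, py, ...)
  if (arrow)  # arrow head along the final segment of the polyline
    arrows(px[n - 1], py[n - 1], px[n], py[n], length = 0.1, ...)
}

plot(-5:5, -5:5, type = "n", asp = 1)  # asp = 1 so the arc looks circular
arc(0, 0, 2, from = 0, to = 3 * pi / 4, lwd = 2, arrow = TRUE)
```

Unlike the clipping approach, this handles arcs of any extent and any orientation, at the cost of drawing a polyline rather than a true curve on the device.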
Re: [R] Perpendicular symbol in plotmath?
Hi

Matthew Neilson wrote:
Thanks for your response, Gabor. That works quite nicely. The documentation states that it is not possible to mix and match Hershey fonts with plotmath symbols. My *ideal* scenario would be to write the perpendicular symbol as a subscript (specifically, I would like to have \epsilon_{\perp} as an axis label). I have searched the help archive, and it turned up the following post from 2002:

http://tinyurl.com/2m8n9c

which explains a way of faking subscripts when using the Hershey fonts, though it does have several drawbacks. Have things moved on in the last five years, or is this still the best known solution?

Unfortunately, you still cannot use Hershey fonts with plotmath (the implementation is simply lacking). Also, the perpendicular symbol is not implemented in plotmath (yet). In this case, though, there may be a possible workaround. Try the following ...

plot(1, ann=FALSE)
title(ylab=expression(epsilon["\136"]), family="symbol")

The plain-text character "\136" gets drawn using the symbol font, and the perpendicular symbol is character 94 (octal 136) in the Adobe Symbol encoding and in the Windows symbol font encoding, so this works for PDF, on Windows, and on X11 (though I had to switch to a single-byte encoding to get my system to pick up the symbol font). The drawback of this solution is that anything in the expression that is NOT a special mathematical symbol will come out in Greek letters.

Paul

Many thanks for your help,

-Matt

On Sat Apr 28 17:35 , 'Gabor Grothendieck' [EMAIL PROTECTED] sent:

It's available in the Hershey fonts:

plot(0, 0, type = "n")
text(0, 0, "A \\pp B", vfont = c("serif", "plain"))

On 4/28/07, Matthew Neilson [EMAIL PROTECTED] wrote:
Hey,

Does anyone know of an equivalent to the LaTeX \perp (perpendicular) symbol for adding to R plots? Parallel is easy enough (||), but I haven't been able to find a way of adding perpendicular.
The plotmath documentation doesn't mention how to do it, so I'm inclined to think that it doesn't exist - but surely there must be some way of achieving the desired result, right?

Any help will be much appreciated,

-Matt

--
Dr Paul Murrell
Department of Statistics
The University of Auckland
Private Bag 92019
Auckland
New Zealand
64 9 3737599 x85392
[EMAIL PROTECTED]
http://www.stat.auckland.ac.nz/~paul/
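For reference, the two workarounds discussed in this thread can be tried side by side. Note the assumptions: the octal escape "\136" relies on a symbol font using the Adobe Symbol encoding being available on the device, and \pp is the Hershey escape for the perpendicular symbol (see ?Hershey):

```r
## 1. Hershey fonts: \pp is the perpendicular escape sequence,
##    but Hershey fonts cannot be combined with plotmath.
plot(0, 0, type = "n", ann = FALSE)
text(0, 0.5, "A \\pp B", vfont = c("serif", "plain"))

## 2. Symbol font: character 94 (octal 136) is the perpendicular
##    symbol in the Adobe Symbol encoding, so it can appear as a
##    plotmath subscript -- at the cost of every non-symbol part of
##    the expression being rendered in Greek letters.
title(ylab = expression(epsilon["\136"]), family = "symbol")
```

Which of the two renders correctly is device- and platform-dependent, as Paul's message above notes.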