[R] Translate twobin in S-plus to R
Hi again, I have just learned that twobin is a function written by the author of the code I'm trying to translate, so I have to look elsewhere for the solution. Thanks for your time! Ulf __ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html
Re: [R] EM algorithm
I use the EM algorithm to treat missing values. I don't know anything about your contextual use, nor much about the details of the EM algorithm for that matter, but the norm package may be of help (the prelim.norm() and em.norm() functions being useful). These of course assume normality.

On 4/4/06, Marco Geraci wrote: Hi Michela, I'd like to add to Ted's message that the statistical journals are a good source of R code to look at. Sometimes the authors of papers need to implement their own algorithms and they make them available upon request. You might want to check the literature and send some emails ;) and, if you're lucky, get some R code (which you'd probably need to adapt to your model). Some journals (e.g., Statistical Modelling, http://stat.uibk.ac.at/SMIJ/index.html) provide software and data related to the published papers. I personally implemented an EM algorithm for mixed models and it took me time and patience. As Ted already said, if you give more details about your model you might get better answers and/or tips. Marco

--- [EMAIL PROTECTED] wrote: Dear R-Users, I have a model with a latent variable for a spatio-temporal process. I would like to use the EM algorithm to estimate the parameters. Does anybody know how to implement the algorithm in R? Thank you very much in advance, Michela
[R] Uneven y-axis scale
Dear R-gurus! Is it possible within boxplot to break the y-scale into two (or anything)? I'd like to have a normal linear y-range from 0-10 and the next tick mark starting at, say, 60 and continuing to 90. The reason is for better visualising the outliers. All the best, Kare
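A minimal base-graphics sketch of the usual two-panel workaround: draw the same boxplot twice, once clipped to the 0-10 range and once to the 60-90 range, and stack the panels with layout(). The data and panel proportions below are my own illustrative assumptions (I believe the plotrix package also offers a ready-made gap.boxplot(), which may be more convenient):

```r
## Two stacked panels faking a broken y-axis: the bulk of the data (0-10)
## in the lower panel, the outliers (60-90) in the upper one.
set.seed(1)
x <- c(rnorm(50, mean = 5, sd = 2), 62, 75, 88)  # invented example data

layout(matrix(2:1, nrow = 2), heights = c(1, 3))  # panel 1 bottom, panel 2 top
par(mar = c(4, 4, 0.5, 1))
boxplot(x, ylim = c(0, 10))     # lower panel: the 0-10 range
par(mar = c(0.5, 4, 1, 1))
boxplot(x, ylim = c(60, 90))    # upper panel: the 60-90 range
```

The axis break itself is implied by the gap between the panels rather than drawn explicitly.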
[R] Reading xyz data from a file and plotting a contour plot
Hello, I am very new to R and I tried to find some solutions to my problem in the mail archives, but I couldn't. All I need to do is read a data file, which has data in X Y Z format, and then I want to plot a surface plot or a contour plot of it. My data is like:

1710 0.626938723432 18.786
1582 0.58561524019 9.01
1629 0.617393680032 6.075
1561 0.57943533994 3.436
1557 0.56416723609 9.985
1576 0.60443022033 7.71
1583 0.573542592476 2.743
1627 0.575233663156 2.821
1600 0.574245291511 -0.64
1658 0.587265947626 2.231
...

Any help is appreciated. Thanks in advance, -- Abhi
[R] lapply with mahalanobis function
Hi, I am trying to use lapply with a function that requires two list-type variables, but I cannot find a way to program it and could not find any hints under lapply or by searching the R list. As an example I have used the mahalanobis function with the iris data. Is there a way to replace the for loop with the lapply function or similar?

data(iris)
set.seed(0)
sa <- sample(nrow(iris), 2)
train <- iris[!(1:nrow(iris) %in% sa), ]
test <- iris[sa, 1:4]
Sp.data <- split(train[, 1:4], train[, "Species"])
cov.mat <- lapply(Sp.data, cov)
centroids <- lapply(Sp.data, function(x) apply(x, 2, mean))
## can the for loop be replaced by an lapply function?
result <- list()
for (i in 1:length(Sp.data)) {
  result[[i]] <- mahalanobis(test, center = centroids[[i]], cov = cov.mat[[i]])
}
result
[[1]]
        135          40
620.9687898   0.6085936

[[2]]
      135        40
 27.61433 104.97892

[[3]]
      135        40
 10.96607 167.81203

Thanks, Mike White
Re: [R] lapply with mahalanobis function
You could use mapply(), e.g.,

mapply(mahalanobis, center = centroids, cov = cov.mat, MoreArgs = list(x = test))

I hope it helps. Best, Dimitris

Dimitris Rizopoulos, Ph.D. Student, Biostatistical Centre, School of Public Health, Catholic University of Leuven. Address: Kapucijnenvoer 35, Leuven, Belgium. Tel: +32/(0)16/336899, Fax: +32/(0)16/337015. Web: http://www.med.kuleuven.be/biostat/ http://www.student.kuleuven.be/~m0390867/dimitris.htm
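As a quick sanity check, re-using the iris setup from the original post, the mapply() call does reproduce the for-loop results; SIMPLIFY = FALSE keeps the result as a list, like the loop produced:

```r
# Rebuild the setup from the post, then compare loop vs. mapply().
data(iris)
set.seed(0)
sa <- sample(nrow(iris), 2)
train <- iris[!(1:nrow(iris) %in% sa), ]
test <- iris[sa, 1:4]
Sp.data <- split(train[, 1:4], train[, "Species"])
cov.mat <- lapply(Sp.data, cov)
centroids <- lapply(Sp.data, colMeans)

loop <- lapply(seq_along(Sp.data), function(i)
  mahalanobis(test, center = centroids[[i]], cov = cov.mat[[i]]))

via.mapply <- mapply(mahalanobis, center = centroids, cov = cov.mat,
                     MoreArgs = list(x = test), SIMPLIFY = FALSE)

# Same values; only the list-level names differ (mapply keeps species names).
stopifnot(isTRUE(all.equal(unname(loop), unname(via.mapply))))
```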
[R] trace of matrix product
Hi, what is the best way to calculate the trace of a product of two matrices? If

A <- matrix(rnorm(48), nrow = 6)
B <- matrix(rnorm(48), nrow = 8)

is there a better (faster, more elegant) way to compute the trace of A %*% B than sum(diag(A %*% B))? I would call the above solution inelegant because all the elements of A %*% B are calculated, when one really only needs the elements on the diagonal. It also uses %*% instead of crossprod() or tcrossprod().

-- Robin Hankin, Uncertainty Analyst, National Oceanography Centre, Southampton, European Way, Southampton SO14 3ZH, UK. tel 023-8059-7743
Re: [R] trace of matrix product
Try

sum(rowSums(A * t(B)))

Then you do not make any calculation you do not really need. Carlos J. Gil Bellosta, http://www.datanalytics.com
Re: [R] trace of matrix product
It is still better to do

sum(A * t(B))

Sorry!! Carlos J. Gil Bellosta, http://www.datanalytics.com
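The identity behind this one-liner: the i-th diagonal element of A %*% B is the sum over k of A[i,k] * B[k,i], and the elementwise product A * t(B) contains exactly those terms, so summing it gives the trace without ever forming the off-diagonal entries of the full product. A quick numerical check:

```r
# Verify that sum(A * t(B)) equals tr(A %*% B) for conformable A (n x m)
# and B (m x n), using the dimensions from the original post.
set.seed(42)
A <- matrix(rnorm(48), nrow = 6)   # 6 x 8
B <- matrix(rnorm(48), nrow = 8)   # 8 x 6

tr.naive <- sum(diag(A %*% B))     # forms the full 6 x 6 product
tr.fast  <- sum(A * t(B))          # only the needed elementwise products

stopifnot(isTRUE(all.equal(tr.naive, tr.fast)))
```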
Re: [R] Reading xyz data from a file and plotting a contour plot
Hi, on 5 Apr 2006 at 10:44, Abhinav Verma wrote [the XYZ contour question above]. Something like:

tab <- read.table("clipboard")
tab
     V1        V2     V3
1  1710 0.6269387 18.786
2  1582 0.5856152  9.010
3  1629 0.6173937  6.075
4  1561 0.5794353  3.436
5  1557 0.5641672  9.985
6  1576 0.6044302  7.710
7  1583 0.5735426  2.743
8  1627 0.5752337  2.821
9  1600 0.5742453 -0.640
10 1658 0.5872659  2.231

library(akima)
contour(interp(tab$V1, tab$V2, tab$V3))
image(interp(tab$V1, tab$V2, tab$V3))

HTH, Petr Pikal
[R] (Fwd) Re: Reading xyz data from a file and plotting a cont
BTW, I checked the help page of contour, and maybe it could mention a note about the akima package or the interp function. Petr
[R] gsub in data frame
Hello, I have this data frame:

### begin
d <- data.frame(matrix(c("1", "--", "bla", "2"), 2, 2))
d
# I want to replace the "--" by "\N" and still get a data frame.
# I tried:
out <- gsub("--", "\\\\N", as.matrix(d))  # using as.matrix to get rid of factors
out
cat(out)
# But I lost my data frame
### end

Any idea? Regards, Pierre Lapointe
[R] trace of matrix product
How about:

m <- matrix(rnorm(9), ncol = 3)
mm <- matrix(rnorm(9), ncol = 3)
sum(sapply(1:3, function(i, m, mm) m[i, ] %*% mm[, i], m, mm))

-- Ken Knoblauch, Inserm U371, Cerveau et Vision, Dept. of Cognitive Neuroscience, 18 avenue du Doyen Lépine, 69500 Bron, France. tel: +33 (0)4 72 91 34 77, fax: +33 (0)4 72 91 34 61, portable: +33 (0)6 84 10 64 10, http://www.lyon.inserm.fr/371/
Re: [R] gsub in data frame
Hi, on 5 Apr 2006 at 7:48, Lapointe, Pierre wrote [the gsub question above]. Reformat it back?

data.frame(matrix(out, 2, 2))
   X1  X2
1   1 bla
2 \\N   2

HTH, Petr Pikal
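If the frame has several columns to clean, a sketch that applies the substitution column by column keeps the data.frame class and its names throughout (the columns come back as character rather than factor); the "\\\\N" replacement string yields a literal \N in the result:

```r
# Apply gsub() to each column in place; d[] <- keeps the data.frame shape.
d <- data.frame(matrix(c("1", "--", "bla", "2"), 2, 2))
d[] <- lapply(d, function(col) gsub("--", "\\\\N", as.character(col)))
d
```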
[R] List to Array
Hi, this is probably the easiest thing to do, but I have not managed to find the answer: I have a list of matrices with exactly the same format and headers. Now I would like to transform the list into a normal array. What is the proper way to do this? as.array changes the entire format, and so far the only method I have found is to create a new array, go through the entire list, and copy the matrix in each list element into the new array. Thanks a million for your help! Werner
Re: [R] List to Array
Probably you're looking for either

do.call(rbind, lis)

or

do.call(cbind, lis)

where 'lis' is your list. I hope it helps. Best, Dimitris
Re: [R] List to Array
Please supply some test data and the expected answer, since it's not clear what is desired here.
Re: [R] Reading xyz data from a file and plotting a contour plot
Hello Petr, thanks a lot for the reply, it works. Cheers, Abhi.
[R] package docs: examples format
Hello, the package I'm finishing up contains a series of functions intended to be run one after the other, as in: do data prep using A(); run a complicated, slow analysis with B(); do complicated, slow additional analysis with C(); plot the results with D() and E(). Now, in the documentation for each of those functions, I put the entire sequence to illustrate the proper order. But this means that the time-consuming B() and C() run several times during package checking, taking far too long. I know that I can use \dontrun{B()} to show the sequence in the help without actually executing it each time. Is that the preferred approach, or is there something nicer? Ideally, I'd like to include one global example that covers all related functions, but I can't find a way to do that neatly (other than possibly a vignette?). Thanks, Sarah -- Sarah Goslee http://www.stringpage.com
Re: [R] (Fwd) Re: Reading xyz data from a file and plotting a cont
On Wed, 5 Apr 2006, Petr Pikal wrote:

> BTW, I checked the help page of contour, and maybe it could mention a note about the akima package or the interp function.

We don't generally link from standard packages to contributed ones. In this case, there is a long history of unreliability of the R version of akima (the MASS examples still do not work correctly on Windows), and there are also many other interpolation methods, and yet more smoothing ones.

-- Brian D. Ripley, Professor of Applied Statistics, http://www.stats.ox.ac.uk/~ripley/ University of Oxford, Tel: +44 1865 272861 (self), +44 1865 272866 (PA), Fax: +44 1865 272595. 1 South Parks Road, Oxford OX1 3TG, UK
Re: [R] List to Array
Werner Wernersen (pensterfuzzer at yahoo.de) writes [the list-to-array question above]. I think if you really mean an array (i.e., an n-dimensional table with n > 2) then something like the following will do it:

## create an example list of matrices
z <- replicate(5, matrix(runif(9), nrow = 3), simplify = FALSE)
library(abind)
do.call(abind, c(z, list(along = 3)))

Ben Bolker
Re: [R] package docs: examples format
Sarah Goslee wrote [the package-docs question above]. What about an additional help page that relates to the whole package rather than to a single function and contains the examples? Each function's help page can point with a \link{} to that help page. Uwe Ligges
Re: [R] package docs: examples format
On Wed, 5 Apr 2006, Sarah Goslee wrote:

> I know that I can use \dontrun{B()} to show the sequence in the help without actually executing it each time. Is that the preferred approach, or is there something nicer?

This is generally done by including example(B) or some such in the examples for C. (And you will want to mark them with \dontrun{}.)

> Ideally, I'd like to include one global example that covers all related functions, but I can't find a way to do that neatly (other than possibly a vignette?).

A demo is an alternative approach.

-- Brian D. Ripley
Re: [R] gsub in data frame
On Wed, 5 Apr 2006, Lapointe, Pierre wrote:

> d <- data.frame(matrix(c("1", "--", "bla", "2"), 2, 2))

So d is a two-column data frame with factor columns.

> # I want to replace the "--" by "\N" and still get a data frame.

levels(d$X1) <- gsub("--", "\\\\N", levels(d$X1))

-- Brian D. Ripley
Re: [R] List to Array
On 4/5/06, Ben Bolker wrote [the do.call(abind, ...) suggestion above]. Note that if that is what he is looking for, then abind(z, along = 3) will do it too.
Re: [R] package docs: examples format
On 4/5/06, Uwe Ligges wrote:

> What about an additional help page that relates to the whole package rather than to a single function and contains the examples? Each function's help page can point with a \link{} to that help page.

And that page would be of the form mypkg-package.Rd, and you could use the R promptPackage command to generate an outline for it.
[R] Time Series Objects/ MC Simulation
I am attempting to value convertible bonds through a Monte Carlo approach. I want to express call schedules as date-price tuples. Naturally, these tuples need to be expanded to match the frequency of the innovations in the MC process. 1. Is there a straightforward way to accomplish this expansion? 2. I have noted the existence of ts, its, zoo and fCalendar. Does anyone have an opinion on the relative merits of these, particularly with respect to speed in a simulation framework? Your assistance and insights are appreciated. Keith
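For question 1, a sketch of one way to do the expansion with zoo (assuming daily innovations; the dates and call prices below are invented for illustration): merge the sparse schedule with an empty zoo series indexed by the simulation grid, then carry each call price forward to the next schedule date with na.locf().

```r
# Expand a sparse date-price call schedule onto a daily simulation grid,
# filling each gap with the last known call price (last observation
# carried forward).
library(zoo)

schedule <- zoo(c(105, 103, 100),
                as.Date(c("2006-06-01", "2007-06-01", "2008-06-01")))
grid <- seq(as.Date("2006-06-01"), as.Date("2008-06-01"), by = "day")

# zoo(, grid) is an empty series that contributes only its index to the merge.
expanded <- na.locf(merge(schedule, zoo(, grid)))
head(expanded)
```

Whether this is fast enough inside a simulation loop is a separate question; doing the expansion once up front and indexing into the result is likely cheaper than re-merging per path.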
Re: [R] List to Array
Oh yes, I should give an example:

m <- matrix(1:6, nrow = 3)
L <- list(m, m)

Output of L:

[[1]]
     [,1] [,2]
[1,]    1    4
[2,]    2    5
[3,]    3    6

[[2]]
     [,1] [,2]
[1,]    1    4
[2,]    2    5
[3,]    3    6

I would like to transform L to an array looking like this:

, , 1
     [,1] [,2]
[1,]    1    4
[2,]    2    5
[3,]    3    6

, , 2
     [,1] [,2]
[1,]    1    4
[2,]    2    5
[3,]    3    6
Re: [R] package docs: examples format
On 4/5/06, Gabor Grothendieck [EMAIL PROTECTED] wrote:
> On 4/5/06, Uwe Ligges [EMAIL PROTECTED] wrote:
> > What about an additional help page that relates to the whole package rather than to a single function and contains the examples? Each function's help page can point with a \link{} to that help page.
>
> And that page would be of the form mypkg-package.Rd, and you could use the R promptPackage command to generate an outline for it.

That would be perfect, thank you for the suggestion. The discussion of package help in the Writing R Extensions manual had led me to believe that this was the wrong approach: "More extensive documentation is better placed into a package vignette (see Writing package vignettes, http://cran.r-project.org/doc/manuals/R-exts.html#Writing-package-vignettes) and referenced from this page, or into individual man pages for the functions, datasets, or classes."

The other suggestion given, by Prof. Ripley, was to include example(B) as part of the documentation for C, and so on. Thank you all, Sarah

-- Sarah Goslee http://www.stringpage.com
Re: [R] List to Array
then maybe this is what you're looking for:

L <- list(matrix(rnorm(6), nrow = 3), matrix(rnorm(6), nrow = 3))
L
array(unlist(L), dim = c(nrow(L[[1]]), ncol(L[[1]]), length(L)))

Best, Dimitris

Dimitris Rizopoulos, Ph.D. Student, Biostatistical Centre, School of Public Health, Catholic University of Leuven. Address: Kapucijnenvoer 35, Leuven, Belgium. Tel: +32/(0)16/336899. Fax: +32/(0)16/337015. Web: http://www.med.kuleuven.be/biostat/ http://www.student.kuleuven.be/~m0390867/dimitris.htm

- Original Message - From: Werner Wernersen [EMAIL PROTECTED] To: Gabor Grothendieck [EMAIL PROTECTED] Cc: r-help@stat.math.ethz.ch Sent: Wednesday, April 05, 2006 3:55 PM Subject: Re: [R] List to Array

> Oh yes, I should give an example: m <- matrix(1:6, nrow = 3); L <- list(m, m). I would like to transform L to a 3 x 2 x 2 array with each list element as one slice.
Re: [R] List to Array
Which can be written:

array(unlist(L), dim = c(dim(L[[1]]), length(L)))

On 4/5/06, Dimitris Rizopoulos [EMAIL PROTECTED] wrote:
> then maybe this is what you're looking for:
>
> L <- list(matrix(rnorm(6), nrow = 3), matrix(rnorm(6), nrow = 3))
> L
> array(unlist(L), dim = c(nrow(L[[1]]), ncol(L[[1]]), length(L)))
>
> Best, Dimitris
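One caveat with the one-liner, sketched below with invented data: array() drops the dimnames of the input matrices, so if the matrices carry row and column headers (as the original poster's did), they have to be passed along explicitly.

```r
# Sketch: the one-liner from the thread, plus carrying over the
# dimnames of the first matrix. The example matrix is made up.
m <- matrix(1:6, nrow = 3, dimnames = list(letters[1:3], c("x", "y")))
L <- list(m, m)

a <- array(unlist(L), dim = c(dim(m), length(L)),
           dimnames = c(dimnames(m), list(NULL)))

a[, , 1]  # same values and headers as m
```

The extra list(NULL) gives the third dimension (the list index) no names; named slices could be supplied there instead.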
Re: [R] package docs: examples format
On Wed, 5 Apr 2006, Sarah Goslee wrote:
> [...] Ideally, I'd like to include one global example that covers all related functions, but I can't find a way to do that neatly (other than possibly a vignette?).

Why not use a vignette? Documenting a process involving multiple functions is exactly what vignettes are designed for.

-thomas

Thomas Lumley, Assoc. Professor, Biostatistics, [EMAIL PROTECTED], University of Washington, Seattle
Re: [R] model comparison with mixed effects glm
Another thought on checking the validity of the 2*log(likelihood ratio) procedure I suggested: if it were my problem, I think I would do some checking using Monte Carlo, e.g., as described in sec. 2.6 of the vignette MlmSoftRev in the mlmRev package. This is particularly relevant for testing a parameter at a boundary, e.g., whether a particular variance component is 0, because the assumptions for the traditional chi-square approximation to 2*log(LR) do not hold in that case, as documented in sec. 2.4 of Pinheiro and Bates (2000), Mixed-Effects Models in S and S-PLUS (Springer).

Spencer Graves wrote:
> You are correct on both counts. The extra line is inserted below; obviously, I had it but failed to copy it into the email. And you are also correct that one needs to be careful that both glm and lmer are using comparable definitions of the log-likelihood. My crude check on that was just to compare lglk0 and lglk.ID1.; the numbers seemed too close to be based on different definitions. In addition, I think I may have checked this once before, but my memory could be faulty on that point. Thanks for pointing out both deficiencies in my reply. spencer graves
>
> hadley wickham wrote:
> > ### To get around that, I computed 2*log(likelihood ratio) manually:
> > lglk0 <- logLik(fit0)
> > lglk.ID1. <- logLik(Fit.ID1.)
> > chisq.ID. <- 2*(lglk.ID1. - lglk0)
> > pchisq(as.numeric(chisq.ID.), 1, lower = FALSE)
> > [1] 0.008545848
> >
> > (I think you're missing a line in there.) But isn't this rather perilous unless you are confident that the two models are using exactly the same formulation of the likelihood? (i.e., that they are truly nested) Hadley
[R] Combination of Bias and MSE ?
Dear R Users, My question is general and not necessarily related to R. Suppose we face a situation in which the MSE (mean squared error) shows desired results but the bias shows undesired ones, or vice versa. How can we evaluate the results, supposing both MSE and bias are important for us? The exact question is whether there is any combined measure of the two metrics above. Thank you so much for any reply. Amir Safari
Re: [R] List to Array
That's great, thanks a lot! :)

> Which can be written:
>
> array(unlist(L), dim = c(dim(L[[1]]), length(L)))
>
> On 4/5/06, Dimitris Rizopoulos [EMAIL PROTECTED] wrote:
> > then maybe this is what you're looking for:
> >
> > L <- list(matrix(rnorm(6), nrow = 3), matrix(rnorm(6), nrow = 3))
> > L
> > array(unlist(L), dim = c(nrow(L[[1]]), ncol(L[[1]]), length(L)))
> >
> > Best, Dimitris
[R] Problems in package management after Linux system upgrade
I upgraded from Fedora Core 4 to Fedora Core 5 and I find that a lot of previously installed packages won't run because shared libraries or other system things have changed out from under the installed R libraries. I do not know for sure if the R version now from Fedora Extras (2.2.1) is exactly the same one I was using in FC4. I see problems in many packages. Example, Hmisc:

unable to load shared library '/usr/lib/R/library/Hmisc/libs/Hmisc.so': libgfortran.so.0: cannot open shared object file: No such file or directory
Error in library(Hmisc) : .First.lib failed for 'Hmisc'

If I manually do a re-install, then Hmisc and the other packages are fine. I ?THINK? that if I had been using version 2.2.0, and now I have 2.2.1, then the checkBuilt option would force a rebuild of all packages. Right? I'm pasting in below the script that I run nightly to update all packages and install any new ones from CRAN. Can anybody suggest changes that would cause a rebuild of packages that need rebuilding?

## PJ 2005-11-05
options(repos = "http://cran.cnr.berkeley.edu/")
## failPackages is the black list. Things get inserted for various reasons.
## Rejected because they don't build on my system as of July 2005, or are obviously not needed:
failPackages1 <- c("BRugs", "tclkt2", "cyclones", "rpvm", "ncdf", "gtkDevice", "gap", "gnomeGUI", "mimR", "pathmix", "rcdd", "rgdal", "rpvm", "Rmpi", "RQuantLib", "RMySQL", "RNetCDF", "RODBC", "ROracle", "rsprng", "RWinEdt", "taskPR")
## Rejected because I subjectively think we don't need them:
failPackages2 <- c("aaMI", "AlgDesign", "bim", "caMassClass", "CGIwithR", "CDNmoney", "clac", "clim.pact", "compositions", "cyclones", "hapassoc", "haplo.score", "haplo.stats", "hapsim", "httpRequest", "labdsv", "kza", "LMGene", "Malmig", "magic", "negenes", "oz", "papply", "spe", "wavethresh", "waveslim", "tdthap")
failPackages3 <- c("rcom", "Rlsf")
## Put the 3 sets of rejects together:
failPackages <- union(failPackages1, union(failPackages2, failPackages3))
## List of all currently installed packages:
installedPackages <- rownames(installed.packages())
## Do any installed packages need removal because they are on the blacklist?
needRemoval <- installedPackages %in% failPackages
## Remove any blacklisted packages if they are already installed.
if (sum(needRemoval) > 0) remove.packages(installedPackages[needRemoval])
## Update the ones you want to keep:
update.packages(ask = FALSE, checkBuilt = TRUE)
## Get the list of all new packages on CRAN:
theNew <- new.packages()
## Do any of the new packages belong to the black list?
shouldFail <- theNew %in% failPackages
## Install non-blacklisted packages that are in theNew list:
if (sum(!shouldFail) > 0) install.packages(theNew[!shouldFail], dependencies = TRUE)
## VGAM is not on CRAN yet, but Zelig will use it.
if ("VGAM" %in% installedPackages) update.packages(repos = "http://www.stat.auckland.ac.nz/~yee", ask = FALSE) else install.packages("VGAM", repos = "http://www.stat.auckland.ac.nz/~yee")

-- Paul E. Johnson, Professor, Political Science, 1541 Lilac Lane, Room 504, University of Kansas
Re: [R] Combination of Bias and MSE ?
The MSE of an estimator X for a parameter theta is defined as E[(X - theta)^2], which is equal to Var[X] + (Bias[X])^2, so in that sense the MSE already takes the bias of X into account. Hope this helps,

-- Wolfgang Viechtbauer, Department of Methodology and Statistics, University of Maastricht, The Netherlands, http://www.wvbauer.com/

-----Original Message----- From: [EMAIL PROTECTED] On Behalf Of Amir Safari. Sent: Wednesday, April 05, 2006 5:20 PM. To: R-help@stat.math.ethz.ch. Subject: [R] Combination of Bias and MSE?

> Dear R Users, My question is general and not necessarily related to R. Suppose we face a situation in which the MSE (mean squared error) shows desired results but the bias shows undesired ones, or vice versa. How can we evaluate the results, supposing both MSE and bias are important for us? The exact question is whether there is any combined measure of the two metrics above. Thank you so much for any reply. Amir Safari
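A quick simulation sketch of that identity (the biased estimator below is invented purely for illustration): shrinking the sample mean by 0.9 introduces bias, and the simulated MSE matches Var + Bias^2.

```r
# Numeric check of MSE = Var + Bias^2 for a deliberately biased
# estimator X = 0.9 * xbar of theta = 1 (made-up example).
set.seed(1)
theta <- 1
X <- replicate(20000, 0.9 * mean(rnorm(25, mean = theta)))

mse  <- mean((X - theta)^2)
bias <- mean(X) - theta
decomposed <- var(X) + bias^2

c(mse = mse, decomposed = decomposed)  # the two agree closely
```

The tiny remaining gap comes from var() using the n-1 denominator; it vanishes as the number of replications grows.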
[R] determine dimension on which by() applies
Hi, having solved one problem with your kindest help, I directly ran into the next one: I now have a 4-dimensional array and I want to aggregate (sum over) subsets of some of the dimensions. This sort of aggregation works beautifully for matrices with by(). Is there a similar function where I can additionally choose the dimensions to do the group calculations over? Thanks again! Werner
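One possibility, sketched here with made-up data (a suggestion, not a confirmed answer from the list): base R's apply() takes a MARGIN argument naming the dimensions to keep, summing over all the others.

```r
# Sum over selected dimensions of a 4-d array with apply().
# The array contents are invented for illustration.
a <- array(1:24, dim = c(2, 3, 2, 2))

# Keep dims 1 and 2, summing over dims 3 and 4 -> a 2 x 3 matrix:
apply(a, c(1, 2), sum)

# Keep only dim 1 -> a length-2 vector:
apply(a, 1, sum)
```

For grouped (rather than whole-dimension) sums, the same idea combines with a grouping factor via tapply() inside the function passed to apply().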
Re: [R] Problems in package management after Linux system upgrade
On Wed, 5 Apr 2006, Paul Johnson wrote:

> I upgraded from Fedora Core 4 to Fedora Core 5 and I find that a lot of previously installed packages won't run because shared libraries or other system things have changed out from under the installed R libraries. I do not know for sure if the R version now from Fedora Extras (2.2.1) is exactly the same one I was using in FC4. I see problems in many packages. Example, Hmisc:
>
> unable to load shared library '/usr/lib/R/library/Hmisc/libs/Hmisc.so': libgfortran.so.0: cannot open shared object file: No such file or directory
> Error in library(Hmisc) : .First.lib failed for 'Hmisc'

You have a later compiler, gcc 4.1.0 not 4.0.x. Is there no back-compatibility package with the older compiler's runtime? (I would compile gcc 4.0.3 and install that file manually, but that's not for those unused to building gcc.)

> If I manually do a re-install, then Hmisc and the other packages are fine. I ?THINK? that if I had been using version 2.2.0, and now I have 2.2.1, then the checkBuilt option would force a rebuild of all packages. Right?

Sorry, no. That is just a patchlevel change. The docs say

checkBuilt: If 'TRUE', a package built under an earlier minor version of R is considered to be 'old'.

and the minor version is '2' (the second '2').

> I'm pasting in below the script that I run nightly to update all packages and install any new ones from CRAN. Can anybody suggest changes that would cause a rebuild of packages that need rebuilding?

Well, we are in alpha of 2.3.0, so what I would do is to install 2.3.0 alpha and then your script will do all the updates for you. You can then remove it when a 2.3.0 RPM is available.

-- Brian D. Ripley, [EMAIL PROTECTED], Professor of Applied Statistics, http://www.stats.ox.ac.uk/~ripley/, University of Oxford, Tel: +44 1865 272861 (self), +44 1865 272866 (PA), 1 South Parks Road, Oxford OX1 3TG, UK. Fax: +44 1865 272595
Re: [R] Bin by bin histogram comparisons
Hi, Assuming you mean the hist2d function from the gplots package: you can save the result of the histogram as an object and do whatever you want with the counts.

myhist <- hist2d(gps2, nbins = 200)
myhist$counts  # shows the counts for each bin

You can, for example, take the difference between the counts from your two histograms and plot them as an image, or whatever. Sarah

On 4/5/06, Philipp H. Mohr [EMAIL PROTECTED] wrote:
> Hello, I have created two histograms with:
>
> hist2d(gps2, nbins = 200, col = c("white", heat.colors(16)))
>
> Both of them have the same range and the same number of bins. Now I would like to compare them bin by bin and plot the results. Could someone please tell me how to do that? I searched the man pages and the web, but couldn't find anything. Thank you very much. Phil

-- Sarah Goslee http://www.stringpage.com
Re: [R] data.frame to list
Larry Howe [EMAIL PROTECTED] wants to: 1. read in a 2-column data file, e.g.

status <tab> new
db <tab> green
title <tab> Most Significant Excursions

2. end up with an R list such that he can write, e.g., lst$title and have R return "Most Significant Excursions".

I call this reading a hash table (because it consists of name/value pairs), and here's a function to do it. Note this function allows you to input a hash if you wish and then override its values with the data file. The argument defaults assume a tab-delimited text file with a header row, which is ignored.

read.hash <- function(file, defaults = list(), header = TRUE, sep = "\t", ...) {
  pl <- read.table(file, as.is = TRUE, header = header, sep = sep, ...)
  for (i in seq(pl[[1]])) defaults[[pl[[1]][i]]] <- pl[[2]][i]
  defaults
}

Also see the built-in read.dcf, which uses a different input format but might suit your purpose.

-- David Brahm ([EMAIL PROTECTED])
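A usage sketch for the read.hash() above; the temporary file contents are invented, following the name/value format described, with a header row that read.table consumes.

```r
# read.hash() as posted, plus a round-trip demonstration with a
# made-up tab-delimited file.
read.hash <- function(file, defaults = list(), header = TRUE, sep = "\t", ...) {
  pl <- read.table(file, as.is = TRUE, header = header, sep = sep, ...)
  for (i in seq(pl[[1]])) defaults[[pl[[1]][i]]] <- pl[[2]][i]
  defaults
}

tf <- tempfile()
writeLines(c("name\tvalue",
             "status\tnew",
             "db\tgreen",
             "title\tMost Significant Excursions"), tf)

lst <- read.hash(tf)
lst$title  # "Most Significant Excursions"
```

Passing a prefilled defaults list shows the override behaviour: any name present in the file replaces the default, names absent from the file keep theirs.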
[R] hist function: freq=FALSE for standardised histograms
Dear All, I am an undergraduate using R for the first time. It seems like an excellent program and one that I look forward to using a lot over the next few years, but I have hit a very basic problem that I can't solve. I want to produce a standardised histogram, i.e. one where the area under the graph is equal to 1. I looked at the manual for the hist function and found this:

freq: logical; if 'TRUE', the histogram graphic is a representation of frequencies, the 'counts' component of the result; if 'FALSE', probability densities, component 'density', are plotted (so that the histogram has a total area of one). Defaults to 'TRUE' _iff_ 'breaks' are equidistant (and 'probability' is not specified).

I therefore expected that the following command:

h <- hist(StockReturns, freq = FALSE)

where StockReturns has the following data in it:

sourcedata$StockReturns
 [1] -0.006983  0.111565  0.053782  0.027966  0.068956  0.165424 -0.022133
 [8] -0.001910  0.052174  0.072589 -0.023002  0.000521 -0.015688  0.148459
[15]  0.054111  0.141044  0.096686 -0.012256 -0.030397  0.039365  0.021407
[22] -0.175750  0.053901 -0.095730  0.129717  0.33      0.061563  0.085052
[29]  0.072295 -0.008500  0.10      0.02     -0.199763  0.081856  0.013636
[36]  0.007812  0.038647 -0.026945  0.037965 -0.079889  0.056234 -0.08
[43] -0.012792  0.131711  0.015996  0.008149  0.104568  0.004046 -0.027750
[50]  0.050802  0.045714  0.092327 -0.017857  0.022574  0.08      0.051366
[57]  0.004215  0.083228  0.046803  0.021335  0.023797  0.094891  0.036541
[64]  0.016423 -0.126365  0.034219  0.098330  0.079292 -0.009901  0.021559
[71] -0.039414  0.114286  0.101856 -0.010452  0.11      0.097274  0.104843
[78]  0.144439  0.021868  0.106667  0.081250  0.002097  0.073302  0.087889
[85] -0.145165  0.014592  0.035000  0.131711 -0.126937  0.133989

would result in a graph that has an area equal to 1.000. However, it does not; it produces frequency densities, not standardised frequency densities. Can someone point me in the right direction here. I know I am being fantastically thick but can't find out how to do such a simple operation! My complete set of commands looks like this:

sourcedata <- read.table("c:/data.dat", header = TRUE)
attach(sourcedata)
h <- hist(StockReturns, col = 'red', labels = TRUE, ylab = "Frequency Density", probability = TRUE)

where c:\data.dat is a file with the numbers above in it, one per line, and the first line containing the string StockReturns. Many thanks, Alex Davies
[R] using latex() in R for Unix
I am using R for Unix and want to make some LaTeX tables. I have already played around in R for Windows and have succeeded in making the tables I want using the following code:

latex(Estimates, file = 'out.tex', rowlabel = '', digits = 3)

However, when I use this code in Unix, I can never find the file out.tex. I assumed that R would send the file to whatever directory I was working in, but that does not seem to be the case. So I tried specifying the exact location where I want the file:

latex(Estimates, file = '/home/b/bquinif/bq/9095/out.tex', rowlabel = '', digits = 3)

When I do that, I get an error message saying that the file/directory does not exist. I have written lots of files from Stata to locations specified like the above, so I don't understand what's going on. Can anyone help me straighten this out? Thanks, Brian
[R] partitioning cluster function
Hi All, For the function bclust (e1071), the argument base.method is explained as "must be the name of a partitioning cluster function returning a list with the same components as the return value of 'kmeans'". In my understanding, there are three partitioning cluster functions in R: clara, pam, and fanny. I checked each of them to see which gives the same components as the return value of kmeans, using the following code:

x <- rbind(matrix(rnorm(100, sd = 0.3), ncol = 2),
           matrix(rnorm(100, mean = 1, sd = 0.3), ncol = 2))
cl <- kmeans(x, 2)
cl.clara <- clara(x, 2)
cl.pam <- pam(x, 2)
cl.fanny <- fanny(x, 2)

But it seems that none of them has exactly the same components as kmeans (please see the P.S.). Could you please help me clarify which methods the base.method argument refers to? Thank you!

P.S.: For example, clara prints: Medoids; Objective function; Clustering vector; Cluster sizes; Best sample; Available components: [1] "sample" "medoids" "i.med" "clustering" "objective" "clusinfo" "diss" [8] "call" "silinfo" "data". But kmeans has: K-means clustering with 2 clusters of sizes 50, 50; Cluster means; Clustering vector; Within cluster sum of squares by cluster; Available components: [1] "cluster" "centers" "withinss" "size".
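One reading of the bclust() documentation is that you are expected to supply a small wrapper of your own; the sketch below is an assumption, not a confirmed answer from the list. It renames clara()'s components to match kmeans()'s, filling withinss with NA because clara() does not report it.

```r
# Hypothetical wrapper giving clara() a kmeans()-shaped return value.
library(cluster)

clara.kmeans <- function(x, centers, ...) {
  cl <- clara(x, centers, ...)
  list(centers  = cl$medoids,                    # kmeans: cluster centres
       cluster  = cl$clustering,                 # kmeans: cluster assignment
       size     = as.vector(table(cl$clustering)),
       withinss = rep(NA_real_, centers))        # not provided by clara()
}

x <- rbind(matrix(rnorm(100, sd = 0.3), ncol = 2),
           matrix(rnorm(100, mean = 1, sd = 0.3), ncol = 2))
str(clara.kmeans(x, 2))
```

Whether bclust() actually needs withinss, or only centers and cluster, would have to be checked against the e1071 sources.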
Re: [R] data.frame to list
On Tuesday, April 4, 2006 21:56, Gabor Grothendieck wrote:
> Try this:
>
> dd <- read.table(myfile, as.is = TRUE)
> lst <- as.list(dd[,2])
> names(lst) <- dd[,1]
> lst$title

This (almost) works. I did this instead:

p = read.delim("params.txt", as.is = TRUE)
lst = as.list(p[, 1])
names(lst) = rownames(p)

I know that in some cases column 1 of the data file becomes column 1 of the frame, and in other cases it becomes the rownames. I think that is the difference. To respond to David Brahm's mail: yes, a hash is what I'm after. I guess my Perl orientation is evident. And to help all those who come after and may be googling for this, I add the following keywords: like a hash in perl; similar to a perl hash; perl hash in R. That ought to do it. Thanks everyone for your fast and insightful help. Larry Howe
Re: [R] hist function: freq=FALSE for standardised histograms
Hi, how did you evaluate the total area? Here is a simple example ### set.seed(100) x - rnorm(100) x.h - hist(x, freq=F, plot=F) x.h $breaks [1] -2.5 -2.0 -1.5 -1.0 -0.5 0.0 0.5 1.0 1.5 2.0 2.5 3.0 $counts [1] 3 4 9 14 22 20 13 7 5 2 1 $intensities [1] 0.0599 0.0800 0.1800 0.2800 0.4400 0.4000 [7] 0.2600 0.1400 0.1000 0.0400 0.0200 $density [1] 0.0599 0.0800 0.1800 0.2800 0.4400 0.4000 [7] 0.2600 0.1400 0.1000 0.0400 0.0200 $mids [1] -2.25 -1.75 -1.25 -0.75 -0.25 0.25 0.75 1.25 1.75 2.25 2.75 $xname [1] x $equidist [1] TRUE attr(,class) [1] histogram sum(diff(x.h$breaks)*x.h$density) [1] 1 # Also, you can verify diff(x.h$breaks)*x.h$density*100 [1] 2.99 4.00 9.00 14.00 22.00 20.00 13.00 [8] 7.00 5.00 2.00 1.00 HTH Marco --- Alex Davies [EMAIL PROTECTED] wrote: Dear All, I am a undergraduate using R for the first time. It seems like an excellent program and one that I look forward to using a lot over the next few years, but I have hit a very basic problem that I can't solve. I want to produce a standardised histogram, i.e. one where the area under the graph is equal to 1. I look at the manual for the histogram function and find this: freq: logical; if 'TRUE', the histogram graphic is a representation of frequencies, the 'counts' component of the result; if 'FALSE', probability densities, component 'density', are plotted (so that the histogram has a total area of one). Defaults to 'TRUE' _iff_ 'breaks' are equidistant (and 'probability' is not specified). 
I therefore expect that the following command:

h <- hist(StockReturns, freq=FALSE)

where StockReturns has the following data in it:

sourcedata$StockReturns
 [1] -0.006983  0.111565  0.053782  0.027966  0.068956  0.165424 -0.022133
 [8] -0.001910  0.052174  0.072589 -0.023002  0.000521 -0.015688  0.148459
[15]  0.054111  0.141044  0.096686 -0.012256 -0.030397  0.039365  0.021407
[22] -0.175750  0.053901 -0.095730  0.129717  0.33  0.061563  0.085052
[29]  0.072295 -0.008500  0.10  0.02 -0.199763  0.081856  0.013636
[36]  0.007812  0.038647 -0.026945  0.037965 -0.079889  0.056234 -0.08
[43] -0.012792  0.131711  0.015996  0.008149  0.104568  0.004046 -0.027750
[50]  0.050802  0.045714  0.092327 -0.017857  0.022574  0.08  0.051366
[57]  0.004215  0.083228  0.046803  0.021335  0.023797  0.094891  0.036541
[64]  0.016423 -0.126365  0.034219  0.098330  0.079292 -0.009901  0.021559
[71] -0.039414  0.114286  0.101856 -0.010452  0.11  0.097274  0.104843
[78]  0.144439  0.021868  0.106667  0.081250  0.002097  0.073302  0.087889
[85] -0.145165  0.014592  0.035000  0.131711 -0.126937  0.133989

would result in a graph that has an area equal to 1.000. However, it does not - it produces frequency densities, not standardised frequency densities. Can someone point me in the right direction here - I know I am being fantastically thick but can't find out how to do such a simple operation! My complete set of commands looks like this:

sourcedata <- read.table("c:/data.dat", header=T)
attach(sourcedata)
h <- hist(StockReturns, col='red', labels=TRUE, ylab="Frequency Density", probability=TRUE)

where c:\data.dat is a file with the numbers above in it, one per line, and the first line containing the string StockReturns. Many thanks, Alex Davies

[[alternative HTML version deleted]]

__
R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html
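As a quick base-R check of the point in the exchange above (a minimal sketch; the data here are simulated, not Alex's stock returns), the bar areas of a freq=FALSE histogram always sum to 1:

```r
# Verify that a density-scaled histogram integrates to 1:
# each bar contributes (bin width) * (density height).
set.seed(1)
x <- rnorm(500)
h <- hist(x, freq = FALSE, plot = FALSE)  # fills in the 'density' component
area <- sum(diff(h$breaks) * h$density)   # width * height, summed over bars
print(area)
```

The same sum with the `counts` component instead of `density` would give the sample size, which is another way to see that `freq=TRUE` and `freq=FALSE` differ only by a constant rescaling with equidistant breaks.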
Re: [R] hist function: freq=FALSE for standardised histograms
Dear Marco, I compared the maximum values with what I was expecting based on a calculation in Excel. However, I've just run the set of commands to calculate the area:

out <- hist(StockReturns, probability=TRUE, plot=FALSE)
out
sum(diff(out$breaks) * out$intensities)

and it seems to have worked (it's come out as 1). Looking at it in more detail, I have found a mistake in the Excel sheet that I was calculating in parallel (ironically, to try to make sure that I did not make any errors with R, which is new to me!). So, as expected, I was being fantastically thick and actually had it all working about 4 hours ago, and have been trying to fix a non-broken thing since then! Many thanks for your help and sorry for wasting your time, Alex

On 05/04/06, Marco Geraci [EMAIL PROTECTED] wrote:

Hi, how did you evaluate the total area? Here is a simple example [...]

--
Alex Davies // http://www.davz.net
Re: [R] using latex() in R for Unix
Brian Quinif [EMAIL PROTECTED] writes:

I am using R for Unix and want to make some LaTeX tables. I have already played around in R for Windows and have succeeded in making the tables I want using the following code:

latex(Estimates, file='out.tex', rowlabel='', digits=3)

However, when I use this code in Unix, I can never find the file out.tex. I assumed that R would send the file to whatever directory I was working in, but that does not seem to be the case. So, I tried specifying the exact location where I want the file:

latex(Estimates, file='/home/b/bquinif/bq/9095/out.tex', rowlabel='', digits=3)

When I do that, I get an error message saying that the file/directory does not exist. I have written lots of files from Stata to locations specified like the above, so I don't understand what's going on. Can anyone help me straighten this out?

I assume you mean latex() from the Hmisc package? You might want to ask its maintainer. I can't reproduce your problem, though; I get the file nicely in the current directory. (I got slightly bugged by Frank's idea of having the print method run latex on the file and fire up a viewer - it didn't play well with a terminal login. However, that's completely unrelated.) Can you give a completely reproducible example that others might try?

--
O__ Peter Dalgaard Øster Farimagsgade 5, Entr.B c/ /'_ --- Dept. of Biostatistics PO Box 2099, 1014 Cph. K (*) \(*) -- University of Copenhagen Denmark Ph: (+45) 35327918 ~~ - ([EMAIL PROTECTED]) FAX: (+45) 35327907
Re: [R] using latex() in R for Unix
Yes, Peter, I do mean the latex() function from the Hmisc package. Below is a reproducible example. Also, does anyone know how to stop R from automatically running LaTeX on the file produced by latex()?

library(Matching)
library(Hmisc)

# make up some data
X <- matrix(rnorm(1000*5), ncol=5)
Y1 <- as.vector(rnorm(1000))
Tr <- c(rep(1,500), rep(0,500))

# estimate nearest neighbor, 1-1 matching, ATT
a <- Match(Y=Y1, X=X, Tr=Tr, M=1)
summary(a)

# make up some more data
Y2 <- as.vector(rnorm(1000))

# estimate nearest neighbor, 1-1 matching, ATT
b <- Match(Y=Y2, X=X, Tr=Tr, M=1)
summary(b)

# Generate table of estimates
Estimates <- matrix(c(a$est, b$est, a$se, b$se), nrow=2)
rownames(Estimates) <- c("a", "b")
colnames(Estimates) <- c("Estimate", "Standard Error")
print(Estimates)

# I use the next line of code for running in Windows--it works, and I can find the file out.tex
# latex(Estimates, file='out.tex', rowlabel='', digits=3)

# I use the next line of code for running in Unix--I get an error message saying the file/directory does not exist
# latex(Estimates, file='/home/b/bquinif/bq/9095/out.tex', rowlabel='', digits=3)

05 Apr 2006 22:29:54 +0200, Peter Dalgaard [EMAIL PROTECTED]:

I assume you mean latex() from the Hmisc package? You might want to ask its maintainer. I can't reproduce your problem, though; I get the file nicely in the current directory. [...] Can you give a completely reproducible example that others might try?
[R] R2WinBUGS error
Dear R-help, I'm using the R2WinBUGS package and getting an error message:

Error in file(file, "r") : unable to open connection
In addition: Warning message:
cannot open file 'codaIndex.txt', reason 'No such file or directory'

I'm using R 2.2.1 and WinBUGS 1.4.1 on a Windows machine (XP). My R code and WinBUGS code are given below. The complete WinBUGS program executes correctly in WinBUGS; however, I'm generating some of my inits using WinBUGS, so I may be making an error there. On the other hand, the error generated in R seems to imply it cannot locate a file. I've checked my paths and they are correct. Also, my data is loading correctly. Many thanks, Joe

#
# R code
# Runs Bayesian Ordered Logit by calling WinBUGS from R
# ologit2.txt: WinBUGS commands
library(R2WinBUGS)
setwd("c:/docume~1/admini~1/mydocu~1/r_tuto~1")
load("oldat.Rdata")  # R data file containing data frame ol.dat
# with vars: q02, bf23f, bf23b, bf22, bf34a, bf34.1, bf34.2
q02 <- ol.dat$q02
bf23f <- ol.dat$bf23f
bf23b <- ol.dat$bf23b
bf22 <- ol.dat$bf22
bf34a <- ol.dat$bf34a
bf34.1 <- ol.dat$bf34.1
bf34.2 <- ol.dat$bf34.2
N <- nrow(ol.dat)
Ncut <- 5
data <- list(N, q02, bf23f, bf23b, bf22, bf34a, bf34.1, bf34.2, Ncut)
inits <- function() {
  list(k=c(-5, -4, -3, -2, -1), tau=2, theta=rnorm(7, -1, 100))
}
parameters <- c("k")
olog.out <- bugs(data, inits, parameters,
    model.file="c:/Documents and Settings/Administrator/My Documents/r_tutorial/ologit2.txt",
    n.chains = 2, n.iter = 1000,
    bugs.directory = "c:/Program Files/WinBUGS14/")

# WinBUGS code
model exec;
{
  # Priors on regression coefficients
  theta[1] ~ dnorm(-1, 1.0) ; theta[2] ~ dnorm(-1, 1.0)
  theta[3] ~ dnorm( 1, 1.0) ; theta[4] ~ dnorm(-1, 1.0)
  theta[5] ~ dnorm(-1, 1.0) ; theta[6] ~ dnorm( 1, 1.0)
  theta[7] ~ dnorm(-1, 1.0)
  # Priors on latent variable cutpoints
  k[1] ~ dnorm(0, 0.1)I(, k[2]); k[2] ~ dnorm(0, 0.1)I(k[1], k[3])
  k[3] ~ dnorm(0, 0.1)I(k[2], k[4]); k[4] ~ dnorm(0, 0.1)I(k[3], k[5])
  k[5] ~ dnorm(0, 0.1)I(k[4], )
  # Prior on precision
  tau ~ dgamma(0.001, 0.001)
  # Some defs
  sigma <- sqrt(1 / tau); log.sigma <- log(sigma);
  for (i in 1 : N) {
    # Prior on
    b[i] ~ dnorm(0.0, tau)
    # Model Mean
    mu[i] <- theta[1] + theta[2]*bf22[i] + theta[3]*bf23b[i] + theta[4]*bf23f[i]
             + theta[5]*bf34a[i] + theta[6]*bf34.1[i] + theta[7]*bf34.2[i]
    for (j in 1 : Ncut) {
      # Logit Model
      # cumulative probability of lower response than j
      logit(Q[i, j]) <- -(k[j] + mu[i] + b[i])
    }
    # probability that response = j
    p[i,1] <- max( min(1 - Q[i,1], 1), 0)
    for (j in 2 : Ncut) { p[i,j] <- max( min(Q[i,j-1] - Q[i,j], 1), 0) }
    p[i,(Ncut+1)] <- max( min(Q[i,Ncut], 1), 0)
    q02[i] ~ dcat(p[i, ])
  }
}
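One thing worth checking in the R code above (an observation, not a confirmed diagnosis of Joe's error): bugs() has to match each element of the data list to a WinBUGS variable by name, and `list(N, q02, ...)` produces an unnamed list. A minimal base-R sketch of the difference, with made-up values:

```r
# Illustrative only: an unnamed list carries no variable names,
# while a named list lets each element be matched to a model variable.
N <- 10
q02 <- sample(1:6, N, replace = TRUE)

unnamed <- list(N, q02)             # names(unnamed) is NULL
named   <- list(N = N, q02 = q02)   # elements carry their names

print(names(unnamed))
print(names(named))
```

If the unnamed form is indeed the problem, `data <- list(N=N, q02=q02, ...)` (or a character vector of object names) would be the usual shape to try.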
Re: [R] using latex() in R for Unix
Brian Quinif [EMAIL PROTECTED] writes:

Yes, Peter, I do mean the latex() function from the Hmisc package. Below is a reproducible example. Also, does anyone know how to stop R from automatically running LaTeX on the file produced by latex()?

The last bit is easy. It's the print method that does that, so just don't print. E.g. invisible(latex())

--
O__ Peter Dalgaard Øster Farimagsgade 5, Entr.B c/ /'_ --- Dept. of Biostatistics PO Box 2099, 1014 Cph. K (*) \(*) -- University of Copenhagen Denmark Ph: (+45) 35327918 ~~ - ([EMAIL PROTECTED]) FAX: (+45) 35327907
Re: [R] using latex() in R for Unix
I seem to have straightened out the problem I was having by typing

ls -ald /home/b/bquinif/bq/9095/

before starting R. Thanks again to the ever-helpful people on this list. BQ

05 Apr 2006 23:11:12 +0200, Peter Dalgaard [EMAIL PROTECTED]:

The last bit is easy. It's the print method that does that, so just don't print. E.g. invisible(latex()) [...]
Re: [R] using latex() in R for Unix
Your command

latex(Estimates, file='out.tex', rowlabel='', digits=3)

works for me in a unix environment, and produces the file out.tex in the current working directory. (Your second example is not reproducible, in general, since I don't have a directory named /home/b/bquinif/bq/9095.)

Your first message indicated that the command

latex(Estimates, file='out.tex', rowlabel='', digits=3)

did *not* return an error, just that you could not find the file. If that's the case, then try

getwd()

to find out what R thinks your current directory is. That is where R should be putting the file. You can also try, from within R,

list.files()
list.files(patt='tex')

to see what is there. -Don

At 4:47 PM -0400 4/5/06, Brian Quinif wrote:

Yes, Peter, I do mean the latex() function from the Hmisc package. Below is a reproducible example. Also, does anyone know how to stop R from automatically running LaTeX on the file produced by latex()? [...]

--
Don MacQueen Environmental Protection Department Lawrence Livermore National Laboratory Livermore, CA, USA
[R] Correlation of coefficients?
Hi R users! One thing I cannot understand with R is the frequently appearing "Correlation of coefficients". I do not find a thoroughgoing explanation of this concept either on the help pages of R or on the internet. A Google search has shown many examples of R/S-Plus outputs but no explanations of their meaning. I am afraid this may turn out to be a silly and unworthy question. Would anyone be kind enough to point a layperson studying on his/her own to relevant literature or a text book? Thanks in advance. -- Yukihiro Ishii [EMAIL PROTECTED]
Re: [R] using latex() in R for Unix
Peter Dalgaard wrote:

Brian Quinif [EMAIL PROTECTED] writes: Yes, Peter, I do mean the latex() function from the Hmisc package. Below is a reproducible example. Also, does anyone know how to stop R from automatically running LaTeX on the file produced by latex()?

The last bit is easy. It's the print method that does that, so just don't print. E.g. invisible(latex())

Or do

w <- latex(..., file='...')

Frank
--
Frank E Harrell Jr Professor and Chair School of Medicine Department of Biostatistics Vanderbilt University
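Both suggestions rest on the same mechanism: R auto-prints the value of a top-level expression unless it is assigned, or returned via invisible(). A base-R illustration (the function and class here are made up for the example, standing in for the latex() result; this is not Hmisc code):

```r
# Toy object whose print method has a visible side effect,
# mimicking a print method that would run LaTeX and open a viewer.
f <- function() {
  out <- structure(list(file = "out.tex"), class = "toy")
  invisible(out)        # value is returned but not auto-printed
}
print.toy <- function(x, ...) cat("print method ran\n")

w <- f()   # nothing printed: assignment suppresses auto-printing
f()        # still nothing printed: invisible() inside f() suppresses it
print(w)   # explicit print() is what triggers the method
```

So `w <- latex(...)` and `invisible(latex(...))` both work for the same reason: they keep the print method from ever being dispatched.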
Re: [R] determine dimension on which by() applies
by() works on the rows, or first dimension. See ?by, ?apply, ?aperm, ?tapply.

On 4/5/06, Werner Wernersen [EMAIL PROTECTED] wrote:

Hi, having solved one problem with your kindest help, I directly ran into the next one: I now have a 4-dimensional array and I want to aggregate (sum up over) subsets for some of the dimensions. This sort of aggregating works beautifully for matrices with by(). Is there a similar function where I can additionally choose the dimension to do the group calculations over? Thanks again! Werner
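For the array case described above, apply() with a MARGIN argument does this kind of aggregation directly (a minimal sketch with made-up dimensions):

```r
# Sum a 4-dimensional array over some dimensions while keeping others.
a <- array(1:(2*3*4*5), dim = c(2, 3, 4, 5))

# Keep dimensions 1 and 2; sum over dimensions 3 and 4.
s12 <- apply(a, MARGIN = c(1, 2), FUN = sum)
print(dim(s12))        # a 2 x 3 matrix

# Sanity check: the grand total is preserved by the aggregation.
print(sum(s12) == sum(a))
```

The MARGIN argument names the dimensions to *keep*; everything not listed is collapsed by FUN, which is the choice Werner was asking for.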
Re: [R] R performance: different CPUs
Hi, 64-bit CPUs, such as Opterons, help significantly with large databases or if you are running multiple processes, but there is a speed penalty if you are not. Some packages can make use of multiple processors, such as my rgenoud (genetic optimization using derivatives) and Matching packages, but most do not. For these packages the speed-up is significant. There are also multithreaded BLAS which can be used reliably under Linux, but the speed benefit is usually small. You may want to check out some benchmarks at: http://sekhon.berkeley.edu/macosx/ (Linux does very well). Cheers, JS.

===
Jasjeet S. Sekhon Associate Professor Survey Research Center UC Berkeley http://sekhon.berkeley.edu/ V: 510-642-9974 F: 617-507-5524
===

Toby Muhlhofer writes:

Hello! I need to purchase a new box, which I would like to optimize for good R performance. For the record, I will run Fedora Core 5 as an OS, and I wanted to know if anyone has experience with how the following affects R performance: - Is there a big advantage to having a 64-bit CPU over having a 32-bit one? - Does an Opteron offer any advantages over an Athlon, and if yes, does it justify an investment of about US $75 more for equivalent listed speeds? - Have people successfully multithreaded R computations, such as to justify a dual-core CPU? I understand R itself does not multithread, but of course it should be possible to write code that parallelizes computations, and I wanted to know if anyone has experience doing so and gained large speed advantages by it. Thanks, Toby Muhlhofer
[R] Multivariate linear regression
Hi, I am working on a multivariate linear regression of the form y = Ax. I am seeing a great dispersion of y w.r.t. x. For example, the correlations between y and x are very small, even after using some typical transformations like log and power. I tried simple linear regression, robust regression, and the ace and avas functions in R (or S-plus). I didn't see an improvement in the fit and predictions over simple linear regression. (I also tried this with transformed variables.) I am sure that some of you have come across such data. How did you deal with it? Linear regressions are good for data like y = x + 0.01*Normal(mu, sigma2), i.e. a small noise (data observed in a lab). But linear regressions are bad for large noise, like typical market (or survey) data. Thank you, Nagu
Re: [R] R performance: different CPUs
On Wed, 5 Apr 2006, Jasjeet Singh Sekhon wrote: Hi, 64bit CPUs, such as opterons, help significantly with large databases or if you are running multiple processes. But there is a speed penalty if you are not. This would be true of 64-bit builds of R, not 64-bit CPUs. On a 64-bit processor you can usually run either 64-bit or 32-bit builds of R, and the 64-bit one will be able to access more memory but will be slower. This doesn't mean that a 32-bit build of R on a 64-bit processor will be slower than a 32-bit build of R on a 32-bit processor. -thomas

Thomas Lumley Assoc. Professor, Biostatistics [EMAIL PROTECTED] University of Washington, Seattle
Re: [R] R performance: different CPUs
This would be true of 64-bit builds of R, not 64-bit CPUs. [...] This doesn't mean that a 32-bit build of R on a 64-bit processor will be slower than a 32-bit build of R on a 32-bit processor.

There is the issue, however, of running a 32-bit application on a 64-bit OS. Under RedHat and SuSE this works transparently and I've not noticed a performance issue, but under Debian's or Ubuntu's chroot setup there is, in my experience, a measurable performance hit. Of course, one could simply run 32-bit Debian along with 32-bit R on an Opteron. Cheers, Jas.

===
Jasjeet S. Sekhon Associate Professor Travers Department of Political Science Survey Research Center UC Berkeley http://sekhon.berkeley.edu/ V: 510-642-9974 F: 617-507-5524
===

Thomas Lumley writes:

On Wed, 5 Apr 2006, Jasjeet Singh Sekhon wrote: Hi, 64bit CPUs, such as opterons, help significantly with large databases or if you are running multiple processes. But there is a speed penalty if you are not.

This would be true of 64-bit builds of R, not 64-bit CPUs. On a 64-bit processor you can usually run either 64-bit or 32-bit builds of R, and the 64-bit one will be able to access more memory but will be slower. This doesn't mean that a 32-bit build of R on a 64-bit processor will be slower than a 32-bit build of R on a 32-bit processor. -thomas

Thomas Lumley Assoc. Professor, Biostatistics [EMAIL PROTECTED] University of Washington, Seattle
Re: [R] Correlation of coefficients?
If you're talking about regression models, then I'm puzzled that this occurs frequently in R, since the summary.lm() function defaults to 'correlation = FALSE'. S-PLUS defaults to TRUE. Can you give an example where R gives you the correlations of coefficients? Anyway, parameter estimators are correlated random variables and, in the case of regression, summary(model, correlation = TRUE) gives the estimated correlation matrix. This must be somewhere in MASS, the book; any decent linear models text will have a discussion, e.g. Draper and Smith, Applied Regression Analysis. Peter Ehlers

Yukihiro Ishii wrote:

Hi R users! One thing I cannot understand with R is the frequently appearing "Correlation of coefficients". I do not find a thoroughgoing explanation of this concept either on the help pages of R or on the internet. [...] Thanks in advance.
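The quantity Peter describes is easy to inspect with a built-in dataset (a minimal sketch; the cars data and model here are just for illustration). The correlation matrix of the coefficient estimates is the rescaled covariance matrix of the estimator:

```r
# Correlation of coefficient estimates from a simple linear model.
fit <- lm(dist ~ speed, data = cars)
s <- summary(fit, correlation = TRUE)
print(s$correlation)       # 2 x 2: (Intercept) vs. speed

# The same thing computed from the estimated covariance matrix:
print(cov2cor(vcov(fit)))
```

The large (negative) off-diagonal entry here reflects the usual intercept/slope trade-off when the predictor is not centered, which is the kind of interpretation those linear-models texts discuss.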
Re: [R] Multivariate linear regression
Ummm... If y is unrelated to x, then why would one expect any reasonable method to show a greater or lesser relationship than any other? It's all random. Of course, put enough random regressors into (or tune the parameters enough of) any regression methodology and you'll be able to precisely predict the data at hand -- but **only** the data at hand. I should note that such work apparently frequently appears in various sorts of informatics/data mining/omics/etc. journals these days, as various papers demonstrating the irreproducibility of numerous purported discoveries have infamously demonstrated. Let us not forget Occam! Just being cranky ...

-- Bert Gunter

-----Original Message-----
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Nagu
Sent: Wednesday, April 05, 2006 3:52 PM
To: r-help@stat.math.ethz.ch
Subject: [R] Multivariate linear regression

Hi, I am working on a multivariate linear regression of the form y = Ax. I am seeing a great dispersion of y w.r.t. x. [...] Thank you, Nagu
Re: [R] Multivariate linear regression
Hi Bert, Thank you for your prompt reply. I understand your point. But randomness is just a matter of the scale of the object (Ramsey theory). The X matrix does not explain the complete variation in Y, due to large noise in X, or simply because the mapping f: X -> Y is many-valued (or due to a finite number of other reasons). Theoretically, an inverse does not exist for many-valued functions. In regression-type problems, we are evaluating the pseudoinverse of the data space. To estimate the inverses of many-valued functions, theoretically, we may have to use the branch-cuts method or something called Riemann surfaces, which are a partition of the domain into connected sheets. As I am not a qualified statistician and do not have much experience building statistical models for highly noisy data, I am wondering how you have dealt with such situations, if any, in your working experience? I will try your idea of feeding some random variables as predictors in X. Thank you again, Nagu

P.S. Why is it that pattern recognition is all about finding patterns that cannot be seen easily, huh?

On 4/5/06, Berton Gunter [EMAIL PROTECTED] wrote:

Ummm... If y is unrelated to x, then why would one expect any reasonable method to show a greater or lesser relationship than any other? It's all random. Of course, put enough random regressors into (or tune the parameters enough of) any regression methodology and you'll be able to precisely predict the data at hand -- but **only** the data at hand. [...] Just being cranky ...
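Bert's warning about random regressors is easy to demonstrate with simulated data (a sketch; sizes are arbitrary): with nearly as many pure-noise predictors as observations, R-squared comes out close to 1 even though the response is unrelated to every predictor.

```r
# Overfitting demo: pure-noise response regressed on pure-noise predictors.
set.seed(42)
n <- 30                      # observations
p <- 25                      # noise predictors, almost as many as observations
y <- rnorm(n)
X <- matrix(rnorm(n * p), nrow = n)

fit <- lm(y ~ X)
r2 <- summary(fit)$r.squared
print(r2)                    # high, despite no real relationship
```

The fit "predicts" only the data at hand; on fresh draws from the same process the model has no predictive value, which is exactly the reproducibility problem Bert is pointing at.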
Re: [R] Nested error structure in nonlinear model
Have you read Pinheiro and Bates (2000), Mixed-Effects Models in S and S-PLUS (Springer)? If not, I believe that book should help you. If you have access to the library at the University of Waikato, as suggested by your email address, then I suggest you try the library there: the electronic catalog just told me it has a paper copy listed as available, as well as an electronic copy. If you have read Pinheiro and Bates, then please prepare another post more consistent with the posting guide www.R-project.org/posting-guide.html. Pinheiro and Bates and the documentation for nlme contain several examples that should help you describe succinctly where you are encountering difficulties. If someone like me can copy a few lines of code from an email, paste it into R and reproduce what you are seeing in a few seconds, you are much more likely to get quality help quickly than if I have to guess -- and at least one of the leading contributors to this list has announced a personal policy of rarely replying to posts whose style is not consistent with that guide.

hope this helps,
spencer graves

Murray Jorgensen wrote:
I am trying to fit a nonlinear regression model to data. There are several predictor variables and 8 parameters. I will write the model as

  Y ~ Yhat(theta1, ..., theta8)

OK, I can do this using nls() -- but only just, as there are not as many observations as might be desired. Now the problem is that we have a factor Site and I want to include a corresponding error component. I tried something like (excuse the doctored R output):

  mod0.nlme <- nlme(Y ~ Yhat(theta1, ..., theta8) + site.err,
                    start  = c(mod0.st, 0),
                    groups = ~ Site,
                    fixed  = theta1 + ... + theta8 ~ 1,
                    random = site.err ~ 1)
  Error in nlme.formula(Y ~ Yhat(theta1, ..., theta8 :
    starting values for the fixed component are not the correct length

and then

  mod0.nlme <- nlme(Y ~ Yhat(theta1, ..., theta8) + site.err,
                    start  = mod0.st,
                    groups = ~ Site,
                    fixed  = theta1 + ... + theta8 ~ 1,
                    random = site.err ~ 1)
  Error: Singularity in backsolve at level 0, block 1

The straightforward way would be to go

  mod0.nlme <- nlme(Y ~ Yhat(theta1, ..., theta8),
                    start  = mod0.st,
                    groups = ~ Site,
                    fixed  = theta1 + ... + theta8 ~ 1)

making all 8 parameters random, but there is little hope of getting that to work. (I did try it; it does not crash straight away but settles down to run forever.) Any comments welcome. I regret that I cannot go into much detail about the actual problem.

__ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html
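For reference, the usual nlme idiom for Murray's intent -- one extra error component indexed by Site -- is to declare one model parameter random by group, rather than to add a separate `site.err` term to the model formula. A self-contained illustration using the built-in Loblolly data (this is essentially the example on the `nlme` help page; only the asymptote parameter varies by group):

```r
library(nlme)

## Fit a nonlinear mixed model: all three parameters fixed, and the
## asymptote additionally gets a random (intercept-like) effect per
## Seed group -- the analogue of a per-Site error component.
fm <- nlme(height ~ SSasymp(age, Asym, R0, lrc),
           data   = Loblolly,
           fixed  = Asym + R0 + lrc ~ 1,   # all parameters have fixed effects
           random = Asym ~ 1,              # only Asym varies by group (Seed)
           start  = c(Asym = 103, R0 = -8.5, lrc = -3.3))

fixef(fm)   # three fixed-effect estimates
ranef(fm)   # one random Asym deviation per Seed
```

In Murray's notation the analogue would be `random = theta1 ~ 1` (for whichever theta plays the role of an intercept), with `start` containing only the 8 fixed-effect values.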
Re: [R] R performance: different CPUs
From: Jasjeet Singh Sekhon
  This would be true of 64-bit builds of R, not 64-bit CPUs. [...] This doesn't mean that a 32-bit build of R on a 64-bit processor will be slower than a 32-bit build of R on a 32-bit processor.

There is the issue, however, of running a 32-bit application on a 64-bit OS. Under RedHat and SuSE this works transparently and I've not noticed a performance issue, but under Debian's or Ubuntu's chroot setup there is, in my experience, a measurable performance hit. Of course, one could simply run 32-bit Debian along with 32-bit R on an Opteron. It's my recollection that about a year ago there was still fierce debate about Debian on x86_64: one group got a pure 64-bit version to a usable state sooner than another group that was trying to make a standards-conforming version that would run both 64- and 32-bit apps transparently. I don't know how that was resolved in the end, but that was basically enough for me to steer clear of any Debian-based distro for x86_64.

Andy

Cheers, Jas.
=== Jasjeet S. Sekhon, Associate Professor, Travers Department of Political Science, Survey Research Center, UC Berkeley, http://sekhon.berkeley.edu/ V: 510-642-9974 F: 617-507-5524 ===

Thomas Lumley writes:
  On Wed, 5 Apr 2006, Jasjeet Singh Sekhon wrote:
    Hi, 64-bit CPUs, such as Opterons, help significantly with large databases or if you are running multiple processes. But there is a speed penalty if you are not.
  This would be true of 64-bit builds of R, not 64-bit CPUs. On a 64-bit processor you can usually run either 64-bit or 32-bit builds of R, and the 64-bit one will be able to access more memory but will be slower. This doesn't mean that a 32-bit build of R on a 64-bit processor will be slower than a 32-bit build of R on a 32-bit processor.

  -thomas
  Thomas Lumley, Assoc. Professor, Biostatistics, [EMAIL PROTECTED], University of Washington, Seattle
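Incidentally, whether a given R session is a 64- or 32-bit build can be checked from within R itself:

```r
## Pointer size distinguishes the build: 8 bytes on a 64-bit build of R,
## 4 bytes on a 32-bit build (independent of what the CPU supports).
.Machine$sizeof.pointer
```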
[R] How to implement an iterative unit root test
Hello, how can an iterative unit root test be implemented in R? More specifically, given a time series, I wish to perform the Dickey-Fuller test on a daily basis for, say, the last 100 observations. It would be iterative in the sense that this test would be repeated each day for the last 100 observations. Given the daily Dickey-Fuller estimates of delta for the autoregressive process

  d(Y(t)) = delta * Y(t-1) + u(t),

the significance of delta would be computed. If possible, I would like to extract that value and record it in a table, that is, a table containing the tau-values of each day's calculations. How can such a test be done in R? More specifically, how can it be programmed to iteratively perform the test, and how can the t-values be extracted on a daily basis? Thank you. Sincerely, Bernd Dittmann
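A base-R sketch of exactly the regression written above (no drift term and no lagged differences -- packages such as tseries or urca provide full augmented Dickey-Fuller tests with proper tau critical values; the function name `rolling_df` is made up for illustration):

```r
## For each day t, regress diff(Y) on lagged Y over the most recent
## 'window' observations and record the t-statistic of delta.
rolling_df <- function(y, window = 100) {
  n <- length(y)
  stopifnot(n >= window)
  tau <- rep(NA_real_, n)
  for (t in seq(window, n)) {
    yw   <- y[(t - window + 1):t]        # the last 'window' observations
    dy   <- diff(yw)                     # d(Y(t))
    ylag <- yw[-length(yw)]              # Y(t-1)
    fit  <- lm(dy ~ ylag - 1)            # delta * Y(t-1), no intercept
    tau[t] <- summary(fit)$coefficients["ylag", "t value"]
  }
  data.frame(day = seq_len(n), tau = tau)
}

set.seed(1)
y   <- cumsum(rnorm(250))   # a random walk, so delta should be near zero
res <- rolling_df(y)
tail(res)                   # table of daily tau-values
```

Note that under the unit-root null these t-statistics follow the Dickey-Fuller (tau) distribution, not Student's t, so they should be compared against DF critical values.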
Re: [R] R2WinBUGS error
Dear R-help, I'm using the R2WinBUGS package and getting an error message:

  Error in file(file, "r") : unable to open connection
  In addition: Warning message:
  cannot open file 'codaIndex.txt', reason 'No such file or directory'

I'm using R 2.2.1 and WinBUGS 1.4.1 on a Windows machine (XP). My R code and WinBUGS code are given below. The complete WinBUGS program executes correctly in WinBUGS; however, I'm generating some of my inits using WinBUGS, so I may be making an error there. On the other hand, the error generated in R seems to imply it cannot locate a file. I've checked my paths and they are correct. Also, my data is loading correctly. Many thanks, Joe

# R code
# Runs Bayesian Ordered Logit by calling WinBUGS from R
# ologit2.txt: WinBUGS commands
library(R2WinBUGS)
setwd("c:/docume~1/admini~1/mydocu~1/r_tuto~1")
load("oldat.Rdata")  # R data file containing data frame ol.dat
                     # with vars: q02, bf23f, bf23b, bf22, bf34a, bf34.1, bf34.2
q02    <- ol.dat$q02
bf23f  <- ol.dat$bf23f
bf23b  <- ol.dat$bf23b
bf22   <- ol.dat$bf22
bf34a  <- ol.dat$bf34a
bf34.1 <- ol.dat$bf34.1
bf34.2 <- ol.dat$bf34.2
N    <- nrow(ol.dat)
Ncut <- 5
data <- list(N, q02, bf23f, bf23b, bf22, bf34a, bf34.1, bf34.2, Ncut)
inits <- function() {
  list(k = c(-5, -4, -3, -2, -1), tau = 2, theta = rnorm(7, -1, 100))
}
parameters <- c("k")
olog.out <- bugs(data, inits, parameters,
                 model.file = "c:/Documents and Settings/Administrator/My Documents/r_tutorial/ologit2.txt",
                 n.chains = 2, n.iter = 1000,
                 bugs.directory = "c:/Program Files/WinBUGS14/")

# WinBUGS code
model exec;
{
  # Priors on regression coefficients
  theta[1] ~ dnorm(-1, 1.0) ; theta[2] ~ dnorm(-1, 1.0)
  theta[3] ~ dnorm( 1, 1.0) ; theta[4] ~ dnorm(-1, 1.0)
  theta[5] ~ dnorm(-1, 1.0) ; theta[6] ~ dnorm( 1, 1.0)
  theta[7] ~ dnorm(-1, 1.0)
  # Priors on latent variable cutpoints
  k[1] ~ dnorm(0, 0.1)I(, k[2]);     k[2] ~ dnorm(0, 0.1)I(k[1], k[3])
  k[3] ~ dnorm(0, 0.1)I(k[2], k[4]); k[4] ~ dnorm(0, 0.1)I(k[3], k[5])
  k[5] ~ dnorm(0, 0.1)I(k[4], )
  # Prior on precision
  tau ~ dgamma(0.001, 0.001)
  # Some defs
  sigma <- sqrt(1 / tau); log.sigma <- log(sigma);
  for (i in 1 : N) {
    # Prior on random effect
    b[i] ~ dnorm(0.0, tau)
    # Model mean
    mu[i] <- theta[1] + theta[2]*bf22[i] + theta[3]*bf23b[i] +
             theta[4]*bf23f[i] + theta[5]*bf34a[i] +
             theta[6]*bf34.1[i] + theta[7]*bf34.2[i]
    for (j in 1 : Ncut) {
      # Logit model: cumulative probability of lower response than j
      logit(Q[i, j]) <- -(k[j] + mu[i] + b[i])
    }
    # probability that response = j
    p[i, 1] <- max(min(1 - Q[i, 1], 1), 0)
    for (j in 2 : Ncut) { p[i, j] <- max(min(Q[i, j-1] - Q[i, j], 1), 0) }
    p[i, (Ncut+1)] <- max(min(Q[i, Ncut], 1), 0)
    q02[i] ~ dcat(p[i, ])
  }
}
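That particular error usually means R went looking for WinBUGS's coda output files (codaIndex.txt, coda1.txt, ...) in a directory where they were never written, typically because the run failed inside WinBUGS or because R's working directory and WinBUGS's output directory disagree. Two documented `bugs()` arguments help diagnose this (a sketch, not a fix; the relative model-file path is illustrative):

```r
library(R2WinBUGS)
## working.directory pins down where the coda files are both written and
## read; debug = TRUE leaves WinBUGS open after the run so its log (and
## any trap window) can be inspected when something goes wrong.
olog.out <- bugs(data, inits, parameters,
                 model.file        = "ologit2.txt",
                 n.chains          = 2,
                 n.iter            = 1000,
                 bugs.directory    = "c:/Program Files/WinBUGS14/",
                 working.directory = getwd(),
                 debug             = TRUE)
```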
[R] skipping rows in trellis key
Hi, I would like to add a key to my trellis plot using draw.key. Here is what I want: a 3 x 6 key where the first row is a header.

  row 1: empty,  S-R Mapping, R^2
  row 2: pch=17, Color,       0.951
  row 3: pch=17, Shape,       0.934
  etc...

The problem is that I would like the cell in the upper left corner to be empty (a placeholder), with the remaining entries in that column the appropriate pch symbols. Is there a way to do that using draw.key? My other option is to create a grid table from scratch, but I'd like to capitalize on draw.key if I can. Suggestions? If draw.key won't work, suggestions on how to do it with grid? Thanks, Steve
Re: [R] skipping rows in trellis key
On 4/5/06, Steven Lacey [EMAIL PROTECTED] wrote:
  Hi, I would like to add a key to my trellis plot using draw.key. Here is what I want: a 3 x 6 key where the first row is a header.
    row 1: empty,  S-R Mapping, R^2
    row 2: pch=17, Color,       0.951
    row 3: pch=17, Shape,       0.934
    etc...
  The problem is that I would like the cell in the upper left corner to be empty (a placeholder), with the remaining entries in that column the appropriate pch symbols. Is there a way to do that using draw.key? My other option is to create a grid table from scratch, but I'd like to capitalize on draw.key if I can. Suggestions? If draw.key won't work, suggestions on how to do it with grid?

Does col = c('transparent', ...) not work?

Deepayan
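Deepayan's suggestion in a self-contained form (pch and values taken from the original post; the plot itself is a throwaway): the header row gets a point symbol drawn in a transparent colour, leaving the upper-left cell visually empty.

```r
library(lattice)

## Three-row, three-column key: the first row is a header, so its
## point symbol is hidden by painting it "transparent".
p <- xyplot(1:10 ~ 1:10,
            key = list(points = list(pch = 17,
                                     col = c("transparent", "red", "blue")),
                       text = list(c("S-R Mapping", "Color", "Shape")),
                       text = list(c("R^2", "0.951", "0.934")),
                       columns = 1))
print(p)
```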
Re: [R] skipping rows in trellis key
Yes, that works! Thanks!

On a related note: when I draw my key there is not sufficient padding between the last line of the key and the border line. For instance, the vertical line in "p" often drops below the border. Is there an easy way to add padding between the last row of the key and the border?

Thanks, Steve

-----Original Message-----
From: Deepayan Sarkar [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, April 05, 2006 10:35 PM
To: Steven Lacey
Cc: r-help@stat.math.ethz.ch
Subject: Re: [R] skipping rows in trellis key

On 4/5/06, Steven Lacey [EMAIL PROTECTED] wrote:
  Hi, I would like to add a key to my trellis plot using draw.key. Here is what I want: a 3 x 6 key where the first row is a header.
    row 1: empty,  S-R Mapping, R^2
    row 2: pch=17, Color,       0.951
    row 3: pch=17, Shape,       0.934
    etc...
  The problem is that I would like the cell in the upper left corner to be empty (a placeholder), with the remaining entries in that column the appropriate pch symbols. Is there a way to do that using draw.key? My other option is to create a grid table from scratch, but I'd like to capitalize on draw.key if I can. Suggestions? If draw.key won't work, suggestions on how to do it with grid?

Does col = c('transparent', ...) not work?

Deepayan
[R] rounding of voronoi vertices using deldir()
Hello list, I'm just getting started with using R. I have been trying over the past day or so to work out a method for generating voronoi polygons for PostGIS using SQL. I was able to put together a procedure which works relatively well, but is somewhat inefficient. Someone on the PostGIS list pointed me to the deldir() function in R, with which I can export a text file with x/y coordinates from a PostGIS table, and write an SQL script with insert statements containing PostGIS ewkt-compatible geometries representing the voronoi polygons generated by deldir(). The script I'm using is below. I've added a very small frac parameter to ensure none of the points are dropped from my dataset (which was actually happening for me with some fairly dispersed clusters of points), and a rectangular window that is much larger than the dataset itself to ensure that the voronoi polygons extend far enough to actually cover all of the points in my dataset. The problem I'm having is that the vertices at the corners of these polygons seem to have their coordinates rounded to one decimal place. Based on the documentation, the 'digits' option should allow me to change this, but I'm not getting anything different no matter what I set it to. Is there something obvious that I've missed? I realize that in most practical applications it's not a significant issue, but I'd prefer more accuracy as I may be using voronoi polygons to evaluate some statistical methods. Below are some sample coordinates from my points.txt file, and the script I'm running in R to write out the voronoi polygons.
If anyone can see what's going on, I'd be happy to hear it, as the R calculations are much faster than the method I'm using within PostGIS/SQL.

points.txt:
22.462042329 | 8665540.49905558
270171.836250559 | 8667802.6446983
268895.572741816 | 8674257.75469324
270054.378262961 | 8666483.37597101
268402.641255299 | 8664853.87941629
265707.056272354 | 8665434.09025432
269985.118229025 | 8667743.14071004
269282.034045422 | 8665403.39312076

R:
library(deldir)
points <- scan(file = "points.txt", what = list(x = 0.0, y = 0.0), sep = "|")
# rectangular window much larger than the data itself
rw <- c(min(points$x) - abs(min(points$x) - max(points$x)),
        max(points$x) + abs(min(points$x) - max(points$x)),
        min(points$y) - abs(min(points$y) - max(points$y)),
        max(points$y) + abs(min(points$y) - max(points$y)))
voro  <- deldir(points$x, points$y, digits = 10, frac = 0.001, rw = rw)  # generate voronoi edges
tiles <- tile.list(voro)   # combine edges into polygons
sink("voronoi.sql")        # redirect output to file
for (i in 1:length(tiles)) {   # write out polygons
  tile <- tiles[[i]]
  cat("insert into mytable (the_geom) values(geomfromtext('POLYGON((")
  for (j in 1:length(tile$x)) {
    cat(tile$x[[j]], " ", tile$y[[j]], ",")
  }
  cat(tile$x[[1]], " ", tile$y[[1]])   # close polygon
  cat("))',32718));\n")                # add SRID and newline
}
sink()   # output back to terminal
q()

Regards, Mike
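For what it's worth, the one-decimal rounding may not be deldir's doing at all: cat() prints numbers using options(digits) significant digits (7 by default), and for coordinates of magnitude around 2.7e5 that leaves exactly one digit after the decimal point. Formatting the coordinates explicitly before cat()-ing them into the SQL sidesteps this, regardless of the 'digits' argument to deldir:

```r
x <- 270171.836250559

## With the default options(digits = 7), cat() keeps only 7 significant
## digits, so a 6-digit integer part leaves one decimal place.
cat(x, "\n")                                      # 270171.8

## formatC with a fixed "f" format preserves the full precision.
cat(formatC(x, format = "f", digits = 9), "\n")   # 270171.836250559
```

In the loop above, `cat(tile$x[[j]], ...)` would become `cat(formatC(tile$x[[j]], format = "f", digits = 9), ...)` and likewise for the y coordinates.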
Re: [R] skipping rows in trellis key
On 4/5/06, Steven Lacey [EMAIL PROTECTED] wrote: Yes, that works! Thanks! On a related note... When I draw my key there is not sufficient padding between the last line of the key and the border line. For instance, the vertical line in p often drops below the border. Is there an easy way to add padding between the last row of the key and the border? Example please. Deepayan
Re: [R] skipping rows in trellis key
Try this...

xyplot(y ~ x, data = data.frame(x = 1:10, y = 1:10))
keyArgs <- list()
keyArgs <- list(points = list(pch = c(NA, rep(17, 5)), lwd = 2,
                              col = c(NA, c("red", "chartreuse3", "black",
                                            "cyan", "blue"))),
                text = list(lab = c("S-R Mapping", "Color", "Shape", "Letter",
                                    "Compatible", "Incompatible"),
                            cex = c(1.2, 1, 1, 1, 1, 1)),
                text = list(lab = c(expression(R^2),
                                    as.character(rep(0.999, 5))),
                            cex = c(1.2, 1, 1, 1, 1, 1)))
keyArgs$between <- c(1, 0, 5)
keyArgs$columns <- 1
keyArgs$column.between <- 1
keyArgs$border <- TRUE
drawKeyArgs <- list(key = keyArgs, draw = FALSE)
keyArgs$draw <- FALSE
key <- do.call(draw.key, drawKeyArgs)
vp1 <- viewport(x = 1, y = 1,
                height = unit(1, "grobheight", key),
                width  = unit(1, "grobwidth", key),
                just = c("right", "top"))
pushViewport(vp1)
grid.draw(key)
popViewport()

Steve

-----Original Message-----
From: Deepayan Sarkar [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, April 05, 2006 11:53 PM
To: Steven Lacey
Cc: r-help@stat.math.ethz.ch
Subject: Re: [R] skipping rows in trellis key

On 4/5/06, Steven Lacey [EMAIL PROTECTED] wrote:
  Yes, that works! Thanks! On a related note... When I draw my key there is not sufficient padding between the last line of the key and the border line. For instance, the vertical line in p often drops below the border. Is there an easy way to add padding between the last row of the key and the border?

Example please.

Deepayan
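One possible answer to the padding question, offered as a hedged sketch rather than a confirmed fix: the lattice key specification accepts a 'padding.text' component (default 1, in multiples of the text height) that adds space above and below each text row, including the last row before the border.

```r
library(lattice)
library(grid)

## Minimal key with a border; increasing padding.text adds breathing
## room between the rows and the border line.
key <- draw.key(list(points = list(pch = 17, col = "red"),
                     text = list("Compatible"),
                     border = TRUE,
                     padding.text = 3),   # default is 1
                draw = FALSE)
grid.newpage()
grid.draw(key)
```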